/**
 * Write the aggregated statistics to MySQL
 */
try {
  videoAccessTopNDF.foreachPartition(partitionOfRecords => {
    // val list = new ListBuffer[DayVideoAccessStat]
    // partitionOfRecords.foreach(info => {
    //   val day = info.getAs[String]("day")
    //   val cmsId = info.getAs[Long]("cmsId")
    //   val times = info.getAs[Long]("times")
    //
    //   /**
    //    * Inserting into the database row by row here is not recommended
    //    */
    //
    //   list.append(DayVideoAccessStat(day, cmsId, times))
    // })
    //
    // StatDAO.insertDayVideoAccessTopN(list)
  })
} catch {
  case e: Exception => e.printStackTrace()
}
This line of the code: videoAccessTopNDF.foreachPartition(partitionOfRecords => {
shows no error in IDEA, but at compile time it fails with the error below. I have been searching for a whole night and cannot figure out the cause. Could the instructor please advise?
Error:(153, 25) ambiguous reference to overloaded definition,
both method foreachPartition in class Dataset of type (func: org.apache.spark.api.java.function.ForeachPartitionFunction[org.apache.spark.sql.Row])Unit
and method foreachPartition in class Dataset of type (f: Iterator[org.apache.spark.sql.Row] => Unit)Unit
match argument types (Object => Unit)
videoAccessTopNDF.foreachPartition(partitionOfRecords => {
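For context, this ambiguity typically appears when compiling against Scala 2.12 (e.g. with Spark 2.4+), because an untyped lambda can satisfy both overloads the compiler lists: the Java `ForeachPartitionFunction[Row]` one and the Scala `Iterator[Row] => Unit` one. A common workaround, sketched below under the assumption that `videoAccessTopNDF` is a `DataFrame` (i.e. `Dataset[Row]`), is either to annotate the parameter type explicitly so the Scala overload is an exact match, or to call `foreachPartition` on the underlying RDD, which has only a single overload:

```scala
import org.apache.spark.sql.Row

// Option 1: an explicit parameter type makes the Scala overload
// (f: Iterator[Row] => Unit) an exact match, resolving the ambiguity
videoAccessTopNDF.foreachPartition((partitionOfRecords: Iterator[Row]) => {
  partitionOfRecords.foreach(info => {
    // process each Row here
  })
})

// Option 2: drop to the RDD API, where foreachPartition is not overloaded
videoAccessTopNDF.rdd.foreachPartition(partitionOfRecords => {
  partitionOfRecords.foreach(info => {
    // process each Row here
  })
})
```

Either variant should compile cleanly; which to prefer is mostly a style choice, though Option 1 keeps you on the Dataset API.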