
Slick 2.1.0 GroupBy and get all columns


For the following table structure:

id externalId name version
1  10         n1   1
2  65         n2   2
3  10         n3   2
4  77         n4   1

I am trying to fetch all entries (all columns) that have the maximum version per externalId. The expected result is:

id externalId name version
2  65         n2   2
3  10         n3   2
4  77         n4   1

To do this, I defined the following Slick query:

val resulting = myTableEntries
 .groupBy(x => x.externalID)
 .map { 
   case (id, group) => 
     (id, group.map(_.version).max) 
 }

How can I get back all the columns, and not just the id and the version?

What I actually need is the Slick version of the following SQL:

select myTable.id, myTable.name, myTable.externalId, myTable.version
    from MyTable myTable
    where version =
        (select max(version) from MyTable myTable1 where myTable.externalId = myTable1.externalId)
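
For reference, a Slick 2.1 table definition matching this structure might look roughly like the sketch below (class, column, and driver names are only assumptions based on the columns shown above):

import scala.slick.driver.JdbcDriver.simple._ // substitute the driver for your actual database

// Sketch of a table mapping for the columns shown above
class MyTableEntries(tag: Tag) extends Table[(Int, Int, String, Int)](tag, "MyTable") {
  def id         = column[Int]("id", O.PrimaryKey)
  def externalID = column[Int]("externalId")
  def name       = column[String]("name")
  def version    = column[Int]("version")
  def * = (id, externalID, name, version)
}
val myTableEntries = TableQuery[MyTableEntries]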

2 Answers

  • 2

    Based on this example and this answer, I think you need something like the following:

    val maxVersionByExternalID = myTableEntries
      .groupBy(x => x.externalID)
      .map {
        case (externalID, group) =>
          (externalID, group.map(_.version).max)
      }

    // We now have pairs of externalIDs and max versions, _not yet run_, so we can join
    val resulting = (myTableEntries join maxVersionByExternalID on (
        (entries, maxPair) => // join condition
          entries.externalID === maxPair._1 && entries.version === maxPair._2))
      .map { case (entries, maxPair) => entries } // keep only the original columns
      .run // materialize at the end
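
    One note on running this: in Slick 2.x, .run needs an implicit Session in scope, so the definitions above are usually evaluated inside a db.withSession block. A minimal sketch, assuming a db: Database configured elsewhere (e.g. via Database.forURL):

    // Sketch only: `db` is assumed to be set up elsewhere.
    db.withSession { implicit session =>
      // The `val resulting = ... .run` definition above has to be evaluated
      // while this implicit Session is in scope (e.g. moved inside this block).
      val rows = resulting
      rows.foreach(println)
    }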
    
  • 0

    I have not worked with Slick, but looking at the Scala code and your requirement, the following approach comes to mind.

    Another way to solve this is to use the rank function in SQL: give a rank to every record within a group (externalID in your case), ranking over version.

    Once you have that, you can select all the records with rank 1 (a rough sketch of this in code is below, after the linked question).

    See the following question for how this kind of case can be solved:

    Spark two level aggregation
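
    Slick 2.1's lifted embedding has no built-in support for window functions, so one way to try this idea from Slick is a plain SQL query via the sql interpolator. This is only a rough sketch under a few assumptions: the database must support rank() over (...), the Row case class is hypothetical, and db is a Database assumed to be configured elsewhere.

    import scala.slick.jdbc.{GetResult, StaticQuery => Q}
    import Q.interpolation

    // Hypothetical row type, just for illustration
    case class Row(id: Int, externalId: Int, name: String, version: Int)
    implicit val getRow = GetResult(r => Row(r.<<, r.<<, r.<<, r.<<))

    db.withSession { implicit session =>
      // rank() assigns 1 to the highest version within each externalId group;
      // keeping only rank 1 yields the full rows the question asks for
      val top = sql"""
        select id, externalId, name, version from (
          select id, externalId, name, version,
                 rank() over (partition by externalId order by version desc) as rnk
          from MyTable
        ) ranked
        where rnk = 1
      """.as[Row].list
      top.foreach(println)
    }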
