[Solved] Why is the result of ‘0.3 * 3’ 0.89999999999999 in Scala? [duplicate]

Floating-point arithmetic is a reasonably complex subject. This has to do with the binary representation of floating-point numbers, which (obviously) cannot represent every possible number exactly; that can introduce errors in individual operations, and yes, these errors can propagate. Here is a link on the subject, although it isn’t … Read more
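The effect is easy to see in the REPL. A minimal sketch of a common workaround, using the standard library’s BigDecimal for exact decimal arithmetic:

scala> 0.3 * 3
res0: Double = 0.8999999999999999

scala> BigDecimal("0.3") * 3   // decimal representation, so no binary rounding error
res1: scala.math.BigDecimal = 0.9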

[Solved] Get the highest price with the smaller ID when two IDs have the same highest price in Scala

Try this:

scala> val df = Seq((4, 30), (2, 50), (3, 10), (5, 30), (1, 50), (6, 25)).toDF("id", "price")
df: org.apache.spark.sql.DataFrame = [id: int, price: int]

scala> df.show
+---+-----+
| id|price|
+---+-----+
|  4|   30|
|  2|   50|
|  3|   10|
|  5|   30|
|  1|   50|
|  6|   25|
+---+-----+

scala> df.sort(desc("price"), asc("id")).show
+---+-----+
| id|price|
+---+-----+
|  1|   50|
|  2|   50|
|  4| … Read more
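If only the single winning row is needed (the highest price, breaking ties by the smaller id), a possible follow-up (my addition, not part of the truncated answer) is to take the first row of the sorted result:

scala> df.sort(desc("price"), asc("id")).limit(1).show
+---+-----+
| id|price|
+---+-----+
|  1|   50|
+---+-----+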

[Solved] How to split a string in Scala?

Scala’s String split method takes a regular expression, and { is a special character in regular expressions, used for quantifying matched patterns. If you want to treat it as a literal, you need to escape it with a backslash, \\{:

val s = """word, {"..Json Structure..."}"""
// s: String = word, {"..Json Structure..."}

s.split(", \\{") // … Read more
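Put together, a minimal runnable sketch (the JSON text is just a placeholder):

val s = """word, {"..Json Structure..."}"""
val parts = s.split(", \\{")   // the \\ makes { a literal brace in the regex
println(parts.mkString(" | ")) // word | "..Json Structure..."}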

[Solved] Binary tree creation in Scala

You probably want something like this:

trait BinTree[+A]
case object Leaf extends BinTree[Nothing]
case class Branch[+A](node: A, left: BinTree[A], right: BinTree[A]) extends BinTree[A]

def buildTree[A](list: List[A]): BinTree[A] = list match {
  case Nil => Leaf
  case x :: xs =>
    val (left, right) = xs.splitAt(xs.length / 2)
    Branch(x, buildTree(left), buildTree(right))
}

But you really need to get … Read more
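For example (a usage illustration of my own): the head of the list becomes the root, and the tail is split evenly between the two subtrees:

buildTree(List(1, 2, 3, 4, 5))
// Branch(1,Branch(2,Leaf,Branch(3,Leaf,Leaf)),Branch(4,Leaf,Branch(5,Leaf,Leaf)))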

[Solved] Spark/Scala: How to read data from a .dat file, transform it and finally store it in HDFS

Please find the solution:

val rdd = sc.textFile("/path/Test.dat")
val rddmap = rdd
  .map(i => i.split(" "))
  .map(i => (i(1), i(2)))
  .sortByKey()
  .map(i => i._1 + "%$" + i._2)
rddmap.repartition(1).saveAsTextFile("/path/TestOut1.dat")

Output:

Jasper%$Pinto
Jhon%$Ward
Shally%$Stun
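Note that the map splits on spaces and keeps fields 1 and 2, so Test.dat presumably holds whitespace-separated lines whose second and third fields are the names; something like this (the leading ID column is an assumption):

101 Jasper Pinto
102 Shally Stun
103 Jhon Ward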

[Solved] Scala: how do you group elements in a map?

If this is what you’re aiming for:

List(List(r1), List(r2), List(r3 chain, r4), List(r5 chain, r6 chain, r7))

then here is a possibility:

val rules = List("r1", "r2", "r3 chain", "r4", "r5 chain", "r6 chain", "r7")
val (groups, last) = rules.foldLeft(List[List[String]](), List[String]()) {
  case ((groups, curGroup), rule) if rule.contains("chain") => (groups, rule :: curGroup)
  case ((groups, … Read more
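The fold is cut off above, so here is a minimal complete sketch of the same idea; the non-chain case and the final reassembly are my reconstruction, not the original author’s code:

val rules = List("r1", "r2", "r3 chain", "r4", "r5 chain", "r6 chain", "r7")
val (groups, last) = rules.foldLeft((List[List[String]](), List[String]())) {
  // a "chain" rule extends the group currently being built
  case ((done, cur), rule) if rule.contains("chain") => (done, rule :: cur)
  // any other rule closes the current group
  case ((done, cur), rule) => ((rule :: cur).reverse :: done, Nil)
}
// append a trailing unclosed group, if any, and restore the original order
val result = (if (last.isEmpty) groups else last.reverse :: groups).reverse
// List(List(r1), List(r2), List(r3 chain, r4), List(r5 chain, r6 chain, r7))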

[Solved] How to add columns using Scala

val grouped = df.groupBy($"id").count
val res = df.join(grouped, Seq("id"))
  .withColumnRenamed("count", "repeatedcount")

groupBy gives the count for each id; joining that back to the original DataFrame attaches the count to every row with that id.
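A small self-contained illustration, assuming spark.implicits._ is in scope (as it is in spark-shell); the toy data is mine, and the row order in the output may vary:

val df = Seq((1, "a"), (1, "b"), (2, "c")).toDF("id", "value")
val grouped = df.groupBy($"id").count
df.join(grouped, Seq("id"))
  .withColumnRenamed("count", "repeatedcount")
  .show()
// +---+-----+-------------+
// | id|value|repeatedcount|
// +---+-----+-------------+
// |  1|    a|            2|
// |  1|    b|            2|
// |  2|    c|            1|
// +---+-----+-------------+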

[Solved] Inner join query with where in Scala Slick

For an inner join you can use a Slick applicative join with a filter clause. For example:

val query = for {
  (address, userAddressMapping) <- Address join UserAddressMapping on (_.id === _.addressId)
  if userAddressMapping.userId === 1
} yield (address.id, address.name)

dbConfig.run(query.result)
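The query presupposes Slick table definitions for Address and UserAddressMapping, which the answer doesn’t show; a hypothetical sketch (table and column names are my guesses, and the H2 profile stands in for whatever database is actually used):

import slick.jdbc.H2Profile.api._

class Addresses(tag: Tag) extends Table[(Int, String)](tag, "address") {
  def id   = column[Int]("id", O.PrimaryKey)
  def name = column[String]("name")
  def *    = (id, name)
}
val Address = TableQuery[Addresses]

class UserAddressMappings(tag: Tag) extends Table[(Int, Int)](tag, "user_address_mapping") {
  def userId    = column[Int]("user_id")
  def addressId = column[Int]("address_id")
  def *         = (userId, addressId)
}
val UserAddressMapping = TableQuery[UserAddressMappings]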

[Solved] Finding average value in Spark/Scala gives blank result

I would suggest you use the sqlContext API and the schema you have defined:

val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("delimiter", "\\t")
  .schema(schema)
  .load("path to your text file")

where the schema is:

val schema = StructType(Seq(
  StructField("ID", IntegerType, true),
  StructField("col1", DoubleType, true),
  StructField("col2", IntegerType, true),
  StructField("col3", DoubleType, true),
  StructField("col4", DoubleType, true),
  StructField("col5", DoubleType, true),
  StructField("col6", DoubleType, … Read more
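Once the file loads with the explicit schema, the average itself comes from the built-in avg aggregate; which columns to average is my assumption:

import org.apache.spark.sql.functions.avg

df.select(avg("col1")).show()             // overall average of col1
df.groupBy("ID").agg(avg("col3")).show()  // per-ID average of col3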