SQL, Spark, Scala, Java, Python
Spark Enriched API
Access all tables as Spark DataFrames: All tables in SnappyData are accessible as Spark DataFrames without any copying when the data is collocated with Spark. Data is stored in a Spark-native column format, so even when data must be moved to Spark compute nodes it incurs little to no serialization overhead.
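As a minimal sketch of this, assuming a running SnappyData cluster and an existing table named `customers` (the table name, columns, and session setup here are illustrative, not part of the original text):

```scala
import org.apache.spark.sql.{SnappySession, SparkSession}
import org.apache.spark.sql.functions.col

// Build a regular Spark session, then wrap its context in a SnappySession,
// which exposes SnappyData tables through the standard DataFrame API.
val spark = SparkSession.builder()
  .appName("snappy-dataframe-sketch")
  .getOrCreate()
val snappy = new SnappySession(spark.sparkContext)

// "customers" is a hypothetical SnappyData table; table() returns an
// ordinary Spark DataFrame, so the full DataFrame API applies directly.
val customers = snappy.table("customers")
customers
  .filter(col("country") === "DE")
  .groupBy(col("city"))
  .count()
  .show()
```

Because `table()` hands back a plain DataFrame, no explicit conversion or copy step is needed before applying Spark transformations.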
Open and Flexible APIs: Use all of Spark's rich APIs, such as Streaming and MLlib. For instance, you can store trained ML models in the Snappy store and continuously score new data as it streams in.
Stream processing using declarative SQL: SnappyData introduces declarative SQL constructs for stream processing, allowing streams to be queried and joined just like tables.
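A hedged sketch of what such a declarative stream definition can look like; the table name, schema, provider name, and option keys below are illustrative and depend on the SnappyData release and the configured source:

```sql
-- Register a stream over a Kafka topic as a SQL-addressable stream table.
-- (kafka_stream, topics, and kafkaParams are assumed option names here.)
CREATE STREAM TABLE trades_stream (
    symbol VARCHAR(10),
    price  DECIMAL(10, 2),
    ts     TIMESTAMP
) USING kafka_stream
OPTIONS (
    topics 'trades',
    kafkaParams 'metadata.broker.list->localhost:9092'
);

-- Once registered, the stream is queried like any other table,
-- e.g. as a continuous aggregation over the incoming events.
SELECT symbol, AVG(price)
FROM trades_stream
GROUP BY symbol;
```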
ANSI-compatible SQL: SnappyData extends Spark SQL by adding support for inserts, updates, deletes, distributed transactions, and indexes.
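To illustrate these extensions, a short sketch of the mutation and index statements that plain Spark SQL historically lacked; the table name, schema, and `USING row` storage clause are illustrative assumptions:

```sql
-- A mutable row table (column tables also exist; "row" is chosen here
-- because row tables are the natural target for point updates/deletes).
CREATE TABLE orders (
    order_id INT PRIMARY KEY,
    status   VARCHAR(20),
    amount   DECIMAL(12, 2)
) USING row;

-- Secondary index to speed up lookups by status.
CREATE INDEX orders_status_idx ON orders (status);

-- DML statements not available in vanilla Spark SQL at the time:
INSERT INTO orders VALUES (1, 'NEW', 99.50);
UPDATE orders SET status = 'SHIPPED' WHERE order_id = 1;
DELETE FROM orders WHERE status = 'CANCELLED';
```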