Hello Anisha,

Spark Streaming can be used to perform near real-time analytics on live data. Hadoop is well suited to batch queries that gather insights from data at rest, and it works well for use cases like analyzing vast amounts of customer data for interesting patterns. But not every workload can wait for a batch query, so Spark Streaming brings streaming computation to the Hadoop ecosystem: processing happens in near real time on data streamed from a source. It is an extension of the core Spark API that enables high-throughput, fault-tolerant stream processing of live data streams. Data can be ingested from many sources like Kafka, Flume, Twitter, ZeroMQ, or plain TCP sockets, and processed using complex algorithms expressed with high-level functions like map, reduce, join, and window. The processed data can then be pushed out to file systems, databases, and live dashboards. And since Spark Streaming is built on top of Spark, Spark's built-in machine learning library (MLlib) and graph processing engine (GraphX) can be applied to the data streams as well.