PySpark Interview Questions and Answers
This section collects commonly asked PySpark interview questions with detailed answers to sharpen your knowledge and support interview preparation.
1. What are the industrial benefits of PySpark?
Answer» These days, almost every industry makes use of big data to evaluate where it stands and how it can grow, and Apache Spark is one of the most widely used engines for that purpose. PySpark brings Spark's distributed processing to Python, which is why it is adopted across domains such as e-commerce, entertainment, healthcare, travel and finance for large-scale data processing, real-time analytics and machine learning.
2. What is PySpark UDF?
Answer» UDF stands for User Defined Function. In PySpark, a UDF is created by writing a Python function and wrapping it with PySpark SQL's udf() method, after which it can be used on DataFrames or in SQL. UDFs are generally created when the functionality we need is not available in PySpark's built-in functions and we have to apply our own logic to the data. Once created, a UDF can be reused across any number of SQL expressions or DataFrames.
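A minimal sketch of defining and applying a UDF; the DataFrame, column name and capitalize function here are purely illustrative:

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf_example").getOrCreate()

# Illustrative DataFrame with a single "name" column
df = spark.createDataFrame([("alice",), ("bob",)], ["name"])

# Plain Python function wrapped as a UDF returning a string
def capitalize(value):
    return value.capitalize()

capitalize_udf = udf(capitalize, StringType())

# Apply the UDF to a DataFrame column
df.withColumn("name_capitalized", capitalize_udf(df["name"])).show()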
3. What are the types of PySpark's shared variables and why are they useful?
Answer» Whenever PySpark performs a transformation operation using filter(), map() or reduce(), it runs on a remote node using copies of the variables that are shipped with the tasks. These copies are not reusable and cannot be shared across tasks, because they are never returned to the driver. To solve this problem of reusability and sharing, PySpark provides shared variables. There are two types:
Broadcast variables: Also known as read-only shared variables, these are used for data lookup requirements. The value is cached and made available on all the cluster nodes so that tasks can use it; rather than being shipped with every task, it is distributed to the nodes using efficient broadcast algorithms that reduce communication cost. When an RDD job uses a broadcast variable, PySpark caches the broadcast data on the worker nodes in serialized form and deserializes it before the tasks that need it are executed.
Broadcast variables are created with the broadcast(variable) method of the SparkContext class:
broadcastVar = sc.broadcast([10, 11, 22, 31])
broadcastVar.value   # access the broadcast value
An important point is that the value is not sent to the tasks when broadcast() is called; it is sent only when an executor first requires it.
Accumulator variables: These are updatable shared variables. They are updated through associative and commutative operations and are used for counters and sums. PySpark supports numeric accumulators by default and also allows custom accumulator types. Accumulators can be named or unnamed; named accumulators are shown in the Spark web UI, where the Accumulable section lists their aggregated values and the Accumulator column of the Tasks table shows the per-task contributions.
In PySpark, an accumulator is created with SparkContext.accumulator(), as in the example below (the typed LongAccumulator, DoubleAccumulator and CollectionAccumulator classes for long, double and collection data belong to Spark's Scala/Java API):
ac = sc.accumulator(0)
sc.parallelize([2, 23, 1]).foreach(lambda x: ac.add(x))
ac.value   # 26
4. What is SparkSession in PySpark?
Answer» SparkSession is the entry point to PySpark and has been the preferred replacement for SparkContext since PySpark 2.0. It acts as the starting point for accessing all PySpark functionality related to RDDs, DataFrames, Datasets, etc., and is a unified API that replaces SQLContext, StreamingContext, HiveContext and the other contexts. SparkSession internally creates a SparkContext and a SparkConf based on the details provided to it. A SparkSession is created using the builder pattern.
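A minimal sketch of creating a SparkSession with the builder pattern; the master URL and application name are placeholders:

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local[2]")      # run locally on 2 cores; use a cluster URL in production
    .appName("example_app")
    .getOrCreate()           # reuses an existing session if one is already active
)

print(spark.sparkContext.appName)   # the SparkContext created internally by the session
spark.stop()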
5. What do you understand about PySpark DataFrames?
Answer» A PySpark DataFrame is a distributed collection of well-organized data, equivalent to a table in a relational database, with the data placed into named columns. PySpark DataFrames are better optimised than data frames in R or pandas in Python. They can be created from different sources such as Hive tables, structured data files, existing RDDs and external databases. The data in a PySpark DataFrame is distributed across the machines in the cluster, and operations performed on it run in parallel on all of them. DataFrames can handle large collections of structured or semi-structured data, up to the range of petabytes.
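A short sketch of creating DataFrames from an in-memory list and from an existing RDD; the column names and values are illustrative:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dataframe_example").getOrCreate()

# From a local collection
df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

# From an existing RDD of tuples
rdd = spark.sparkContext.parallelize([(3, "carol"), (4, "dave")])
df2 = spark.createDataFrame(rdd, ["id", "name"])

df.union(df2).show()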
6. Is PySpark faster than pandas?
Answer» PySpark supports parallel execution of operations in a distributed environment, i.e. across different cores and different machines, which pandas does not. This is why PySpark is generally faster than pandas for large datasets.
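A small sketch contrasting the two: the pandas aggregation runs on the driver on a single machine, while the same aggregation on a Spark DataFrame is planned and executed in parallel by the executors; the column names and data are illustrative:

import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pandas_vs_pyspark").getOrCreate()

pdf = pd.DataFrame({"dept": ["a", "b", "a"], "salary": [10, 20, 30]})

# pandas: executed locally on the driver
print(pdf.groupby("dept")["salary"].mean())

# PySpark: the same aggregation executed in parallel on the cluster
sdf = spark.createDataFrame(pdf)
sdf.groupBy("dept").avg("salary").show()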
7. What are the advantages of PySpark RDD?
Answer» PySpark RDDs have the following advantages:
In-memory processing: RDDs load and keep data in memory, which is much faster than disk-based processing.
Immutability: once created, an RDD cannot be changed; transformations always produce new RDDs, which makes computations easier to reason about.
Fault tolerance: RDDs track their lineage, so lost partitions can be recomputed automatically when a node fails.
Lazy evaluation: transformations are not executed until an action is called, which lets Spark optimise the overall execution plan.
Partitioning: data is split into partitions that are processed in parallel across the cluster.
The sketch below illustrates partitioning, lazy evaluation and caching.
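A small illustrative sketch; the data is arbitrary:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd_advantages").getOrCreate()
sc = spark.sparkContext

rdd = sc.parallelize(range(10))       # data is partitioned across the cluster
squared = rdd.map(lambda x: x * x)    # lazy: nothing is computed yet
squared.cache()                       # keep the results in memory once computed

print(squared.count())   # first action triggers the computation -> 10
print(squared.sum())     # served from the in-memory cache -> 285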
8. What are the different cluster manager types supported by PySpark?
Answer» A cluster manager is a cluster-mode platform that helps run Spark by providing resources to the worker nodes according to their requirements. Consider a master node and multiple worker nodes in a cluster: with the help of the cluster manager, the master node provides the worker nodes with resources such as memory and processor allocation, depending on what each node needs. PySpark supports the following cluster manager types:
Standalone: Spark's simple built-in cluster manager.
Apache Mesos: a general-purpose cluster manager that can also run Hadoop MapReduce and other applications.
Hadoop YARN: the resource manager of Hadoop 2 and later.
Kubernetes: an open-source system for automating the deployment, scaling and management of containerised applications.
local: not a cluster manager as such, but a mode for running Spark on a single machine, which is useful for development and testing.
The cluster manager is selected through the master URL, as sketched below.
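A brief sketch of how the cluster manager is chosen via the master URL, either in code or on the spark-submit command line; the host names and ports are placeholders:

from pyspark.sql import SparkSession

# Local mode: everything runs in a single JVM on this machine
spark = SparkSession.builder.master("local[4]").appName("cluster_manager_demo").getOrCreate()

# Standalone cluster: point at the standalone master instead, e.g.
#   SparkSession.builder.master("spark://master-host:7077")
# YARN and Kubernetes are usually selected on the command line, e.g.
#   spark-submit --master yarn my_app.py
#   spark-submit --master k8s://https://kube-apiserver:6443 my_app.py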
9. Does PySpark provide a machine learning API?
Answer» Similar to Spark, PySpark provides a machine learning API known as MLlib. It supports the common families of ML algorithms, including:
mllib.classification: binary and multiclass classification, e.g. logistic regression, naive Bayes and SVM.
mllib.clustering: unsupervised clustering, e.g. k-means.
mllib.regression: regression algorithms, e.g. linear regression.
mllib.recommendation: collaborative filtering for recommender systems (ALS).
mllib.fpm: frequent pattern mining for frequent itemsets and association rules.
mllib.linalg: linear-algebra utilities (vectors and matrices) used by the other packages.
A minimal example using the newer DataFrame-based pyspark.ml API is shown below.
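A minimal sketch using the DataFrame-based pyspark.ml API; the tiny training set is illustrative:

from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("mllib_example").getOrCreate()

# Tiny illustrative training set of (label, features) rows
train = spark.createDataFrame(
    [(0.0, Vectors.dense([0.0, 1.0])),
     (1.0, Vectors.dense([1.0, 0.0]))],
    ["label", "features"],
)

model = LogisticRegression(maxIter=10).fit(train)
model.transform(train).select("label", "prediction").show()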
10. What are RDDs in PySpark?
Answer» RDD stands for Resilient Distributed Dataset. RDDs are the elements used for running and operating on multiple nodes to perform parallel processing on a cluster. RDDs are immutable: once an RDD is created it cannot be modified, which makes it safe to process in parallel. RDDs are also fault-tolerant, meaning that they are recovered automatically whenever a failure happens. Multiple operations can be performed on RDDs, and they fall into two types:
Transformations: operations such as map(), filter() and flatMap() that take an RDD and produce a new RDD; they are evaluated lazily.
Actions: operations such as count(), collect() and reduce() that trigger the actual computation and return a result to the driver.
For example, a filter() transformation can keep only the elements of a list that contain the word "interview", producing ["interview", "interviewbit"], while the count() action returns the number of elements in an RDD, e.g. Count of elements in RDD -> 5. Reconstructed sketches of both examples are shown below.
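The code that originally accompanied this answer is missing, so the following is a reconstruction of what it likely showed; the input lists are assumptions chosen to match the stated outputs:

from pyspark import SparkContext

sc = SparkContext("local", "RDD examples")

# Transformation + action: keep only the words containing "interview"
words = sc.parallelize(["big data", "interview", "spark", "interviewbit", "python"])
filtered = words.filter(lambda word: "interview" in word)
print(filtered.collect())   # ['interview', 'interviewbit']

# Action: count the elements of an RDD holding 5 elements
numbers = sc.parallelize([1, 2, 3, 4, 5])
print("Count of elements in RDD -> %i" % numbers.count())   # Count of elements in RDD -> 5

sc.stop()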
11. What are PySpark serializers?
Answer» Serialization is used for performance tuning in Spark. All data that is sent over the network, written to disk or persisted in memory has to be serialized, and PySpark provides serializers for this purpose. It supports two serializers:
PickleSerializer: serializes objects using Python's pickle protocol; it can handle almost any Python object and pickle-based serialization is what PySpark uses by default, though it is comparatively slow.
MarshalSerializer: uses Python's marshal module; it is faster than PickleSerializer but supports fewer data types.
Consider an example of serialization that makes use of MarshalSerializer:
# -- serializing.py --
from pyspark.context import SparkContext
from pyspark.serializers import MarshalSerializer

# Initialize the Spark context with the Marshal serializer
sc = SparkContext("local", "Marshal Serialization", serializer=MarshalSerializer())
print(sc.parallelize(list(range(1000))).map(lambda x: 3 * x).take(5))
sc.stop()
When we run the file using the command:
$SPARK_HOME/bin/spark-submit serializing.py
the output is the first five numbers multiplied by 3:
[0, 3, 6, 9, 12]
12. Why do we use PySpark SparkFiles?
Answer» PySpark's SparkFiles is used for loading files onto the Spark application. Files are added through SparkContext with the sc.addFile() method, and SparkFiles.get() is then used to get the path of a file on the workers, i.e. to resolve the paths of files that were added with sc.addFile().
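A short sketch of the workflow; the file path is a placeholder:

from pyspark import SparkContext, SparkFiles

sc = SparkContext("local", "sparkfiles_example")

# Ship a local (or remote) file to every node of the application
sc.addFile("/tmp/lookup.csv")   # placeholder path

# On the driver or any executor, resolve the local path of the shipped file
path = SparkFiles.get("lookup.csv")
print(path)

sc.stop()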
13. What is PySpark SparkContext?
Answer» PySpark SparkContext is the initial entry point to Spark functionality. It represents the connection to a Spark cluster and can be used for creating RDDs (Resilient Distributed Datasets) and broadcasting variables on the cluster. When we run a Spark application, a driver program with the main function is started, and the SparkContext we define there gets initiated. The driver program then performs its operations inside the executors on the worker nodes. Internally, a JVM is launched using Py4J, which in turn creates a JavaSparkContext. In the PySpark shell a default SparkContext is already available as "sc", so a new SparkContext should not be created there.
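A minimal sketch of creating and using a SparkContext in a standalone script; the master URL and application name are placeholders:

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local[2]").setAppName("sparkcontext_example")
sc = SparkContext(conf=conf)

rdd = sc.parallelize([1, 2, 3, 4])
print(rdd.map(lambda x: x + 1).collect())   # [2, 3, 4, 5]

sc.stop()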
14. What are the advantages and disadvantages of PySpark?
Answer» Advantages of PySpark:
Parallelised code for simple problems can be written in a simple, Pythonic way.
Node failures and synchronization are handled by the framework rather than by the developer.
Many useful algorithms are already implemented in Spark's built-in libraries (Spark SQL, MLlib, Streaming).
It combines Spark's distributed engine with Python's readability and rich ecosystem of libraries.
Disadvantages of PySpark:
It can be hard to express some problems in the MapReduce style that Spark favours.
Because data has to cross the boundary between the JVM and the Python interpreter, PySpark can be slower than the equivalent Scala or Java code for some workloads.
Errors raised inside the JVM can be harder to trace and debug from Python.
15. What are the characteristics of PySpark?
Answer» There are 4 characteristics of PySpark:
Abstracted nodes: individual worker nodes cannot be addressed directly; the cluster is worked with as a whole.
Spark API: PySpark exposes the Spark API, so Spark's features can be used from Python.
MapReduce model: PySpark is based on Hadoop's MapReduce model, with processing expressed as transformations and actions over distributed data.
Abstracted network: communication between nodes is handled implicitly by Spark; there is no explicit network programming.
16. What is PySpark?
Answer» PySpark is the Python interface to Apache Spark. It is used for working with Spark through APIs written in Python and supports Spark's features such as Spark DataFrames, Spark SQL, Spark Streaming, Spark MLlib and Spark Core. It provides an interactive PySpark shell for analysing structured and semi-structured data in a distributed environment, supports reading data from multiple sources and formats, and exposes RDDs (Resilient Distributed Datasets). PySpark is implemented on top of the py4j library, which lets Python code interact with the JVM. PySpark can be installed from PyPI with the command: pip install pyspark
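As a quick illustration, a minimal first PySpark program after installing the package might look like the sketch below; the application name and data are placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hello_pyspark").getOrCreate()

df = spark.createDataFrame([("alice", 34), ("bob", 45)], ["name", "age"])
df.filter(df.age > 40).show()

spark.stop()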