In Hadoop, a block is the physical representation of data: the smallest unit of data that HDFS stores on disk.
An InputSplit is the logical representation of the data in a block. It is used primarily by MapReduce programs and other data processing frameworks to decide how the input is divided among tasks, as in the sketch below.
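As a minimal sketch of that relationship (assuming a Hadoop client on the classpath and a hypothetical input directory `/data/input`), the code below asks a `TextInputFormat` how it would carve a job's input into logical splits:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

import java.util.List;

public class SplitInspector {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "split-inspector");
        FileInputFormat.addInputPath(job, new Path("/data/input"));  // hypothetical path

        // The input format computes the logical splits; the physical blocks
        // stay on HDFS untouched.
        TextInputFormat inputFormat = new TextInputFormat();
        List<InputSplit> splits = inputFormat.getSplits(job);

        for (InputSplit split : splits) {
            // Each split is typically processed by one map task.
            System.out.println(split + " length=" + split.getLength());
        }
    }
}
```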
The HDFS block size is 128 MB by default, but you can change it to suit your needs. All blocks of a file are the same size except the last one, which can be the same size or smaller.
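The block size can be changed cluster-wide through the `dfs.blocksize` property in `hdfs-site.xml`, or per client and even per file. The following is a rough sketch under assumed defaults (the path `/data/example.txt` and the 256 MB value are illustrative only):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Override the block size for this client only (256 MB).
        conf.setLong("dfs.blocksize", 256L * 1024 * 1024);

        FileSystem fs = FileSystem.get(conf);

        // Alternatively, pass an explicit block size for a single file.
        FSDataOutputStream out = fs.create(
                new Path("/data/example.txt"),   // hypothetical path
                true,                            // overwrite if present
                4096,                            // I/O buffer size
                (short) 3,                       // replication factor
                256L * 1024 * 1024);             // block size for this file
        out.writeUTF("hello hdfs");
        out.close();
        fs.close();
    }
}
```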
By default, the InputSplit size is approximately equal to the block size, because FileInputFormat derives the split size from the block size unless the configured minimum or maximum split size overrides it.
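The sketch below mirrors how the new-API `FileInputFormat` computes the split size; the 64 MB and 512 MB bounds are example values, not defaults:

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitSizeSketch {
    // Mirrors FileInputFormat.computeSplitSize(): the split size defaults to
    // the block size unless the min/max split settings override it.
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();

        // These setters write mapreduce.input.fileinputformat.split.minsize / .maxsize.
        FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);   // 64 MB
        FileInputFormat.setMaxInputSplitSize(job, 512L * 1024 * 1024);  // 512 MB

        long blockSize = 128L * 1024 * 1024;  // assume the default 128 MB block
        System.out.println("Split size: "
                + computeSplitSize(blockSize, 64L * 1024 * 1024, 512L * 1024 * 1024));
        // Prints 134217728 (128 MB): with these bounds the split size still
        // equals the block size.
    }
}
```

Raising the minimum split size above the block size produces larger (fewer) splits, while lowering the maximum below the block size produces smaller (more) splits; otherwise each split lines up with a block.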