Class PartitionSizeIterator

java.lang.Object
  org.apache.cassandra.spark.sparksql.PartitionSizeIterator

All Implemented Interfaces:
java.io.Closeable, java.lang.AutoCloseable, org.apache.spark.sql.connector.read.PartitionReader<org.apache.spark.sql.catalyst.InternalRow>

public class PartitionSizeIterator
extends java.lang.Object
implements org.apache.spark.sql.connector.read.PartitionReader<org.apache.spark.sql.catalyst.InternalRow>

Wrapper iterator around IndexIterator that reads all Index.db files and returns SparkSQL rows containing all partition keys and the associated on-disk uncompressed and compressed sizes.
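A PartitionReader is consumed with the standard next()/get() loop and closed when exhausted. The sketch below illustrates that pattern without a Spark dependency: Reader, SizeRow, and FakeSizeReader are hypothetical stand-ins for PartitionReader, InternalRow, and this class, and the sample rows are invented for illustration.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Iterator;
import java.util.List;

// Minimal stand-in for org.apache.spark.sql.connector.read.PartitionReader<T>.
interface Reader<T> extends Closeable {
    boolean next() throws IOException; // advance to the next record, false when exhausted
    T get();                           // return the current record
}

// Hypothetical row shape: a partition key plus the uncompressed and compressed
// on-disk sizes, mirroring the schema described for partitionSizeStructType().
record SizeRow(String partitionKey, long uncompressed, long compressed) {}

// Hypothetical reader that serves pre-built rows, standing in for the real
// iterator over Index.db entries.
class FakeSizeReader implements Reader<SizeRow> {
    private final Iterator<SizeRow> it;
    private SizeRow current;

    FakeSizeReader(List<SizeRow> rows) { this.it = rows.iterator(); }

    @Override public boolean next() {
        if (!it.hasNext()) return false;
        current = it.next();
        return true;
    }

    @Override public SizeRow get() { return current; }

    @Override public void close() { /* the real reader would release file handles here */ }
}

public class Demo {
    public static void main(String[] args) throws IOException {
        long totalCompressed = 0;
        try (FakeSizeReader reader = new FakeSizeReader(List.of(
                new SizeRow("key1", 2048, 512),
                new SizeRow("key2", 4096, 1024)))) {
            while (reader.next()) {        // standard PartitionReader consumption loop
                SizeRow row = reader.get();
                totalCompressed += row.compressed();
            }
        }
        System.out.println(totalCompressed); // 1536
    }
}
```

The try-with-resources block works because PartitionReader extends java.io.Closeable, which is how Spark guarantees the reader's resources are released after the partition is drained.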
Constructor Summary

Constructor                                   Description
PartitionSizeIterator(int partitionId, DataLayer dataLayer)

Method Summary

Modifier and Type                             Method    Description
void                                          close()
org.apache.spark.sql.catalyst.InternalRow     get()
boolean                                       next()    The expected schema is defined in DataLayer.partitionSizeStructType().
Constructor Detail

PartitionSizeIterator

public PartitionSizeIterator(int partitionId,
                             @NotNull DataLayer dataLayer)

Method Detail

next

public boolean next()
              throws java.io.IOException

The expected schema is defined in DataLayer.partitionSizeStructType(). It consists of the Cassandra partition keys, appended with the columns "uncompressed" and "compressed".

Specified by:
next in interface org.apache.spark.sql.connector.read.PartitionReader<org.apache.spark.sql.catalyst.InternalRow>
Throws:
java.io.IOException

get

public org.apache.spark.sql.catalyst.InternalRow get()

Specified by:
get in interface org.apache.spark.sql.connector.read.PartitionReader<org.apache.spark.sql.catalyst.InternalRow>

close

public void close()
           throws java.io.IOException

Specified by:
close in interface java.lang.AutoCloseable
Specified by:
close in interface java.io.Closeable
Throws:
java.io.IOException