All Classes
| Class | Description |
| AbstractBulkWriterContext |
Abstract base class for BulkWriterContext implementations.
|
| AnalyticsSidecarClient |
|
| BooleanType |
|
| BroadcastableClusterInfo |
Broadcastable wrapper for single cluster with ZERO transient fields to optimize Spark broadcasting.
|
| BroadcastableClusterInfoGroup |
Broadcastable wrapper for coordinated writes with ZERO transient fields to optimize Spark broadcasting.
|
| BroadcastableJobInfo |
Broadcastable wrapper for job information with ZERO transient fields to optimize Spark broadcasting.
|
| BroadcastableSchemaInfo |
Broadcastable wrapper for schema information with ZERO transient fields to optimize Spark broadcasting.
|
| BroadcastableTableSchema |
Broadcastable wrapper for TableSchema with ZERO transient fields to optimize Spark broadcasting.
|
| BroadcastableTokenPartitioner |
Broadcastable wrapper for TokenPartitioner with ZERO transient fields to optimize Spark broadcasting.
|
| BulkSparkConf |
|
| BulkWriterConfig |
Immutable configuration data class for BulkWriter jobs that is safe to broadcast to Spark executors.
|
| BulkWriterContext |
Context for bulk write operations, providing access to cluster, job, schema, and transport information.
|
| BulkWriterContextFactory |
|
| BulkWriterKeyStoreValidation |
|
| BulkWriterTrustStoreValidation |
|
| BulkWriteValidator |
A validator for bulk write results against the target cluster(s).
|
| Bundle |
Bundle represents a set of SSTables bundled according to the bundle size set by clients through the writer option.
|
| BundleManifest |
Manifest of all SSTables in the bundle.
It is a variant of HashMap, for the convenience of JSON serialization.
|
| BundleManifest.Entry |
Manifest of a single SSTable.
componentsChecksum includes the checksums of individual SSTable components;
startToken and endToken represent the token range of the SSTable.
|
| BundleNameGenerator |
Generates names for SSTable bundles.
|
| BundleStorageObject |
Storage object of the uploaded bundle, including the object key and checksum.
|
| BytesType |
|
| CancelJobEvent |
A simple data structure to describe an event that leads to job cancellation.
|
| CassandraBridgeFactory |
|
| CassandraBridgeFactory.VersionSpecificBridge |
|
| CassandraBulkSourceRelation |
|
| CassandraBulkWriterContext |
BulkWriterContext implementation for single cluster write operations.
|
| CassandraCloudStorageTransportContext |
|
| CassandraClusterInfo |
Driver-only implementation of ClusterInfo for single cluster operations.
|
| CassandraClusterInfoGroup |
A group of ClusterInfo.
|
| CassandraContext |
|
| CassandraCoordinatedBulkWriterContext |
BulkWriterContext for coordinated write to multiple clusters.
|
| CassandraDataLayer |
|
| CassandraDataLayer.Serializer |
|
| CassandraDataSink |
|
| CassandraDataSource |
|
| CassandraDataSourceHelper |
A helper class for the CassandraBulkDataSource
|
| CassandraDirectDataTransportContext |
|
| CassandraJobInfo |
|
| CassandraSchemaInfo |
|
| CassandraTableProvider |
|
| CassandraTopologyMonitor |
A monitor that checks whether the Cassandra topology has changed.
|
| CassandraValidation |
A startup validation that checks the connectivity and health of Cassandra
|
| ClientConfig |
|
| ClientConfig.ClearSnapshotStrategy |
|
| ClientConfig.ClearSnapshotStrategy.NoOp |
|
| ClientConfig.ClearSnapshotStrategy.OnCompletion |
|
| ClientConfig.ClearSnapshotStrategy.OnCompletionOrTTL |
|
| ClientConfig.ClearSnapshotStrategy.TTL |
|
| CloudStorageDataTransferApi |
The collection of APIs for cloud-storage-based data transfer
|
| CloudStorageDataTransferApiFactory |
|
| CloudStorageDataTransferApiImpl |
|
| CloudStorageDataTransferApiImpl.ExecutorCreateSliceRetryPolicy |
By default, SidecarClient retries until it receives a 200 HTTP response.
|
| CloudStorageStreamResult |
|
| CloudStorageStreamSession |
StreamSession implementation that is used for streaming bundled SSTables for S3_COMPAT transport option.
|
| ClusterInfo |
Interface for cluster information used in bulk write operations.
|
| CollectionType<EntryType,IntermediateType> |
|
| ColumnType<T> |
|
| ColumnTypes |
|
| ColumnUtil |
|
| CommitCoordinator |
|
| CommitError |
|
| CommitResult |
|
| ConsistencyLevel |
|
| ConsistencyLevel.CL |
|
| CoordinatedCloudStorageDataTransferApi |
|
| CoordinatedCloudStorageDataTransferApiExtension |
Additional APIs to support coordinated writes.
|
| CoordinatedImportCoordinator |
Import coordinator that implements the two-phase import for coordinated writes.
|
| CoordinatedWriteConf |
Data class containing the configurations required for coordinated write.
|
| CoordinatedWriteConf.ClusterConf |
|
| CoordinatedWriteConf.SimpleClusterConf |
|
| CoordinationSignalListener |
A listener interface that receives coordination signals.
|
| CqlTableInfoProvider |
An implementation of the TableInfoProvider interface that leverages the CqlTable to
provide table information
|
| CreatedRestoreSlice |
A serializable wrapper of CreateSliceRequestPayload that also implements hashCode and equals.
|
| CreatedRestoreSlice.ConsistencyLevelCheckResult |
|
| CredentialChangeListener |
A listener interface that is notified on access token changes
|
| DataChunker |
DataChunker helps break data down into chunks according to the configured maxChunkSizeInBytes.
|
| DataLayer |
|
| DataObjectBuilder<T extends DataObjectBuilder<?,?>,R> |
Interface to build data objects
|
| DataTransport |
|
| DataTransportInfo |
|
| DecoratedKey |
|
| Digest |
Interface that represents the computed digest
|
| DigestAlgorithm |
Interface that computes a Digest
|
| DigestAlgorithms |
Represents the user-provided digest type configuration to be used to validate SSTable files during bulk writes
|
| DigestAlgorithmSupplier |
|
| DirectDataTransferApi |
|
| DirectDataTransferApi.RemoteCommitResult |
|
| DirectStreamResult |
|
| DirectStreamSession |
|
| DoubleType |
|
| FastByteOperations |
Utility code to do optimized byte-array comparison.
|
| FastByteOperations.ByteOperations |
|
| FastByteOperations.PureJavaOperations |
|
| FastByteOperations.UnsafeOperations |
|
| FilterUtils |
|
| IBroadcastableClusterInfo |
Minimal interface for cluster information that can be safely broadcast to Spark executors.
|
| ImportCompletionCoordinator |
Import coordinator that waits for the import of all slices to complete.
|
| ImportCoordinator |
A coordinator that conducts the import.
|
| IntegerType |
|
| IOUtils |
|
| JobInfo |
Provides job-specific configuration and information for bulk write operations.
|
| KryoRegister |
Helper class to register classes for Kryo serialization
|
| KryoRegister.V40 |
|
| KryoRegister.V41 |
|
| KryoRegister.V50 |
|
| ListType<T> |
|
| ListType.CQLListEntry<T> |
|
| LocalDataLayer |
Basic DataLayer implementation to read SSTables from local file system.
|
| LocalDataLayer.Serializer |
|
| LocalDataSource |
|
| LocalPartitionSizeSource |
|
| LongType |
|
| MapType<K,V> |
|
| MD5Digest |
An implementation of Digest that represents an MD5 digest
|
| MD5DigestAlgorithm |
|
| MultiClusterContainer<T> |
A container to hold values per cluster.
|
| MultiClusterReplicaAwareFailureHandler<I extends org.apache.cassandra.spark.common.model.CassandraInstance> |
A ReplicaAwareFailureHandler that can handle multiple clusters, including the case of single cluster.
|
| MultiClusterSupport<T> |
Supports storing values per cluster, with iteration and lookup.
|
| MultiDCReplicas |
An SSTable supplier for replicas in multiple data centers.
|
| MultipleReplicas |
Returns a set of SSTables for a token range, with enough replica copies to satisfy the consistency level.
|
| ObjectFailureListener |
A listener interface that is notified on failures processing an object
|
| PartitionedDataLayer |
DataLayer that partitions the token range by the number of Spark partitions
and only lists SSTables overlapping with each range.
|
| PartitionedDataLayer.AvailabilityHint |
|
| PartitionedDataLayer.ReplicaSet |
|
| PartitionSizeIterator |
Wrapper iterator around IndexIterator to read all Index.db files and return SparkSQL
rows containing all partition keys and the associated on-disk uncompressed and compressed sizes.
|
| PartitionSizeTableProvider |
|
| RecordWriter |
|
| RecordWriter.SSTableWriterFactory |
|
| ReplicaAwareFailureHandler<I extends org.apache.cassandra.spark.common.model.CassandraInstance> |
Handles write failures of a single cluster
|
| RingInstance |
|
| SbwJavaSerializer |
Lifted from Kryo 4.0 to fix issues with ObjectInputStream not using the correct class loader.
See https://github.com/EsotericSoftware/kryo/blob/19a6b5edee7125fbaf54c64084a8d0e13509920b/src/com/esotericsoftware/kryo/serializers/JavaSerializer.java
|
| SbwKryoRegistrator |
|
| ScalaFunctions |
|
| ScalaFunctions.Function0Wrapper |
|
| SchemaInfo |
Provides schema information for bulk write operations.
|
| SetType<T> |
|
| SidecarDataTransferApi |
|
| SidecarInstanceFactory |
|
| SidecarProvisionedSSTable |
An SSTable that is streamed from Sidecar
|
| SidecarTableSizeProvider |
Implementation of TableSizeProvider that uses Sidecar's client to calculate the table
size
|
| SidecarValidation |
A startup validation that checks the connectivity and health of Sidecar
|
| SimpleTaskScheduler |
Scheduler for simple and short tasks
|
| SingleReplica |
Returns a set of SSTables for a single Cassandra instance.
|
| SizingFactory |
A factory class that creates Sizing based on the client-supplied configuration
|
| SortedSSTableWriter |
SSTableWriter that expects sorted data.
Note for implementors: the bulk writer always sorts the data in the entire Spark partition before writing.
|
| SparkCellIterator |
|
| SparkRowIterator |
Wrapper iterator around SparkCellIterator to normalize cells into Spark SQL rows
|
| SqlToCqlTypeConverter |
|
| SqlToCqlTypeConverter.Converter<T> |
|
| SqlToCqlTypeConverter.DateConverter |
|
| SqlToCqlTypeConverter.DurationConverter |
|
| SqlToCqlTypeConverter.TimeConverter |
|
| SqlToCqlTypeConverter.TimestampConverter |
|
| SqlToCqlTypeConverter.UdtConverter |
|
| SslValidation |
A startup validation that checks the SSL configuration
|
| SSTableCollector |
Collects SSTables by listing the included directories.
|
| SSTableCollector.SSTableFilesAndRange |
Simple record class containing SSTable component file paths, summary and size
|
| SSTableLister |
|
| SSTablesBundler |
SSTablesBundler bundles SSTables in the output directory provided by
SSTableWriter.
|
| SSTableWriterFactory |
|
| StartupValidatable |
An interface for a class that requires and can perform startup validation using StartupValidator
|
| StorageAccessConfiguration |
Holds relevant information to access the bucket in the region
|
| StorageAccessConfiguration.Serializer |
|
| StorageClient |
Client used to upload SSTable bundles to an S3 bucket.
|
| StorageClientConfig |
|
| StorageCredentialPair |
A class representing the pair of credentials needed to complete an analytics operation using the Storage transport.
|
| StorageCredentials |
StorageCredentials are used to represent the security information required to read from or write to a storage endpoint.
|
| StorageCredentials.Serializer |
|
| StorageTransportConfiguration |
Holds information about the cloud storage configuration
|
| StorageTransportConfiguration.Serializer |
|
| StorageTransportExtension |
The facade interface defines the contract of the extension for cloud storage data transport.
|
| StorageTransportHandler |
|
| StreamError |
|
| StreamResult |
|
| StreamSession<T extends TransportContext> |
|
| StringType |
|
| StringUuidType |
|
| TableInfoProvider |
|
| TableSchema |
Schema information for bulk write operations.
|
| TaskContextUtils |
|
| TimestampOption |
|
| TimestampType |
Provides functionality to convert ByteBuffers to a Date column type and to serialize
Date types to ByteBuffers
|
| Tokenizer |
|
| TokenPartitioner |
Spark Partitioner for distributing data across Cassandra token ranges.
|
| TokenRangeMapping<I extends org.apache.cassandra.spark.common.model.CassandraInstance> |
|
| TokenUtils |
Utility class for computing the Cassandra token for a CQL row.
|
| TransportContext |
An interface that defines the transport context required to perform the bulk writes
|
| TransportContext.CloudStorageTransportContext |
Context used when SSTables are uploaded to cloud storage.
|
| TransportContext.DirectDataBulkWriterContext |
Context used when prepared SSTables are directly written to C* through Sidecar
|
| TransportContext.TransportContextProvider |
|
| TransportExtensionUtils |
|
| TTLOption |
|
| UuidType |
|
| WriteAvailability |
Availability of a node to take writes
|
| WriteMode |
|
| WriteResult |
A holder class for the results of a write operation executed by the bulk-write
job Spark executors.
|
| WriterOption |
|
| WriterOptions |
Spark options to configure the bulk writer.
|
| XXHash32Digest |
An implementation of Digest that represents an XXHash32 digest
|
| XXHash32DigestAlgorithm |
|