java.lang.UnsatisfiedLinkError: failed to map segment from shared object when starting a Kafka Streams application

Problem

A Kafka Streams application that uses a RocksDB-backed state store shuts down on startup with the following error:

ERROR stream-client [favcar-colour-streams-80448e93-ed7a-4a3a-a0cd-9f864ac4c88f] Encountered the following exception during processing and Kafka Streams opted to SHUTDOWN_CLIENT. The streams client is going to shut down now.  (org.apache.kafka.streams.KafkaStreams:529) 
java.lang.UnsatisfiedLinkError: /tmp/librocksdbjni9410115511242084837.so: /tmp/librocksdbjni9410115511242084837.so: failed to map segment from shared object
    at java.base/java.lang.ClassLoader$NativeLibrary.load0(Native Method)
    at java.base/java.lang.ClassLoader$NativeLibrary.load(ClassLoader.java:2452)
    at java.base/java.lang.ClassLoader$NativeLibrary.loadLibrary(ClassLoader.java:2508)
    at java.base/java.lang.ClassLoader.loadLibrary0(ClassLoader.java:2704)
    at java.base/java.lang.ClassLoader.loadLibrary(ClassLoader.java:2637)
    at java.base/java.lang.Runtime.load0(Runtime.java:745)
    at java.base/java.lang.System.load(System.java:1871)
    at org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:79)
    at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:57)
    at org.rocksdb.RocksDB.loadLibrary(RocksDB.java:69)
    at org.rocksdb.RocksDB.<clinit>(RocksDB.java:38)
    at org.rocksdb.DBOptions.<clinit>(DBOptions.java:22)
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:126)
    at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:250)
    at org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:55)
    at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.init(ChangeLoggingKeyValueBytesStore.java:56)
    at org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:55)
    at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.lambda$init$1(MeteredKeyValueStore.java:125)
    at org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:809)
    at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:125)
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.registerStateStores(ProcessorStateManager.java:205)
    at org.apache.kafka.streams.processor.internals.StateManagerUtil.registerStateStores(StateManagerUtil.java:97)
    at org.apache.kafka.streams.processor.internals.StreamTask.initializeIfNeeded(StreamTask.java:231)
    at org.apache.kafka.streams.processor.internals.TaskManager.tryToCompleteRestoration(TaskManager.java:457)
    at org.apache.kafka.streams.processor.internals.StreamThread.initializeAndRestorePhase(StreamThread.java:880)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:762)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:604)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:576)

Solution

The RocksDB JNI library bundled with Kafka Streams is extracted at runtime (as librocksdbjni<random>.so) into the JVM's temporary directory, /tmp by default, and loaded from there. The "failed to map segment from shared object" error means the loader could not map that file with execute permission, which typically happens when /tmp is mounted with the noexec option.
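To confirm, check the mount options on /tmp (a quick check, assuming findmnt from util-linux is available):

findmnt -no OPTIONS /tmp

If the output contains noexec, the extracted library cannot be executed from /tmp.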

Remount /tmp with the exec option:

sudo mount -o remount,exec /tmp
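The remount takes effect immediately but does not survive a reboot; to make it permanent, add exec to the options of the /tmp entry in /etc/fstab (or the corresponding systemd tmp.mount unit). If /tmp has to stay noexec, an alternative sketch, assuming your rocksdbjni version reads the ROCKSDB_SHAREDLIB_DIR environment variable, is to extract the library into a directory on an exec-mounted filesystem (the path below is only an example):

# Hypothetical directory on a filesystem mounted with exec
mkdir -p /opt/kafka-streams/rocksdb-tmp
export ROCKSDB_SHAREDLIB_DIR=/opt/kafka-streams/rocksdb-tmp

With that variable set before the application starts, RocksDB's native-library loader extracts librocksdbjni into the given directory instead of java.io.tmpdir, so /tmp can keep its noexec mount.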