
95% disk space used by logserver error. How to resolve it?

Hi all,

95% of my server's disk space is being used by the logserver. When I open the log file, it shows the following error:

[logstash-2017.10.16][[logstash-2017.10.16][2]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to create engine]; nested: FileSystemException[/opt/Talend-6.3.1/logserv/elasticsearch-2.4.0/data/elasticsearch/nodes/0/indices/logstash-2017.10.16/2/translog/translog-12.ckp: Too many open files];
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:250)
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: [logstash-2017.10.16][[logstash-2017.10.16][2]] EngineCreationFailureException[failed to create engine]; nested: FileSystemException[/opt/Talend-6.3.1/logserv/elasticsearch-2.4.0/data/elasticsearch/nodes/0/indices/logstash-2017.10.16/2/translog/translog-12.ckp: Too many open files];
at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:152)
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1509)
at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1493)
at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:966)
at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:938)
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:241)
... 5 more
Caused by: java.nio.file.FileSystemException: /opt/Talend-6.3.1/logserv/elasticsearch-2.4.0/data/elasticsearch/nodes/0/indices/logstash-2017.10.16/2/translog/translog-12.ckp: Too many open files
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at org.elasticsearch.index.translog.Checkpoint.read(Checkpoint.java:82)
at org.elasticsearch.index.translog.Translog.recoverFromFiles(Translog.java:330)
at org.elasticsearch.index.translog.Translog.<init>(Translog.java:179)
at org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:205)
at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:148)
... 11 more

 

 

Thanks & Regards
A Ravi Kumar
Mobile Number : +91 852-762-1083
Email-id : a.ravikumar104@gmail.com
Skype Id : ammanannaravikumar
1 ACCEPTED SOLUTION

Employee

Re: 95% disk space used by logserver error. How to resolve it?

A possible solution is to update the Elasticsearch logging configuration and use the maxBackupIndex option, which sets how many backup log files are kept before the oldest one is deleted. The logging configuration is in the logserver's Elasticsearch config directory (in your installation, based on the path in the stack trace, /opt/Talend-6.3.1/logserv/elasticsearch-2.4.0/config/logging.yml). You can update logging.yml as follows:


file:
  type: rollingFile
  file: ${path.logs}/${cluster.name}.log
  maxFileSize: 200KB
  maxBackupIndex: 4
  layout:
    type: pattern
    conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"


With the default configuration the log files keep growing, which can eventually fill the disk. With the settings above, each log file is capped at 200 KB and at most 4 rotated backups are kept before the oldest is deleted.

 

You can also delete old log files to free disk space and raise the open file limit (ulimit) on your system, which is what the "Too many open files" error in your stack trace points to; see the sketch below.
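
As a rough sketch: the install path below comes from your stack trace, while the logs subdirectory, the 30-day retention, the "talend" account name, and the 65536 limit are assumptions you should adjust to your environment.

# 1. See which logserver directories are consuming the disk
du -sh /opt/Talend-6.3.1/logserv/elasticsearch-2.4.0/logs /opt/Talend-6.3.1/logserv/elasticsearch-2.4.0/data

# 2. Remove rotated log files older than 30 days (adjust the name pattern and retention)
find /opt/Talend-6.3.1/logserv/elasticsearch-2.4.0/logs -name "*.log.*" -mtime +30 -delete

# 3. Check the open file limit for the account that runs the logserver
ulimit -n

# 4. Raise it permanently by adding lines like these to /etc/security/limits.conf
#    (replace "talend" with the account that actually runs the logserver):
#    talend  soft  nofile  65536
#    talend  hard  nofile  65536

After raising the limit, log in again (or restart the logserver service) so the new nofile value is applied before Elasticsearch tries to recover its shards.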

-----
Please don't forget to give kudos and accept this as the solution if it resolves the issue.
2 REPLIES
Moderator

Re: 95% disk space used by logserver error. How to resolve it?

Hello,

Since you have a subscription solution, could you please create a case on the Talend Support portal about this issue (95% of your server's disk space being used by the logserver)? That way we can give you remote assistance (WebEx) through the support cycle, with priority, and check whether there is a configuration issue on your side.

Thanks for your time.

Best regards

Sabrina

 

--
Don't forget to give kudos when a reply is helpful and to click Accept as Solution when your issue is resolved.