GlusterFS FUSE client consuming high memory - memory leak

+4 votes

I have set up my GlusterFS cluster as a striped-replicated volume on GCP servers, but I am facing a memory leak with the GlusterFS FUSE client. Its memory consumption increases every day.
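For context, a striped-replicated volume of this kind would have been created with something like the following; the volume name, counts, and brick paths are placeholders rather than details from the question (and note that striped volumes are deprecated in recent GlusterFS releases):

# Hypothetical 2x2 striped-replicated volume; all names are placeholders
gluster volume create myvol stripe 2 replica 2 \
    gluster1:/bricks/b1 gluster2:/bricks/b1 \
    gluster3:/bricks/b1 gluster4:/bricks/b1
gluster volume start myvol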

Both the GlusterFS server and the FUSE client are on the latest version (client 4.1.5, server 4.1), and the process below is consuming high memory on the client servers.

glusterfs --fopen-keep-cache=off --volfile-server=gluster1 --volfile-id=/+ 

Every day the memory consumption of this process keeps growing; as a temporary fix, I unmount the volume and kill the process.
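The workaround looks roughly like this; the mount point and volume name are placeholders, and the mount option spelling is taken from the running process above (verify it against your mount.glusterfs):

# Unmount the volume and kill any leftover client process (paths are hypothetical)
sudo umount /mnt/gluster
sudo pkill -f 'glusterfs.*volfile-server=gluster1'

# Remount with the same option the process above was started with
sudo mount -t glusterfs -o fopen-keep-cache=off gluster1:/myvol /mnt/gluster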

On the GlusterFS client server I am also getting 'Stale file handle' errors. I am not sure whether this error is related to the memory leak.

[2018-10-21 04:08:46.921985] W [fuse-bridge.c:1201:fuse_setattr_cbk] 0-glusterfs-fuse: 3705309: SETATTR() /Production/example.com/wp-content/cache/wpsol-cache/4bd4f0bf132901ecb17261f388864fd3 => -1 (Stale file handle)

Also, my GlusterFS server is using default settings. Please let me know if there is a patch or a fix.

Oct 22, 2018 in Linux Administration by Edureka
Thank you, let me downgrade it.
I think 3.12.6 is not available to download; can I use 3.12.15 instead? I am using Ubuntu client machines.
Try 3.12.8 if .6 is not possible.
@kalgi, that one is not available either.

https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12

The available versions are .15, .12, and .5.
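For reference, installing a specific client version from that PPA would look roughly like this; the exact package version string below is an assumption, so check what apt-cache policy reports first:

# Add the GlusterFS 3.12 PPA and refresh the package index
sudo add-apt-repository ppa:gluster/glusterfs-3.12
sudo apt-get update

# See which versions the PPA actually provides
apt-cache policy glusterfs-client

# Pin to one of the available versions (the version string here is illustrative)
sudo apt-get install glusterfs-client=3.12.15-ubuntu1~xenial1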

3 answers to this question.

+1 vote
A 'Stale file handle' error occurs when a file or directory is deleted while something still holds a reference to it: the pointer your process has to that particular inode no longer resolves to a valid object. It is simply how the system reports invalid inode references. In your case, I suspect the error appears because you keep unmounting the volume and killing the client process.
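A minimal illustration of how a handle goes stale; the paths are hypothetical, and on a network filesystem such as a Gluster FUSE mount (or NFS) the failure surfaces as 'Stale file handle' (ESTALE):

# Terminal 1: hold a reference to a directory on the mount (path is hypothetical)
cd /mnt/gluster/somedir

# Terminal 2 (or another client): remove that directory
rm -rf /mnt/gluster/somedir

# Terminal 1: the inode the shell still points at is gone
ls
# ls: cannot open directory '.': Stale file handle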
answered Oct 22, 2018 by Kalgi
Okay, so it is an inode issue. We used unmounting only as a temporary fix for the load issue.
0 votes

@Kalgi, if you mean the backend error, yes, I am continuously getting this error on the backend.

answered Oct 22, 2018 by Edureka
0 votes
Hey @Jithin,

Try taking statedumps at different intervals and post your readings.
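In case it helps, sending SIGUSR1 to the glusterfs client process makes it write a statedump, by default under /var/run/gluster. The pgrep pattern below is based on the process line in the question, and the interval is a placeholder:

# Find the PID of the FUSE client for this mount
PID=$(pgrep -f 'glusterfs.*volfile-server=gluster1')

# SIGUSR1 asks the client to write a statedump into /var/run/gluster/
kill -USR1 "$PID"

# Take another dump later and compare the two for growing allocations
sleep 3600 && kill -USR1 "$PID"
ls -lt /var/run/gluster/*.dump.*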
answered Oct 22, 2018 by Hannah

edited Oct 22, 2018 by Hannah
