
View Issue Details

ID: 0006907
Project: Kali Linux
Category: General Bug
View Status: public
Last Update: 2022-12-07 16:36
Reporter: klbt001
Assigned To: (none)
Priority: normal
Severity: minor
Reproducibility: always
Status: closed
Resolution: open
Product Version: 2020.4
Summary: 0006907: System Run Away while working with GVM 20.8
Description

The system runs away while working with GVM 20.8: it consumes all RAM and fills up the entire swap space. In this state the whole system is inaccessible.
Only a hard reset brings the system out of this state.

Steps To Reproduce

In my case I have a scan with about 2500 results in a single report.
Go to the report page for this specific scan and switch to the Results tab.
Then reload the page/tab several times by adding filter directives.
Each reload (i.e. adding a new filter directive) should happen while the results page is still loading.

Additional Information

Bug 0006763 is still present, but now with GVM 20 and PostgreSQL 13.

It seems that old database searches keep running (the page is still loading) while new searches are added on top of them.
After a while, all system resources are consumed by old, no-longer-needed searches.

I think we need an additional function that interrupts old, no-longer-needed searches when the user starts a new search by changing filter directives.
Or, if interrupting old searches isn't possible, the user must not be able to do anything new in the UI until the current action, i.e. the search, has finished.
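For illustration only (this is a hypothetical sketch, not existing GVM code), the proposed "interrupt the stale search" behaviour could look roughly like the following: each user holds at most one search in flight, and starting a new one sets a cancellation flag that the old search polls.

```python
# Hypothetical sketch, not GVM code: a per-user search session that cancels
# the previous, still-running search whenever a new one is started.
import threading
import time

class SearchSession:
    def __init__(self):
        self._lock = threading.Lock()
        self._cancel = None  # cancellation event of the search in flight

    def start_search(self, work):
        """Cancel any previous search, then run work(cancelled) in a thread.
        work must poll cancelled.is_set() and stop early once it is set."""
        with self._lock:
            if self._cancel is not None:
                self._cancel.set()            # interrupt the stale search
            cancelled = threading.Event()
            self._cancel = cancelled
        thread = threading.Thread(target=work, args=(cancelled,))
        thread.start()
        return thread, cancelled

# Demo: the second search cancels the first instead of running alongside it.
def slow_search(cancelled):
    for _ in range(1000):
        if cancelled.is_set():
            return                            # give up, release the resources
        time.sleep(0.001)                     # stands in for database work

session = SearchSession()
t1, c1 = session.start_search(slow_search)
t2, c2 = session.start_search(slow_search)    # user changed the filter
t1.join()
t2.join()
```

The real fix would have to cancel the query on the PostgreSQL side as well, but the session-level bookkeeping is the same idea: one active search per user, and a new search always revokes the old one.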

Relationships

has duplicate  0006960  closed  daniruiz  System Run Away while working with GVM 20.8

Activities

klbt001

2020-12-08 23:20

reporter   ~0013956

I was able to capture top output shortly before the run-away:


%Cpu(s): 0.5 us, 16.5 sy, 0.0 ni, 0.0 id, 82.7 wa, 0.0 hi, 0.3 si, 0.0 st
MiB Mem : 15892.4 total, 284.7 free, 15421.1 used, 186.6 buff/cache
MiB Swap: 32752.0 total, 23268.5 free, 9483.5 used. 139.7 avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
    96 root      20   0       0      0      0 D  43.7   0.0   0:43.47 kswapd0
191972 _gvm      20   0   11.6g   5.2g   1392 S  39.8  33.6   0:31.21 gsad
192086 root      20   0       0      0      0 D   8.7   0.0   0:03.28 kworker/u16:7+kcryptd/254:1
192094 root      20   0       0      0      0 D   8.7   0.0   0:02.62 kworker/u16:16+kcryptd/254:1
160877 root      20   0       0      0      0 D   7.8   0.0   0:07.62 kworker/u16:6+kcryptd/254:1
161454 root      20   0       0      0      0 D   7.8   0.0   0:07.04 kworker/u16:1+kcryptd/254:1
162805 root      20   0       0      0      0 I   7.8   0.0   0:06.62 kworker/u16:8-kcryptd/254:0
164000 root      20   0       0      0      0 D   7.8   0.0   0:04.95 kworker/u16:11+kcryptd/254:1
176231 root      20   0       0      0      0 D   7.8   0.0   0:04.39 kworker/u16:4+kcryptd/254:1
191982 root      20   0       0      0      0 D   7.8   0.0   0:02.52 kworker/u16:3+kcryptd/254:1
192040 root      20   0       0      0      0 D   7.8   0.0   0:04.43 kworker/u16:5+kcryptd/254:1
192116 root      20   0    7220   3552   2668 R   4.9   0.0   0:00.53 top
192089 root      20   0       0      0      0 I   2.9   0.0   0:01.76 kworker/u16:12-kcryptd/254:0
   587 root      20   0       0      0      0 D   1.9   0.0   0:02.15 dmcrypt_write/2
    11 root      20   0       0      0      0 I   1.0   0.0   3:31.27 rcu_sched
  2348 user      20   0  879808  33460   5268 S   1.0   0.2  18:13.93 Xorg
  2640 user      20   0  413852  10740   6320 S   1.0   0.1  34:01.93 vino-server
 15487 root      20   0       0      0      0 I   1.0   0.0   0:10.07 kworker/3:2-mm_percpu_wq
159047 root      20   0  270088  79364   1252 S   1.0   0.5   1:50.18 nessusd
192059 _gvm      20   0  231420  27096   1684 D   1.0   0.2   0:00.09 gvmd
192088 root      20   0       0      0      0 I   1.0   0.0   0:02.34 kworker/u16:10-kcryptd/254:0
     1 root      20   0  171424   4972   2452 S   0.0   0.0   0:21.87 systemd
     2 root      20   0       0      0      0 S   0.0   0.0   0:00.06 kthreadd
     3 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 rcu_gp
     4 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 rcu_par_gp
     6 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/0:0H-kblockd
     9 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 mm_percpu_wq
    10 root      20   0       0      0      0 S   0.0   0.0   0:00.34 ksoftirqd/0
    12 root      rt   0       0      0      0 S   0.0   0.0   0:00.75 migration/0
    13 root      20   0       0      0      0 S   0.0   0.0   0:00.00 cpuhp/0
    14 root      20   0       0      0      0 S   0.0   0.0   0:00.00 cpuhp/1
    15 root      rt   0       0      0      0 S   0.0   0.0   0:00.87 migration/1
    16 root      20   0       0      0      0 S   0.0   0.0   0:00.26 ksoftirqd/1
    18 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/1:0H-events_highpri
    19 root      20   0       0      0      0 S   0.0   0.0   0:00.00 cpuhp/2
    20 root      rt   0       0      0      0 S   0.0   0.0   0:00.85 migration/2


It's a matter of a few seconds. Here we can see that gsad is consuming the memory and filling up the swap space. Do we have some kind of memory leak?
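To pin down a suspicion like this, one generic Linux technique (my addition, not something used in this thread) is to sample the daemon's resident set size from /proc and watch whether it only ever grows during the repro steps. A minimal sketch, using "self" as a stand-in for gsad's PID:

```python
# Minimal sketch, Linux-only: read a process's VmRSS from /proc/<pid>/status.
# "self" is used here for demonstration; for gsad you would pass its real PID.

def rss_kib(pid="self"):
    """Return the resident set size of a process in KiB, or None if absent."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])   # /proc reports the value in kB
    return None

print(rss_kib())
```

Sampling this in a loop (say, once per second) while reloading the results tab with new filters would show whether gsad's RSS climbs monotonically, which is the usual signature of a leak as opposed to a transient spike.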

klbt001

2020-12-08 23:32

reporter   ~0013957

The top output in note ~0013956 was taken after logging in to GSA and after the default dashboard had fully loaded.

sbrun

2021-01-05 10:03

manager   ~0014034

You can test this tip:
https://bugs.kali.org/view.php?id=6763#c13981

klbt001

2021-01-05 14:57

reporter   ~0014037

@sbrun, I'm the reporter of ticket 0006763. That ticket is about high CPU load caused by running PostgreSQL processes.

This ticket, 0006907, is about extreme RAM/swap consumption. While swapping, the CPU load is also extreme, but the root cause is gsad's extreme RAM consumption. The system where I see the problem has 16 GB RAM, and gsad additionally consumes all swap up to 32 GB, so gsad uses 48 GB of memory in total during what should be normal operation.
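As a deployment-side stopgap (my suggestion, not something proposed in this thread), the blast radius of such a leak can be limited by capping the daemon's address space with setrlimit, so a runaway gsad fails its allocations instead of dragging the whole host through 48 GB of RAM plus swap. The sketch below demonstrates the mechanism on a throwaway child process rather than on gsad itself:

```python
# Sketch of a mitigation, not an upstream fix: cap a process's address space
# so a leak hits a hard limit instead of exhausting all RAM and swap.
import subprocess
import sys

CHILD = r"""
import resource
# Cap the virtual address space at 1 GiB before doing any real work.
limit = 1024 ** 3
resource.setrlimit(resource.RLIMIT_AS, (limit, limit))
try:
    blob = bytearray(4 * 1024 ** 3)   # a 4 GiB allocation must now fail
    print("allocated")
except MemoryError:
    print("MemoryError")
"""

result = subprocess.run([sys.executable, "-c", CHILD],
                        capture_output=True, text=True)
print(result.stdout.strip())
```

For a daemon started by an init system, the same cap would normally be applied from the service configuration rather than from code; the point here is only that a hard per-process limit converts a system-wide freeze into a single failing process.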

g0tmi1k

2022-12-07 16:36

administrator   ~0017181

This report has been filed against an old version of Kali. We will be closing this ticket due to inactivity.

Please could you see if you are able to replicate this issue with the latest version of Kali Linux (https://www.kali.org/get-kali/)?

If you are still facing the same problem, feel free to re-open the ticket. If you do, could you provide more information about the issue you are facing, and also give details about your setup?
For more information, please read: https://www.kali.org/docs/community/submitting-issues-kali-bug-tracker/

Issue History

Date Modified      Username        Field               Change
2020-12-06 12:48   klbt001         New Issue
2020-12-06 20:12   131Curious131   Issue cloned: 0006911
2020-12-08 23:20   klbt001         Note Added: 0013956
2020-12-08 23:32   klbt001         Note Added: 0013957
2020-12-30 18:40   Hakan16         Issue cloned: 0006960
2021-01-05 10:03   sbrun           Note Added: 0014034
2021-01-05 14:57   klbt001         Note Added: 0014037
2021-01-11 09:54   daniruiz        Relationship added  has duplicate 0006960
2021-06-30 08:49   g0tmi1k         Priority            urgent => normal
2022-03-25 13:57   g0tmi1k         Severity            crash => feature
2022-03-25 13:58   g0tmi1k         Severity            feature => minor
2022-12-07 16:36   g0tmi1k         Note Added: 0017181
2022-12-07 16:36   g0tmi1k         Status              new => closed