View Issue Details

ID: 0006960
Project: Kali Linux
Category: General Bug
View Status: public
Last Update: 2021-01-11 09:54
Reporter: Hakan16
Assigned To: daniruiz
Priority: urgent
Severity: crash
Reproducibility: always
Status: closed
Resolution: duplicate
Product Version: 2020.4
Summary: 0006960: System Run Away while working with GVM 20.8
Description

While working with GVM 20.8 the system runs away: it consumes all RAM and fills up the complete swap space. In this situation the entire system is inaccessible.
Only a hard reset brings the system out of this state.

Steps To Reproduce

In my case I have a scan with about 2500 results in a single report.
The user should then go to the report page for this specific scan and switch to the results tab.
He must then reload the page/tab several times by adding filter directives.
Each reload (i.e. adding a new filter directive) should be done while the results page is still loading; a sketch of generating this kind of overlapping load is given below.
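
The reproduction above goes through the GSA web UI, but the same pattern of overlapping filtered report queries can be approximated directly against gvmd over GMP. The following is only a rough sketch under assumptions: the gvmd Unix socket path, the credentials and the report UUID are placeholders that must be adapted, and it is not verified that this triggers the run-away exactly like the UI path does.

#!/usr/bin/env python3
# Rough load sketch: fire several overlapping filtered get_reports
# requests at gvmd and abandon them, mimicking rapid UI reloads.
# Assumptions: gvmd listens on the Unix socket below, the credentials
# are valid, and REPORT_ID is the UUID of the large (~2500 results)
# report from the steps above.
import socket
import threading

GVMD_SOCKET = "/run/gvmd/gvmd.sock"  # assumed default socket path
USERNAME = "admin"                   # placeholder credentials
PASSWORD = "admin"
REPORT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder UUID

FILTERS = ["rows=1000", "rows=1000 severity>4", "rows=1000 sort-reverse=severity"]

def heavy_query(filter_string):
    # Open a GMP connection, start an expensive report query, then drop it.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(GVMD_SOCKET)
        sock.sendall((f"<authenticate><credentials>"
                      f"<username>{USERNAME}</username>"
                      f"<password>{PASSWORD}</password>"
                      f"</credentials></authenticate>").encode())
        sock.recv(4096)  # discard the authentication response
        sock.sendall((f'<get_reports report_id="{REPORT_ID}" '
                      f'filter="{filter_string}" details="1"/>').encode())
        sock.recv(4096)  # read only a fragment, then abandon the query
        # Closing the socket here is the analogue of reloading the page
        # while the previous results request is still running.

# Start the queries almost simultaneously, like rapid reloads with new filters.
threads = [threading.Thread(target=heavy_query, args=(f,)) for f in FILTERS]
for t in threads:
    t.start()
for t in threads:
    t.join()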

Additional Information

Bug 0006763 is still present, but now with GVM 20 and PostgreSQL 13.

It seems that old database searches are still running (the page is still loading) while new searches are added on top of them.
After a while all system resources are consumed by old searches that are no longer needed.

I think we need an additional function that interrupts old, no-longer-needed searches when the user starts a new search by changing filter directives.
Or, if interrupting old searches is not possible, the user must not be able to do anything new in the UI until the current action, i.e. the search, has finished.
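
As a hedged sketch of the "interrupt old searches" idea at the database layer only: assuming gvmd's queries run as the _gvm role in PostgreSQL (the role name, database, and the 120-second threshold are assumptions), superseded searches could in principle be located in pg_stat_activity and cancelled with pg_cancel_backend(). This is not how gvmd implements anything today, just an illustration of the concept.

#!/usr/bin/env python3
# Sketch: cancel long-running PostgreSQL queries issued by the scanner role.
# Assumptions: psycopg2 is installed, the queries run as the "_gvm" role,
# and anything active for longer than MAX_AGE_SECONDS is considered abandoned.
import psycopg2

MAX_AGE_SECONDS = 120

conn = psycopg2.connect(dbname="postgres", user="postgres")
conn.autocommit = True

with conn.cursor() as cur:
    # pg_cancel_backend() asks a backend to abort its current query;
    # the connection itself stays open.
    cur.execute(
        """
        SELECT pid, pg_cancel_backend(pid), now() - query_start AS age
          FROM pg_stat_activity
         WHERE usename = %s
           AND state = 'active'
           AND now() - query_start > make_interval(secs => %s)
        """,
        ("_gvm", MAX_AGE_SECONDS),
    )
    for pid, cancelled, age in cur.fetchall():
        print("backend %s: cancelled=%s, age=%s" % (pid, cancelled, age))

conn.close()

A real fix would have to cancel only the searches that the same user has actually superseded, which is information only gsad/gvmd has, so a blanket timeout like this is at best a crude stopgap.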

Relationships

duplicate of 0006907 (closed): System Run Away while working with GVM 20.8

Activities

klbt001

2020-12-08 23:20

reporter   ~0014015

I was able to capture the output of top shortly before the run-away:


%Cpu(s): 0.5 us, 16.5 sy, 0.0 ni, 0.0 id, 82.7 wa, 0.0 hi, 0.3 si, 0.0 st
MiB Mem : 15892.4 total, 284.7 free, 15421.1 used, 186.6 buff/cache
MiB Swap: 32752.0 total, 23268.5 free, 9483.5 used. 139.7 avail Mem

PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                      
 96 root      20   0       0      0      0 D  43.7   0.0   0:43.47 kswapd0                                                      

191972 _gvm 20 0 11.6g 5.2g 1392 S 39.8 33.6 0:31.21 gsad
192086 root 20 0 0 0 0 D 8.7 0.0 0:03.28 kworker/u16:7+kcryptd/254:1
192094 root 20 0 0 0 0 D 8.7 0.0 0:02.62 kworker/u16:16+kcryptd/254:1
160877 root 20 0 0 0 0 D 7.8 0.0 0:07.62 kworker/u16:6+kcryptd/254:1
161454 root 20 0 0 0 0 D 7.8 0.0 0:07.04 kworker/u16:1+kcryptd/254:1
162805 root 20 0 0 0 0 I 7.8 0.0 0:06.62 kworker/u16:8-kcryptd/254:0
164000 root 20 0 0 0 0 D 7.8 0.0 0:04.95 kworker/u16:11+kcryptd/254:1
176231 root 20 0 0 0 0 D 7.8 0.0 0:04.39 kworker/u16:4+kcryptd/254:1
191982 root 20 0 0 0 0 D 7.8 0.0 0:02.52 kworker/u16:3+kcryptd/254:1
192040 root 20 0 0 0 0 D 7.8 0.0 0:04.43 kworker/u16:5+kcryptd/254:1
192116 root 20 0 7220 3552 2668 R 4.9 0.0 0:00.53 top
192089 root 20 0 0 0 0 I 2.9 0.0 0:01.76 kworker/u16:12-kcryptd/254:0
587 root 20 0 0 0 0 D 1.9 0.0 0:02.15 dmcrypt_write/2
11 root 20 0 0 0 0 I 1.0 0.0 3:31.27 rcu_sched
2348 user 20 0 879808 33460 5268 S 1.0 0.2 18:13.93 Xorg
2640 user 20 0 413852 10740 6320 S 1.0 0.1 34:01.93 vino-server
15487 root 20 0 0 0 0 I 1.0 0.0 0:10.07 kworker/3:2-mm_percpu_wq
159047 root 20 0 270088 79364 1252 S 1.0 0.5 1:50.18 nessusd
192059 _gvm 20 0 231420 27096 1684 D 1.0 0.2 0:00.09 gvmd
192088 root 20 0 0 0 0 I 1.0 0.0 0:02.34 kworker/u16:10-kcryptd/254:0
1 root 20 0 171424 4972 2452 S 0.0 0.0 0:21.87 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.06 kthreadd
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp
6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H-kblockd
9 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_wq
10 root 20 0 0 0 0 S 0.0 0.0 0:00.34 ksoftirqd/0
12 root rt 0 0 0 0 S 0.0 0.0 0:00.75 migration/0
13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/0
14 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/1
15 root rt 0 0 0 0 S 0.0 0.0 0:00.87 migration/1
16 root 20 0 0 0 0 S 0.0 0.0 0:00.26 ksoftirqd/1
18 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/1:0H-events_highpri
19 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/2
20 root rt 0 0 0 0 S 0.0 0.0 0:00.85 migration/2


It is a matter of only a few seconds. Here we can see that gsad is consuming the memory and filling up the swap space. Do we have some kind of memory leak?
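
To tell a genuine leak apart from a one-off spike, one rough check (assumptions: a single gsad process, pgrep available, 5-second sampling interval) would be to watch gsad's resident set size over time and see whether it only ever grows:

#!/usr/bin/env python3
# Sketch: sample the resident set size (VmRSS) of gsad over time.
import subprocess
import time

def rss_kib(pid):
    # Read VmRSS in KiB from /proc/<pid>/status.
    with open("/proc/%d/status" % pid) as status:
        for line in status:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

# Assumption: the oldest process named gsad is the main one.
pid = int(subprocess.check_output(["pgrep", "-o", "gsad"]).split()[0])

while True:
    print("%s  gsad RSS: %d KiB" % (time.strftime("%H:%M:%S"), rss_kib(pid)))
    time.sleep(5)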

klbt001

2020-12-08 23:32

reporter   ~0014016

The top output in note #0013956 was taken after logging in to GSA and after the default dashboard had fully loaded.

kali-bugreport

2021-01-10 11:16

reporter   ~0014086

Duplicate of 0006907

Issue History

Date Modified Username Field Change
2020-12-30 18:40 Hakan16 New Issue
2020-12-30 18:40 Hakan16 Issue generated from: 0006907
2020-12-30 18:41 Hakan16 Issue cloned: 0006962
2021-01-10 11:16 kali-bugreport Note Added: 0014086
2021-01-11 09:54 daniruiz Assigned To => daniruiz
2021-01-11 09:54 daniruiz Status new => closed
2021-01-11 09:54 daniruiz Resolution open => duplicate
2021-01-11 09:54 daniruiz Relationship added duplicate of 0006907