Discussion:
[bug #52697] More varieties to -j switch
anonymous
2017-12-19 20:46:10 UTC
URL:
<http://savannah.gnu.org/bugs/?52697>

Summary: More varieties to -j switch
Project: make
Submitted by: None
Submitted on: Tue 19 Dec 2017 08:46:09 PM UTC
Severity: 3 - Normal
Item Group: Enhancement
Status: None
Privacy: Public
Assigned to: None
Open/Closed: Open
Discussion Lock: Any
Component Version: None
Operating System: Any
Fixed Release: None
Triage Status: None

_______________________________________________________

Details:

On modern systems there is an increasing number of CPU cores/threads. For
example, my phone has a Gentoo chroot, 8 cores, and 3 GB of RAM (shared with
Android). Compiling C with make -j works fine, and Python usually does too, but
C++ makes the system crash. Some packages are written in more than one
language. Additionally, the -j switch is set globally in
/etc/portage/make.conf and cannot be changed during an update or rebuild of
multiple packages, which is a crucial thing in using a Gentoo system.
I wish to have more options for deciding the number of compile threads, based
on the amount of RAM used or on which programming language is being compiled.
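
In the meantime I approximate this with a small wrapper; a rough sketch
(assuming Linux's /proc/meminfo, and a guessed ~1 GiB per compile job, which
is purely illustrative and varies a lot between languages and packages):

```shell
# Rough sketch: pick a -j value from currently available RAM.
# Assumptions: Linux /proc/meminfo exists, and one compile job needs
# roughly 1 GiB (a guess; heavy C++ translation units can need far more).
per_job_kb=$((1024 * 1024))
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo 2>/dev/null)
avail_kb=${avail_kb:-$per_job_kb}   # fall back to one job if unreadable
jobs=$(( avail_kb / per_job_kb ))
[ "$jobs" -lt 1 ] && jobs=1
echo "would run: make -j$jobs"
```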




_______________________________________________________

Reply to this item at:

<http://savannah.gnu.org/bugs/?52697>

_______________________________________________
Message sent via/by Savannah
http://savannah.gnu.org/
anonymous
2017-12-20 01:25:03 UTC
Follow-up Comment #1, bug #52697 (project make):

I see there is a typo: the third sentence should be "Compiling C with make -j8
works ..."

anonymous
2017-12-20 01:49:23 UTC
Follow-up Comment #2, bug #52697 (project make):

Maybe I forgot to make clear what the main problem is:
the -j switch takes no account of RAM, and that is more and more becoming the
important limit - especially with object-oriented languages there should be
something preventing the start of more compile processes. Even on my 12 GB
workstation I ran into memory problems *, while I would like -j13 to speed
up compiling the packages where it works.

Thanks for your patience. I think I should go to sleep before I write more
hard-to-understand stuff. Anyway, I have had this idea for a long time and
thought about it also when I was not tired. ;-)

* With certain packages the whole system gets stuck swapping and I have to
stop the whole update (which can be tricky when the system is barely
responding), edit my make.conf, do an "emerge --oneshot" on that package, edit
my make.conf back to -j, and continue with my "emerge --deep --newuse --update
@world"
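
(On the Portage side, the package.env mechanism can pin a lower MAKEOPTS for
just the problematic packages, which avoids editing make.conf back and forth;
a sketch, with the file name and package entries purely illustrative:)

```shell
# /etc/portage/env/lowmem.conf -- reduced parallelism for memory-hungry builds
MAKEOPTS="-j2"

# /etc/portage/package.env -- apply it only to specific packages, e.g.:
# sys-devel/gcc lowmem.conf
```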

Edward Welbourne
2018-01-02 18:51:31 UTC
(Sorry if I'm a fortnight late to the party and this has been said
already; I can't see it in my in-box, but that may just be because it's
crowded following a fortnight off ...)

> Maybe I forgot to make clear what the main problem is: the -j switch
> takes not care about ram and this is becoming more and more the
> important limit

That sounds more like you need an orthogonal control, just as -l
controls load, rather than a variant on -j; a limit on RAM use that
would preclude starting new jobs if more than some specified amount of
(or perhaps fraction of available) RAM is in use.

Given that you mention swapping as a symptom, have you at least tried
using -l to limit the load? Does that mitigate your problems at all?

Eddy.
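
For reference, -l can be combined with -j; a minimal demonstration with a
throwaway makefile (the limit of 8 is arbitrary - make simply stops spawning
new jobs while the load average is above it):

```shell
# Combine -j with -l: parallelism capped at 4 jobs, and no new job is
# started while the 1-minute load average exceeds 8.
tmp=$(mktemp -d)
printf 'all:\n\t@echo built\n' > "$tmp/Makefile"
out=$(make -C "$tmp" -j4 -l8)
echo "$out"
rm -rf "$tmp"
```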
anonymous
2018-08-18 20:53:45 UTC
Follow-up Comment #3, bug #52697 (project make):

I'm pretty sure something like this was already proposed, discussed, and/or
even implemented somewhere, but anyway:

here is a quick and dirty hack I'm using to successfully compile GCC (together
with some other required measures) on my Raspberry Pi, which has four cores
but 'only' 1 GB of RAM: an -M [minavail] option, similar to -l [load].

Specifying this option will inhibit launching new jobs until at least
MINAVAIL megabytes of RAM are available. It works only on Linux, by scanning
/proc/meminfo for the MemAvailable string.


---
job.c | 38 +++++++++++++++++++++++++++++++++++++-
main.c | 7 +++++++
makeint.h | 2 ++
3 files changed, 46 insertions(+), 1 deletion(-)

Index: make-4.2.1/job.c
===================================================================
--- make-4.2.1.orig/job.c
+++ make-4.2.1/job.c
@@ -214,6 +214,7 @@ int getloadavg (double loadavg[], int ne
static void free_child (struct child *);
static void start_job_command (struct child *child);
static int load_too_high (void);
+static int memory_load_high (void);
static int job_next_command (struct child *);
static int start_waiting_job (struct child *);


@@ -1557,7 +1558,7 @@ start_waiting_job (struct child *c)
/* If we are running at least one job already and the load average
is too high, make this one wait. */
if (!c->remote
- && ((job_slots_used > 0 && load_too_high ())
+ && ((job_slots_used > 0 && (load_too_high () || memory_load_high ()))
#ifdef WINDOWS32
|| (process_used_slots () >= MAXIMUM_WAIT_OBJECTS)
#endif
@@ -1882,6 +1883,41 @@ job_next_command (struct child *child)
return 1;
}

+static int
+get_mem_avail (void)
+{
+ FILE *fp;
+ char s [64];
+ long m = 0;
+ int n;
+
+ fp = fopen ("/proc/meminfo", "r");
+ if (fp == NULL)
+ return 0; // out of memory already or no /proc, huh
+
+ while (!feof (fp))
+ {
+ if (!fgets (s, 64, fp))
+ break;
+
+ n = sscanf (s, "MemAvailable: %ld kB", &m);
+ if (n == 1)
+ break;
+ }
+ fclose (fp);
+
+ return m / 1024;
+}
+
+static int
+memory_load_high (void)
+{
+ if (mem_limit && get_mem_avail () < mem_limit)
+ return 1;
+ else
+ return 0;
+}
+
/* Determine if the load average on the system is too high to start a new
job.
The real system load average is only recomputed once a second. However, a
very parallel make can easily start tens or even hundreds of jobs in a
Index: make-4.2.1/main.c
===================================================================
--- make-4.2.1.orig/main.c
+++ make-4.2.1/main.c
@@ -282,6 +282,11 @@ int max_load_average = -1;
int default_load_average = -1;
#endif

+/* Minimum memory available (megabytes) required before starting job.
+ Zero means unlimited. */
+int mem_limit = 0;
+static int default_mem_limit = 0;
+
/* List of directories given with -C switches. */

static struct stringlist *directories = 0;
@@ -457,6 +462,8 @@ static const struct command_switch switc
{ 'l', positive_int, &max_load_average, 1, 1, 0, &default_load_average,
&default_load_average, "load-average" },
#endif
+ { 'M', positive_int, &mem_limit, 1, 1, 0, &default_mem_limit,
+ &default_mem_limit, "mem-limit" },
{ 'o', filename, &old_files, 0, 0, 0, 0, 0, "old-file" },
{ 'O', string, &output_sync_option, 1, 1, 0, "target", 0, "output-sync" },
{ 'W', filename, &new_files, 0, 0, 0, 0, 0, "what-if" },
Index: make-4.2.1/makeint.h
===================================================================
--- make-4.2.1.orig/makeint.h
+++ make-4.2.1/makeint.h
@@ -666,6 +666,8 @@ extern double max_load_average;
extern int max_load_average;
#endif

+extern int mem_limit;
+
#ifdef WINDOWS32
extern char *program;
#else


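With the patch applied, an invocation would look something like this
(illustrative only - the -M/--mem-limit switch exists only with this patch,
not in stock make):

```shell
# refuse to start new jobs while less than 256 MB of RAM is available
make -j4 -M256
```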