Re: Segfaults

From: Bill Moseley <moseley(at)not-real.hank.org>
Date: Wed Sep 27 2000 - 16:53:18 GMT
At 06:02 PM 09/27/00 +0200, jmruiz@boe.es wrote:
>If swish-e does not have enough memory it may print the following 
>message to stderr:
>swish: Ran out of memory ...
>This message should be in Apache's error_log

Nope, I'm not seeing that.  I do see:

[Sun Sep 24 10:15:22 2000] [error] (12)Not enough space: fork: Unable to fork new process
Out of memory!
[Sun Sep 24 10:15:32 2000] [error] (12)Not enough space: fork: Unable to fork new process

But that's Apache complaining -- probably from trying to fork off a CGI program.
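
The (12) is just the errno -- ENOMEM, which Solaris spells out as "Not
enough space".  The pattern is roughly this (just a sketch, not Apache's
actual logging code):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == -1) {
        /* fork() failed; on Solaris strerror(ENOMEM) is "Not enough space" */
        fprintf(stderr, "[error] (%d)%s: fork: Unable to fork new process\n",
                errno, strerror(errno));
        return 1;
    }
    if (pid == 0)
        _exit(0);    /* child -- a real server would exec the CGI here */

    return 0;
}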

Anyway, if swish was segfaulting, it may not have had a chance to complain --
especially if it was trying to use memory that wasn't really allocated.

>I have checked apache in a SUN box to see the size of the httpd 
>process. Here is the output:
># ps -o rss -o comm -fu nobody | sort -u
> RSS COMMAND
>1544 /usr/local/apache/bin/httpd
>1554 /usr/local/apache/bin/httpd
>
>isn't 11M too high for an apache process?

I'm running mod_perl, so I have much bigger processes.  There's no perl
fork, just a swish fork.  Just wait until I get the swish library added --
each mod_perl process may end up around 20M or more!  A good chunk of that
memory is shared, of course, and if configured correctly, it will use fewer
resources than Apache forking perl, which in turn forks swish.

Again, if swish is causing a SEGV, doesn't that mean that swish is
accessing something it shouldn't?  Is it possible it's asking for memory,
and then using that memory without checking for errors?  Or would that just
be a null pointer dereference?
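
What I'm imagining is something like this (copy_word is made up, not
actual swish code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* copy_word() is a made-up example, not from the swish source. */
static char *copy_word(const char *word)
{
    char *buf = malloc(strlen(word) + 1);

    /* Without this check, the strcpy() below writes through a NULL
       pointer when malloc() fails, and the process dies with SIGSEGV
       before it gets a chance to print anything. */
    if (buf == NULL) {
        fprintf(stderr, "swish: Ran out of memory\n");
        exit(1);
    }

    strcpy(buf, word);
    return buf;
}

int main(void)
{
    char *w = copy_word("index");
    puts(w);
    free(w);
    return 0;
}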




Bill Moseley
mailto:moseley@hank.org
Received on Wed Sep 27 16:53:44 2000