Variable argument lists.

John F. Haugh II jfh at rpp386.UUCP
Fri May 20 11:52:35 AEST 1988


In article <7893 at brl-smoke.ARPA> gwyn at brl.arpa (Doug Gwyn (VLD/VMB) <gwyn>) writes:
>In article <1740 at rpp386.UUCP> jfh at rpp386.UUCP (The Beach Bum) writes:
>>the addition of an extra instruction to stack the number of arguments
>>can hardly be considered significantly slowing down a function call.
>
>WRONG.  Multiply the extra cost per function call by the number of
>function calls per day, to see how much computer time you would be
>wasting each day.

CORRECT.  regardless of how many function calls are made, the slowdown
is a constant fraction (when averaged out over the collection of jobs
in the mix).  if we assume that PUSH, CALL and RETURN, as generic
operations, have similar execution times, then at worst -- for an empty
function -- you see a 50% slowdown.  (i know, 50% is a lot ...)
more reasonable figures are << 10%; see the measured example further below.
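
first, to make the worst-case arithmetic concrete (the instruction
counts below are illustrative, not measured on any particular machine):

	empty function:       CALL + RETURN                       =  2 ops
	with count pushed:    PUSH + CALL + RETURN                 =  3 ops  (+50%)

	typical function:     3 argument PUSHes + CALL + RETURN
	                      + ~20 ops of entry/exit and body     = ~25 ops
	with count pushed:                                           ~26 ops  (+~4%)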

an improvement of 50% could be had by removing some of the function
calls and writing the damned thing in assembler.  furthermore, a
significant fraction of the execution time of any real-world
application is spent doing I/O of some form, so any increase in CPU
requirements is diluted by the fraction of the time which is
_actually_ spent executing instructions.

thus, the actual slowdown will be somewhat less than the raw
instruction counts suggest.  the results for one application which i
converted to use argument counts are presented below.

- john.
--
the times below are for a toy i am working on which draws maps from the
location info in uucp maps.  this is the user time for processing 540
sites (east texas repeated 10 times, more or less).  the first is without
argument counts in each function, the second with.

uudraw: 11.1 seconds user		-- no argument counts.
nndraw: 11.3 seconds user		-- argument counts.

the speed penalty is (11.3 - 11.1) / 11.1, or ~1.8%, which, as i said
earlier, is hardly significant.
the argument counts were implemented as

	func (nargs, arg1, arg2, ...);

for calls with

func (nargs, arg1, arg2, ...)
int	nargs;
...
{
	/* die loudly if the caller passed the wrong argument count */
	if (nargs != expected_number_arguments)
		abort ();
	...
}

for definitions.  the program itself is highly modular and contains
more function calls than i would consider ``normal'', so i suggest
that the < 2% penalty is, if anything, on the high side.  furthermore,
since this checking was coded by hand rather than generated by the
compiler, i further suggest that the number could be reduced were the
compiler permitted to have full control over the process.
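
to make the convention concrete, here is a minimal, self-contained
sketch.  the names (plot_site, PLOT_SITE_ARGS) are made up for
illustration and are NOT from the actual uudraw source; the definition
is old-style precisely because pre-ANSI compilers do no argument
checking of their own.

	#include <stdio.h>
	#include <stdlib.h>

	#define	PLOT_SITE_ARGS	2	/* plot_site () expects two real arguments */

	/* old-style definition -- the compiler checks nothing, so the
	   callee verifies the count the caller claims to have pushed */
	int
	plot_site (nargs, x, y)
	int	nargs;
	int	x, y;
	{
		if (nargs != PLOT_SITE_ARGS)
			abort ();	/* caller and callee disagree -- die loudly */

		printf ("site at (%d, %d)\n", x, y);
		return 0;
	}

	int
	main ()
	{
		plot_site (PLOT_SITE_ARGS, 10, 20);	/* correct call */
		/* plot_site (3, 10, 20, 30);	would abort () at run time */
		return 0;
	}

the only run-time cost is the extra push at each call site and the
compare at each function entry, which is where the ~2% figure above
comes from.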

benchmarks which have been conducted show that speed improvements are
available through selective use of assembler code.  the C library for
this system has assembler-coded string functions which improve its
Dhrystone benchmark rating by on the order of 20%.  a paper written by
dennis ritchie himself described the performance penalty incurred by
writing the unix operating system in C rather than assembler.  yet, as
i recall, the conclusion was that the advantages of coding in C
outweighed the space/time advantages of assembler.

while adding runtime checking of argument counts may not be the savior
computer science has been waiting for, it is surely not without ``prior
art'' in the form of the array bounds checking, null pointer checking
and enumerated subrange checking which other languages perform under
various options.  as a tool for development and debugging work, why
reject it?  other such checks are nearly impossible to get in C, and
with this one being so damned cheap and easy it seems like a waste not
to consider it.

- john.
-- 
John F. Haugh II                 | "You see, I want a lot. Perhaps I want every
River Parishes Programming       | -thing.  The darkness that comes with every
UUCP:   ihnp4!killer!rpp386!jfh  | infinite fall and the shivering blaze of
DOMAIN: jfh at rpp386.uucp          | every step up ..." -- Rainer Maria Rilke


