not using syslogd in the first place

Dan Bernstein brnstnd at kramden.acf.nyu.edu
Thu Aug 2 07:33:55 AEST 1990


In article <1990Aug1.052525.22007 at athena.mit.edu> jik at athena.mit.edu (Jonathan I. Kamens) writes:
  [ best jokes first ]
>   Syslogd doesn't have that problem; syslogd is secure.

An Athena person claiming that one of the least secure logging schemes
in existence is secure?

On this (typical) Sun 4, /dev/log is mode 666, as it has to be to handle
errors from users other than root. But it does *no* authentication!
NONE! ZERO! ZIP! A secure system lets me, e.g., put fake badsu's in the
logs with absolutely no indication of the forgery?
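For the record, here's how trivial the forgery is. A sketch: the priority arithmetic is exactly how syslog(3) encodes facility and level, but the user names and tty are made up.

```shell
# syslog datagrams are just "<facility*8 + level>text" written to the
# mode-666 /dev/log socket; nothing checks who wrote them.
facility=4   # LOG_AUTH
level=5      # LOG_NOTICE
pri=$((facility * 8 + level))
printf '<%s>su: BADSU dan on /dev/ttyp3\n' "$pri"
# On a live system, logger -p auth.notice 'su: BADSU ...' sends exactly
# this sort of datagram, from any uid, with no authentication at all.
```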

I can flood /dev/log with messages, clogging syslog. That's secure?

If I were a cracker who had just achieved root, I would have to replace
or restart *one* program to avoid *all* future detection. That's right,
all security logging goes through *one* hook. There is *no* reliability.
There is *no* backup. That's secure?

Need I continue?

(Oh, that's right. I forgot. Athena only cares about network security.)

  [ so much for the jokes, on to the silliness ]
> In article <18210:Aug103:35:0890 at kramden.acf.nyu.edu>, brnstnd at kramden.acf.nyu.edu (Dan Bernstein) writes:
>>>4. There are some programs that run interactively that need to be able to
>>> both output errors to stderr to the users and to log messages in the system
>>> logs.  For example, su.  How would su print error messages if it couldn't
>>> use stderr because it was piped through an error logging program?
>> The reason that such programs *need* two error streams is security. su
>> should be logging directly to a file, not to a separate daemon that's
>> easy to clog with spurious messages. See A.
>   Great, so it logs directly to a file, and you have to be logged into that
> machine to read the file.

That's really silly.

Actually, you're absolutely right. su can't *both* write to a file and
write to your (network) error logger; that would defeat the structured
programming principle of, uh, ummm, singlemindedness. And once something
is stuck in a file, it's lost forever. It can't be sent over the
network. Files are sinks, not sources. Remember: Never put something in
a file if you ever want to read it again.

> How
> would that facility be provided if syslogd logged directly to a file?

That's really silly. I said that *secure* programs should log directly
to files. (You continue in this confusion below.)

> |> That's really dumb. ``stdin and stdout are controlled by the user. Hence
> |> programs must not read input or produce output.'' Obviously when there's
> |> a security issue the program should be writing directly to files. In
> |> other cases, the user is supposed to be in control. Also see A.
>   No, it's not dumb at all.  Stdin, stdout and stderr are controlled by the
> user, so programs that depend on security should not depend on them.

That's really silly. Read what I said. ``Obviously when there's a
security issue the program should be writing directly to files.'' Then
read the next sentence, which addresses the real issue: ``In other
cases, the user is supposed to be in control.''

You made essentially a blanket assertion that programs should not use
stderr. Like I said, that's really dumb. Feel free to continue the
discussion with dmr at alice.

>   Incidentally, what if "a malicious hacker type" breaks into your system and
> manages to get root, and wants to do something that'll let him continue to dig
> around without you noticing.
  [ all he has to do is restart every daemon with stderr misdirected ]

That's really silly. In a syslog-based system, *all* he has to do is
subvert syslog. Do you admit that it's easier to break one program than
every daemon on the system?

Anyway, we've discussed various aspects of this scenario a lot through
e-mail... What do you think of this: Daemon foo reopens (reconnects,
whatever) stderr as /dev/log by default. (This is done through the
standard library procedure logstderr().) On the other hand, if you say
foo -2, it'll leave stderr alone. Like it?

>   Your whole argument appears to be, "Syslogd is silly, errors should always
> be piped to a program that knows how to deal with them."

No. syslog is an insecure, poorly implemented model that will not handle
future needs.

Does the new Berkeley syslog code remember to always connect to /dev/log
on the first openlog(), hence making flags like LOG_NDELAY irrelevant? Just
wondering---otherwise the ftpd problem that started this thread will not
be solved.

>   So far, the ONLY reason I've seen that could explain why syslog/syslogd is a
> "Bad Thing" is the fact that /dev/log disappears after a chroot(),

syslog is amazingly insecure. It does not provide for adding extra flags
to the error messages that can be interpreted in standard ways. It
deludes the programmer into not worrying about what happens when stderr
blocks. It focuses a major aspect of security (namely, error logging) on
a single, easily subverted point. It does not let the user control
noncentrally where error messages are sent---so that I can't run a
straight telnetd on a separate port with a different login program,
because it stupidly syslog()s all errors through the usual file, without
even an indication that it's the nonstandard version. It is too complex
for simple tasks---it doesn't provide a single, uniform model for all
error messages. (Don't use perror, use syslog! :-) ) It does not allow
more complicated, text-based separation and analysis of messages.

Whoops, I just took more than half a second to think up that last one. I
guess I'll stop here.
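To make that last point concrete: with errors in ordinary files, analysis is ordinary text processing. A sketch with a fabricated sample log -- any text tool works:

```shell
# Fabricated sample log, for illustration only:
cat > su.log <<'EOF'
BADSU dan ttyp0
BADSU dan ttyp1
BADSU jik ttyp2
EOF
# Count bad su attempts per user:
awk '{ n[$2]++ } END { for (u in n) print u, n[u] }' su.log | sort
```

Try doing that through syslog()'s fixed facility/priority pigeonholes.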

> |> No. See B and C. If you want, you can set up named pipes or sockets
> |> /dev/log*, each feeding into a different type of error processor; as
> |> this special case of my ``proposal'' is a generalization of what syslog
> |> does, your efficiency argument is silly.
>   Excuse me, but wasn't "special devices in /dev" one of the reasons you gave
> for proposing this change in the first place?  How have we reduced complexity
> by going from one socket, /dev/log, to several sockets, /dev/log*?

Silly. If there's just one error processor (syslogd) then there's just
one /dev/log. I'm only pointing this out because it proves that sensible
stderr use includes syslog as a special case. Hence stderr is more
flexible.
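A sketch of the /dev/log* special case, with the pipes under a scratch directory (only root can create them in /dev) and cat standing in for the error processors:

```shell
dir=$(mktemp -d)
mkfifo "$dir/log.auth" "$dir/log.daemon"

# One processor per pipe; each could filter, forward, page, whatever.
cat "$dir/log.auth"   > "$dir/auth.out"   &
cat "$dir/log.daemon" > "$dir/daemon.out" &

# A daemon's stderr just points at the pipe for its class:
echo "su: BADSU dan"  > "$dir/log.auth"
echo "ftpd: login ok" > "$dir/log.daemon"
wait
cat "$dir/auth.out" "$dir/daemon.out"
```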

>   Furthermore, I don't see named pipes anywhere on my BSD system.  Granted,
> they should be there, but they aren't, and BSD4.3 isn't the only Unix without
> named pipes (then again, there are also Unices without sockets, so this is
> sort of a red herring :-).

Yes, it is a red herring.

> |> > 3. Under your scheme, every time I start up a process that I want to log
> |> >    messages, I have to pipe its stderr through this logging process of yours. 
> |> Ever heard of shell scripts? And see A.
>   So every daemon is going to have to have a shell-script front-end to it? 
> That means more files on the system that don't really need to be there, and
> slower start-up time for the daemons.

Well, how do you like my foo/foo -2 idea? (Which you would have thought
of yourself, had you looked at general point A like I said.)
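And for scale, here is the entire dreaded front-end. (A sketch: `sleep 0` stands in for the daemon, and the log file lands in the current directory; real paths would differ.)

```shell
#!/bin/sh
# The whole "shell-script front-end": one exec line, so the only cost
# is a single extra fork at start-up, not per request.
exec sleep 0 2>> ./daemon.errors
```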

---Dan



More information about the Comp.unix.wizards mailing list