Just Wondering

Rahul Dhesi dhesi at bsu-cs.bsu.edu
Sun Apr 23 03:53:02 AEST 1989


(This issue may soon begin to get on people's nerves.  I think there
should be a newsgroup dedicated to controversial issues of interest to
everybody in the computer-related fields.  Perhaps it could be called
comp.tech-issues.  Then when a flame war is about to begin, people can
redirect it to comp.tech-issues.)

The issue here is whether C ought to be case-sensitive.  Words that
would be italicized are enclosed in vertical bars in this discussion.

In article <12481 at lanl.gov> jlg at lanl.gov (Jim Giles) writes:
>Ah, but do you intend to imply that "BecAUSe peopLE arE CaSE senSITive,
>as YOU CAn noW see" has a different _MEANING_ from "Because people are
>case sensitive, as you can now see?"  The fact is that most people are
>_NOT_ case sensitive with respect to the _MEANINGS_ of the words.

Let's look at the pros and cons.

Why a language should not be case-sensitive:

o    People use |COUNT| and |count| to mean the same thing, so the compiler
     ought to do the same.

o    People will often declare SendChar and later type Sendchar, and it's
     painful to have to chase down the errors that a case-sensitive
     compiler will report (illustrated just after this list).
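
As a tiny illustration of that second point (the code and names here are
hypothetical, not taken from any real program), this is the slip in
question and what a case-sensitive compiler does with it:

     int  COUNT;                   /* declared in upper case            */
     void SendChar(char c);        /* declared with a capital C         */

     void demo(void)
     {
         COUNT = 0;                /* fine: matches the declaration     */

         /* The next two lines are the slip in question.  A
            case-sensitive compiler rejects `count' as undeclared and
            treats `Sendchar' as a different, undefined function:

                count = 0;
                Sendchar('x');
         */
     }

A case-insensitive language would quietly accept both misspellings,
which is exactly what this camp wants.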

Why languages should be case-sensitive:

o    People may use |COUNT| and |count| to mean the same thing, but
     mathematicians don't.  In mathematical expressions it's very useful
     to use case distinctions for related entities.  For example,

          Consider a graph G(V,E)

          for each vertex v in V do
            find an edge e in E such that e is incident on v
            ...

     Since programming languages are meant for use by technical people,
     and since computer programming and mathematics are so intimately
     related, it pays to let computer programmers use the same tools
     that mathematicians do.  Not only should programming languages be
     case-sensitive, but they should allow the use of subscripts,
     superscripts, and Greek letters too, to make the notation more
     powerful and more intuitive.  Right now we have to go to some
     trouble to translate compact mathematical notation into a verbose
     format just because the computer's character set is so inadequate.
     (A small C sketch of this case convention appears after this list.)

o    If I declare |COUNT| and |SendChar| but use |count| and |Sendchar|
     later, there is a good possibility that I did so in error.  After
     all, if I said |COUNT|, it was for a reason (why else would I have
     gone to the trouble of pressing my shift key?). So if I say
     |count| or |Sendchar| later the compiler ought to point out the
     discrepancy.
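
To make the first point concrete, here is a minimal C sketch of the graph
convention above (the types and names are hypothetical, invented for the
example): capitals name the sets, lower case names the elements drawn
from them.

     typedef struct { int id; } Vertex;
     typedef struct { Vertex *from, *to; } Edge;

     /* V is the vertex set (nv vertices), E the edge set (ne edges) */
     void scan(Vertex *V, int nv, Edge *E, int ne)
     {
         int i, j;
         for (i = 0; i < nv; i++) {          /* for each vertex v in V  */
             Vertex *v = &V[i];
             for (j = 0; j < ne; j++) {      /* find an edge e in E     */
                 Edge *e = &E[j];
                 if (e->from == v || e->to == v) {
                     /* ... such that e is incident on v */
                 }
             }
         }
     }

In a case-insensitive language, V and v (and E and e) would collide, and
the names would have to drift away from the mathematics.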

So what's the ideal compromise?

1.   Keep languages case-sensitive.
2.   Declare and use identifiers the same way everywhere, and don't
     use both xY and Xy to refer to the same identifier.
3.   (Here's the new idea)  Change the compiler (or lint) to accept
     an optional switch that will warn you any time you declare two
     identifiers that match except for case.  This will help you avoid
     having both x and X in the same program if you *want* to avoid
     this, but it won't prevent you from using a mathematical
     convention that deliberately uses case to denote a similarity.
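
To show what the proposed switch might check, here is a rough sketch in
C (entirely hypothetical; it is not an existing lint or compiler
option).  It compares every pair of declared identifiers with case
folded away and warns when two distinct names collide:

     #include <stdio.h>
     #include <string.h>
     #include <ctype.h>

     /* true if a and b are equal when case is ignored */
     static int ci_equal(const char *a, const char *b)
     {
         while (*a && *b &&
                tolower((unsigned char)*a) == tolower((unsigned char)*b)) {
             a++;
             b++;
         }
         return *a == '\0' && *b == '\0';
     }

     /* The heart of the switch: given the identifiers the compiler
        (or lint) has collected, warn about any pair that matches
        except for case. */
     void warn_case_clashes(char *names[], int n)
     {
         int i, j;
         for (i = 0; i < n; i++)
             for (j = i + 1; j < n; j++)
                 if (ci_equal(names[i], names[j]) &&
                     strcmp(names[i], names[j]) != 0)
                     printf("warning: `%s' and `%s' differ only in case\n",
                            names[i], names[j]);
     }

     int main(void)
     {
         char *decls[] = { "COUNT", "count", "SendChar", "Sendchar", "index" };
         warn_case_clashes(decls, 5);
         return 0;
     }

Since the switch would be optional, a program that deliberately uses v
and V in the mathematical style is simply never checked with it.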
-- 
Rahul Dhesi <dhesi at bsu-cs.bsu.edu>
UUCP:    ...!{iuvax,pur-ee}!bsu-cs!dhesi


