LJ Archive

Letters

Readers sound off.

Gnull and Voyd

Just when I thought you could not possibly come up with a column more irritating than Marcel Gagné's Cooking with Linux (full of useful information embedded in silliness), you bring us Gnull and Voyd. Give us the tech tips; please lose the trailer trash schtick.


Patrick Wiseman

As you (and others) wish. Tech tips stay; Gnull and Voyd go. —Ed.

!Gnull and Voyd

Keep up the good work. I enjoy your /var/opinion column, and I continue to look for Reuven Lerner's fine columns. You folks have helped me become a better system administrator over the years, and I thank you for that.


David MacDougall

Forgiveness

A few months ago, I wrote to you saying we had to break up—you had become too chatty and opinion-filled for my taste. I have to admit that it turns out I was bluffing. I love you too much, and today, I renewed my subscription.

You did take a significant dip in quality, but things have gotten better the last couple of months. More important, though, is the fact that you started from such a high-quality base that even when it dipped, you were still the only Linux publication for me.


Andy Balaam

Breaking Numbers Down

I read with interest “Breaking Numbers Down” [Dave Taylor's Work the Shell, LJ, December 2006], having recently dealt with this problem myself. I believe bc is a generally overlooked, but very powerful, UNIX utility. Taylor's script nicely illustrates the use of bc, and it is very useful for most purposes, but unfortunately, it doesn't scale well when dealing with the entire range of binary prefixes (en.wikipedia.org/wiki/Binary_prefixes).

First, the numeric test -lt used to find the first non-zero {kilo|mega|giga}int fails with numbers larger than 2^63-1, the limit of the shell's 64-bit integer arithmetic (at least on bash-3.0-8.2 running under kernel 2.6.8-24.25-smp).

Second, to extend this to deal with petabytes, exabytes, zettabytes and yottabytes, a total of 16 calls to bc must be employed, with the attendant overhead of shelling out to the system for each.
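As an aside, the 64-bit limit behind the first point is easy to work around when validating input, because bc itself is not bound by the shell's integer size and can do the comparison. This is only a minimal sketch, separate from either script:

toobig=`echo "$1 > 2^63 - 1" | bc`   # bc prints 1 if true, 0 if false
if [ "$toobig" -eq 1 ] ; then
   echo "$1 is too large for the shell's own numeric tests"
fi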

An alternative, which uses bc more efficiently and avoids testing a number too large for the shell, follows:

# total characters in the argument
nc=`echo -n $1 | wc -c`

# numeric characters in the argument
nn=`echo -n $1 | tr -cd '[0-9]' | wc -c`

if [ -z "$1" -o $nc -ne $nn ] ; then
   echo "Usage:  kmgp <integer>"
   echo "        where  0 <= integer <= (2^100)-1 "
   exit 1
fi

SIprefix=" KMGTPEZY" # kilo, mega, giga, tera, peta, exa, zetta, yotta

# what is the closest power of 1024?
# ( ln(1024)=6.93147180559945309417)
order=`echo "scale=0 ; 1 + l($1)/6.93147180559945309417" | bc -l`

# find the letter associated with this power of 1024
letter=`echo "$SIprefix" | cut -c $order`

# use three decimal places when the input has more than three digits
if [ $nn -gt 3 ]
then scale=3
else scale=0
fi

# divide by the chosen power of 1024
value=`echo "scale=$scale ; $1/(1024^($order-1))" | bc -l`

echo "$value$letter"

This version contains two calls to bc and one to cut. The two calls to bc merit some discussion. The first:

# what is the closest power of 1024?
# ( ln(1024)=6.93147180559945309417)
order=`echo "scale=0 ; 1 + l($1)/6.93147180559945309417" | bc -l`

determines the closest power of 2^10 by using the fact that dividing a number's natural logarithm by the natural logarithm of N gives its base-N logarithm (here, the base-1024 logarithm). The offset by one compensates for the fact that cut is one-based, not zero-based. Note that we are loading bc's math library by using bc -l.
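For example, running an arbitrary test value of 5000000000 (about 5GB) through the same expression should print 4, which selects the G column of the SIprefix string:

echo "scale=0 ; 1 + l(5000000000)/6.93147180559945309417" | bc -l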

The second:

value=`echo "scale=$scale ; $1/(1024^($order-1))" | bc -l`

divides by 1024 raised to the correct order and scales to an appropriate number of decimal places.
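Continuing the 5000000000 example, with order=4 and scale=3 this call should yield roughly 4.656, so the script would print 4.656G:

echo "scale=3 ; 5000000000/(1024^(4-1))" | bc -l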

Averaging the time taken for both scripts on 400 arbitrary numbers, I find that the logarithm-based script is a little more than three times faster. On the other hand, if numbers larger than several hundred gigabytes are of no interest, the original script is faster.


John

Organized Repression

Your /var/opinion [January 2007] regarding the trade-offs between GPL versions 2 and 3 reminded me of a wry remark that's worth repeating: “There's no such thing as freedom—it's just a question of how the repression is organised!”


Struan Bartlett

More Than Just Developers

You claim that “the only people who are truly harmed by having the software on ROM are the tiny minority of hackers who want to run a modified version of the software on the gadget” [January 2007 /var/opinion]. This statement is false. Hackers may be the only people whose goals are directly impeded by immutable embedded software. But where does the software created by hackers eventually trickle down to? The user, who would know no more about the software than how to load it into the device and make use of whatever enhanced features it provides. The “vast majority” of users are harmed by the chilling effect on a “tiny minority” of capable developers because they do not benefit from the software that otherwise would have been developed.


Ryan Underwood

Point taken. But if the source code is available, as it must be under GPLv2, then developers can learn from it and use it, just not on a particular device. —Ed.

DocBook XML and CSS

David Lynch's November 2006 article on using DocBook XML to build simple Web sites was timely for me. For many years, I'd been writing the documentation for my open-source projects in raw HTML, but I've recently “seen the light” and now use a combination of DocBook XML and CSS. However, my deployment approach is quite different from David's. Instead of relying on frames and the browser's ability to convert XML to HTML, and suffering from the complications this brings, I simply convert the DocBook XML to HTML off-line, then upload the output to my Web site. This is a much simpler approach that still preserves the key advantages of DocBook. I recommend it to anyone writing software manuals or simple Web sites who is looking for a clean split between content and presentation. For an example of how it's done, download the HR-XSL Project (hr-xsl.sourceforge.net), and look in the doc directory.
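For anyone curious what the off-line conversion looks like, it is essentially one call to an XSLT processor with the standard DocBook XSL stylesheets; the stylesheet path below is only an example (it varies by distribution), and manual.xml is a placeholder for your own document:

xsltproc -o manual.html \
  /usr/share/xml/docbook/stylesheet/docbook-xsl/html/docbook.xsl \
  manual.xml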


Trevor Harmon

Ode to Joy

Jon Hall is making some extremely weak arguments against patents [Beachhead, January 2007]. First and foremost, the argument should not be about whether we should have software patents. The argument should be about how software patents, and patents in general, are implemented and maintained.

Although it may take me several years to come up with a completely new operating system, it may take someone else only a few weeks or months. This does not mean that this new, novel operating system should not be patented and protected so that big companies cannot steal it.

You see, to invent something, the inventor is usually intimately involved in that field or research. Sometimes the best ideas just appear out of nowhere. The idea itself may be easy or hard to implement, and it may require more or less time, but what matters in the end is its ingenuity and usefulness.

This is one thing everyone who is complaining about patents is missing. Patents are there to protect the small guy. It is not the other way around. It may look like that today, as the implementation and enforcement of the patent laws may be unfortunate, but ultimately, the idea behind a patent is to protect your invention.

Imagine a world with no copy protection, trademarks, patents or other laws protecting ingenuity and uniqueness. In a short period of time, there would be no invention, no new music or works of art. We would see only repeats and the same stuff over and over again. There would be no incentive to innovate. It simply would not be smart to invest in invention. That kind of world would be just terrible.

To some extent, this is already happening with software development. Small shareware developers who used to drive invention and put pressure on big companies now have very little reason to invent. It is hard to protect an invention, and if they don't protect it, someone bigger will come along and take their market; or, if that doesn't happen, a less usable but free version will be published. Why invent? It's better to steal someone's idea, hire some cheap labor and put the money into marketing rather than R&D.

On a side note regarding the music and visual art comments Jon made: imagine if I could copy Ode to Joy, add two notes at the end of the piece and claim it as my own. If I could market it better than the original composer (Beethoven), who would be able to say who actually wrote the piece in the first place (assuming I lived in the same time period)?

More important, if that were to happen to Beethoven, would he be able to write a single piece of music again without being afraid someone will steal it? Would he write music at all?


Nebojsa Djogo

Jon “maddog” Hall replies: I agree entirely that “big companies” (or anyone else) should not be able to steal your work, but I disagree that software patents are the way to make sure that doesn't happen.

Copyright, contract and licensing law were applied to software long before software patents generally became codified into law in 1995. People were protecting their software well before software patents were generally available.

Regarding the big and small point—the small software creator does not have the money to fight a software patent battle in court. Witness the recent contest between RIM and NTP, where three of NTP's four claimed patents were overturned at a cost of millions of dollars in legal fees. The fourth one might have been overturned as well, but RIM and NTP decided to “settle”. The only people who won in this debacle were the lawyers.

I did not advocate a world without “copy protection”, only one without software patents. I (and most of the Free Software community) appreciate copyright, trademark and trade secret laws for the protection of people's ingenuity. Free Software relies on copyrights for its licensing.

Regarding the Beethoven scenario—Beethoven would have sued you for copyright infringement and probably would have won in court. But, he would not have been able to block you from using a triplet, or some other “process” of writing music.

Unfortunately, patents are not foolproof in protecting an invention. Witness the issues around Alexander Graham Bell and Antonio Meucci (www.italianhistorical.org/MeucciStory.htm).

All Beethoven would have had to do was publish his symphony in any public, dated document (a much simpler and less costly procedure than a patent application), and it would have been protected by copyright law.

Thank you for writing your letter, but I stand my ground against software patents.

At the Forge

Reuven's column in Linux Journal is one of my favorites, and I read it and then read it again, but the one in the January 2007 issue is one of the best articles I have ever read in Linux Journal. Please pass my thanks along to Reuven for his work.


Stefano Canepa

Myths?

I appreciated Paul McKenney's article explaining recent advancements in real-time Linux [“SMP and Embedded Real Time”, January 2007], and I especially enjoyed the “priority boost” comic strip in Figure 13. However, I was a bit disappointed at his attempts to dispel certain “myths” about parallel and real-time programming.

In Myth #2, for instance, I was hoping for some insight as to why parallel programming is not “mind crushingly difficult”. Unfortunately, Dr McKenney's explanation was nothing more than a declaration that “it is really not all that hard”. Until I see a more substantial argument to dispel this so-called myth, I'm inclined to believe that parallel programming is in fact quite difficult. To paraphrase Edward A. Lee: insanity is to do the same thing over and over again and expect the results to be different; therefore, programmers of parallel systems must be insane.

Also, in Myth #5, Dr McKenney is propagating a myth instead of dispelling one. He notes that as Web sites become spread across multiple servers—load balancers, firewalls, database servers and so on—the response time of each server must “fall firmly into real-time territory”. Here he equates “real time” with “real fast”, which is such a common myth, it probably should be at position #1 in his list. In fact, real-time systems are all about predictability, not speed, and their design often sacrifices performance for predictability. And yet, Dr McKenney implies that moving Web servers to a real-time environment will magically reduce their response times. This is simply not true. The response time of a Web server goes up only in the presence of overload—too many people hitting it at once—and when this happens, a real-time Web server will fail just as easily as a non-real-time server would.

I hope that any Web admins out there who have been inspired by Dr McKenney's article will put down their copy of Ingo Molnar's real-time preemption patch and forget about real-time Linux. Simply adding another server behind their load balancer will have a much greater impact in improving overall response time—and require far less effort!


Trevor Harmon

Typo

There is a mistake in David Lynch's January 2007 article “How to Port Linux when the Hardware Turns Soft”. He says that BSP stands for Broad Support Package. This is incorrect. The correct expansion is Board Support Package.


Trevor
