Coconuts kill more people every year than sharks -- there are approximately 150 coconut-related deaths annually.
WTF?!? I've got to find out more about that. Are people dying from coconuts falling on their heads (a la Gilligan's Island), or do they choke on the extremely rare coconut pearls, or are there complications while using the old coconut-juice-as-plasma trick, or what? This concerns me no end.
And the fact that coconuts are more dangerous than sharks? Boy, that was a surprise. I can imagine a lot of things being more deadly than a shark, but I've got to admit that coconuts were not on my list. Not until now, that is.
I wonder if I should talk to my homeowner's association about all the palm trees in my neighborhood. Should we start licensing people who purchase and plant coconut-bearing trees? These are dangerous times...
UPDATE: apparently this was old news. The original source of this story seems to be from a May 2002 article in the Daily University Science News, although they don't quote the source of the statistic anywhere. So I guess you can just believe it...or not.
UPDATE #2: thanks to Jerry Carter for his diligent Internet research. He dug up this web page that has good information about the validity (or not) of this whole coconut thing:
"Great," I thought, "we've still got an hour until the flight leaves, so I've got plenty of time to run back to the ticket counter and grab a pass."
Now, JAX isn't a huge airport, and on Saturday morning there really weren't very many people there. For Delta, the e-ticket line had about 8 people in it, the regular ticket line had about 5 people, and the First Class line had maybe 2. I saw at least 3 people working various terminals at the counter, so I was feeling pretty lucky.
On my way to the ticketing line, I caught the eye of the woman who was providing random acts of customer service for the e-ticket passengers (there are computer terminals for e-ticket self check-in, so a lot of people get confused), so I asked her about the pass, and asked if I was going to the right line.
She was a little frazzled from all the other customers she was helping, but she said "I've got two terminals down right now and I'm short one person this morning, so it's a little hectic, but I can get a pass for you in just a minute." Oh cool, I wouldn't have to stand in line, and she could just run back and grab a pass. I asked her if she was sure, because I could just stand in line, but she assured me that she'd take care of it for me, so I stood and waited.
After she helped a couple more people with their e-tickets, she went back behind the counter and started doing something. Then she started doing something else, and talking to the ticket agents, and typing various things on various computers. Then she disappeared for a while. 15 minutes had passed since she first told me that she could get me the pass, and not only did I not have a pass, but the woman who was "helping" me was gone.
So I decided to stand in the ticketing line and wait, in case she never came back. There were [still] only 5 people in it, so I thought if she came back with my pass I'd jump out of line and be gone, and if she didn't then I'd at least get it from someone else once I got to the front of the line. Unfortunately, after 5 more minutes, the line I was in hadn't moved an inch, and it was time for my family to hustle through security and get on their plane -- after all, I had their boarding passes in my hand because I needed them to get this mysterious pass. Still no sign of the woman who was supposed to be getting my pass, so I walked quickly back to the security area and said goodbye to everyone there.
So here's my beef (for anyone who's still reading): if the customer service person at Delta wasn't going to be able to help me, she shouldn't have told me that she could. If she was too busy or didn't have the authority or the knowledge to do it, she should have told me to stand in line instead. At least that way I would have had a realistic expectation of how long it would take, and I could make my decision to stay or go based on a more concrete set of data. I realize that she was truly trying to be helpful, but because she was unable to ultimately fulfill my request (probably because she was just too busy), she ended up being unhelpful. There are plenty of roads that are paved with good intentions...
In the larger, more philosophical sense, I think it's just bad customer service to tell someone you can fix a problem that you can't fix -- even if the reason why you can't fix it is because you just don't have the time to do it. Sure, when the customer first hears that they can get their problem solved right away, they're happy and thankful and temporarily pleased with their experience. But ultimately when the problem doesn't get fixed, the customer is more angry than before because they still have a problem and their time has been wasted on top of it. You haven't fixed the problem at all, you've only pretended you were going to.
What I really want is a realistic expectation of when my issue is going to be resolved. If it's not going to be right away, then at least I want to know so I can plan accordingly. For example, if I drop my car off at the mechanic and they tell me it's going to be a month before they can fix it, then I might be angry about that answer but I can plan around it. If they tell me that it's going to be a week, but then they keep stalling and stalling and it ends up taking a month, then they've really screwed up my month because I kept expecting to have my car and it was never there. Especially in a time-sensitive situation like the one I had in the airport (or, say, when I'm on the phone with a help desk because my server's down), you shouldn't be wasting my time.
Just a thought.
The first obvious change of interest is to 'cache' find.length() in a local variable. This eliminates a method call for each substring found. Another change is to use the char array of the String to avoid temporary String construction using String.substring(). Since StringBuffer has an append(char[] str, int offset, int len) method, it is actually quite easy to do it this way. Finally, if you examine the bytecode, it is more efficient to 'stack' append calls than to code each one as an independent method call on the StringBuffer.
You probably won't be able to see the efficiency of these changes using the 3 iterations that your posted code uses. But if you increase to 30 iterations then the difference will become obvious. Because of the eliminated temporary String object construction, there is also an additional, harder-to-measure improvement in reduced garbage collection, which becomes important in an application server environment.
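Putting those three changes together, a sketch of what Jim describes might look something like this (the method name and variable names here are mine, not from his actual code):

```java
public class ReplaceSketch {
    public static String replaceSubstring(String str, String find, String replace) {
        if (str == null || find == null || find.length() == 0) return str;

        char[] chars = str.toCharArray();   // work from the char array, no substring() temporaries
        int findLen = find.length();        // 'cache' the length in a local variable
        StringBuffer result = new StringBuffer(str.length());

        int start = 0;
        int pos;
        while ((pos = str.indexOf(find, start)) >= 0) {
            // append(char[], offset, len) copies straight out of the char array,
            // and 'stacking' the appends avoids a separate call statement for each one
            result.append(chars, start, pos - start).append(replace);
            start = pos + findLen;
        }
        result.append(chars, start, chars.length - start);
        return result.toString();
    }
}
```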
Great stuff. And I'm embarrassed to admit this, but he also pointed out (very politely) that I should change the line:
if (find == null || find == "") return str;

to:

if (find == null || find.length() == 0) return str;
Doh! I swear, if I'm programming in Java 20 years from now, I'll probably still be making that mistake. I know what the difference is, but that particular bug continues to sneak into my code. I just can't help myself. Oh well, at least I'm not alone.
Anyway, I reposted the ReplaceSubstring testing routines, with the addition of Jim's method and an increase in iterations from 3 to 50. I had it set low before because the non-StringBuffer methods are so darn slow, but I added a line to stop the test for any method that's taking longer than 10 seconds so you don't fall asleep waiting for the test to finish.
As always, enjoy.
To this, I say "Bah, that's not real security. That'll never stop a .45 caliber slug!" Here's what I'm talking about:
Of course, these are primarily mobile solutions, geared towards the road warriors among us. For your home computer environment, you might want to consider a safe room or something similar.
You'd better hurry! Only 98 shopping days left...
One thing that might be interesting about the code (even if you couldn't care less about ReplaceSubstring functions) is the way I'm doing the testing itself. I used Java reflection to run the tests, so I didn't have to copy and paste the same code block every time I wanted to add a new function to test. Reflection allows you to simply pass the name of a function (okay, it's really a method, but whatever) to the testing and validation routines, and run them like that. Poof, no more redundant code.
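The reflection trick looks roughly like this (the class name, method names, and test strings below are illustrative, not the actual code from my download):

```java
import java.lang.reflect.Method;

public class ReplaceTester {
    // every replace routine shares the same static signature:
    // String xxx(String str, String find, String replace)
    public static String simpleReplace(String str, String find, String replace) {
        if (str == null || find == null || find.length() == 0) return str;
        StringBuffer sb = new StringBuffer();
        int start = 0, pos;
        while ((pos = str.indexOf(find, start)) >= 0) {
            sb.append(str.substring(start, pos)).append(replace);
            start = pos + find.length();
        }
        return sb.append(str.substring(start)).toString();
    }

    // look the method up by name and invoke it -- no copy-and-paste test blocks
    public static String runByName(String methodName, String str, String find, String replace)
            throws Exception {
        Method m = ReplaceTester.class.getMethod(
                methodName, String.class, String.class, String.class);
        return (String) m.invoke(null, str, find, replace);
    }

    public static void main(String[] args) throws Exception {
        // to test another routine, just add its name to this array
        String[] methodsToTest = { "simpleReplace" };
        for (String name : methodsToTest) {
            System.out.println(name + ": " + runByName(name, "one two", "two", "three"));
        }
    }
}
```

Adding a new routine to the test run is just a matter of adding its name to the array, which is the whole point of doing it this way.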
Also, you might notice that the first ReplaceSubstring routine doesn't end up passing all the validation tests, and you might further notice that it's the same routine I have in the example code in my Lotus Notes Java Samples database. Oops! Sorry about the poor testing the first time around. It just goes to show, you always need to test the code yourself before you put it into production. I'll see about getting that JavaScraps database updated in the near future.
The latest edition of the newsletter (I'll add a link as soon as I can find it) actually ended up including some of the code that I sent in, which was kind of fun for me to see "in print". Of course, I think I ended up sending in about 7 e-mails overall, so I probably came across as a code stalker or something :). A few thoughts about that code, just for the record:
Declare Sub GeoCrypt Lib "GeoCrypt.exe" (Byval encString As String)
The encryption/decryption will be done to the string in-place.
I hope that James Hoopes and the e-Pro gang think up a few more "challenges" like that in the future. That was a fun distraction for a geek like me.
Oh, and speaking of e-Pro Magazine, I'd like to point out that Libby made some good responses to my last blog entry in the comments section (and so did Ed and Ben L and Jen, of course, but I kind of put Libby on the spot with the entry itself...). I only mention it because I know from my stats that a lot of you only read this site using an RSS aggregator, so you might miss the comments. I thought it ended up being an interesting discussion overall.
I have to admit that I laughed a little when I read this part of the editorial:
Editor's Note: The decision to publish news about a product that could potentially be used to crack Domino passwords is never an easy one. In this case, the product is currently available and being talked about in other arenas, so we felt it would be irresponsible not to bring it to your attention so that you could take steps to protect yourself from anyone who might use it for illegal purposes.
The thing that struck me as funny was the wording "talked about in other arenas". Oh, you mean blogs? Like that thing that Libby, one of the editors of your magazine has? You're allowed to admit that you read them, you know.
I think it's funny because I've read things before that indicate some reluctance by "members of the press" to admit that blogs (or "weblogs", if you hate the name that much) even exist, or that they might have some usefulness beyond mere idle chatter. For example, there were a couple paragraphs in the back of Lotus Advisor magazine last year entitled "blogs are cute", with a subtitle of "Why are 'blogs getting so much, er, buzz? Haven't they really been around for two decades or more?". I can't remember exactly what it said, but I think there was something to the effect that blogs are "cute" and trendy, but don't take them seriously. I don't remember it being very complimentary, anyway. In any case, I wonder what the attitude is now that editor Rocky has a "cute" little blog?
I'm certainly not trying to say that all magazines or magazine employees have this attitude -- Network World is one example of a magazine that really seems to embrace blogging. I'm just saying that some people may have yet to publicly acknowledge the existence of blogs as useful "arenas" of editorial, technical, or general information, or if they do they might just see blogging as some sort of seedy underbelly of the Internet that lots of people visit but you're not really supposed to admit that you go there.
That's okay. For anyone who has that attitude, we bloggers will wait for you to come around. We're not going anywhere...
For example, the other day I ran across a link to this old thread asking about the origin of the cryptic background design in the bookmark pane of the R5 Notes Designer client. If you don't happen to have the Designer up right now, here's the background, and I also made a copy with the contrast cranked up so you can see the words a little more clearly (you'll probably want to view these in a graphics program instead of a browser if you're really going to examine them).
Anyway, a senior member of the Iris/Lotus/IBM Design Team named Bill Andreas ultimately responded to the question:
And the answer is:
The primary image is from a manuscript dating to the 13th century that a colleague of mine was working on. It's written in Medieval Latin (including some words that are early French), in three different hands. It's from a manuscript of stories (fables). There's an argument as to its place of origin (probably not France).
We used it because it's pretty, it's designerish, and if you read carefully enough, it does include the word "notes" in two places.
That's pretty cool, although I never was able to find the word "notes" anywhere. I even checked a couple of Latin translations of the word "note", but nothing seemed to match. If anyone else has a better copy of that background image (mine is just a screenshot), maybe you could find it...
So in the meantime, panic sets in as people are trying to make their computers work, despite the obvious (and global) network problem. This is especially funny first thing in the morning, as people are booting up and realize that something is amiss. Here are some of the funny things I've heard from the other cubes during such troubled times (keep in mind that no one can access the network for anything):
And then there's one that I haven't heard yet, but I'm sure I will: "I'll just clean out my e-mail inbox while I'm waiting. Hey, is e-mail down too?"
This made me think about efficiency in general, and I think I'll take this opportunity to offer my thoughts about making code more efficient. You've probably heard all this before, but it's always good to keep this sort of stuff fresh in your head.
1. Efficiency is a balance between the speed of your code, the portability of your code, and the elegance of your code. While speed is usually the goal, faster is not always better. If you increase the speed of your code by 1% by taking a 5 line function and turning it into a 50 line function, the original function was probably better. In a similar vein, taking a nice generic piece of code and making it slightly faster by adding all sorts of handling and checking for one very specific case is usually a bad idea. Be especially wary of gaining small increases in speed by hardcoding things into your routines.
2. Don't assume that you know where the efficiencies or inefficiencies of your code are. Things that look fast might be slow, and things that look slow might be fast. Use some kind of profiling method before you start rewriting all your code.
3. Creating fewer objects is almost always faster than creating more objects. While this sounds obvious in concept, it's not always obvious in your code. Creating an instance of an object or even calling a method of an object might trigger the creation of a large number of other objects in the background. For example, I've heard that the SimpleDateFormat class in Java is notorious for background object creation, so if I had slow code with a number of instances of that class I might look at replacing or avoiding that class (keeping my profiling, portability, and elegance in mind, of course).
4. Strip out all non-essential parts of your code before you start timing it. This includes calls to logging, alerting, and debug routines. In some cases, maybe it's just all the "print" statements you're making that are slowing down your code.
5. Make sure you know what "fast" means for your situation. Something that feels like it's running very slowly might actually be running at a reasonable speed. If it takes 3 seconds to loop through a process 30,000 times, then maybe that's just how long the process happens to take.
6. If your code is interfacing with another system, make sure it's not the other system that's slow. I've seen this plenty of times with database connections -- you have some code that executes a SQL query, and when it runs slowly then you start tweaking your code instead of checking the database itself to see if that might be the bottleneck.
7. If your code is slow, check to see if it's always slow. Run it several different times on a few different machines. Maybe it's only slow under certain conditions (like it happens to be running while the machine is under a heavy load, or while another process is running, or under a particular machine configuration). If that's the case, then you need to figure out what the slowdown conditions are before you start looking at the code.
8. Continually benchmark your updated code against your original code. This is good not only as a sanity check (sometimes the new code isn't faster after all), but also in case you end up mangling the code so badly in an efficiency frenzy that you just need to start over with a fresh copy of the old code.
9. Make sure you have good unit tests for your code before you try to make it efficient. You should be testing it for correctness in many different ways, including unusual cases, loops, simultaneous instances, and cases that should spawn error conditions. Oftentimes, making your code efficient for one specific case will slow down or break the code for other cases.
10. Make your code work properly before you start making it efficient. It's more important to have working code than it is to have fast code, since your code might have to go into production before you're finished making it fast. It's also pretty hard to troubleshoot changes if the code doesn't work right in the first place.
11. Use test data that's similar to your production data. Optimizing your text parsing routine against 5k test files won't necessarily be valid if your production files are all 100k or larger. You can especially run into problems if you're using some internal caching or buffering to provide efficiencies, because your memory use could go through the roof when you start trying to run your code against large data sets. Along the same lines, make sure your test data is valid before you start testing against it (of course, if you have good unit tests and you're running benchmarks against known working code, this should be immediately noticeable).
12. Know when to say when. At some point, you've just got to stop trying to rework your code and call it a day. You've got other things to do and deadlines to meet. This is yet another reason to make sure you have working code before you have efficient code -- that way you've always got something to fall back on.
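To put a little code behind points 2 and 8, here's a crude timing-harness sketch (the use of System.nanoTime, the loop counts, and the example routines are my own choices here, not a recommendation of any particular profiler):

```java
public class CrudeBenchmark {
    static String concatNaive(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s += "x";   // builds a new String object each pass
        return s;
    }

    static String concatBuffered(int n) {
        StringBuffer sb = new StringBuffer(n);   // one object, grown in place
        for (int i = 0; i < n; i++) sb.append('x');
        return sb.toString();
    }

    // time a single run; real profiling should warm up the JVM and average many runs
    static long timeNanos(Runnable r) {
        long start = System.nanoTime();
        r.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        final int n = 20000;
        long naive = timeNanos(() -> concatNaive(n));
        long buffered = timeNanos(() -> concatBuffered(n));
        System.out.println("naive:    " + naive + " ns");
        System.out.println("buffered: " + buffered + " ns");
    }
}
```

Keeping both versions around and timing them against each other on every change is the "continual benchmarking" idea from point 8 -- and it also gives you working code to fall back on, per point 12.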
Does everyone else remember that one? This isn't really news to any of the old timers out there, but it might be a fun stroll down memory lane and a little history for all the young R5+ whippersnappers out there. For a little background on the "conspiracy" aspect of things, there's a good two or three page discussion towards the end of the Steven Levy book Crypto, and I also found a short article about it from 1999.
The gist of the problem was this: at the time that Lotus Notes 4 was getting ready to ship, the U.S. government was still insisting that strong crypto couldn't be exported to other countries. 40-bit encryption was the strongest exportable crypto allowed, and most international customers were a little uneasy about that. Ray Ozzie and Charles Kaufman at Lotus/Iris came up with what they thought was a compromise, and in 1995 they filed two patents for something called "Differential Workgroup Factor Cryptography". Using this, the international versions of Notes would be allowed to have a 64-bit key, but [somehow] 24 bits of the key would be decryptable by the NSA by means of a "National Security Access Field" (NSAF) -- rendering the true encryption strength back to 40-bit encryption for the NSA hackers (this is also sometimes referred to as the "Workfactor Reduction Field", but I think NSAF is a little more appropriate).
In any case, because of all this we had to deal with three different versions of Notes IDs up through release 4.6 -- North American, International, and French (read this article from the Iris Today archives for more information).
If you're interested, I also found a short discussion by a guy who tried to reverse-engineer this process. His discovery? The hierarchical name on the NSA part of the key was:
O=MiniTruth CN=Big Brother
A little backdoor humor, embedded there by the programming staff at Lotus. Sort of a crypto Easter egg, I suppose.