Saturday, August 14, 2010

Strive To Be Unaverage

What's so good about normal anyway?

That is all.

Tuesday, June 15, 2010

The Meaning of Strife

Had a good thought last week that I wanted to share: Strife is what you get when you live your life with no concern at all for others. Happiness is what you get when you live your life with no concern for yourself.

Short, but sweet.

Sunday, May 23, 2010

Echo Dimensionality

The math behind string theory defines a world with eleven dimensions. The dimensions we are familiar with are height, width, depth and time. The theory posits seven additional dimensions and describes these higher dimensions as micro dimensions wrapped around the more conventional ones. There's more to it than that, of course, but in a pinch this explanation will do.

It is these micro dimensions I have been thinking about, letting my imagination run around a bit, and although I don't know all the mathematics behind them, it occurs to me that each of these micro dimensions behaves almost like a vibration in one of the more conventional dimensions. An example of what I'm thinking can be pictured by dropping a pebble straight down into a pool of water. As the pebble falls, it is moving along the Y-axis of our old friend, the Cartesian coordinate system. When the pebble hits the water, rings spread through the pool along the Z-axis and X-axis as a function of the movement along the Y-axis. There does not need to be any lateral movement to produce a lateral effect. This is similar to the effect each dimension has on reality, though there is an implied directionality being alluded to here.

The extra dimensions aren't so much micro dimensions wrapped around the conventional dimensions, smaller than we can ever perceive. Instead, they are a function of the conventional dimensions: they effect a wobble in the conventional dimensions that resembles extra dimensions and fits the math, but they are really just functional echoes created by the addition of a time factor. They are a reflection of the conventional dimensions that fits the math, but they don't actually exist any more than the Tooth Fairy does.

Thursday, April 22, 2010

The Proof

Slurpee = Good?
  1. Slurpee = Partially Frozen Sugary Beverage
  2. Partially Frozen Sugary Beverage = Thirst Quenching Enjoyment
  3. Thirst Quenching Enjoyment = Taste Bud Happiness
  4. Taste Bud Happiness = Neuron Excitement
  5. Neuron Excitement = Endorphin Release
  6. Endorphin Release = Good
Slurpee = Good
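For fellow geeks, the proof even machine-checks. Here's a tiny Python sketch that walks the chain above and confirms every link connects to the last one (the strings are just labels, obviously, not rigorous math):

```python
# Equality is transitive, so if every step in the chain holds,
# the conclusion follows by walking the links end to end.
chain = [
    ("Slurpee", "Partially Frozen Sugary Beverage"),
    ("Partially Frozen Sugary Beverage", "Thirst Quenching Enjoyment"),
    ("Thirst Quenching Enjoyment", "Taste Bud Happiness"),
    ("Taste Bud Happiness", "Neuron Excitement"),
    ("Neuron Excitement", "Endorphin Release"),
    ("Endorphin Release", "Good"),
]

def conclude(links):
    """Check each link picks up where the last one left off, then
    return the first and last terms: the theorem the chain proves."""
    start, current = links[0]
    for left, right in links[1:]:
        assert left == current, "broken link in the proof"
        current = right
    return (start, current)

print(conclude(chain))  # ('Slurpee', 'Good')
```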

Monday, March 29, 2010

It Happened Again!

The last post that I wrote was about calculus. Go figure that the next post I would write would also be about advanced math, but hey, I ran into it again and I was amazed. I guess lightning does strike twice; this time, however, it wasn't calculus, it was trigonometry.

I was selling a Leslie organ speaker on Craigslist.org and was talking about it with one of my coworkers who didn't know what was special about them. I was explaining how some of the speakers inside the cabinet are connected to a motor that causes them to rotate, and that this produces a simulated vibrato in the higher and lower registers. Of course they didn't know what vibrato was, so I started explaining that too. I guess the most commonly known application of vibrato is the whammy bar (tremolo arm) on an electric guitar, but other instruments have their own techniques for creating it. On my euphonium I add a little vibrato to some notes just by shaking the horn a bit. Other instruments produce vibrato in similar ways, by hand movements or by blocking some of the sound, like the cups you often see at the end of trumpets that look like plungers. Some singers do this too, but entirely with their voice. You've heard it before; you may just not have known what it was you were hearing.

Vibrato sounds like a wobble in the sound, or like the sound is bent just a little bit. It can be pleasing to the ears, but there is hard mathematics behind what's going on. I promise I won't go into it too deeply, because I don't know acoustics all that well, but I'll give the basics.

Sound is a wave; we all know that. Vibrato changes that wave through a special application of the phenomenon commonly known as the Doppler effect. If you don't know what the Doppler effect is, imagine a train approaching you: the sound you hear is shifted up in frequency while the train moves toward you, and once it passes, the frequency drops as it moves away. So the sound gets higher and lower (not louder and softer) as it moves. The motion in the train example is steady, though, because you're only dealing with one vector (oh no, physics!), while vibrato deals with more than one vector.
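If you want to put actual numbers on the train, here's a small Python sketch of the standard moving-source Doppler formula (the 440 Hz horn and the 30 m/s speed are made-up example values):

```python
def doppler_shift(f_source, v_source, v_sound=343.0):
    """Frequency heard by a stationary listener when the source moves
    straight toward (+v) or away from (-v) them, speeds in m/s."""
    return f_source * v_sound / (v_sound - v_source)

# A 440 Hz train horn moving at 30 m/s (made-up numbers):
print(round(doppler_shift(440.0, 30.0), 1))   # approaching: ~482.2 Hz, heard higher
print(round(doppler_shift(440.0, -30.0), 1))  # receding: ~404.6 Hz, heard lower
```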

Whether it's shaking an instrument, stretching strings, or a speaker rotating in a circle, the effect is very similar. The easiest case to work out is shaking, where an instrument goes back and forth over a small distance. What's happening here is just like the train coming at you and going away from you, but over and over again. And if it's happening over and over again with regularity, what we are creating is also a waveform. So vibrato is really one wave modulated by another wave; hence, trigonometry.

I counted, and I shake my euphonium about four times a second when I'm producing vibrato on it, so the wave I'm modulating the sound wave with has a frequency of about 4 hertz. That 4 hertz sets how fast the wobble cycles; the depth of the pitch change comes from how quickly the horn is actually moving at the extremes of each shake. I could get into the circle of fifths and the chromatic scale right here and all the frequencies of the notes I play, but instead I will simply say that if my math is right, my 4 hertz movement can change the frequency of the notes I'm playing by up to 4-5%, which is significant. BTW, because the shift is a percentage, every note bends by the same fraction, so lower notes move fewer hertz than higher notes even though they all wobble at the same 4 hertz rate. Neat.
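Here's a rough Python sketch of that wave-modulating-a-wave idea: a base note whose pitch is pushed up and down by a slow 4 hertz sine wave. The 4% depth is an assumed value, just to illustrate:

```python
import math

def vibrato_frequency(t, f_note, rate_hz=4.0, depth=0.04):
    """Instantaneous pitch of a note under vibrato: the note's base
    frequency modulated by a slow sine wave (rate in Hz, depth as a
    fraction of the base frequency; both are assumed values here)."""
    return f_note * (1.0 + depth * math.sin(2 * math.pi * rate_hz * t))

# One second of a 220 Hz note with 4 Hz vibrato at 4% depth:
samples = [vibrato_frequency(i / 1000.0, 220.0) for i in range(1000)]
print(round(min(samples), 1), round(max(samples), 1))  # wobbles between ~211.2 and ~228.8 Hz
```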

Now, think about the Leslie speaker. On the bottom of the cabinet there is a bass speaker that rotates in a circle. The speaker assembly is a cylinder (great, now geometry?) about two feet in diameter and eight inches tall, and the sound only points out one side of it. As this speaker travels around its axis, its motion mimics the unit circle from trigonometry. I won't go into it, but you can look it up if you're interested. I don't know what the rotation frequency of the speaker is, but I'm sure it can't be much more than what I can do myself on my horn, yet the rotation produces the exact same effect: vibrato.
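If you're curious how the unit circle sneaks in, here's a small Python sketch: a point riding a circle, and the component of its velocity pointing at the listener, which is the part the Doppler effect hears (the radius and rotation speed are made-up values):

```python
import math

def radial_velocity(t, radius_m=0.3, rot_hz=1.0):
    """Velocity of the rotating speaker toward a listener far away on
    the +x axis. The position rides the unit circle scaled by the
    radius, (r*cos(wt), r*sin(wt)), so the x-velocity is -r*w*sin(wt)."""
    omega = 2 * math.pi * rot_hz  # angular speed in radians per second
    return -radius_m * omega * math.sin(omega * t)

# Over one rotation the speaker alternates between moving toward and
# away from the listener, so the pitch sweeps up and down once per turn:
v = [radial_velocity(i / 100.0) for i in range(100)]
print(round(max(v), 2), round(min(v), 2))  # ~1.88 and ~-1.88 m/s
```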

Well, I'm done, but I'm beginning to suspect that higher mathematics might be lurking around everywhere, just waiting for us to grow complacent so it can strike totally unsuspected. Oh well, I'm sure it will happen again, and when it does, I'll probably write about it again.

Wednesday, December 9, 2009

I Ran Into Calculus One Morning

I've often heard the statement "You'll never use more than 10th grade math in your real life," or something like it. For the most part it's true, but there are exceptions to just about every rule. This is my story of one of those exceptions. If you're not a geek, you probably want to stop reading now.

In my last job, another guy and I had the task of harvesting security data from approximately 500 servers each month, weeding out some initial garbage data, and creating one report per server containing the relevant information. As this was not our only responsibility, we had to approach the problem with a programmatic solution; in other words, we had to write a program to harvest the data for us. We did, and the first time it ran, it took just under 7 days to finish, running non-stop. Since this was a month-end problem, starting it on the first of the month and waiting a week before getting started on the analysis portion of the task was unacceptable, so we had to find another way.

We were talking in one of our managers' offices, just having a brainstorming session, when it occurred to me that we could emulate clustering if we did it the right way. Clustering, simply put, is using multiple computers at the same time to do work faster. Before, we had been doing the harvesting serially, but now we had the power of parallel processing. There were some unique challenges, as we were only allowed to use DOS batch scripting on these servers, but we got it to work. We also noticed that each session used so little processor time and memory that we could run multiple sessions of the program on one server, so we didn't have tons of resources tied up while it was running, either. Incidentally, the way we solved the clustering problem also gave us collision protection as a freebie, which meant no two sessions would ever do, or get hung up trying to do, the same work. We started 10 sessions of the program the first time we tried it, and voilà, the task was done overnight.
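We were stuck with DOS batch at the time, but the idea is easy to sketch in Python: a shared queue of servers that many sessions pull from, which gives you the collision protection for free because each server can only be handed out once (the harvest function here is just a placeholder):

```python
import queue
import threading

def harvest(server):
    """Placeholder for the real per-server harvesting work."""
    return f"report for {server}"

def run_cluster(servers, n_sessions=10):
    """One shared queue of servers, many worker sessions. Each server is
    claimed exactly once (the queue is the collision protection), and a
    session exits as soon as the queue runs dry."""
    work = queue.Queue()
    for s in servers:
        work.put(s)
    reports, lock = [], threading.Lock()

    def session():
        while True:
            try:
                # Atomic claim: no two sessions can get the same server.
                server = work.get_nowait()
            except queue.Empty:
                return
            report = harvest(server)
            with lock:
                reports.append(report)

    threads = [threading.Thread(target=session) for _ in range(n_sessions)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return reports

print(len(run_cluster([f"server{i:03d}" for i in range(500)])))  # 500
```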

This was a good solution to our problem, but we wanted more information about what was going on, so the next month we added logging that recorded not only when each server began and ended, but also when the whole process began and ended. We bumped up to 20 sessions on 2 servers, and our second run was done in about 4 hours. This meant we could actually get started on our other work the same day we ran the harvesting program if we wanted to, but since we wanted to see just how fast we could make it, the next month we pushed it up a bit more.

On our third attempt, we ran 40 sessions on two servers, and the task completed in about 2 1/2 hours. At first glance this might seem about right, since we doubled the sessions and it finished in about half the time, but we wondered exactly what was going on. The last time we doubled the sessions, from 10 to 20, we gained over 60% in efficiency; this time we gained only 37.5%. And the 60% came from 10 added sessions while the 37.5% came from 20, so if you divide the numbers out, each additional session in the second run bought a 6% efficiency jump, but in the third run each bought only 1.875%. How did we get both 6% and 1.875% by doing the same thing? For a bit we were stymied.

We decided to test what was going on in our fourth run. So that we'd have more data to analyze, we ramped up to 80 sessions on 8 servers, and the task completed in 2 hours. That's only half an hour faster, a 20% gain spread across 40 added sessions, or 0.5% efficiency per session. We had gone from 6% to 1.875% and then to 0.5%. As Joe (my scripting partner) and I were talking this over coffee, I had an epiphany and realized I had seen something like this in a Calculus class I'd taken way back in 1994. I looked it up, and sure enough, we'd managed to bump into a Calculus principle without realizing it.
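For anyone who wants to check our arithmetic, here's the whole progression in a few lines of Python. The 10-hour figure for the first clustered run is my estimate; "overnight" plus the 60% gain on the next run implies something close to it:

```python
# (sessions, hours) for the four clustered runs; the 10-hour value for
# the overnight first run is an assumed, illustrative figure.
runs = [(10, 10.0), (20, 4.0), (40, 2.5), (80, 2.0)]

gains, per_session = [], []
for (s0, t0), (s1, t1) in zip(runs, runs[1:]):
    gains.append((t0 - t1) / t0 * 100)         # percent faster than the previous run
    per_session.append(gains[-1] / (s1 - s0))  # spread across the sessions added

print(gains)        # [60.0, 37.5, 20.0]
print(per_session)  # [6.0, 1.875, 0.5]
```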

The particular bit of Calculus that was hindering us was something called "the limit of a function," which takes a function containing a variable and looks at what happens to the function as the variable's value changes. What we had was a finite amount of work being divided among a variable amount of help. It doesn't matter whether I count the work as 1 task or as 500 servers, the principle still applies, so our function could be written as either 1/x or 500/x, and over the four months x moved from 1 eventually up to 80 (the notation x = 1 --> 80 is similar to how it looks in Calculus). Let's look at the problem with small numbers first and see how this was working.

If we go from 1 to 2 sessions, we should finish in 50% of the time; subtract that from 100%, which is where we were at 1 session, and we have a 50% time savings. When we go from 2 to 3, we should finish in 33.3% of the time, which, subtracted from 50%, gives a 16.7% time savings. From 3 to 4 we get an 8.3% savings, and from 4 to 5 we are down to 5%. I've created a chart to show just how fast this number drops off from there. By the time you're at 10 sessions, that last session only saved about 1%, and at 20 it only saved a quarter of a percent. As for real-life variations: the data on each server was a different size, and what we were collecting was a different size, so those variables would explain any discrepancies between our actual times and this function.
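The chart can be regenerated in a few lines of Python; the savings from the n-th session is just the gap between 1/(n-1) and 1/n of the serial time:

```python
def savings_from_nth_session(n):
    """Percent of the total serial runtime saved by going from n-1 to n
    sessions: the gap between 1/(n-1) and 1/n of the whole job."""
    return (1.0 / (n - 1) - 1.0 / n) * 100

# The drop-off: 50%, 16.67%, 8.33%, 5%, ~1.1%, ~0.26%
for n in [2, 3, 4, 5, 10, 20]:
    print(f"session {n:2d} saves {savings_from_nth_session(n):5.2f}% of the total time")
```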

Another thing we noticed was that about 1% of our servers were taking up to 2 hours each to complete. This was because they were hard to reach, being in South America and on slower lines, but since they had to be done for the whole task to be complete, we always had to wait for them. Since we sorted alphabetically, and these particular servers' names started with either a B or a C, we only had to get through the beginning of the C's for them to be started in our initial launch, which took about 25 sessions. If we started fewer sessions than that, we had to wait for some sessions to move on to their next servers before these would get picked up, but even then the difference was negligible.

These five or so servers would take a couple of hours no matter what we did, so after this we focused on finishing the rest of the work efficiently. We had a queue of servers to get to, and as long as there was a server in the queue, each session would stay alive; once it was empty, each session would terminate after completing the server it was on. We could make the first sessions terminate earlier by adding sessions, but we couldn't actually finish the whole task any faster by doing it. Since it wasn't really to our benefit to finish part of the task faster rather than all of it, what we did was bring those two numbers together by reducing sessions, so that the queue was emptying at about the same time as the slow servers were finishing, balancing optimal efficiency against use of resources. What we determined was that somewhere around 30 sessions was our sweet spot: the task was spread evenly across as many sessions as we could get, and all the sessions were terminating as near to the end of the task as possible.
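You can model the sweet spot in a couple of lines of Python. Assuming (hypothetically) about 60 session-hours of bulk work plus the hard 2-hour floor from the slow servers, the smallest session count that reaches the floor comes out right around 30:

```python
def finish_time_hours(n_sessions, serial_hours=60.0, floor_hours=2.0):
    """Toy model of the run: the bulk of the work splits evenly across
    sessions, but the handful of slow servers puts a hard floor on the
    whole task. The 60 session-hours figure is made up to illustrate."""
    return max(serial_hours / n_sessions, floor_hours)

# The sweet spot is the smallest session count that reaches the floor;
# past that point, extra sessions can't finish the whole task sooner.
sweet_spot = next(n for n in range(1, 200) if finish_time_hours(n) <= 2.0)
print(sweet_spot, finish_time_hours(sweet_spot), finish_time_hours(100))  # 30 2.0 2.0
```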

So what we determined was that adding more than 30 sessions to our cluster didn't make a significant overall difference. 100 sessions couldn't really do all the work any faster than 30 could, even though that seems illogical; the operative word is all. What was the point of squeaking out a fraction of a percentage point of efficiency by adding more sessions? What was humorous about this situation was trying to explain the concept to our managers, who had never taken an advanced math class in their lives and believed with utmost faith that they would never need more than 10th grade math in real life. Fortunately, some of the managers just trusted us, and the speed jump from 7 days to a couple of hours was enough for them, even if they couldn't figure out just why we could never seem to finish the task in less than 2 hours.

Saturday, October 24, 2009

Just a thought

When an actress says the name Johnny in a movie, why does it invariably turn out creepy?