Tuesday, August 19, 2008

 

t (x, y, z) , tj (x, y, z) j

The way stuff in space as we know it should be defined.

t = time
x, y, z = spatial dimensions or positions
j = physical version of the imaginary number, where j^2 = -1.

So by tallying up these components of space, you get 8 dimensions.

Might have some uses if someone can figure it out. The neat thing is that if you produce a product via the j-dimensional interactions, you get a negative-valued result in the standard dimensions. Combine this with applied vector products, and it might be possible to produce negative values in applications where negative values are "naturally" impossible. If this could pan out, one might be able to do some really neat things with electricity or optics. (If you figure out how to make a negative modifier for gravitational attraction, things would get really interesting.)
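The negative-product idea can be sketched with ordinary complex arithmetic, since j is defined above to behave like the imaginary unit (j^2 = -1). This is just a toy illustration of the sign flip, using Python's built-in complex type, where 1j plays the role of j:

```python
# Two quantities existing purely in a j-dimension (illustrative values).
a = 3 * 1j
b = 2 * 1j

# Their product: j * j = -1, so the result lands back on the
# real (standard-space) axis with a negative sign.
product = a * b
print(product)  # (-6+0j): a negative value in standard space
```

The same sign flip is what the scalar-node paragraph below relies on: two purely j-space factors multiply into a negative standard-space result.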

The j-space stuff would also tie in pretty well to older work in scalar research. Scalar nodes are pretty much the product results of electromagnetic waves (more or less radio) propagating through j-space. The product should then appear in standard space as a negative value of the two j-space factors.

The other thing is that if something propagates or exists in j-space without a product-producing reaction, it's pretty much non-interactive with standard space. So you either have to put in a j-space emitter to interact with a j-space EM transmission and be on the vector necessary to intercept it, or be sitting on the node point of two separate j-space EM beams. This particular aspect could be useful for communications applications.

Crazy, huh?

*note: I'm wondering if this j-space thing is what sci-fi meant by hyperspace or subspace? Applications arising from understanding how to interact with a normally unobservable dimensional set defining space would be quite profound.



Monday, August 11, 2008

 

Trying tennis again after 10+ years...

I've had this relatively unused racket sitting around since I got it back in my Navy days... but I haven't really played much since high school. It was nice and cool enough out, so why not - I went and gave it a shot.

Things I've learned:



Sunday, August 10, 2008

 

Comparing a brain to a computer

It's funny how some people compare an organic brain to a computer when talking about processing power. But in a lot of cases, the over-simplification gets it entirely wrong.

The wrong way: saying a transistor is comparable to a neuron. No! No! No! A transistor pretty much has two states (1/0), and it relays or holds one of those two states depending on what its function is. A neuron, however, has many, many states, and relays much more complicated information.

So what would be a better way?

A neuron, in itself, is a CPU + memory + a very nice routing management system. So a neuron isn't the equivalent of a transistor, but rather an entire computer. The neuron gets its information in an analog waveform which contains more and much subtler information than 1/0. (It also gets other data via chemical means, with the chemical route being slower - it probably provides bias as to how the electrical data is handled.) Upon receiving the analog waveform, the neuron can route the information, evaluate it, store it, command a physical action, or any combination of those. Seeing how it operates in a massively parallel fashion and uses weighting for evaluation, it gives up precision and accuracy for the speed and raw power of a somewhat noisy yet noise-tolerant processing cluster.
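The weighting-for-evaluation idea can be sketched with the textbook artificial-neuron abstraction (a deliberately crude model, not a claim about real neuron biochemistry): analog-valued inputs are weighted, summed, and squashed through an activation function, so the output is graded rather than just on/off. The chemical bias mentioned above is modeled here as a simple additive bias term.

```python
import math

def neuron(inputs, weights, bias):
    """A minimal weighted-sum 'neuron': each analog input is scaled by a
    weight, the results are summed with a bias, and the total is passed
    through a sigmoid so the output is a graded value in (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Noisy, continuous inputs still produce a graded output, not just 1/0.
print(neuron([0.9, 0.1, 0.4], [1.5, -0.8, 0.3], bias=-0.2))
```

Because each unit tolerates noisy inputs and the evaluation is just a weighted sum, many such units can run in parallel, which matches the noise-tolerant, massively parallel picture described above.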

Instead of comparing a brain to a computer, it should be compared to a large network of computers. A mouse's brain might be a pretty good-sized server farm, whereas a human brain would be a nice chunk of the internet. This analogy is much closer to how the architecture of an organic brain actually works.

