Something that's brewed in my mind for a while is the idea of widening the band within neural networks. I don't have any reason to believe this would be useful, but it might be a nifty thing to play with.
A neural net (or Artificial Neural Network) is a model of the cells in the brain that perform computations. Each cell (or "neuron") has a threshold, represented in biological neurons by some electrochemical process that I am unaware of, and in a computer by a number such as 0.5. Each neuron has an output called an "axon" that connects to the inputs of other neurons (their "dendrites"). Each dendrite has a weight, represented by a number. The output on an axon, which becomes the input on a dendrite, is represented by 1 or 0 for on or off ("on" being an electrical pulse). The neuron decides whether to turn its own output on or off by multiplying each input's value (1 or 0) by its weight, adding up the results, and checking whether the sum is over the threshold.
A neuron with a threshold of 0.5, and two dendrites each with a weight of 0.3, would turn itself on only if both dendrites were on: a single active input contributes 0.3, which is under the threshold, while both together contribute 0.6. In computer terms it performs the function "and" (AND). (A single threshold neuron can't compute "exclusive or"; XOR isn't linearly separable, so it takes a small network of neurons rather than one.)
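That decision rule is simple enough to sketch in a few lines of Java. This is just an illustration of the weighted-sum-and-threshold idea, not code from my actual implementation:

```java
public class AndNeuron {
    // Classic threshold neuron: weighted sum of the inputs, compared
    // against the threshold. Returns 1 (axon on) or 0 (axon off).
    static int fire(double threshold, double[] weights, int[] inputs) {
        double sum = 0;
        for (int i = 0; i < weights.length; i++)
            sum += weights[i] * inputs[i];
        return sum > threshold ? 1 : 0;
    }

    public static void main(String[] args) {
        double[] w = {0.3, 0.3};
        System.out.println(fire(0.5, w, new int[]{1, 0})); // 0: one input only sums to 0.3
        System.out.println(fire(0.5, w, new int[]{1, 1})); // 1: both inputs sum to 0.6
    }
}
```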
A couple of ways the band could be widened include passing values other than just 1 and 0 (other numbers, strings, XML documents), and allowing arbitrary functions for crunching the dendrite inputs and determining the axon output. This would make it hard for the network to be automatically trainable, but it would be backwards compatible enough to implement the same kinds of functions as existing neural nets, if you already know the weights and thresholds.
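A minimal sketch of the idea in Java, since that's where I started. The `WideNeuron` name and shape here are hypothetical, just to show how arbitrary input types and an arbitrary combining function would subsume the classic neuron:

```java
import java.util.List;
import java.util.function.Function;

// A "wide-band" neuron: inputs and outputs are arbitrary values, and the
// combining step is an arbitrary function instead of a fixed
// weighted-sum-and-threshold rule.
public class WideNeuron<I, O> {
    private final Function<List<I>, O> body;

    public WideNeuron(Function<List<I>, O> body) { this.body = body; }

    // Apply the neuron's function to whatever arrived on its dendrites.
    public O fire(List<I> dendrites) { return body.apply(dendrites); }

    public static void main(String[] args) {
        // A neuron that concatenates strings instead of summing numbers.
        WideNeuron<String, String> concat =
            new WideNeuron<>(xs -> String.join("", xs));
        System.out.println(concat.fire(List.of("ab", "cd"))); // abcd

        // Backwards compatibility: the classic threshold neuron
        // (weights 0.3 and 0.3, threshold 0.5) as a special case.
        WideNeuron<Integer, Integer> and = new WideNeuron<>(
            xs -> 0.3 * xs.get(0) + 0.3 * xs.get(1) > 0.5 ? 1 : 0);
        System.out.println(and.fire(List.of(1, 1))); // 1
        System.out.println(and.fire(List.of(1, 0))); // 0
    }
}
```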
I first tried to implement this in Java, because I want to be able to build these nets graphically, and Java is what I know well enough to do that. A JVM scripting language would be better so you could define your functions at runtime, but I don't know one. Anyway, I thought I was close to having the underlying implementation done, but I ran into a weird error that I blame on NetBeans.
It was surprisingly easy to rewrite what I had in Lisp. Having a toplevel helps. To test it, I made a chain of neurons that would propagate a number, increasing it by powers of two. I found it was necessary to pull the inputs for all neurons and update the outputs as two separate steps; otherwise, they would all just stay on or off.
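The two-step update can be sketched like this (in Java rather than the Lisp I actually used, and with illustrative names): snapshot the current outputs first, then compute every new output from that snapshot, so no neuron sees a value its neighbor just wrote during the same tick:

```java
public class Chain {
    // Run a chain of `length` doubling neurons for `steps` synchronous ticks.
    // Each neuron's output is twice the previous neuron's output.
    static double[] run(double seed, int length, int steps) {
        double[] outputs = new double[length];
        outputs[0] = seed; // the first neuron just holds the seed value
        for (int s = 0; s < steps; s++) {
            double[] inputs = outputs.clone();       // step 1: pull all inputs
            for (int i = 1; i < length; i++)         // step 2: update all outputs
                outputs[i] = 2.0 * inputs[i - 1];
        }
        return outputs;
    }

    public static void main(String[] args) {
        // After 3 ticks, the seed has rippled down the chain: 1, 2, 4, 8.
        System.out.println(java.util.Arrays.toString(run(1.0, 4, 3)));
    }
}
```

Cloning the output array before the update loop is what makes the tick synchronous; updating in place would let later neurons read values from the current tick instead of the previous one.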
I think there's a Common Lisp implementation for the JVM, so maybe I'll look into that.