My name is Derek Monner, and my work lies at the intersection of bits, neurons, networks, and minds. Most often, that means I'm working with artificial neural networks in pursuit of solutions to interesting learning problems. These problems have ranged from language comprehension and production in my early days to my more recent work on "big data" tasks that focus on accurate inference over large, interconnected networks of people, papers, particles, or just about anything else you can think of.
One major research goal of mine is to develop a suite of neural network tools that functions as a Swiss Army knife for "big data" problems involving not only entities, but relationships among those entities. I've already developed a tool called RNCC that performs collective inference over a network of entities connected by relationships. I'm currently working on extending this technique to allow learning over even more complex data structures, such as entire relational databases.
Another major research goal is to advance our understanding of how distributed representations in neural systems can support symbol composition and manipulation while maintaining their well-known aptitude for solving problems with soft constraints. After all, our brain's architecture is itself highly distributed, yet we have no trouble with symbolic computation. I am fascinated by so-called Vector Symbolic Architectures (pdf), and am currently investigating their relationship to more mainstream neural network learning methods.
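To make the idea of symbol composition in distributed representations concrete, here is a minimal sketch of one classic Vector Symbolic Architecture operation: binding and unbinding role-filler pairs with circular convolution, in the style of Plate's Holographic Reduced Representations. The vector dimensionality and NumPy implementation are illustrative assumptions on my part, not details of any specific system discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024  # dimensionality (assumed); higher dimensions give cleaner recovery

def random_vector(d):
    # i.i.d. Gaussian components with variance 1/d, per the HRR convention,
    # so vectors have expected unit length
    return rng.normal(0.0, 1.0 / np.sqrt(d), d)

def bind(x, y):
    # circular convolution, computed efficiently via the FFT
    return np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(y), n=d)

def unbind(pair, y):
    # convolve with the involution of y, an approximate inverse under binding
    y_inv = np.concatenate(([y[0]], y[:0:-1]))
    return bind(pair, y_inv)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

role, filler = random_vector(d), random_vector(d)
pair = bind(role, filler)       # a single vector encoding the whole pair
recovered = unbind(pair, role)  # a noisy approximation of filler

print(cosine(recovered, filler))  # well above chance: filler is recoverable
print(cosine(pair, filler))       # near zero: the bound pair resembles neither input
```

The point of the sketch is the tension the paragraph above describes: the bound vector is a fully distributed pattern, dissimilar to both of its constituents, yet the symbolic structure inside it can still be decomposed and queried.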
Just about everything I can be bothered to write down about myself is available on the Research and Teaching pages, and in my CV (pdf). You can find links to other projects and free code here. If you need to reach me, you're looking for this page.