Natural Language Processing with Neurons
Chris Huyck

It's clear that all of the natural language processing that people do is done with neurons. We take several years to learn to recognise and produce language, and then spend the rest of our lives using it. How that is done remains quite a mystery. While there is some psycholinguistic and neuro-psycholinguistic evidence about how we process language, neural models of language processing are needed to further our understanding. Moreover, those neural models can be used for practical language engineering tasks.

This talk will cover a range of work we have been doing with systems of simulated neurons, including:

- a neuro-cognitive model of parsing, which applies grammar rules implemented in neurons and uses short-term potentiation for binding, with the semantics of the sentence as output;
- machine learning tasks, using Hebbian learning rules to solve standard categorisation tasks, a neurally implemented reinforcement learning system as a cognitive model, and a cognitive model of categorisation;
- agents in virtual environments, which have vision, planning, and cognitive mapping in addition to language;
- the Human Brain Project, for which agents with language processing have been implemented in standard neural simulators and in neuromorphic hardware; it is hoped that this mechanism will incorporate work by other researchers, as well as our own extensions, to build increasingly sophisticated and biologically accurate agents;
- the Telluride Neuromorphic Cognition Engineering Workshop, where we quickly developed parsers integrated with memory systems; the group also worked on bag-of-words techniques.

Clearly, we are a long way from systems that pass the Turing test, but I will argue that continuing along this path is the best way to get there.
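To give a flavour of the Hebbian learning mentioned above, the following is a minimal sketch of a basic Hebbian weight update applied to a toy categorisation task. The patterns, labels, and learning rate here are illustrative assumptions, not the talk's actual models, which use simulated spiking neurons and more elaborate variants of the rule.

```python
import numpy as np

def hebbian_update(weights, pre, post, rate=0.1):
    """Basic Hebbian rule: strengthen the connection between a
    pre-synaptic and a post-synaptic unit when both are active.
    (A sketch only; the systems described in the talk use spiking
    neurons and compensatory variants of this rule.)"""
    return weights + rate * np.outer(post, pre)

# Hypothetical toy data: two binary input patterns and their
# one-hot category labels.
patterns = np.array([[1, 1, 0, 0],
                     [0, 0, 1, 1]])
labels = np.array([[1, 0],
                   [0, 1]])

w = np.zeros((2, 4))
for x, y in zip(patterns, labels):
    w = hebbian_update(w, x, y)

# After training, each pattern most strongly drives its own category unit.
scores = patterns @ w.T
print(np.argmax(scores, axis=1))  # -> [0 1]
```

The point of the sketch is that correlation-driven updates alone are enough to carve out category detectors, which is the core idea the neural categorisation work builds on.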