English Abstract
We investigate to what extent neural networks can learn to parse a natural language. In particular, we present a recurrent neural network architecture and the learning experiments used to train it. We train the recurrent neural network with an extended error backpropagation method, giving it as input a sequence of lexical items whose categories may be ambiguous (more than one category is possible). Instead of encoding the parse tree within the neural network, the correct phrasal links, as well as the lexical categories, are clamped at the output layer during the training phase while the lexical categories are fed into the network. From the phrasal links, a complete parse tree can be easily reconstructed. Our results indicate that, with a few training examples, the neural network can parse not only syntactically ambiguous sentences but also some ill-formed sentences that it has never seen before.