I need some tips for a sequence-to-sequence project

I have implemented a seq2seq model based on the example from the Keras blog, and I changed the logic to do word-level translation. I have also tried to scale up the model by adding multiple layers, bidirectional LSTMs, and attention, but none of these gives me a good translation. I'm using the same dataset as the Keras example.
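For context, a minimal word-level encoder-decoder along the lines of the Keras blog example looks roughly like this (a simplified sketch, not my exact code; the vocabulary sizes, dimensions, and variable names are placeholders):

```python
# Simplified word-level seq2seq sketch (training model only).
# Vocabulary sizes and layer dimensions below are placeholders.
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

num_encoder_tokens = 10000   # source vocabulary size (placeholder)
num_decoder_tokens = 10000   # target vocabulary size (placeholder)
embedding_dim = 256
latent_dim = 512

# Encoder: embed source word indices and keep only the final LSTM states.
encoder_inputs = Input(shape=(None,), name="encoder_inputs")
enc_emb = Embedding(num_encoder_tokens, embedding_dim, mask_zero=True)(encoder_inputs)
_, state_h, state_c = LSTM(latent_dim, return_state=True)(enc_emb)
encoder_states = [state_h, state_c]

# Decoder: embed target word indices (teacher forcing) and initialize the
# decoder LSTM with the encoder's final states.
decoder_inputs = Input(shape=(None,), name="decoder_inputs")
dec_emb = Embedding(num_decoder_tokens, embedding_dim, mask_zero=True)(decoder_inputs)
decoder_outputs, _, _ = LSTM(
    latent_dim, return_sequences=True, return_state=True
)(dec_emb, initial_state=encoder_states)

# Project each decoder timestep onto the target vocabulary.
decoder_outputs = Dense(num_decoder_tokens, activation="softmax")(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")
model.summary()
```

The multilayer, bidirectional, and attention variants are built on top of this same structure.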

I haven't found any tips or advice on building a full-scale translation model in the articles and papers I have read. I don't know if the problem is the model's architecture, the dataset, or my own knowledge.

If someone can help with a tip, a paper, or their own experience, I would appreciate it.

(Sorry for the poor English, I'm from Brazil.)
