This technology allows for the creation of treatments specific to stroke patients who suffer from bilingual aphasia. So far, the study has focused on bilingual patients who speak Spanish and English. One key finding is the importance of figuring out which language to use in treatment: when the right language is targeted, the other language also improves.
The neural network model allows researchers to try many different treatments in minutes and determine the best approach for each patient. Though this particular study focuses on strokes, the same kind of innovation could be used to treat language impairment caused by other conditions as well. It is part of a broader trend in medicine of using computer science to personalize treatment for patients.

Patterns of language impairment in multilingual stroke patients are very diverse.
Sometimes language impairment affects all languages the person speaks equally, while other times it affects one language more than the other.
How a stroke affects a multilingual patient depends on many variables, such as when each language was learned and how frequently each is used. This makes designing treatment for multilingual stroke survivors much more difficult. In their latest research, UT professor Risto Miikkulainen and research scientist Uli Grasemann, among others, explore how neural networks could be used to model language patterns and create personalized treatment.
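To make the modeling idea concrete, here is a minimal sketch of the kind of map-based lexicon model sometimes used in this line of work: a small self-organizing map is trained on word vectors, and "stroke damage" is simulated by disabling a fraction of its units. All names, sizes, and the lesion procedure are illustrative assumptions, not the researchers' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 8   # 8x8 map of "lexical" units
DIM = 16   # dimensionality of the word feature vectors

def train_som(words, epochs=50, lr=0.5, radius=3.0):
    """Fit a self-organizing map so nearby units respond to similar words."""
    weights = rng.random((GRID, GRID, DIM))
    coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs
        for w in words:
            # Best-matching unit: the unit whose weights are closest to w.
            dists = np.linalg.norm(weights - w, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood pulls nearby units toward w.
            grid_d = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_d / (radius * decay + 1e-9)) ** 2)
            weights += (lr * decay) * h[..., None] * (w - weights)
    return weights

def lesion_mask(fraction, rng):
    """Simulate stroke damage: a random fraction of units stops responding."""
    return rng.random((GRID, GRID)) < fraction

def recall_error(weights, words, dead=None):
    """Mean distance from each word to its best surviving unit."""
    errs = []
    for w in words:
        d = np.linalg.norm(weights - w, axis=-1)
        if dead is not None:
            d[dead] = np.inf  # lesioned units can no longer respond
        errs.append(d.min())
    return float(np.mean(errs))

words = rng.random((40, DIM))   # stand-in "word" vectors
som = train_som(words)
healthy = recall_error(som, words)
damaged = recall_error(som, words, dead=lesion_mask(0.3, rng))
print(round(healthy, 3), round(damaged, 3))
```

A model like this can be lesioned and "retrained" many times under different regimes, which is what makes trying many candidate treatments in minutes feasible.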
Uli Grasemann met Risto Miikkulainen, a UT Computer Science professor specializing in cognitive science, during an undergraduate exchange program. Upon completing his undergraduate degree, he went back to Germany, eventually deciding to return to UT Computer Science to earn his PhD.
The activation function is the part that allows the system to learn complicated relationships from the training data.
"The difference is that in an artificial neural network, we have the flexibility to choose different activation functions," said Bingham. While machine learning researchers can pick any number of these functions, the team noticed that researchers tend to stick with "a handful of activation functions that usually work pretty well." Limiting the scope of activation functions used in research can hurt innovation in the machine learning space.
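The flexibility Bingham describes can be illustrated with a small sketch: the same tiny network, evaluated with different activation functions plugged in. The functions and network shapes below are standard illustrative choices, not the ones used in the study.

```python
import numpy as np

def relu(x):
    """The most common default: zero out negative inputs."""
    return np.maximum(0.0, x)

def tanh(x):
    """A classic smooth, bounded activation."""
    return np.tanh(x)

def swish(x):
    """A newer activation, x * sigmoid(x), of the kind search
    methods can discover beyond the usual handful."""
    return x / (1.0 + np.exp(-x))

def forward(x, w1, w2, activation):
    """Two-layer network; the activation is a plug-in choice."""
    return activation(x @ w1) @ w2

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w1 = rng.standard_normal((8, 16))
w2 = rng.standard_normal((16, 1))

outputs = {f.__name__: forward(x, w1, w2, f) for f in (relu, tanh, swish)}
for name, y in outputs.items():
    print(name, y.ravel().round(2))
```

Because the activation is just a parameter of the forward pass, a search procedure can evaluate candidate functions the same way a human researcher would swap them by hand.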