Computers (in the sense of machines and automatons) were originally created in an attempt to imitate human thinking. As a researcher in computer science, I have developed an affection for understanding how humans think in the conceptual realm. If anything could be called my current research direction, it must be trying to understand the intuition behind the human thinking model.
Gaugeable AI
Simply knowing how to train neural networks does not give someone insight into the brain; it merely creates another kind of apparatus that is comparably hard to understand. I have spent years looking for a more worthwhile means of understanding the cognitive process of thinking. The hope is that if I could one day lay down the foundations of how thinking works, I could then create a transcendent version of it: one that allows quantitative justification of any decision it makes.
My first venture in this direction resulted in a conceptual model similar to the transformer. Unlike work on the transformer, however, I was more concerned with the model's computational complexity than with its practicality.
With the advent of language models, the research direction has never been clearer. Language models, like humans, can be trained to do many things. But one thing they cannot yet do is tell us how much confidence they have in their own predictions, or whether an answer will lead to the desirable outcome given all the data they have seen.
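To make that gap concrete, here is a minimal sketch of the one confidence signal a language model does expose: the softmax probability of its next token. The vocabulary and logits below are made up for illustration.

```python
import numpy as np

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    z = logits - logits.max()   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical next-token logits over a made-up three-word vocabulary.
vocab = ["yes", "no", "maybe"]
logits = np.array([2.1, 1.9, -0.5])

probs = softmax(logits)
top = int(np.argmax(probs))
print(f"prediction: {vocab[top]!r}, token probability {probs[top]:.2f}")
```

The probability printed here describes next-token likelihood under the training distribution; it is not a calibrated statement that the answer is correct or will lead to the desirable outcome. Building a model that can make that second kind of statement is the point of this direction.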
Learning to search
In this work, I venture into the learning-to-learn paradigm using a neural network. The result is a neural network that learns to search through the given input space to find a configuration that matches the provided figure. Learning to search can be considered an alternative to learning with less data, or even to one-shot learning: it needs no variation in the training examples, because the model searches for the target itself, and it can potentially extend to multi-object classification. The search loop can be pictured with the sketch below.
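Everything in this sketch is illustrative: in the actual model the scoring function is learned, whereas here a plain correlation score stands in for it, and the search is a simple greedy walk over positions.

```python
import numpy as np

def score(window, template):
    """Match score between a candidate window and the target figure.
    In the model this scorer is learned; correlation stands in here."""
    return float((window * template).sum())

def search(image, template, start, max_steps=100):
    """Greedy search over 2D positions: repeatedly move to whichever
    neighboring configuration matches the template best."""
    th, tw = template.shape
    y, x = start
    for _ in range(max_steps):
        candidates = [(y + dy, x + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if 0 <= y + dy <= image.shape[0] - th
                      and 0 <= x + dx <= image.shape[1] - tw]
        best = max(candidates, key=lambda p: score(
            image[p[0]:p[0] + th, p[1]:p[1] + tw], template))
        if best == (y, x):   # local optimum: stop searching
            break
        y, x = best
    return y, x

# Toy run: plant the target figure in an image, then search for it.
rng = np.random.default_rng(0)
template = rng.random((4, 4))
image = 0.1 * rng.random((32, 32))
image[10:14, 20:24] = template          # figure planted at (10, 20)
print(search(image, template, start=(8, 18)))   # expected: (10, 20)
```

The point of the sketch is the control flow, not the scorer: a single fixed training example suffices because the model does the searching, and repeating the search over the same image is what opens the door to multi-object classification.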
RoboCup Small Size League
I participated in the RoboCup Small Size League (SSL) around 2006–2008 as a programmer for Plasma-Z, one of the well-known SSL teams from Thailand. My role on the team involved developing the visual feedback system and the strategy-level artificial intelligence. Our final match against CMDragons at Georgia Tech, Atlanta, in 2007 was considered one of the best matches in the history of the SSL: the two teams were tied at the end of normal play, and the winner, CMDragons, was decided on penalty kicks.
Our loss at that time drove me to research how to predict a ball's drop position from its initial trajectory data. The result was an algorithm that allows the system to predict the ball's path and move the robots to intercept it in time.
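A minimal sketch of the idea, under simplifying assumptions I am adding here (constant gravity, no drag or spin; all numbers are hypothetical): fit a ballistic arc to the first few observations of a chip kick and solve for where the ball returns to the ground.

```python
import numpy as np

def predict_drop(times, heights, xy):
    """Fit z(t) = a*t^2 + b*t + c to early height observations, solve
    for the landing time, and extrapolate the horizontal motion there.
    Assumes constant gravity and ignores drag and spin."""
    a, b, c = np.polyfit(times, heights, deg=2)      # vertical arc
    t_land = max(np.roots([a, b, c]).real)           # later ground hit
    (vx, x0), (vy, y0) = (np.polyfit(times, xy[:, i], deg=1)
                          for i in (0, 1))           # linear x(t), y(t)
    return np.array([x0 + vx * t_land, y0 + vy * t_land]), t_land

# Synthetic chip kick: 4 m/s upward, (2, 1) m/s horizontally.
t = np.linspace(0.0, 0.3, 10)
z = 4.0 * t - 0.5 * 9.81 * t ** 2
xy = np.stack([2.0 * t, 1.0 * t], axis=1)
drop, t_land = predict_drop(t, z, xy)
print(f"ball lands near {drop.round(2)} at t = {t_land:.2f} s")
```

A deployed system also has to cope with noisy vision data and react in real time, but the core step is the same: commit to a motion model early and send the robots to where the ball will be, not where it is.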
Spatial conformation
Since I am not smart enough to always write a correct program for a robot, I make the robot follow my erroneous program in a meaningful way instead.
When humans try to explain a concept to others, the recipients can often understand it even when the explanation is highly abstract and missing details. I would like to apply the same idea to robotics: rather than trying to give machines perfect plans, we show them only abstract, inaccurate, and incomplete versions of the plans and let the machines deduce the relation to the real scenario by themselves. Intuitively, the problem can be regarded as localizing the current state within the abstract plan through spatial matching. I have been developing algorithms that find relations between patterns in situations and in plans. Details can be found in my thesis.
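One way to picture the matching step, as a toy sketch of my own rather than the algorithm in the thesis: slide the robot's recent trace along the abstract plan and pick the window whose shape fits best once translation is discounted.

```python
import numpy as np

def localize(plan, trace):
    """Find the window of the abstract plan whose shape best matches
    the robot's recent trace. Both are compared after centering, so a
    plan that is offset or loosely drawn can still be matched.
    (Translation only; a fuller version would handle rotation/scale.)"""
    n = len(trace)
    best_i, best_err = 0, np.inf
    for i in range(len(plan) - n + 1):
        seg = plan[i:i + n]
        err = np.sum((seg - seg.mean(0) - (trace - trace.mean(0))) ** 2)
        if err < best_err:
            best_i, best_err = i, err
    return best_i   # index into the plan where the robot "is"

# Abstract plan: an L-shaped path. The trace is a noisy, translated
# copy of the plan's corner, as if executed from a sloppy sketch.
plan = np.array([[0, 0], [1, 0], [2, 0], [3, 0],
                 [3, 1], [3, 2], [3, 3]], dtype=float)
rng = np.random.default_rng(1)
trace = plan[2:5] + [5.0, 7.0] + 0.05 * rng.standard_normal((3, 2))
print(localize(plan, trace))   # -> 2: matched at the corner
```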
Terrain Coverage
I participated in terrain coverage research with Professor Ioannis Rekleitis and Mr. Anqi Xu in the Mobile Robotics Lab at McGill. The goal of this research is to develop an optimal strategy for controlling one or more agents performing area coverage of a given bounded region. I find this research interesting because any coverage algorithm can also serve reinforcement learning as a policy for initial exploration.
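For readers unfamiliar with coverage, the classic baseline is the boustrophedon (lawnmower) sweep sketched below. This is the textbook pattern, not the strategy developed in the lab, but it shows why coverage doubles as an exploration policy: every cell is visited exactly once.

```python
def boustrophedon(width, height):
    """Lawnmower coverage of a width x height grid: sweep each row,
    alternating direction, so every cell is visited exactly once."""
    path = []
    for y in range(height):
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        path.extend((x, y) for x in xs)
    return path

path = boustrophedon(4, 3)
assert len(set(path)) == 4 * 3   # complete coverage, no repeats
print(path[:8])
# [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (2, 1), (1, 1), (0, 1)]
```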