Computers (in the sense of machines and automatons) were originally created in an attempt to imitate human thinking processes. As a researcher in computer science, I have developed an affection for understanding how humans think in the conceptual realm. If there is one thing that could be called my current research direction, it is trying to understand the intuition behind the human thinking model.
Thinking model
My experience has taught me to seek the profound reasoning behind everything I consider worthwhile. And what would be more worth understanding than the ability to understand itself? It is understanding intelligence, the meta-understanding of understanding, that I would call my life's purpose.
My first venture in this direction resulted in a conceptual model similar to the transformer. Unlike the transformer, however, my focus was on the model's computational complexity rather than its practicality.
Throughout my gap years, I worked on enhancing my cognitive model to account for various cognitive phenomena. Progress was slow, primarily due to the lack of tools to simulate the fundamental role of language. But the arrival of powerful language models has fundamentally changed the landscape. Now, equipped with these resources, I'm poised to take my research to the next level.
I am currently developing a cognitive model that integrates large language models with hierarchical reinforcement learning. You can track the progress of this research on my GitHub repository. A research paper detailing the model and its findings will follow.
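To make the idea concrete, here is a minimal, hypothetical sketch of how such an integration might look: a high-level planner (standing in for a language model prompted with a textual description of the state) proposes subgoals, and a low-level policy executes primitive actions to reach them. Everything here, including the tiny grid task and the function names, is an illustrative assumption rather than the actual model.

```python
GOAL = (4, 4)  # final task goal on a small 2-D grid

def describe(state):
    """Render the state as text, as one might do when prompting a language model."""
    return f"agent at {state}, goal at {GOAL}"

def high_level_planner(state):
    """Stand-in for a language-model call: given a textual description of the state,
    return a subgoal. Here it simply proposes a point roughly halfway to the goal."""
    _ = describe(state)  # a real system would send this description to the model
    return ((state[0] + GOAL[0] + 1) // 2, (state[1] + GOAL[1] + 1) // 2)

def low_level_policy(state, subgoal):
    """Greedy primitive policy: step one cell toward the current subgoal."""
    dx = (subgoal[0] > state[0]) - (subgoal[0] < state[0])
    dy = (subgoal[1] > state[1]) - (subgoal[1] < state[1])
    return (state[0] + dx, state[1] + dy)

state = (0, 0)
while state != GOAL:
    subgoal = high_level_planner(state)           # high level proposes a subgoal
    while state not in (subgoal, GOAL):
        state = low_level_policy(state, subgoal)  # low level executes primitive steps
    print("reached subgoal", subgoal)
print("reached final goal", state)
```

In the real model the planner would be an actual language-model call and the low-level policy would be learned, but the division of labour between the two levels is the point of the sketch.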
Learning to search
In this work, I venture into the learning-to-learn paradigm using a neural network. The result is a network that learns to search through a given input space for a configuration that matches a provided target figure. Learning to search can be considered an alternative to learning from less data, or even to one-shot learning: it needs no variation in the training examples, because the model has to search for the target itself, and it can be extended to multi-object classification.
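The following is a minimal sketch of the search setup under simplifying assumptions: a random scene containing one copy of a small target figure, a match score, and a two-stage search that first samples glimpses coarsely and then refines locally. In the actual work the proposal step is a neural network that learns where to look next; here a hand-written rule stands in for it, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_scene(size=32, patch=5):
    """Create a low-intensity noise scene with one copy of the target figure
    placed at a random location."""
    target = rng.random((patch, patch))
    scene = rng.random((size, size)) * 0.1
    top, left = rng.integers(0, size - patch, size=2)
    scene[top:top + patch, left:left + patch] = target
    return scene, target, (int(top), int(left))

def match_score(scene, target, pos):
    """Negative squared error between the target and the patch at `pos`."""
    h, w = target.shape
    patch = scene[pos[0]:pos[0] + h, pos[1]:pos[1] + w]
    return -float(np.sum((patch - target) ** 2))

def search(scene, target, coarse=300, fine=100):
    """Two-stage search: sample glimpse positions coarsely, keep the best match,
    then hill-climb locally around it. A learned policy would replace both rules."""
    limit = scene.shape[0] - target.shape[0]
    pos = max(
        (tuple(rng.integers(0, limit + 1, size=2)) for _ in range(coarse)),
        key=lambda p: match_score(scene, target, p),
    )
    best = match_score(scene, target, pos)
    for _ in range(fine):
        cand = tuple(np.clip(np.array(pos) + rng.integers(-2, 3, size=2), 0, limit))
        score = match_score(scene, target, cand)
        if score > best:
            pos, best = cand, score
    return (int(pos[0]), int(pos[1])), best

scene, target, true_pos = make_scene()
found, score = search(scene, target)
print("target placed at", true_pos, "- search found", found, "with score", round(score, 4))
```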
RoboCup Small Size League
I participated in the RoboCup Small Size League (SSL) from around 2006 to 2008 as a programmer for Plasma-Z, one of the well-known SSL teams from Thailand. My role on the team involved developing a visual feedback system and a strategy-level artificial intelligence. Our final match against CMDragons at Georgia Tech, Atlanta, in 2007 is considered one of the best matches in SSL history: the two teams were tied at the end of normal play, and the winner, CMDragons, was decided by penalty kicks.
Our loss at that time drove me to research how to predict a ball's landing position from its initial trajectory data. The result was an algorithm that allows the system to predict the ball's path and move the robots to intercept it in time.
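For illustration, here is a minimal sketch of the kind of prediction involved, assuming a simple drag-free projectile model fitted to the first few observed ball positions. It is not the actual algorithm used by the team, and the observation format and names are assumptions.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def fit_initial_state(times, positions):
    """Least-squares fit of the initial position p0 and velocity v0 from early samples.
    The model is p(t) = p0 + v0*t + 0.5*a*t^2 with a = (0, 0, -G)."""
    t = np.asarray(times, dtype=float)
    p = np.array(positions, dtype=float)
    p[:, 2] += 0.5 * G * t ** 2                      # remove the known gravity term from z
    A = np.column_stack([np.ones_like(t), t])        # remaining model is linear in t
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)     # row 0: p0, row 1: v0
    return coef[0], coef[1]

def predict_landing(p0, v0):
    """Solve 0 = z0 + vz*t - 0.5*G*t^2 for the positive root; return (x, y) and time."""
    z0, vz = p0[2], v0[2]
    t_land = (vz + np.sqrt(vz ** 2 + 2 * G * z0)) / G
    return p0[:2] + v0[:2] * t_land, t_land

# synthetic example: a chip kick launched from the origin, observed with vision noise
true_p0, true_v0 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 1.0, 4.0])
ts = np.linspace(0.0, 0.15, 10)                      # first 150 ms of vision frames
obs = true_p0 + true_v0 * ts[:, None] + 0.5 * np.array([0, 0, -G]) * ts[:, None] ** 2
obs += np.random.default_rng(1).normal(0, 0.005, obs.shape)

p0, v0 = fit_initial_state(ts, obs)
landing_xy, t_land = predict_landing(p0, v0)
print("predicted landing point:", landing_xy.round(3), "after", round(t_land, 2), "s")
```

Given the predicted landing point and time, the strategy layer can then decide which robot can reach that point in time and dispatch it to intercept.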
Spatial conformation
Since I am not smart enough to always write a correct program for a robot, I make the robot follow my erroneous program in a meaningful way instead.
When humans try to explain a concept to others, the recipients can often understand it even if the explanation is highly abstract and missing many details. I would like to apply the same idea to robotics: instead of giving machines perfect plans, we show them only abstract, inaccurate, and incomplete versions of the plans and let them deduce the relation to the real scenario by themselves. Intuitively, the problem can be regarded as localizing the current state within the abstract plan through spatial matching. I have been developing algorithms that find the relations between patterns in real situations and in the plans. Details can be found in my thesis.
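As a toy illustration of the spatial-matching view, the sketch below assumes the simplest possible setting: the correspondences between points of the abstract plan and points observed in the scene are already known, so only the similarity transform (scale, rotation, translation) that aligns them has to be estimated. Finding which parts of the plan relate to which parts of the situation, the hard part addressed in the thesis, is not shown here, and all names are illustrative.

```python
import numpy as np

def fit_similarity(plan_pts, scene_pts):
    """Least-squares similarity transform (scale, rotation, translation) that maps
    plan points onto scene points: a 2-D Procrustes alignment with known pairing."""
    P = np.asarray(plan_pts, dtype=float)
    S = np.asarray(scene_pts, dtype=float)
    Pc, Sc = P - P.mean(axis=0), S - S.mean(axis=0)
    U, sigma, Vt = np.linalg.svd(Pc.T @ Sc)          # cross-covariance of the two sets
    d = np.ones(len(sigma))
    d[-1] = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = (U @ (d[:, None] * Vt)).T                    # optimal rotation
    scale = (sigma * d).sum() / np.sum(Pc ** 2)      # optimal isotropic scale
    t = S.mean(axis=0) - scale * R @ P.mean(axis=0)  # translation from the centroids
    return scale, R, t

# an abstract, hand-drawn plan (unitless) and the same corners observed by the robot
plan = [(0, 0), (1, 0), (1, 1), (0, 1)]
scene = [(2.0, 3.0), (2.0, 5.0), (0.0, 5.0), (0.0, 3.0)]  # scaled, rotated, shifted

scale, R, t = fit_similarity(plan, scene)
mapped = scale * (np.asarray(plan, dtype=float) @ R.T) + t
print("plan mapped into the scene:\n", mapped.round(2))
```

Once the plan is registered to the scene in this way, the robot can interpret each step of the rough plan as a target in its own coordinate frame, even though the plan itself was drawn at the wrong scale and orientation.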
Terrain coverage
I participated in terrain coverage research with Professor Ioannis Rekleitis and Mr. Anqi Xu in the Mobile Robotics Lab at McGill. The goal of this research is to develop an optimal strategy for controlling one or more agents to cover a given bounded region. I find this research interesting because any coverage algorithm can also serve in reinforcement learning as a policy for initial exploration.
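As a small illustration of that connection, the sketch below generates a boustrophedon ("lawnmower") coverage path over a bounded grid and wraps it as a fixed initial-exploration policy. It is only a toy under assumed names, not the coverage strategy developed in the lab.

```python
def boustrophedon_path(width, height):
    """Return grid cells in a lawnmower order that covers every cell exactly once."""
    path = []
    for row in range(height):
        cols = range(width) if row % 2 == 0 else range(width - 1, -1, -1)
        path.extend((row, col) for col in cols)
    return path

def exploration_policy(step, width, height):
    """Initial exploration policy: at time `step`, visit the step-th cell of the
    coverage path (wrapping around once the region has been covered)."""
    path = boustrophedon_path(width, height)
    return path[step % len(path)]

path = boustrophedon_path(4, 3)
assert len(set(path)) == 4 * 3          # every cell is visited exactly once
print(path)
print(exploration_policy(5, 4, 3))      # where the agent should be at step 5
```

An agent following such a path is guaranteed to visit every state of the bounded region, which is exactly the property one wants from the exploration phase before handing control to a learned policy.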