Humans, we are told, are irrational and there is nothing to be done about it. We are bad with probabilities, systematically biased, overestimators, underestimators, prone to rationalizing our decisions in self-serving ways.
But if there is no hope for us, there might still be some for our robot children. Educated properly, they could grow up to be the sensible adults we never were. They could transcend the cognitive eccentricities pushing us toward the brink.
For that to happen, we should figure out what it means to be rational, and write it down in a manner they, the robots, will understand. What do robots understand? Mathematics, logic, algorithms.
What do we understand? That it is good to change your mind, except when it is not. That it is good to work with others, except when it is not. That you can argue with someone and be a little wiser at the end. How do we make the rational agents of tomorrow see that?
Things such as these are very interesting, I think.
One part of me thinks about how to make belief change work for various Knowledge Representation (KR) formalisms. Questions such as: what does it mean to revise an argumentation framework? how can we aggregate Horn formulas? does it make sense to update preferences?
Issues like these are useful, on the one hand, for taking belief change out of its propositional ivory tower and making it more relevant to the broader field of AI; and, on the other hand, for making belief change relevant to resource-bounded agents: some KR formalisms are designed to make reasoning easy, and one expects a lean reasoner to change its representation in an informed way.
But if one stares at a belief change operator long enough, one sees that it is essentially a decision procedure. Revision is about having to decide what information to hold on to and what to discard. Merging is about combining information from different sources. In both cases the candidates are bits of information, and the belief change procedure selects the best choices available. The problems are interesting in their own right.
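To make the "selecting the best choices" picture concrete, here is a minimal sketch of one classic instance of it: distance-based (Dalal-style) propositional revision, where the revised beliefs are the models of the new information closest, in Hamming distance, to some model of the old beliefs. The helper names (`models`, `hamming`, `revise`) are mine, purely for illustration:

```python
from itertools import product

def models(formula, atoms):
    """Enumerate the truth assignments (as dicts) over `atoms` that
    satisfy `formula`, given as a Python predicate on an assignment."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))]

def hamming(m1, m2):
    """Number of atoms on which two assignments disagree."""
    return sum(m1[a] != m2[a] for a in m1)

def revise(k_models, phi_models):
    """Dalal-style revision: among the models of the new information,
    keep those minimally distant from the old belief set."""
    dist = lambda m: min(hamming(m, km) for km in k_models)
    best = min(dist(m) for m in phi_models)
    return [m for m in phi_models if dist(m) == best]

atoms = ["p", "q"]
K = models(lambda m: m["p"] and m["q"], atoms)   # believe p and q
phi = models(lambda m: not m["p"], atoms)        # learn: not p
print(revise(K, phi))  # → [{'p': False, 'q': True}]: give up p, keep q
```

The decision-procedure flavour is visible in the last line: both models of "not p" are candidates, and the operator selects the one that discards as little of the old information as possible.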
So another part of me wants to think about problems of social choice: what properties are desirable when aggregating different types of information? what is possible and what is not? what does it mean to be rational, either on the individual or on the collective level? how do we model the beliefs, preferences or intentions of a group?
More broadly, I am interested in understanding and modelling interactions between agents: economic, cognitive, cultural.
Currently, I am a Project Assistant within the project EMBArg: Extending Methods in Belief Change to Advance Dynamics of Argumentation, led by Johannes Wallner. Since May 2015, I have been an associate student of the LogiCS DK at TU Wien, under the supervision of Stefan Woltran and Thomas Eiter.
Previously, I was a Project Assistant in the project Fragment-Driven Belief Change, led by Stefan Woltran.
2014-2020: PhD in Theoretical Computer Science, TU Wien
2012-2014: MSc in Computational Logic, TU Wien / TU Dresden / FU Bolzano, within the (sadly now terminated) EMCL program
2010-2012: MA in Theoretical Philosophy and Philosophy of Science, University of Bucharest
2010-2012: BSc in Mathematics, University of Bucharest (discontinued for EMCL purposes)
2007-2010: BA in Theoretical Philosophy, University of Bucharest
2003-2007: High school, Mihai Eminescu High School, Botosani, Romania
1999-2003: Middle school, School Nr. 7, Botosani, Romania
1995-1999: Primary school, School Nr. 11, Botosani, Romania
1993-1995: Kindergarten, Kindergarten Nr. 9 and Nr. 2, Botosani, Romania
1988-1993: No school, Botosani, Romania
I've served, or will serve, as a reviewer or subreviewer for the following conferences:
ECAI 2020
AAMAS 2020
AAAI 2020
FoIKS 2020
IJCAI 2019
AAMAS 2019
post-CLAR 2018
CLAR 2018
KI 2018
Commonsense 2017
And for the following journals:
JAIR
I enjoy it.
I've served as a teaching assistant for the Research and Career Planning course, taught by Prof. Georg Gottlob at TU Wien, since 2016. I've also talked about belief change and its connections to rational (individual and social) choice in the course on Preferences in AI, also at TU Wien.
Over the years I have been fortunate enough to receive support from a number of sources.
I was the beneficiary of a KUWI grant from TU Wien for a short-term stay abroad in Paris, at Univ. Paris-Dauphine, from October to December 2017. I was then awarded a Marietta Blau grant from the OEAD for a stay abroad, also in Paris and also at Univ. Paris-Dauphine, for the period February-July 2018.
During this time I collaborated on some papers, saw the Bois de Boulogne go from green to grey to lush green again, and listened to gospel music in the Saint-Vincent de Paul church.
I was hosted in Paris by Jerome Lang, to whom I am very grateful. Under his guidance I visited Toulouse and got to meet and collaborate with Umberto and Arianna.
The Paris stay also resulted in a collaboration with Meltem, Stefano and Hossein.
I'm additionally grateful to Nadia and Odile, who hosted me in Marseille at the end of January.
In 2015 I was able to attend IJCAI 2015 in Buenos Aires, Argentina, thanks to a travel grant from the IJCAI organization. During my EMCL studies (2012-2014) I lived off a scholarship granted by the EMCL fund, without which I would probably not have been able to attend the program.
I helped organize:
The Workshop on New Trends in Formal Argumentation, August 17, 2017, Vienna, Austria
The Workshop on New Trends in Belief Change, May 10, 2016, Vienna, Austria
The EMCL Student Workshop, February 18-19, 2014, Vienna, Austria
What else? Oh yes: I was a volunteer at IJCAI 2018, KR 2018, KR 2016 and the Bucharest Colloquium in Analytic Philosophy 2011. I was involved in website maintenance for the projects that employed me and for the EDBT-ICDT 2018 conference, and, from 2008 to about 2010, I co-edited the Romanian Philosophy Newsletter.