3rd International Conference on NLP & Signal Processing (NLPSIG 2025)

September 29 ~ 30, 2025, Virtual Conference

Accepted Papers


About Ontology, the Absoluteness-Relativity of Scientific Cognition, and the Unified Method of Substantiation of Scientific Theories

Alexander Voin, International Solomon University, Ukraine

ABSTRACT

The problem of ontology is inextricably linked with the problem of the absoluteness-relativity of scientific knowledge. The article shows that both the classical rationalism of Descartes, Pascal, Bacon, and Newton, which absolutized scientific knowledge, and the post-positivism of Quine, Kuhn, Feyerabend, Popper, and Lakatos that replaced it, which excessively relativized it, solve these problems erroneously. The article proposes a solution based on the unified method of substantiation of scientific theories developed by the author. When one theory substantiated by the unified method is replaced by another (Newton by Einstein), the definitions of concepts (that is, the ontology) and the formulas change, contrary to classical rationalism; yet, contrary to the post-positivists, both theories guarantee the truth of their predictions with a given accuracy and probability within the domain of each theory. Only these domains do not coincide: the domain of the theory of relativity is larger than that of Newtonian mechanics and includes it.

Keywords

ontology, concept, theory, truth, cognition


OzQyrqBERT - Towards a Universal Turkic Language Part-of-Speech Tagger

Yuanhao Zou, Nikhil Lyles, Stanford University, USA

ABSTRACT

Part-of-speech (POS) tagging for low-resource languages presents unique challenges due to limited annotated data and suboptimal tokenization. In this project, we take the first steps towards building a universal Turkic language part-of-speech tagger by developing OzQyrqBERT, a model that performs the task on both Uzbek and Kyrgyz, the latter being a low-resource language. We fine-tune an Uzbek POS tagging model on Kyrgyz data, systematically improving performance through enhanced tokenization. We evaluate our model using accuracy and confusion matrices, demonstrating how improved tokenization significantly reduces misclassifications. Our results highlight the effectiveness of adapting models from linguistically related languages for low-resource NLP tasks.
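The confusion-matrix evaluation the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' code: the tag sequences below are hypothetical, and a real evaluation would align model predictions with gold Universal Dependencies-style tags over a full test set.

```python
from collections import Counter

def pos_confusion(gold, pred):
    """Return a confusion matrix {(gold_tag, pred_tag): count} and accuracy."""
    assert len(gold) == len(pred), "tag sequences must align token-for-token"
    matrix = Counter(zip(gold, pred))
    correct = sum(c for (g, p), c in matrix.items() if g == p)
    return matrix, correct / len(gold)

# Hypothetical gold and predicted tags for a short sentence
gold = ["NOUN", "VERB", "NOUN", "ADJ", "NOUN"]
pred = ["NOUN", "VERB", "ADJ",  "ADJ", "NOUN"]

matrix, accuracy = pos_confusion(gold, pred)
print(accuracy)                  # 0.8
print(matrix[("NOUN", "ADJ")])   # 1 — one NOUN misclassified as ADJ
```

Off-diagonal entries such as `("NOUN", "ADJ")` are exactly the misclassifications the paper reports shrinking once tokenization is improved.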