About Journal
Aarhat Multidisciplinary International Education Research Journal (AMIERJ) is the official journal of the Multidisciplinary Scholarly Research Association, India, run in association with Aarhat Publication and Aarhat Journals, India. It is an open-access, refereed, peer-reviewed online journal. It publishes original qualitative and quantitative research and neither accepts nor commissions third-party content.
Aarhat Multidisciplinary International Education Research Journal (AMIERJ) is an internationally recognised, peer-reviewed, refereed multidisciplinary journal devoted to the publication of original qualitative and quantitative papers. www.aarhat.com/amierj accepts multidisciplinary papers on topics such as:
all fields of Social Sciences, Arts and Humanities, Science, Management, Engineering, Library and Information Sciences, Archaeology, Education, Law, Economics, Accounting, Finance, Human Resource Management, Marketing, Architecture, Epigraphy, History of Science, Sociology, Psychology, Morphology, Museology, Papyrology, Philology, Preparation/Conservation, Religion, Underwater Archaeology, English Literature, Geography, Mathematics, etc.
Aarhat Multidisciplinary International Education Research Journal (AMIERJ) is now published in English as well as in Hindi and Marathi, and is open for submissions from authors all over the world. It is currently published six times a year: in February, April, June, August, October, and December.
Recently Published Articles
Original Research Article | Feb. 28, 2026 | 52 Downloads
AI AND NEURAL INTERFACES: EMPOWERING COMMUNICATION FOR PHYSICALLY CHALLENGED INDIVIDUALS
Kartik Bhalerao & Meet Naik
DOI : 10.5281/amierj.18641486
Abstract
Severe physical conditions such as locked-in syndrome, amyotrophic lateral sclerosis (ALS), and post-stroke paralysis can greatly limit a person’s ability to speak or move, even though their thinking and understanding remain unaffected. This mismatch between cognitive ability and physical expression creates major obstacles in communication and everyday independence. This paper investigates how artificial intelligence (AI), when combined with neural interface technologies, can help overcome these limitations and provide more effective means of interaction.
The study focuses on the use of brain–computer interfaces (BCIs) and neuroprosthetic systems that capture and interpret neural signals directly from the brain. AI-based approaches are applied to process these signals and transform them into practical outputs, including text, speech, or control commands for assistive technologies. Adaptive learning models allow the systems to adjust to individual users, leading to improved performance and reliability over continued use.
The findings indicate that AI-supported neural interfaces significantly enhance communication efficiency and usability compared to conventional assistive methods. Beyond communication, these technologies also enable users to operate computers, mobility devices, robotic aids, and smart systems within their environment. Overall, the paper concludes that AI-driven neural interfaces hold considerable promise for improving communication, autonomy, and quality of life for individuals with severe physical impairments.
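The decoding step the abstract describes — turning processed neural features into text, speech, or control commands — can be pictured with a deliberately minimal sketch. Real BCI pipelines use far richer models; every number, command name, and the nearest-prototype approach below are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch of a BCI decoding step: a feature vector extracted from
# neural signals is mapped to the nearest learned "intent" prototype.
# All values and command names are hypothetical illustrations.

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def decode(features, prototypes):
    """Return the command whose calibration prototype is closest."""
    return min(prototypes, key=lambda cmd: euclidean(features, prototypes[cmd]))

# Hypothetical per-command averages learned during a calibration session.
prototypes = {
    "select": [0.9, 0.1],
    "scroll": [0.1, 0.9],
}

print(decode([0.8, 0.2], prototypes))  # "select"
```

The adaptive learning the abstract mentions would correspond, in this toy picture, to updating each prototype with a running mean of the user's recent signals so the decoder tracks the individual over continued use.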
Original Research Article | Feb. 28, 2026 | 78 Downloads
A HYBRID IOT AND AI ARCHITECTURE FOR INTELLIGENT RIDER PROTECTION SYSTEMS
Shweta T. Jha, Vedha K. Kalmani, Atharva J. Jadhav & Shyam Sunder P. Maurya
DOI : 10.5281/amierj.18610909
Abstract
Road accidents remain a major public safety challenge, particularly for two-wheeler riders, where delayed emergency response and lack of real-time safety monitoring significantly increase injury severity and fatality risk. This paper proposes an AI-enabled smart helmet–based safety and monitoring framework designed to improve rider protection through continuous assessment of critical riding conditions. The proposed system focuses on three primary safety objectives: detection of accident-like events, identification of unsafe riding behaviour such as potential intoxication, and verification of helmet compliance. To enhance reliability and reduce false alerts, the framework incorporates sensor-fusion-driven machine learning that classifies riding events more accurately than conventional threshold-based approaches. In addition, the design supports a hybrid communication strategy to ensure emergency alerts can be triggered even under limited network availability, while also enabling optional cloud/dashboard-based visualization and long-term analytics. The proposed approach further introduces rider risk scoring and anomaly detection to provide preventive warnings and decision support. Overall, this work presents a scalable and research-oriented blueprint for intelligent rider safety systems that combines edge intelligence with real-time monitoring for improved road safety outcomes.
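The contrast the abstract draws between conventional threshold-based detection and sensor-fusion-driven risk scoring can be sketched in a few lines. All sensor names, weights, and thresholds below are hypothetical illustrations, not the proposed system's actual parameters:

```python
# Hypothetical sketch: a single-sensor threshold check vs. a fused
# multi-sensor risk score. Weights and cutoffs are illustrative only.

def threshold_alert(accel_g: float, limit: float = 4.0) -> bool:
    """Naive baseline: alert whenever acceleration exceeds a fixed limit."""
    return accel_g > limit

def fused_risk_score(accel_g: float, tilt_deg: float, speed_kmh: float) -> float:
    """Combine normalized sensor readings into a 0-1 risk score."""
    a = min(accel_g / 8.0, 1.0)      # impact severity
    t = min(tilt_deg / 90.0, 1.0)    # how far the vehicle has tipped
    s = min(speed_kmh / 120.0, 1.0)  # speed at the time of the event
    return 0.5 * a + 0.3 * t + 0.2 * s

def fused_alert(accel_g, tilt_deg, speed_kmh, cutoff=0.6):
    return fused_risk_score(accel_g, tilt_deg, speed_kmh) >= cutoff

# A hard pothole hit: a large spike, but the bike stays upright and slow.
print(threshold_alert(4.5))         # True  -> false positive
print(fused_alert(4.5, 5.0, 20.0))  # False -> correctly suppressed
```

The pothole case illustrates why fusing context (tilt, speed) reduces false alerts relative to a single fixed threshold; a learned classifier would play the same role with weights fitted from data rather than chosen by hand.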
Original Research Article | Feb. 28, 2026 | 49 Downloads
A STUDY ON THE USE OF AI-ENHANCED Q-COMMERCE APPLICATIONS IN EVERYDAY LIFE
Ms. Sreelatha S. Rajaram & Prof. CA.R.P. Bambardekar
DOI : 10.5281/amierj.18638170
Abstract
The rapid expansion of quick-commerce (Q-Commerce) applications has transformed everyday purchasing by offering ultra-fast delivery supported by artificial intelligence. These applications increasingly use AI to streamline shopping decisions, enhance efficiency, and influence consumer behaviour. This study examines the use of AI-enhanced quick-commerce applications in everyday life, with a specific focus on their influence on efficiency in meeting daily needs and the role of trust in shaping consumers’ intention to continue using such applications. Using a survey-based quantitative approach, primary data were collected from 50 users of quick-commerce platforms and analysed using descriptive statistics, correlation, and regression analysis in Microsoft Excel. The results reveal a strong positive correlation between the use of AI-enhanced applications and efficiency in meeting daily needs, which is further supported by regression analysis indicating high explanatory power. Trust in AI-enhanced applications also shows a significant positive relationship with consumers’ intention to continue use, though with comparatively moderate explanatory strength. Overall, the findings confirm that AI-enhanced quick-commerce applications significantly simplify daily purchases, save time, and enhance consumer convenience, while trust emerges as a critical factor influencing continued usage. The study contributes to the growing literature on AI-driven consumer behaviour and highlights important societal implications of technology-enabled consumption in everyday life.
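The correlation-and-regression workflow named in the abstract can be reproduced with a short, self-contained Python sketch. The data below are hypothetical illustrations, not the study's survey responses or its Excel workbook:

```python
# Pearson correlation and simple least-squares regression, mirroring the
# kind of analysis described in the abstract. Data are hypothetical.

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ols(x, y):
    """Least-squares intercept and slope for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return my - b * mx, b

usage = [1, 2, 3, 4, 5]       # hypothetical AI-app usage scores
efficiency = [2, 4, 5, 4, 5]  # hypothetical efficiency scores

r = pearson_r(usage, efficiency)      # r ~ 0.775: strong positive correlation
intercept, slope = ols(usage, efficiency)  # slope = 0.6
```

For simple regression of this kind, the "explanatory power" the abstract reports corresponds to R², which is just r squared, so a strong correlation and high explanatory power go hand in hand.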
Original Research Article | Feb. 28, 2026 | 60 Downloads
AI AS A SUPPORT TOOL FOR TRAFFIC WARDENS: SURVEY EVIDENCE ON FAIRNESS, PRIVACY AND DISPUTE REDUCTION
Sambhav Gosar
DOI : 10.5281/amierj.18638040
Abstract
India’s traffic challan system relies heavily on traffic wardens who issue fines on the spot. While this human-driven process allows flexibility, it often suffers from errors. Drivers may be fined due to misjudgement, incomplete evidence, or bias, while genuine violations sometimes go unnoticed in crowded or complex traffic situations. These mistakes frustrate citizens, waste administrative effort, and weaken trust in enforcement.
This paper explores how AI can support traffic wardens in making fairer and more accurate decisions. Instead of replacing wardens, AI tools can act as assistants: mobile apps that verify license plate details instantly, machine learning models that flag likely violations based on context, and decision-support systems that help wardens distinguish between genuine offenses and unavoidable actions (such as stopping briefly to avoid an accident). By reducing false positives and strengthening true violation detection, AI can make manual enforcement more transparent and trustworthy.
The vision is a hybrid system where human judgment is enhanced and not replaced by AI, leading to smarter enforcement and stronger public confidence in traffic governance.
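The first assistant the paper envisions, instant verification of license plate details, can be sketched as a simple format check. The regular expression below is a generic approximation of common Indian plate formats, offered as a hypothetical illustration rather than the paper's actual verification logic:

```python
import re

# Hypothetical helper: check a string against a common Indian plate
# layout (state code, district number, optional series letters, 4 digits).
PLATE = re.compile(r"^[A-Z]{2}\s?\d{1,2}\s?[A-Z]{0,3}\s?\d{4}$")

def looks_valid(plate: str) -> bool:
    """Quick syntactic sanity check before any registry lookup."""
    return bool(PLATE.fullmatch(plate.strip().upper()))

print(looks_valid("MH 12 AB 1234"))  # True
print(looks_valid("12345"))          # False
```

A real decision-support tool would follow this cheap syntactic check with a registry lookup and contextual violation flagging; the sketch shows only the first filter that spares a warden from transcribing obviously malformed plates.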
Original Research Article | Feb. 28, 2026 | 56 Downloads
ARTIFICIAL GENERAL INTELLIGENCE (AGI): MYTH, REALITY AND FUTURE PROSPECTS
Asst. Prof. Swapna Ramesh Merugu
DOI : 10.5281/amierj.18610168
Abstract
Artificial General Intelligence (AGI) represents a pivotal yet elusive goal in artificial intelligence research, promising machines capable of human-like reasoning across diverse domains. This paper examines AGI through scholarly lenses, distinguishing conceptual myths from empirical realities, reviewing key literature, and analyzing methodological challenges. Drawing on peer-reviewed sources, it identifies research gaps in evaluation benchmarks and ethical frameworks while discussing practical implications for society. Findings suggest AGI remains theoretically feasible but distant, necessitating robust governance.[1][2]
Original Research Article | Feb. 28, 2026 | 82 Downloads
OPTIMIZING INITIAL INTAKE: A COMPARATIVE STUDY OF AI-DRIVEN ASSESSMENT VS. TRADITIONAL HUMAN-LED SCREENING IN OUTPATIENT COUNSELING
Asst. Prof. Sudhendu Kashikar
DOI : 10.5281/amierj.18642145
Abstract
As global mental health systems face an unprecedented surge in demand, the traditional intake process has become a significant bottleneck, often delaying critical care for weeks or months. This study explores the efficacy of Artificial Intelligence (AI) as a frontline tool for preliminary psychological screening, comparing its diagnostic precision and patient-reported outcomes against traditional human-led clinical interviews. In a controlled experimental setting, we recruited N = 120 adult participants seeking outpatient services. These participants were randomly assigned to either an AI-led intake cohort (using a fine-tuned Natural Language Processing model) or a control group led by Licensed Master Social Workers (LMSWs).
Our primary metrics included diagnostic congruence with a "gold standard" independent evaluation, the speed of symptom disclosure, and the quality of the working alliance. The findings indicate a paradoxical "Disinhibitory Effect": participants in the AI cohort demonstrated an 88% diagnostic alignment with independent supervisors, statistically surpassing the human-led group’s 82%. Crucially, the AI system elicited disclosures of "sensitive" clinical data—including substance abuse and suicidal ideation—significantly earlier in the interaction. While the AI group reported lower scores on the Working Alliance Inventory (WAI) regarding empathy, the data suggests that the perceived anonymity of the machine reduces social desirability bias and impression management. This study concludes that AI-driven intake tools offer a robust, scalable solution for clinical triaging. By standardizing the data collection phase, these systems allow human clinicians to focus their expertise on high-level therapeutic intervention, effectively bridging the gap between clinical efficiency and human-centered care.
Original Research Article | Feb. 28, 2026 | 139 Downloads
A COMPARATIVE REVIEW OF HALLUCINATIONS IN LARGE LANGUAGE MODELS AND HUMAN PERCEPTIONS OF BIAS
Muzammil Mehboob Khan
DOI : 10.5281/amierj.18637894
Abstract
Large Language Models (LLMs) have become integral to a wide range of applications, raising concerns about their tendency to generate hallucinated content and exhibit biases inherited from training data. While prior research has examined hallucination behavior across different AI models, less attention has been given to how these limitations align with human perceptions of bias and trust.
This paper presents a comparative review of existing research on hallucinations in contemporary LLMs, synthesizing findings across multiple studies to identify common trends, evaluation approaches, and reported limitations. In parallel, a human perception study examines how users interpret and judge bias, reliability, and trustworthiness in AI-generated outputs. Participants provide subjective assessments of perceived bias and confidence in model responses, enabling comparison with conclusions drawn in prior technical literature.
The findings reveal a clear divergence between empirically reported hallucination behavior and user perception. Models identified as having lower hallucination tendencies are not consistently perceived as less biased or more trustworthy. Instead, fluent and confident responses often lead to higher perceived reliability, regardless of documented limitations. This highlights a disconnect between technical evaluation and human judgment.
This study emphasizes integrating human-centered perspectives into LLM evaluation and underscores the need for transparency, clearer communication of limitations, and trust-aware deployment.