Journal of Engineering AI Verification and Validation (JEAVV)
ISSN (print): 3134-7207
ISSN (online): 3134-7215

Journal of Engineering AI Verification and Validation (JEAVV) is a peer-reviewed, open-access journal that publishes research on methods for verifying, validating, testing, and evaluating artificial intelligence components deployed in engineering systems. The journal focuses on analytical, experimental, and system-level studies that examine how performance, reliability, and safety can be demonstrated under realistic operating conditions. It welcomes contributions presenting well-founded evaluation approaches supported by empirical, computational, or applied investigation, particularly in areas such as testing frameworks, validation protocols, benchmarking methods, uncertainty analysis, and certification-oriented assessment. Relevant application domains include industrial, infrastructure, energy, transportation, manufacturing, and cyber-physical systems. JEAVV is published quarterly by Acadlore, with issues released in March, June, September, and December.

  • Professional Editorial Standards - All submissions are evaluated through a standard peer-review process involving independent reviewers and editorial assessment before acceptance.

  • Efficient Publication - The journal follows a defined review, revision, and production workflow to ensure regular, predictable publication of accepted manuscripts.

  • Open Access - JEAVV is an open-access journal. All published articles are made available online without subscription or access fees.


Aims & Scope

Aims

Journal of Engineering AI Verification and Validation (JEAVV) is an international, peer-reviewed, open-access journal dedicated to research on the rigorous assessment of artificial intelligence components embedded in engineering systems. The journal focuses on how such systems are examined, tested, and confirmed to meet defined performance, safety, reliability, and operational requirements under realistic conditions.

The journal emphasises the methodological foundations through which evidence about system behaviour is established. Rather than concentrating solely on algorithmic development, it addresses how evaluation criteria are formulated, how testing procedures are designed, how validation results are interpreted, and how confidence in system performance is demonstrated through structured analysis and verifiable evidence.

JEAVV serves as a venue for studies that analyse, develop, or apply methods for examining engineering AI systems at different levels of abstraction, from component testing to full system evaluation. Contributions may draw on approaches from systems engineering, experimental methodology, reliability analysis, statistics, safety engineering, software and hardware testing, or domain-specific engineering practice, provided that the central contribution concerns verification, validation, or evaluation.

The journal publishes work that advances understanding of how engineering AI systems can be assessed in a technically sound and reproducible manner. Submissions are expected to present clear methodological reasoning, transparent assumptions, and evidence that supports the conclusions reached. Studies that examine system behaviour under realistic operating conditions, limited data, uncertainty, or environmental variability are particularly encouraged.

JEAVV is published quarterly by Acadlore and follows established peer-review and editorial procedures intended to ensure consistency, fairness, and technical rigour in the evaluation of submissions.

Key features of JEAVV include:

  • The journal concentrates on verification, validation, testing, and evaluation methodologies for engineering AI systems rather than on algorithm design alone;

  • Particular attention is given to experimental design, benchmarking, reproducibility, and structured performance assessment carried out under practical engineering constraints;

  • The journal values contributions that connect methodological approaches with concrete engineering contexts and provide analytical or empirical support for their claims;

  • Research addressing reliability, safety, robustness, and uncertainty is considered, where these aspects are analysed through explicit evaluation or validation procedures;

  • Comparative studies examining alternative testing or assessment strategies across different engineering domains are welcomed;

  • Editorial decisions prioritise clarity of argument, transparency of method, and strength of evidence so that published work provides a dependable basis for further research and practical implementation.

Scope

JEAVV welcomes original research articles, theoretical studies, methodological analyses, systematic reviews, and carefully documented empirical or computational investigations in areas including, but not limited to, the following:

Verification and Validation of Engineering AI Systems

This area concerns formal approaches for assessing whether AI-enabled engineering systems satisfy defined functional, performance, and safety requirements.

  • Verification frameworks and validation methodologies

  • Performance evaluation criteria and measurement approaches

  • Evidence generation and validation procedures

  • Reproducibility and repeatability analysis

Testing Architectures and Experimental Design

This area addresses how testing environments and experimental procedures are structured to support reliable evaluation.

  • Testbed development and simulation-based testing

  • Benchmark and dataset construction

  • Scenario-based and stress testing strategies

  • Experimental design under engineering constraints

Reliability, Safety, and Risk Assessment

This area focuses on structured methods for identifying and analysing potential failure modes and uncertainties affecting system performance.

  • Reliability testing and fault analysis

  • Safety evaluation methodologies

  • Uncertainty quantification and sensitivity analysis

  • Robustness and failure propagation studies

System-Level Evaluation and Integration Testing

This area examines how AI components behave when incorporated into full engineering systems.

  • Integration testing of AI modules

  • System-level performance assessment

  • Interaction between AI and physical components

  • Cross-component verification approaches

Monitoring, Diagnostics, and Runtime Evaluation

This area concerns assessment methods applied during system operation.

  • Runtime monitoring and performance verification

  • Diagnostic testing and anomaly evaluation

  • Operational auditing methods

  • Continuous or lifecycle evaluation strategies

Data Quality, Evidence, and Measurement Uncertainty

This area explores how limitations in data and measurement affect evaluation credibility.

  • Validation under limited or imperfect data

  • Ground-truth construction methods

  • Statistical confidence assessment

  • Data-centred evaluation techniques

Interpretability and Evaluation Transparency

This area examines how interpretability and transparency can be assessed as measurable properties of engineering AI systems.

  • Methods for evaluating explainability

  • Interpretability testing protocols

  • Evidence traceability and documentation

  • Transparency metrics and assessment frameworks

Domain-Specific Engineering Applications

This area includes empirical investigations in engineering domains where rigorous evaluation is essential before deployment.

  • Infrastructure and transportation systems

  • Industrial and manufacturing systems

  • Energy and environmental systems

  • Robotics and cyber-physical systems

Standards, Certification, and Governance Frameworks

This area addresses procedures through which engineering AI systems are formally assessed for compliance and approval.

  • Certification-oriented evaluation methods

  • Engineering standards and compliance testing

  • Regulatory assessment procedures

  • Documentation and auditability practices

Decision and Acceptance Processes

This area focuses on how validation results inform engineering or organisational decisions.

  • Acceptance criteria and performance thresholds

  • Evaluation-informed system modification

  • Decision processes based on testing evidence

  • Governance of safety-critical deployments
