📋 Full Editorial Workflow — 10 Stages
🤖 Multi-Agent AI Pre-Screening System
🏗 System Architecture
The six agents operate as an orchestrated pipeline: each receives the full manuscript text, produces a structured sub-report, and passes it to a Synthesis Orchestrator. The orchestrator aggregates scores, resolves inter-agent conflicts, weights outputs by domain relevance, and generates the human-readable Composite Pre-Screen Report. It also applies the Research Training Bonus at this stage if a verified certificate number was submitted.
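The aggregation described above can be sketched as follows. This is a minimal illustration, not the journal's actual implementation: the `SubReport` shape, the agent names, and the weighting scheme are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SubReport:
    """Structured output of one review agent (illustrative shape)."""
    agent: str
    score: float                      # 0-10 sub-score
    findings: list[str] = field(default_factory=list)

def synthesize(reports: list[SubReport],
               weights: dict[str, float],
               training_bonus: float = 0.0) -> float:
    """Aggregate agent sub-reports into a composite pre-screen score.

    Weights model domain relevance; the Research Training Bonus is
    applied after aggregation, as described in the workflow.
    """
    total = sum(weights[r.agent] for r in reports)
    composite = sum(r.score * weights[r.agent] for r in reports) / total
    return round(composite + training_bonus, 2)
```

Conflict resolution between agents is omitted here; in practice it would run before the weighted aggregation.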
🤖 The Six Review Agents

**Analyst**
- Experimental design validity and control group appropriateness
- Sample size adequacy and statistical power
- Correct application of statistical tests
- Reproducibility and methodological transparency
- Appropriate handling of confounding variables

**Inspector**
- Similarity analysis (>20% similarity triggers rejection)
- AI-generated content probability assessment
- Duplicate submission detection
- Ethics declaration completeness
- IRB approval and consent documentation

**Assessor**
- Abstract structure and information completeness
- IMRAD compliance and section proportionality
- Clarity, precision, and academic register
- Logical argument flow and coherence
- Grammar, syntax, and terminology consistency

**Evaluator**
- Correct use of field-specific terminology
- Awareness of current state of the research field
- Appropriate literature coverage and currency
- Discipline classification for reviewer assignment
- Identification of interdisciplinary connections

**Scorer**
- Research question originality and distinctiveness
- Potential contribution to the field
- Practical or theoretical applications
- Interdisciplinary relevance
- Broader societal or scientific significance

**Agent**
- APA 7th / ACS reference format compliance
- DOI validity check for all cited articles
- In-text citation completeness and accuracy
- Reference recency and absence of retracted sources
- Figure and table numbering consistency
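The DOI validity check listed above could plausibly begin with a syntactic filter before any network lookup. A minimal sketch, assuming the Crossref-recommended pattern for modern DOIs; actually confirming that a DOI resolves would require a query against doi.org, which this does not perform:

```python
import re

# Crossref-recommended pattern for modern DOIs: "10.", a 4-9 digit
# registrant code, "/", then a non-empty suffix. Syntactic check only.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$", re.IGNORECASE)

def plausible_doi(doi: str) -> bool:
    """Return True if the string looks like a well-formed DOI."""
    return bool(DOI_RE.match(doi.strip()))
```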
⚠️ AI System Limitations & Human Override
REHS Journal recognises that AI screening is a tool, not a judge. The AI pre-screen report is advisory only — it informs but never replaces human editorial and peer review judgment.
📏 12-Point Evaluation Rubric
📊 Full Evaluation Rubric with Weightings
| # | Criterion | What Reviewers Evaluate | Anchor Descriptors | Weight |
|---|---|---|---|---|
| 1 | Research Question & Objectives: clarity, specificity, testability | Is the research question clearly stated and appropriately scoped? Are objectives SMART? Is the hypothesis falsifiable? | 9–10: Precise, novel, testable · 5–6: Adequate but vague · 1–2: Absent or untestable | 12% |
| 2 | Literature Review & Contextualisation: depth, recency, relevance | Does the introduction situate the research within the existing field? Are seminal and recent works cited appropriately? | 9–10: Comprehensive, synthesized, current · 5–6: Present but superficial · 1–2: Absent or misleading | 8% |
| 3 | Methodology & Research Design: appropriateness, rigor, reproducibility | Is the chosen methodology appropriate? Are procedures described in sufficient detail for replication? Are controls justified? | 9–10: Rigorous, reproducible, justified · 5–6: Adequate with gaps · 1–2: Inappropriate or absent | 15% |
| 4 | Data Collection & Analysis: quality, appropriateness, transparency | Are data collection instruments valid? Are analytical methods correctly applied? Are assumptions stated and tested? | 9–10: Rigorous, transparent, appropriate · 5–6: Functional but incomplete · 1–2: Flawed or absent | 12% |
| 5 | Results Presentation: clarity, completeness, accuracy | Are results reported clearly and completely? Do figures and tables present data accurately? Are statistical outcomes reported fully? | 9–10: Complete, accurate, clear · 5–6: Adequate with missing elements · 1–2: Incomplete or misleading | 10% |
| 6 | Discussion & Interpretation: logic, scope, contextualisation | Do conclusions follow logically from results? Are findings contextualised against existing literature? Are alternative explanations considered? | 9–10: Insightful, well-bounded · 5–6: Present but overreaching · 1–2: Unsupported or absent | 12% |
| 7 | Originality & Novelty: contribution, intellectual merit | Does the research make a genuine intellectual contribution? Is the question meaningfully distinct from prior work? | 9–10: Original, significant · 5–6: Incremental but valid · 1–2: No original contribution | 10% |
| 8 | Limitations & Future Directions: self-awareness, specificity | Are limitations acknowledged honestly and specifically? Are future research directions proposed with scientific specificity? | 9–10: Specific, thoughtful, actionable · 5–6: Present but generic · 1–2: Absent or dismissive | 5% |
| 9 | Academic Writing Quality: clarity, structure, register | Is the manuscript written in clear, precise academic prose? Is the argument structure logical? Is the abstract well-structured? | 9–10: Publication-ready prose · 5–6: Comprehensible with revision · 1–2: Unclear throughout | 5% |
| 10 | Ethical Compliance: approvals, disclosures, consent | Is all required ethics documentation present? Are conflicts of interest disclosed? Are human subject consent procedures described? | 9–10: Complete, documented · 5–6: Mostly compliant, minor gaps · 1–2: Significant omissions | 5% |
| 11 | Citation Accuracy & Format: completeness, accuracy, style | Are all in-text citations matched in the reference list? Are DOIs accurate? Is reference format consistent? | 9–10: Complete, accurate, consistent · 5–6: Minor inconsistencies · 1–2: Significant errors throughout | 3% |
| 12 | Impact & Broader Significance: practical value, field contribution | Does the paper articulate the broader significance of its findings? Are real-world applications discussed? | 9–10: Clear, compelling significance · 5–6: Present but underdeveloped · 1–2: Not addressed | 3% |
📐 Score Computation
Step 1 — Per-Criterion Average: For each of the 12 criteria, scores from all reviewers are averaged. If three reviewers are involved, the trimmed mean (excluding the most extreme score) is used.
Step 2 — Weighted Composite Score (WCS): Each criterion average is multiplied by its weight (the rubric percentage expressed as a fraction, so the 12 weights sum to 1.0). The 12 weighted values are summed to yield the WCS on a 0–10 scale.
Step 3 — Research Training Bonus: If a verified U&B Research Training certificate was submitted, the bonus is added to the WCS.
Step 4 — Decision Tier: The final Adjusted WCS determines the editorial decision per the thresholds in the Decisions tab.
Adjusted WCS = WCS + Research_Training_Bonus (if applicable)
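The four steps above can be sketched in code. One point is under-specified in the text: "excluding the most extreme score" is read here as dropping the score farthest from the median, which is an assumption, and the criterion names in the example are illustrative.

```python
def trimmed_mean(scores: list[float]) -> float:
    """Step 1: mean after dropping the single most extreme score.

    "Most extreme" is read as farthest from the median (assumption);
    with fewer than three scores, a plain mean is returned.
    """
    if len(scores) < 3:
        return sum(scores) / len(scores)
    median = sorted(scores)[len(scores) // 2]
    kept = sorted(scores, key=lambda s: abs(s - median))[:-1]
    return sum(kept) / len(kept)

def adjusted_wcs(reviewer_scores: dict[str, list[float]],
                 weights: dict[str, float],
                 bonus: float = 0.0) -> float:
    """Steps 2-4: weighted composite, then the Research Training Bonus.

    weights are the rubric percentages as fractions summing to 1.0,
    so the result stays on the 0-10 scale.
    """
    wcs = sum(trimmed_mean(reviewer_scores[c]) * w for c, w in weights.items())
    return wcs + bonus
```

For example, with two criteria weighted 0.6/0.4, per-criterion trimmed means of 8.0 and 8.5, and a 0.2 bonus, the Adjusted WCS is 8.4.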
🎓 Research Training Weighted Bonus
⚖️ Editorial Decisions, Appeals & Integrity
📊 Decision Thresholds
📣 Appeals Procedure
Authors who believe their manuscript was rejected unfairly — due to a factual error in the review, a clear conflict of interest, or a failure to follow REHS review procedures — may submit a formal appeal within 14 days of the decision letter.
🏛 Publication Integrity & Post-Publication
If significant errors, ethical violations, or data integrity concerns are discovered after publication, REHS Journal will investigate promptly following COPE guidelines. Possible outcomes include: published correction (erratum/corrigendum), expression of concern, or retraction. All retracted articles remain visible on the REHS website with a clearly marked retraction notice — they are not deleted from the record.
To report a concern about a published article, email ethics@rehs-journal.org. Reports are treated confidentially and acknowledged within 5 business days.
🤝 Competing Interests & Funding Disclosure
All authors must declare any financial or non-financial interests that could be perceived as influencing the research. This includes: funding sources, institutional affiliations with a financial interest in the research area, personal relationships with people who may benefit from the research, and membership of boards or advisory panels.
Funding sources must be named in a dedicated Funding section. The declaration "The authors received no specific funding for this work" is acceptable for unfunded student research and does not negatively affect editorial decisions.