Editorial Policy

Agent Skills Hub exists to replace low-value, templated content with useful implementation guidance for AI agent skills. This policy defines how pages are written, reviewed, and corrected.

1) Source Standards

We prioritize first-party sources such as official repositories, docs, release notes, and maintainer statements. Third-party summaries are treated as supporting context, not final authority.

When source signals conflict, we publish the most conservative verified interpretation and mark unresolved uncertainty. This prevents overconfident guidance that can cause production mistakes. Source links are retained wherever possible so readers can inspect original material directly.

2) Original Value Requirement

Every indexable page must include practical value beyond copied README text. Typical additions include risk framing, rollout checklists, known failure modes, and migration notes. Pages that do not yet meet this bar remain accessible to users but are not prioritized for search visibility.

Original value is measured by decision impact, not only length. A compliant page should help users choose safer defaults, reduce integration uncertainty, or prevent repeated implementation failures. Content that only paraphrases README text is considered incomplete and is queued for rewrite.

3) Update and Refresh Cadence

High-traffic or high-risk pages are reviewed more frequently. When upstream projects change behavior, permissions, or maintenance status, we revise affected pages to reflect current operational impact.

Update priority is based on user risk and operational relevance. Security-critical changes are handled before long-tail editorial improvements. We also monitor recurring feedback themes to identify structural issues that require cross-page updates instead of isolated text edits.

4) Corrections Process

We accept correction requests that include verifiable evidence. Send the page URL, incorrect claim, and a source link to support@agentskillshub.dev. Confirmed issues are updated promptly, and significant revisions are reflected in page metadata where applicable.

Requests with reproducible context are processed fastest. We ask for exact URLs, disputed claims, and evidence links so reviewers can validate without guesswork. If a correction touches multiple pages, we batch related fixes to maintain consistency across taxonomy, related links, and canonical coverage.
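As an illustration, a well-formed correction request might look like the following. The page URL, claim, and evidence link here are placeholders, not real pages:

```text
To: support@agentskillshub.dev
Subject: Correction request: [page URL]

Page:     https://example.com/skills/some-skill       (exact URL of the affected page)
Claim:    "The skill requires filesystem write access." (quote the disputed statement)
Evidence: https://example.com/upstream/release-notes   (first-party source contradicting it)
Notes:    Behavior changed in the latest upstream release.
```

A request in this shape lets a reviewer validate the issue in one pass, without asking follow-up questions.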

5) Independence and Commercial Policy

Sponsorship or commercial relationships do not guarantee favorable ranking treatment. Editorial standards are applied consistently to preserve trust and decision quality for readers.

We separate monetization from editorial decision-making. Sponsored relationships may support operations, but they do not bypass review gates or suppress risk disclosures. Long-term trust is treated as a product requirement, not a marketing preference.

6) Security-Context Writing Principles

Security content should be specific enough to act on. We favor concrete examples, bounded claims, and clear mitigation steps over alarm-heavy language. When a risk is contextual, we state where it applies and where it does not. This helps readers calibrate effort and avoid both complacency and overreaction.

We also require that high-risk claims include at least one practical control path, such as permission scoping, execution isolation, secret management improvements, or monitoring checkpoints. A warning without a mitigation is incomplete guidance.

7) Reader Feedback Loop

Reader reports are a primary quality signal. We use them to prioritize edits, detect stale assumptions, and improve recurring weak spots in templates. Feedback-driven updates are documented so the same issue does not recur.

This policy is updated when workflow changes require clearer rules. As long as we publish under this framework, these standards apply to all indexed pages and new content.