Authors & Review Team
Agent Skills Hub is maintained by a small editorial team focused on secure adoption of AI agent skills. Our writers and reviewers combine documentation analysis, hands-on workflow testing, and risk triage to publish practical guidance rather than generic directory copy.
We prioritize pages that help developers make safer rollout decisions: what to test first, which permissions to restrict, and which failure modes to monitor before promotion.
Editorial Team
Curates taxonomy, rewrites low-context listings, and ensures each page answers a concrete operational question.
Security Review Contributors
Flag permission risks, shell/network exposure patterns, and environment-handling issues surfaced during repository review.
Community Feedback
We accept correction requests and update reports when maintainers or users provide reproducible evidence.
To request a correction, email support@agentskillshub.dev with the URL, claim, and source.
How contributors are evaluated
Contributor quality is measured by verification discipline, not publication volume. A strong contribution includes source links, reproducible checks, and clear risk framing that helps readers make better rollout decisions. We prefer concise, testable updates over broad claims that cannot be validated in realistic deployment environments. This standard keeps the directory useful for operators who need reliable guidance under time pressure.
Editorial reviewers check whether a proposed change reduces user uncertainty. If a revision adds words but does not improve a concrete decision point, it is usually reworked before publication. This keeps low-value expansions out and maintains a high signal-to-noise ratio across both listings and blog articles.
Review workflow and accountability
For high-impact pages, updates follow a two-layer check: technical signal review plus editorial clarity review. Technical review validates claims about permissions, network behavior, dependency risk, and operational failure modes. Editorial review ensures the final page is readable, internally consistent, and actionable for both beginners and experienced teams. If reviewers disagree, we publish the narrowest claim supported by evidence and expand later as validation improves.
We keep change history and correction context so recurring issues can be addressed systemically. For example, if multiple reports reveal confusion around one install path, we update related pages together rather than patching isolated sentences.
Conflict and independence policy
Editorial quality depends on independence. Commercial relationships or sponsorships do not buy ranking advantages or softened language. Contributors are expected to disclose relevant conflicts where applicable, and editorial standards remain the same for paid and unpaid contexts. This policy protects trust and keeps recommendations aligned with user outcomes rather than short-term promotional incentives.
If you believe a page misrepresents a tool, send correction evidence and the specific claim that should be revised. Well-structured reports help us resolve issues quickly and keep the directory credible for production teams.
Continuous improvement standard
Contributors are encouraged to treat each update as part of a long-term quality system. We track recurring confusion patterns and feed them back into page structure, terminology normalization, and risk framing templates. Over time, this process turns one-off corrections into stronger baseline guidance for all readers.
The best contributor outcome is not fixing a single line; it is reducing repeat mistakes across related pages so teams can adopt tools with clearer expectations and fewer surprises.
We also run periodic editorial retrospectives to identify where page structure can better support real implementation work. Insights from those reviews feed directly into templates, terminology guidance, and review checklists so new pages start from a stronger baseline instead of repeating avoidable weaknesses.