FAQ

Frequently asked questions

Short answers to the questions users and creators are most likely to run into while using 8gentHub.

What is 8gentHub?

8gentHub is a community marketplace for AI agent templates. It helps people publish, discover, version, and improve agents across multiple platforms.

Do preferred models lock an agent to one model?

No. Preferred models are recommendation tags only. They describe the creator's suggested setup and do not prevent export or installation.

Can I publish an agent for more than one platform?

Yes. A manifest can include platform-specific settings for ChatGPT, Claude, OpenClaw, Manus, and Perplexity when the agent supports them.
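The manifest schema itself isn't documented in this FAQ, so the sketch below is only an illustration of the idea of per-platform sections; every field name in it is an assumption, not 8gentHub's real schema:

```python
# Hypothetical sketch of a multi-platform agent manifest.
# All field names are illustrative assumptions, not 8gentHub's real schema.
manifest = {
    "name": "example-research-agent",
    "version": "1.2.0",
    "preferred_models": ["gpt-4o", "claude-sonnet"],  # recommendation tags only
    "platforms": {
        # One section per platform the agent actually supports.
        "chatgpt": {"custom_instructions": "..."},
        "claude": {"system_prompt": "..."},
        "perplexity": {"search_depth": "deep"},
    },
}

# A listing only needs entries for the platforms it supports.
print(sorted(manifest["platforms"]))  # → ['chatgpt', 'claude', 'perplexity']
```

The point is that one listing carries all platform sections; install/export can then pick the section matching the target platform.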

Do I need an account to browse agents?

Browsing public listings is available without signing in, but publishing, rewarding, and other account-specific actions require authentication.

How do I update an existing agent?

Open the agent detail page, edit the manifest or preferred models, and save the change as a new version so users can track the update history.

How do I send feedback?

Open the Feedback page for steps to submit discussion-based feedback. For agent-specific ideas, use the Discussion tab on that agent page.

Why doesn't the benchmark logic page show my newly selected benchmark yet?

The benchmark logic page reads the agent's saved benchmark references only. If you selected a benchmark in the editor but have not saved the listing yet, it will not appear there until the new version is saved.

Are benchmarks only used for AutoResearch?

No. Benchmarks can also be used for public logic pages, manual evaluation runs, leaderboard submissions, and install/export benchmark files. AutoResearch is one workflow that uses them, but it is not required.

Is AutoResearch part of install/export?

Yes. In Install, check 'Include AutoResearch setup package in export ZIP' before downloading the platform ZIP. Then run the setup package locally and import the results back into the campaign history.

Where do I start and manage AutoResearch campaigns?

Go to the agent's GitHub tab. Creators can import self-hosted results, refresh campaign history, and delete campaigns from the AutoResearch panel.

What are the default AutoResearch limits?

Default limits are max_wall_time_minutes=30, max_experiments=20, and early_stop_no_gain_runs=5, with optional max_budget_usd.
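The key names and default values above are from this FAQ; the surrounding config structure below is only an assumption about how they might be expressed and overridden:

```python
# Default AutoResearch limits as listed above; the dict/merge structure
# around them is an illustrative assumption, not 8gentHub's real config.
DEFAULT_LIMITS = {
    "max_wall_time_minutes": 30,
    "max_experiments": 20,
    "early_stop_no_gain_runs": 5,
    # "max_budget_usd" is optional and therefore unset by default.
}

def with_overrides(overrides=None):
    """Merge user overrides (e.g. an optional budget cap) onto the defaults."""
    limits = dict(DEFAULT_LIMITS)
    limits.update(overrides or {})
    return limits

print(with_overrides({"max_budget_usd": 10}))
```

With no overrides, a campaign stops after 30 minutes of wall time, 20 experiments, or 5 consecutive runs with no gain, whichever comes first.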

Can I connect a GitHub repository?

Yes. If your account has the integration enabled, you can link a repository to an agent and use the GitHub panels on the agent page.

Can I connect GitHub for more than one listing?

Yes. Run 'Install GitHub App' from each listing page you want to link, then refresh repositories and link the target repo. If your GitHub App uses selected repositories, make sure the new repository is included during install/update.

Why did 'Create Draft PR' create an issue only?

Draft PR mode always creates an issue first. If PR creation fails (for example due to GitHub App permissions), the issue still exists and the API returns a specific error explaining why the PR step failed.

What should I do if a listing looks unsafe or misleading?

Avoid installing it and review the platform policies. For legal and policy context, see the Terms, Privacy, Acceptable Use, and DMCA pages linked in the footer.

Looking for the full guide?

Read the documentation page for a broader walkthrough of discovery, publishing, versioning, and collaboration flows.