Theme
AI Resources
General365
General365 is a manually curated benchmark for evaluating general reasoning in LLMs across difficult and diverse tasks, with a focus on reasoning over K-12-scope knowledge rather than domain-specialist knowledge.
The official repository presents General365 as the benchmark release accompanying the paper on general reasoning under high difficulty and diversity, with public questions, variants, a model-response format, grading code, project links, leaderboard materials, and a Hugging Face dataset link. This page is a factual editorial overview for reference, not an endorsement or exhaustive review. Licensing and usage terms vary by project, so readers should review the original materials directly.
What it is
A general-reasoning benchmark
General365 is framed as a benchmark for testing broad reasoning ability, with manually crafted seed problems and variants intended to reduce overreliance on narrow domain knowledge or rote memorization.
Why it stands out
Difficulty and diversity focus
The notable angle is the combination of high-difficulty tasks, broad scenario coverage, K-12-scope knowledge constraints, held-out questions, and hybrid scoring that mixes rule-based and model-based checks.
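The repository's grading code defines the actual procedure; purely as an illustration of what a hybrid rule-based plus model-based grader of this kind often looks like, the sketch below lets a cheap string rule decide clear cases and defers the rest to a judge model. All function names and the `judge` callable are hypothetical, not General365's API.

```python
# Illustrative hybrid grader: rule-based exact match first, model-based judge
# as a fallback. Hypothetical names throughout; not General365's grading code.
import re


def normalize(text: str) -> str:
    """Lowercase, trim, and collapse whitespace/punctuation for lenient matching."""
    return re.sub(r"[\s.,!?]+", " ", text.strip().lower()).strip()


def rule_based_grade(response: str, reference: str):
    """Return True/False when a string rule can decide, or None to defer."""
    if normalize(response) == normalize(reference):
        return True
    # Short references (e.g. "42", "no") that never appear in the response
    # can be marked wrong without a judge; everything else is deferred.
    if len(reference) <= 20 and normalize(reference) not in normalize(response):
        return False
    return None


def model_based_grade(response: str, reference: str, judge) -> bool:
    """Ask a judge model (any callable taking a prompt and returning text)."""
    prompt = (
        f"Reference answer:\n{reference}\n\n"
        f"Model response:\n{response}\n\n"
        "Does the response reach the same final answer? Reply YES or NO."
    )
    return judge(prompt).strip().upper().startswith("YES")


def hybrid_grade(response: str, reference: str, judge) -> bool:
    verdict = rule_based_grade(response, reference)
    return verdict if verdict is not None else model_based_grade(response, reference, judge)
```

The common design point in hybrid schemes like this is that the inexpensive rule settles unambiguous answers, while the judge model is reserved for responses that need semantic comparison.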
Availability
Repo, dataset, project page, and leaderboard
The official materials include the GitHub repository, paper link, project page, leaderboard, Hugging Face dataset, grading script, model-response format, and example workflow for running evaluations.
Why it matters
Why readers may notice it
General365 matters because reasoning benchmarks can blur into knowledge tests. This project is positioned around separating general reasoning skill from specialist academic knowledge, while keeping tasks difficult and varied.
What readers may want to know
Where it fits
This belongs in the benchmark and dataset layer rather than the model, app, or agent layer. It is most relevant to readers comparing model reasoning evaluations and benchmark design choices.
Reporting note
What appears notable
Based on the repository, the points most worth noticing are the manually curated question design, the 365 seed problems and 1,095 variants, the note about a held-out test set, the leaderboard materials, and the hybrid scoring workflow.
Before using
What readers may want to review
Which public questions, variants, and held-out test-set limitations are described in the official materials.
The grading script, model-response JSONL format, and scoring method before adapting the benchmark; a rough, hypothetical JSONL sketch follows this list.
Whether the benchmark is being used to compare general reasoning ability, to assess data-contamination risk, or to support a narrower model-capability claim.
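The actual model-response JSONL schema is defined only in the official repository; purely as an illustration of the shape such files usually take, the sketch below writes and reads one JSON object per line using hypothetical field names (`question_id`, `variant_id`, `response`).

```python
# Hypothetical model-response JSONL sketch; field names are illustrative and
# should be replaced with whatever the official grading script expects.
import json

records = [
    {"question_id": "seed-001", "variant_id": "v1", "response": "The answer is 42."},
    {"question_id": "seed-002", "variant_id": "v1", "response": "No, because the premise fails."},
]

# JSONL means one JSON object per line, which grading scripts typically stream.
with open("model_responses.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Read the file back, e.g. to sanity-check it before grading.
with open("model_responses.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f if line.strip()]
print(f"{len(loaded)} responses ready for grading")
```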
Best fit
Who may find it relevant
Readers following LLM reasoning benchmarks and model-comparison methods.
Builders and researchers comparing difficult general-reasoning tasks across model families.
Less relevant for readers looking for a model checkpoint, finished app, or agent framework.
Editorial note
Why it is included here
Lifehubber includes General365 because it gives readers a current benchmark reference for reasoning evaluation: not only whether models know facts, but how they handle varied, difficult reasoning tasks under clearly stated knowledge constraints.
Source links
Original materials
More in Datasets
Keep browsing this category
A few more places to continue in datasets.
ClawMark
evolvent-ai/ClawMark
A living-world benchmark for multi-day, multimodal coworker agents, spanning 100 tasks across professional domains and real tool environments.
LARYBench
meituan-longcat/LARYBench
A benchmark for evaluating latent action representations, with pipelines for action semantics, robotic control regression, and broader vision-to-action alignment.
Monitorability Evals
openai/monitorability-evals
An OpenAI evaluation-data release for studying monitorability, with public eval splits, prompt templates, dataset mappings, and metric code from the Monitoring Monitorability paper.
Related in Lifehubber
Continue browsing
Keep browsing across AI, including AI Resources for more tools and projects to explore, AI Ballot for a clearer view of what readers are leaning toward, and AI Guides for help with choosing and using AI tools well.