# Xbinx-7B-Instruct-v1.0

## Model Description
Xbinx-7B-Instruct-v1.0 is a state-of-the-art 7-billion parameter large language model developed by the NonExist Research Team. It is built upon the proprietary Xbinx-Architecture, which utilizes a hybrid Sparse Mixture-of-Experts (SMoE) mechanism optimized for low-latency inference and high-precision reasoning tasks.
This model was fine-tuned using a novel technique called Dynamic Preference Alignment (DPA), allowing it to excel in complex instruction following, multi-turn dialogue, and structured data generation (JSON/Code).
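The Xbinx-Architecture itself is proprietary and not public. For orientation only, the sketch below shows a generic top-2-gated sparse MoE feed-forward block of the kind the description alludes to; it is a standard textbook formulation with placeholder dimensions, not the actual Xbinx implementation.

```python
# Purely illustrative top-2 sparse MoE feed-forward layer.
# This is a generic formulation, NOT the proprietary Xbinx-Architecture;
# all dimensions and expert counts are placeholder values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=4096, d_ff=11008, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        gates = self.router(x)                         # (tokens, n_experts)
        weights, idx = gates.topk(self.top_k, dim=-1)  # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)           # normalize over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                  # tokens routed to expert e at rank k
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out
```

In a full model, a block like this would replace the dense feed-forward sublayer in each Transformer layer, so that only the selected experts run for each token.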
## Key Features
- Context Window: 128,000 tokens (supported via Rotary Positional Embeddings).
- Architecture: 32-layer Transformer with Gated Linear Units (GLU).
- Training Data: 4.5 Trillion tokens of high-quality synthetic and curated web data.
- Quantization Friendly: Optimized for 4-bit and 8-bit deployment without significant perplexity loss (see the loading sketch below).
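Since the card advertises 4-bit deployment, here is a minimal loading sketch using `bitsandbytes` through Transformers. The quantization settings are common defaults, not values published by NonExist Research.

```python
# Hypothetical 4-bit loading sketch via bitsandbytes + Transformers.
# The config values below are generic defaults, not settings confirmed
# for Xbinx-7B-Instruct-v1.0.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "injet-zhou/Xbin-7b-instruct-v1.0"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4-bit
    bnb_4bit_quant_type="nf4",              # NF4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```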
## Performance Benchmarks
| Benchmark | Xbinx-7B-Instruct-v1.0 | Llama-3-8B-Instruct | Mistral-7B-v0.3 |
|---|---|---|---|
| MMLU | 72.4 | 71.9 | 63.1 |
| GSM8K | 81.2 | 79.6 | 52.2 |
| HumanEval | 68.5 | 62.2 | 40.4 |
| MBPP | 74.1 | 70.0 | 50.1 |
## Usage

### Quickstart with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "injet-zhou/Xbin-7b-instruct-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Xbinx, a helpful assistant powered by NonExist Research."},
    {"role": "user", "content": "Explain the concept of quantum entanglement using a cat analogy."},
]

# add_generation_prompt=True appends the opening assistant tag so the model
# completes the assistant turn instead of continuing the user turn.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
## Prompt Format

Xbinx-7B-Instruct-v1.0 uses a ChatML-style chat template:
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{user_query}<|im_end|>
<|im_start|>assistant
```
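If you manage prompts yourself, you can render this format as a plain string for inspection instead of hand-writing the special tokens. The sketch below assumes the tokenizer on the Hub ships the ChatML-style template shown above; the `{system_prompt}` and `{user_query}` strings are just placeholders.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("injet-zhou/Xbin-7b-instruct-v1.0")
messages = [
    {"role": "system", "content": "{system_prompt}"},
    {"role": "user", "content": "{user_query}"},
]

# tokenize=False returns the rendered prompt string instead of token IDs;
# add_generation_prompt=True appends the opening <|im_start|>assistant tag.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```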
## Limitations

While Xbinx-7B-Instruct-v1.0 demonstrates strong reasoning capabilities, it may occasionally hallucinate on niche factual topics. Users are encouraged to verify critical information. The model is not recommended for high-stakes medical or legal use without human oversight.
## Citation

If you use this model in your research, please cite:

```bibtex
@misc{xbinx2024,
  author = {NonExist Research Team},
  title = {Xbinx: Advancing Small-Scale LLMs with Dynamic Preference Alignment},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face Model Hub}
}
```