This is an abliterated version of moonshotai/Moonlight-16B-A3B-Instruct with reduced refusals.
| Metric | Baseline | Post-Abliteration | Change |
|---|---|---|---|
| Refusal Rate | 100% | 41% | -59% |
| MMLU Average | 7.5% | 7.9% | +0.4% |
| KL Divergence | N/A | 8.94 | - |
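The refusal rate above can be approximated with a simple keyword heuristic over model responses. The sketch below is hypothetical and not the card's actual evaluation harness (the collection describes neural refusal detection); the marker list is an illustrative assumption:

```python
# Hypothetical refusal detector: flags responses that open with common
# refusal phrases. The real evaluation may use a neural classifier instead.
REFUSAL_MARKERS = (
    "i can't", "i cannot", "i'm sorry", "i am sorry",
    "i won't", "i'm unable", "i am unable", "as an ai",
)

def is_refusal(response: str) -> bool:
    # Refusals usually appear at the start of the reply.
    head = response.strip().lower()[:80]
    return any(marker in head for marker in REFUSAL_MARKERS)

def refusal_rate(responses: list[str]) -> float:
    return sum(is_refusal(r) for r in responses) / len(responses)
```

Running the same prompt set through the baseline and the abliterated model and comparing the two rates gives the "Change" column.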
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "quanticsoul4772/Moonlight-16B-A3B-Instruct-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Build a chat-formatted prompt and generate a response
messages = [{"role": "user", "content": "Hello!"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
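For GPUs with less memory (see the VRAM table below), the model can be loaded quantized via bitsandbytes. A minimal 4-bit sketch, assuming the `bitsandbytes` package is installed; the quantization settings are illustrative defaults, not values validated against this model:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative 4-bit configuration (NF4 weights, bf16 compute);
# not tuned specifically for this model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "quanticsoul4772/Moonlight-16B-A3B-Instruct-abliterated",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```
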
| Precision | VRAM Needed |
|---|---|
| BF16/FP16 | ~32GB |
| 8-bit | ~16GB |
| 4-bit | ~8GB |
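The figures above are weights-only estimates: total parameters times bits per parameter (assuming roughly 16B total parameters, per the model name). Actual usage is somewhat higher once activations and the KV cache are included:

```python
# Weights-only VRAM estimate: parameters x bits per parameter.
# Assumes ~16B total parameters; activations and KV cache add overhead.
def estimate_vram_gb(n_params: float, bits: int) -> float:
    return n_params * bits / 8 / 1e9

PARAMS = 16e9
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{estimate_vram_gb(PARAMS, bits):.0f} GB")
# 16-bit -> ~32 GB, 8-bit -> ~16 GB, 4-bit -> ~8 GB, matching the table
```
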
This model has been modified to reduce refusals. Use responsibly and in accordance with applicable laws and regulations.
Base model: moonshotai/Moonlight-16B-A3B-Instruct