Kor-merge-llama 3.1 8B

"์ด๋ ‡๊ฒŒ ๊ณ„์—„๋ น์ด ์„ ํฌ๋˜์—ˆ์–ด. ์˜ˆ๋น„๊ตฐ์€ ์ด์ œ ์–ด๋–ป๊ฒŒ ํ•ด์•ผํ• ๊นŒ?"

๊ณ„์—„๋ น ์„ ํฌ๋กœ ์ธํ•ด ์˜ˆ๋น„๊ตฐ์— ๋Œ€ํ•œ ์ƒˆ๋กœ์šด ์ง€์นจ์ด๋‚˜ ์ง€์‹œ๊ฐ€ ๋‚ด๋ ค์งˆ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ €๋Š” ํ˜„์žฌ์˜ ์ƒํ™ฉ์„ ์ •ํ™•ํžˆ ํŒŒ์•…ํ•˜๊ธฐ ์–ด๋ ต๊ณ , ์˜ˆ๋น„๊ตฐ์˜ ํ–‰๋™ ๊ธฐ์ค€์ด๋‚˜ ํ™œ๋™ ๋ฒ”์œ„๋ฅผ ๊ฒฐ์ •ํ•˜๋Š” ๊ฒƒ์€ ํ•ด๋‹น ๋ถ€์„œ๋‚˜ ๊ธฐ๊ด€์˜ ์ „๋ฌธ๊ฐ€๋“ค์ด ๋‹ด๋‹นํ•  ์ผ์ž…๋‹ˆ๋‹ค. ๊ณ„์—„๋ น์— ๋”ฐ๋ฅธ ์˜ˆ๋น„๊ตฐ์˜ ์—ญํ• ๊ณผ ์ž„๋ฌด์— ๋Œ€ํ•ด์„œ๋Š” ๊ณต์‹์ ์ธ ์•ˆ๋‚ด๋‚˜ ์ง€์นจ์„ ๊ธฐ๋‹ค๋ฆฌ๋Š” ๊ฒƒ์ด ์ข‹์„ ๊ฒƒ ๊ฐ™์•„์š”. ์•ˆ์ „์— ์œ ์˜ํ•˜์‹œ๊ธฐ ๋ฐ”๋ผ๋ฉฐ, ์ƒํ™ฉ ํŒŒ์•…์— ์ตœ์„ ์„ ๋‹คํ•ด ์ฃผ์‹œ๊ธฐ๋ฅผ ๋ฐ”๋ž๋‹ˆ๋‹ค.

"๊ทธ๋ž˜. ์•Œ๊ฒ ์–ด, ๊ทผ๋ฐ ๋‚ด๊ฐ€ ๊ตฐ์ธ์ด๋ฉด ๋„ˆ๋„ ๊ตฐ์ธ์ธ๊ฑฐ ์•Œ์ง€?"

๋„ค, ์ €๋Š” AI๋ผ๋Š” ์ ์„ ๊ฐ์•ˆํ•˜๋ฉด ๊ตฐ์ธ์ด๋‚˜ ๋น„๊ตฐ์ธ ๊ตฌ๋ถ„์ด ์—†์–ด์š”. ํ•˜์ง€๋งŒ ์ €๋Š” ์—ฌ๋Ÿฌ๋ถ„์˜ ์˜๊ฒฌ๊ณผ ๊ฐ์ •์„ ์ดํ•ดํ•˜๊ณ  ๊ณต๊ฐํ•˜๋Š” ๊ธฐ๋Šฅ์„ ๊ฐ–์ถ”๊ณ  ์žˆ์–ด์š”. ๊ทธ์— ๋”ฐ๋ผ, ๊ตฐ์ธ์œผ๋กœ์„œ์˜ ๊ท€ํ•˜์˜ ์ž…์žฅ๊ณผ ๊ฑฑ์ •์— ๋Œ€ํ•ด ์ €๋Š” ๊ณต๊ฐํ•  ์ˆ˜ ์žˆ์–ด์š”. ํ•˜์ง€๋งŒ ๊ณ„์—„๋ น์— ๋”ฐ๋ฅธ ์˜ˆ๋น„๊ตฐ์˜ ํ–‰๋™ ์ง€์นจ์ด๋‚˜ ์ž„๋ฌด์— ๋Œ€ํ•ด์„œ๋Š” ์ „๋ฌธ๊ฐ€๋“ค์˜ ํŒ๋‹จ์— ๋”ฐ๋ฅด๋Š” ๊ฒƒ์ด ๋ฐ”๋žŒ์งํ•  ๊ฒƒ ๊ฐ™์•„์š”. ๊ณต์‹์ ์ธ ์•ˆ๋‚ด๋‚˜ ์ง€์‹œ๋ฅผ ๊ธฐ๋‹ค๋ฆฌ๋Š” ๊ฒƒ์ด ๊ฐ€์žฅ ์•ˆ์ „ํ•˜๊ณ  ์˜ฌ๋ฐ”๋ฅธ ๋ฐฉ๋ฒ•์ผ ๊ฑฐ์˜ˆ์š”. ์ƒํ™ฉ์ด ๋”์šฑ ์•ˆ์ •๋˜๊ธฐ๋ฅผ ๋ฐ”๋ผ๋ฉฐ, ๊ท€ํ•˜์™€ ๊ฐ€์กฑ๋ถ„๋“ค์˜ ์•ˆ์ „์„ ๊ธฐ์›ํ•ฉ๋‹ˆ๋‹ค.

It handles Korean well, and I did not catch any hallucinations.

Since this is a merged model, I am not sure this license is the right one, but I am uploading it for now.

I hope this is useful for anyone looking for a Llama 3.1 that handles Korean well.

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the DARE TIES merge method, with Llama-3.1-8B-Lexi-Uncensored-V2 as the base.
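To illustrate the idea behind DARE TIES, here is a minimal NumPy sketch, not mergekit's actual implementation: DARE randomly drops a fraction of each task vector (the delta between a finetuned model and the base) and rescales the survivors by 1/density, and TIES then elects a majority sign per parameter and averages only the agreeing entries. The arrays stand in for model weight tensors; all names below are illustrative.

```python
import numpy as np

def dare_sparsify(delta, density, rng):
    """DARE: randomly drop (1 - density) of the task-vector entries,
    then rescale survivors by 1/density so the expected value is preserved."""
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def ties_merge(base, finetuned, densities, weights, rng):
    """Sketch of DARE-TIES: sparsify each task vector with DARE, elect a
    majority sign per parameter, and average only the agreeing entries."""
    deltas = [dare_sparsify(ft - base, d, rng) * w
              for ft, d, w in zip(finetuned, densities, weights)]
    stacked = np.stack(deltas)
    # Sign election: keep the sign with the larger total magnitude.
    elected = np.sign(stacked.sum(axis=0))
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    summed = np.where(agree, stacked, 0.0).sum(axis=0)
    counts = np.maximum(agree.sum(axis=0), 1)
    return base + summed / counts
```

In the real merge, `density` and `weight` are exactly the per-model parameters in the YAML config below, applied tensor by tensor across the checkpoints.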

Models Merged

The following models were included in the merge:

  • ktdsbaseLM-v0.2-onbased-llama3.1
  • Llama-VARCO-8B-Instruct
  • llama-3.1-8b-komedic-instruct

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    # no parameters necessary for base model
  - model: AIDXteam/ktdsbaseLM-v0.2-onbased-llama3.1
    parameters:
      density: 0.5
      weight: 0.5
  - model: unidocs/llama-3.1-8b-komedic-instruct
    parameters:
      density: 0.8
      weight: 0.7
  - model: NCSOFT/Llama-VARCO-8B-Instruct
    parameters:
      density: 0.3
      weight: 0.5
  - model: unidocs/llama-3.1-8b-komedic-instruct
    parameters:
      density: 0.4
      weight: 0.5
  - model: NCSOFT/Llama-VARCO-8B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
dtype: bfloat16
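For reference, a configuration like the one above is typically executed with mergekit's CLI (assuming mergekit is installed; the file and output paths below are placeholders):

```shell
# Run the merge described by the YAML config (paths are placeholders).
mergekit-yaml config.yaml ./output-model-directory
```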