Loading model file models/open_llama_7b_preview_200bt/open_llama_7b_preview_200bt_transformers_weights/pytorch_model-00001-of-00002.bin
Traceback (most recent call last):
  File "convert-pth-to-ggml.py", line 11, in <module>
    convert.main(['--outtype', 'f16' if args.ftype == 1 else 'f32', '--', args.dir_model])
  File "/Volumes/mac/Dev/llama.cpp/convert.py", line 1145, in main
    model_plus = load_some_model(args.model)
  File "/Volumes/mac/Dev/llama.cpp/convert.py", line 1071, in load_some_model
    models_plus.append(lazy_load_file(path))
  File "/Volumes/mac/Dev/llama.cpp/convert.py", line 865, in lazy_load_file
    return lazy_load_torch_file(fp, path)
  File "/Volumes/mac/Dev/llama.cpp/convert.py", line 737, in lazy_load_torch_file
    model = unpickler.load()
I get the same error on an M-series MacBook (Ventura). However, from the repo README.md it looks like make should work instead of cmake, so I'll give that a try.
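For what it's worth, the traceback above ends inside unpickler.load() and the exception text itself isn't shown. As a generic, standalone illustration (plain Python, not llama.cpp code) of how an error surfaces from that frame when a checkpoint's pickle stream is incomplete or corrupt:

```python
import io
import pickle

# Simulate a partially written / partially downloaded checkpoint payload
# by truncating a valid pickle stream. (Illustrative only: the real
# cause of the error above is not shown in the traceback.)
data = pickle.dumps({"weight": [1, 2, 3]})
truncated = data[:-5]

try:
    pickle.Unpickler(io.BytesIO(truncated)).load()
except Exception as e:
    # Typically raises pickle.UnpicklingError or EOFError here,
    # i.e. the failure appears at the .load() call, as in the trace.
    print(type(e).__name__)
```

If the actual error is something like this, re-downloading the .bin shards and verifying their sizes/checksums would be a reasonable first check before changing the build system.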