2024-02-12T21:19:45,869 Created temporary directory: /tmp/pip-build-tracker-9ldbb1u6 2024-02-12T21:19:45,870 Initialized build tracking at /tmp/pip-build-tracker-9ldbb1u6 2024-02-12T21:19:45,871 Created build tracker: /tmp/pip-build-tracker-9ldbb1u6 2024-02-12T21:19:45,871 Entered build tracker: /tmp/pip-build-tracker-9ldbb1u6 2024-02-12T21:19:45,872 Created temporary directory: /tmp/pip-wheel-59yyi7nt 2024-02-12T21:19:45,875 Created temporary directory: /tmp/pip-ephem-wheel-cache-qb1yhzvq 2024-02-12T21:19:45,897 Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple 2024-02-12T21:19:45,901 2 location(s) to search for versions of ctransformer-core: 2024-02-12T21:19:45,901 * https://pypi.org/simple/ctransformer-core/ 2024-02-12T21:19:45,901 * https://www.piwheels.org/simple/ctransformer-core/ 2024-02-12T21:19:45,901 Fetching project page and analyzing links: https://pypi.org/simple/ctransformer-core/ 2024-02-12T21:19:45,902 Getting page https://pypi.org/simple/ctransformer-core/ 2024-02-12T21:19:45,904 Found index url https://pypi.org/simple/ 2024-02-12T21:19:46,121 Fetched page https://pypi.org/simple/ctransformer-core/ as application/vnd.pypi.simple.v1+json 2024-02-12T21:19:46,125 Skipping link: No binaries permitted for ctransformer-core: https://files.pythonhosted.org/packages/42/e8/3a0a157e54798a335e5efb27fb7a08554fa39cbf4ee678cadb2e5e01e1c7/ctransformer_core-0.0.1-py3-none-any.whl (from https://pypi.org/simple/ctransformer-core/) 2024-02-12T21:19:46,126 Found link https://files.pythonhosted.org/packages/0e/75/603544a8cf7bfa85448ca57c3943a9e4ba2ed3b2f71e9aed8c3df9dc1a1c/ctransformer_core-0.0.1.tar.gz (from https://pypi.org/simple/ctransformer-core/), version: 0.0.1 2024-02-12T21:19:46,127 Skipping link: No binaries permitted for ctransformer-core: https://files.pythonhosted.org/packages/12/27/aad3158ac9dfbd4ce22e9396ccd575d15535cd8745c383a28ac5b1f57f70/ctransformer_core-0.0.2-py3-none-any.whl (from https://pypi.org/simple/ctransformer-core/) 2024-02-12T21:19:46,127 Found link https://files.pythonhosted.org/packages/e8/82/4a409f65bef9daa444d598eaf7b07cb8d433eb2d54b8ed72388b9e1c7cba/ctransformer_core-0.0.2.tar.gz (from https://pypi.org/simple/ctransformer-core/), version: 0.0.2 2024-02-12T21:19:46,128 Skipping link: No binaries permitted for ctransformer-core: https://files.pythonhosted.org/packages/ba/05/c3f6f2addeebe1bf8ed540a24b648bfaf44c14f6e1d4cad6f94103d8025b/ctransformer_core-0.0.3-py3-none-any.whl (from https://pypi.org/simple/ctransformer-core/) 2024-02-12T21:19:46,129 Found link https://files.pythonhosted.org/packages/e1/3f/c45216332d3226911dc7e700d48246d5405b8add433cf42dbe4b1a603a22/ctransformer_core-0.0.3.tar.gz (from https://pypi.org/simple/ctransformer-core/), version: 0.0.3 2024-02-12T21:19:46,129 Skipping link: No binaries permitted for ctransformer-core: https://files.pythonhosted.org/packages/36/1a/357888d79f0303df7e0886283adb35d1adb71dbaf54d9941dedb64a9aef1/ctransformer_core-0.0.4-py3-none-any.whl (from https://pypi.org/simple/ctransformer-core/) 2024-02-12T21:19:46,130 Found link https://files.pythonhosted.org/packages/9d/6a/369044799ac3fac3960e2e9140e9d46dc7f3a5fd38337b97b7ac140c582c/ctransformer_core-0.0.4.tar.gz (from https://pypi.org/simple/ctransformer-core/), version: 0.0.4 2024-02-12T21:19:46,131 Skipping link: No binaries permitted for ctransformer-core: https://files.pythonhosted.org/packages/1e/b3/16e9a51b9d0c5f34ccaa209e6faa96ce4da76eb68ef15080b6d2b47970a3/ctransformer_core-0.0.5-cp311-cp311-win_amd64.whl (from 
https://pypi.org/simple/ctransformer-core/) 2024-02-12T21:19:46,131 Skipping link: No binaries permitted for ctransformer-core: https://files.pythonhosted.org/packages/32/c1/daf1d695ec66b154a3e9fb86cde7fe9cf4c90f04860483cc2b622757f2ad/ctransformer_core-0.0.5-cp312-cp312-macosx_11_0_x86_64.whl (from https://pypi.org/simple/ctransformer-core/) 2024-02-12T21:19:46,132 Found link https://files.pythonhosted.org/packages/75/79/4c1b57d5af5a8d0ce94041b1ef8d6a7ac663f92255923058582aecc1e6e5/ctransformer_core-0.0.5.tar.gz (from https://pypi.org/simple/ctransformer-core/), version: 0.0.5 2024-02-12T21:19:46,133 Skipping link: No binaries permitted for ctransformer-core: https://files.pythonhosted.org/packages/d7/73/13aec334c525172db3df0a5b0ef0452c25a53d2823c5462613ed73a37e8f/ctransformer_core-0.0.6-cp311-cp311-win_amd64.whl (from https://pypi.org/simple/ctransformer-core/) 2024-02-12T21:19:46,134 Skipping link: No binaries permitted for ctransformer-core: https://files.pythonhosted.org/packages/95/48/c96cca632684d83e4b1862c31a5e5f256c499421c9958104bb4d3c83c414/ctransformer_core-0.0.6-cp312-cp312-macosx_11_0_x86_64.whl (from https://pypi.org/simple/ctransformer-core/) 2024-02-12T21:19:46,134 Found link https://files.pythonhosted.org/packages/d3/4f/55ff9cbd8f5562ac07a90c61315dc40304f15204c680f5cf0621f3f8c5f4/ctransformer_core-0.0.6.tar.gz (from https://pypi.org/simple/ctransformer-core/), version: 0.0.6 2024-02-12T21:19:46,135 Skipping link: No binaries permitted for ctransformer-core: https://files.pythonhosted.org/packages/9e/ec/abe481a235baeda7832a98d743dd4a27f76fc25172a9def14f8ca318b2f1/ctransformer_core-0.2.28-cp311-cp311-win_amd64.whl (from https://pypi.org/simple/ctransformer-core/) (requires-python:>=3.8) 2024-02-12T21:19:46,135 Skipping link: No binaries permitted for ctransformer-core: https://files.pythonhosted.org/packages/5d/ce/7ec4ce8d298f772e7ef70c6a38d09a7b7e8966168f778ee4aecb63c4cc24/ctransformer_core-0.2.28-cp312-cp312-macosx_11_0_x86_64.whl (from https://pypi.org/simple/ctransformer-core/) (requires-python:>=3.8) 2024-02-12T21:19:46,136 Found link https://files.pythonhosted.org/packages/7d/a2/19a0bf200fc9f0b055b1cb2e6dc6437e70156c6d145abf525b50b3ac1537/ctransformer_core-0.2.28.tar.gz (from https://pypi.org/simple/ctransformer-core/) (requires-python:>=3.8), version: 0.2.28 2024-02-12T21:19:46,137 Fetching project page and analyzing links: https://www.piwheels.org/simple/ctransformer-core/ 2024-02-12T21:19:46,137 Getting page https://www.piwheels.org/simple/ctransformer-core/ 2024-02-12T21:19:46,138 Found index url https://www.piwheels.org/simple/ 2024-02-12T21:19:46,293 Fetched page https://www.piwheels.org/simple/ctransformer-core/ as text/html 2024-02-12T21:19:46,295 Skipping link: No binaries permitted for ctransformer-core: https://www.piwheels.org/simple/ctransformer-core/ctransformer_core-0.0.4-py3-none-any.whl#sha256=ca633851e204089b65373eacd53de2025a6d1459bcb532b0ad5455e86eac17d6 (from https://www.piwheels.org/simple/ctransformer-core/) 2024-02-12T21:19:46,296 Skipping link: No binaries permitted for ctransformer-core: https://www.piwheels.org/simple/ctransformer-core/ctransformer_core-0.0.3-py3-none-any.whl#sha256=f8c024a796a536cc08273239df09109ae3de387ff6125e56b7784a202c4b201d (from https://www.piwheels.org/simple/ctransformer-core/) 2024-02-12T21:19:46,297 Skipping link: No binaries permitted for ctransformer-core: 
https://www.piwheels.org/simple/ctransformer-core/ctransformer_core-0.0.2-py3-none-any.whl#sha256=0416c33f6d6060d7a82762346d29ebd0ee3c4c18abe9b1cfc5050ddb04e60de3 (from https://www.piwheels.org/simple/ctransformer-core/) 2024-02-12T21:19:46,297 Skipping link: No binaries permitted for ctransformer-core: https://www.piwheels.org/simple/ctransformer-core/ctransformer_core-0.0.1-py3-none-any.whl#sha256=5eff77b8ee4706aa0a8d3071bb52a2b77cd90bf61ae96a03605685b3e3e97a40 (from https://www.piwheels.org/simple/ctransformer-core/) 2024-02-12T21:19:46,298 Skipping link: not a file: https://www.piwheels.org/simple/ctransformer-core/ 2024-02-12T21:19:46,298 Skipping link: not a file: https://pypi.org/simple/ctransformer-core/ 2024-02-12T21:19:46,317 Given no hashes to check 1 links for project 'ctransformer-core': discarding no candidates 2024-02-12T21:19:46,335 Collecting ctransformer-core==0.2.28 2024-02-12T21:19:46,338 Created temporary directory: /tmp/pip-unpack-ilprkr0j 2024-02-12T21:19:46,563 Downloading ctransformer_core-0.2.28.tar.gz (10.6 MB) 2024-02-12T21:19:49,937 Added ctransformer-core==0.2.28 from https://files.pythonhosted.org/packages/7d/a2/19a0bf200fc9f0b055b1cb2e6dc6437e70156c6d145abf525b50b3ac1537/ctransformer_core-0.2.28.tar.gz to build tracker '/tmp/pip-build-tracker-9ldbb1u6' 2024-02-12T21:19:49,942 Created temporary directory: /tmp/pip-build-env-img3dz7b 2024-02-12T21:19:49,947 Installing build dependencies: started 2024-02-12T21:19:49,948 Running command pip subprocess to install build dependencies 2024-02-12T21:19:51,078 Using pip 23.3.1 from /usr/local/lib/python3.11/dist-packages/pip (python 3.11) 2024-02-12T21:19:51,592 Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple 2024-02-12T21:19:52,034 Collecting scikit-build-core[pyproject] 2024-02-12T21:19:52,055 Using cached https://www.piwheels.org/simple/scikit-build-core/scikit_build_core-0.8.0-py3-none-any.whl (139 kB) 2024-02-12T21:19:52,346 Collecting packaging>=20.9 (from scikit-build-core[pyproject]) 2024-02-12T21:19:52,361 Using cached https://www.piwheels.org/simple/packaging/packaging-23.2-py3-none-any.whl (53 kB) 2024-02-12T21:19:52,474 Collecting pathspec>=0.10.1 (from scikit-build-core[pyproject]) 2024-02-12T21:19:52,496 Using cached https://www.piwheels.org/simple/pathspec/pathspec-0.12.1-py3-none-any.whl (31 kB) 2024-02-12T21:19:52,653 Collecting pyproject-metadata>=0.5 (from scikit-build-core[pyproject]) 2024-02-12T21:19:52,667 Using cached https://www.piwheels.org/simple/pyproject-metadata/pyproject_metadata-0.7.1-py3-none-any.whl (7.4 kB) 2024-02-12T21:19:55,221 Installing collected packages: pathspec, packaging, scikit-build-core, pyproject-metadata 2024-02-12T21:19:56,267 Successfully installed packaging-23.2 pathspec-0.12.1 pyproject-metadata-0.7.1 scikit-build-core-0.8.0 2024-02-12T21:19:56,544 [notice] A new release of pip is available: 23.3.1 -> 24.0 2024-02-12T21:19:56,544 [notice] To update, run: python3 -m pip install --upgrade pip 2024-02-12T21:19:56,780 Installing build dependencies: finished with status 'done' 2024-02-12T21:19:56,783 Getting requirements to build wheel: started 2024-02-12T21:19:56,784 Running command Getting requirements to build wheel 2024-02-12T21:19:57,221 Getting requirements to build wheel: finished with status 'done' 2024-02-12T21:19:57,248 Created temporary directory: /tmp/pip-modern-metadata-c43skc5l 2024-02-12T21:19:57,250 Preparing metadata (pyproject.toml): started 2024-02-12T21:19:57,251 Running command Preparing metadata (pyproject.toml) 
2024-02-12T21:19:57,778 *** scikit-build-core 0.8.0 using CMake 3.25.1 (metadata_wheel) 2024-02-12T21:19:57,864 Preparing metadata (pyproject.toml): finished with status 'done' 2024-02-12T21:19:57,870 Source in /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114 has version 0.2.28, which satisfies requirement ctransformer-core==0.2.28 from https://files.pythonhosted.org/packages/7d/a2/19a0bf200fc9f0b055b1cb2e6dc6437e70156c6d145abf525b50b3ac1537/ctransformer_core-0.2.28.tar.gz 2024-02-12T21:19:57,871 Removed ctransformer-core==0.2.28 from https://files.pythonhosted.org/packages/7d/a2/19a0bf200fc9f0b055b1cb2e6dc6437e70156c6d145abf525b50b3ac1537/ctransformer_core-0.2.28.tar.gz from build tracker '/tmp/pip-build-tracker-9ldbb1u6' 2024-02-12T21:19:57,876 Created temporary directory: /tmp/pip-unpack-splo_6g2 2024-02-12T21:19:57,877 Created temporary directory: /tmp/pip-unpack-vzjjhift 2024-02-12T21:19:57,884 Building wheels for collected packages: ctransformer-core 2024-02-12T21:19:57,889 Created temporary directory: /tmp/pip-wheel-t4gzmpr5 2024-02-12T21:19:57,890 Destination directory: /tmp/pip-wheel-t4gzmpr5 2024-02-12T21:19:57,892 Building wheel for ctransformer-core (pyproject.toml): started 2024-02-12T21:19:57,893 Running command Building wheel for ctransformer-core (pyproject.toml) 2024-02-12T21:19:58,636 *** scikit-build-core 0.8.0 using CMake 3.25.1 (wheel) 2024-02-12T21:19:58,655 *** Configuring CMake... 2024-02-12T21:19:58,750 loading initial cache file /tmp/tmpedjk528w/build/CMakeInit.txt 2024-02-12T21:19:59,016 -- The C compiler identification is GNU 12.2.0 2024-02-12T21:19:59,364 -- The CXX compiler identification is GNU 12.2.0 2024-02-12T21:19:59,413 -- Detecting C compiler ABI info 2024-02-12T21:19:59,671 -- Detecting C compiler ABI info - done 2024-02-12T21:19:59,708 -- Check for working C compiler: /usr/bin/cc - skipped 2024-02-12T21:19:59,710 -- Detecting C compile features 2024-02-12T21:19:59,712 -- Detecting C compile features - done 2024-02-12T21:19:59,731 -- Detecting CXX compiler ABI info 2024-02-12T21:20:00,053 -- Detecting CXX compiler ABI info - done 2024-02-12T21:20:00,111 -- Check for working CXX compiler: /usr/bin/c++ - skipped 2024-02-12T21:20:00,113 -- Detecting CXX compile features 2024-02-12T21:20:00,117 -- Detecting CXX compile features - done 2024-02-12T21:20:00,150 -- Found Git: /usr/bin/git (found version "2.39.2") 2024-02-12T21:20:00,221 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD 2024-02-12T21:20:00,514 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success 2024-02-12T21:20:00,518 -- Found Threads: TRUE 2024-02-12T21:20:00,531 -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF 2024-02-12T21:20:00,632 -- CMAKE_SYSTEM_PROCESSOR: armv7l 2024-02-12T21:20:00,632 -- ARM detected 2024-02-12T21:20:00,635 -- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E 2024-02-12T21:20:01,002 -- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Success 2024-02-12T21:20:01,032 CMake Warning (dev) at CMakeLists.txt:17 (install): 2024-02-12T21:20:01,032 Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION. 2024-02-12T21:20:01,033 This warning is for project developers. Use -Wno-dev to suppress it. 2024-02-12T21:20:01,034 CMake Warning (dev) at CMakeLists.txt:25 (install): 2024-02-12T21:20:01,034 Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION. 2024-02-12T21:20:01,035 This warning is for project developers. Use -Wno-dev to suppress it. 
2024-02-12T21:20:01,041 -- Configuring done 2024-02-12T21:20:01,112 -- Generating done 2024-02-12T21:20:01,130 -- Build files have been written to: /tmp/tmpedjk528w/build 2024-02-12T21:20:01,142 *** Building project with Ninja... 2024-02-12T21:20:01,478 [1/22] cd /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp && /usr/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=12.2.0 -DCMAKE_C_COMPILER_ID=GNU -DCMAKE_VS_PLATFORM_NAME= -DCMAKE_C_COMPILER=/usr/bin/cc -P /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/../scripts/gen-build-info-cpp.cmake 2024-02-12T21:20:01,479 -- Found Git: /usr/bin/git (found version "2.39.2") 2024-02-12T21:20:01,617 [2/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o -c /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/build-info.cpp 2024-02-12T21:20:03,777 [3/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/ggml-alloc.c 2024-02-12T21:20:07,576 [4/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -c /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/ggml-backend.c 2024-02-12T21:20:11,339 [5/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/. -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/. 
-O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -c /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/console.cpp 2024-02-12T21:20:15,953 [6/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/. -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o -c /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/sampling.cpp 2024-02-12T21:20:22,715 [7/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/. -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o -c /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/grammar-parser.cpp 2024-02-12T21:20:29,609 [8/22] /usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/examples/llava/. -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -c /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/examples/llava/llava.cpp 2024-02-12T21:20:32,836 [9/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/. -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/. 
-O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o -c /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/train.cpp 2024-02-12T21:20:36,498 [10/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -c /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/ggml-quants.c 2024-02-12T21:20:42,880 [11/22] /usr/bin/c++ -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/. -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/. -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/examples/llava/. -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/examples/llava/../../common -O3 -DNDEBUG -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava-cli.dir/llava-cli.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava-cli.dir/llava-cli.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava-cli.dir/llava-cli.cpp.o -c /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/examples/llava/llava-cli.cpp 2024-02-12T21:20:59,272 [12/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/. -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/. 
-O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -c /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/common.cpp 2024-02-12T21:20:59,273 In file included from /usr/include/c++/12/vector:70, 2024-02-12T21:20:59,273 from /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/grammar-parser.h:14, 2024-02-12T21:20:59,274 from /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/sampling.h:5, 2024-02-12T21:20:59,275 from /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/common.h:7, 2024-02-12T21:20:59,276 from /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/common.cpp:1: 2024-02-12T21:20:59,278 /usr/include/c++/12/bits/vector.tcc: In member function ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {const llama_model_kv_override&}; _Tp = llama_model_kv_override; _Alloc = std::allocator]’: 2024-02-12T21:20:59,279 /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type ‘std::vector::iterator’ changed in GCC 7.1 2024-02-12T21:20:59,279 439 | vector<_Tp, _Alloc>:: 2024-02-12T21:20:59,280 | ^~~~~~~~~~~~~~~~~~~ 2024-02-12T21:20:59,281 /usr/include/c++/12/bits/vector.tcc: In member function ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {}; _Tp = llama_model_kv_override; _Alloc = std::allocator]’: 2024-02-12T21:20:59,282 /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type ‘std::vector::iterator’ changed in GCC 7.1 2024-02-12T21:20:59,283 In file included from /usr/include/c++/12/vector:64: 2024-02-12T21:20:59,284 In member function ‘void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = llama_model_kv_override; _Alloc = std::allocator]’, 2024-02-12T21:20:59,285 inlined from ‘bool gpt_params_parse_ex(int, char**, gpt_params&)’ at /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/common.cpp:850:42: 2024-02-12T21:20:59,286 /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator >’ changed in GCC 7.1 2024-02-12T21:20:59,286 1287 | _M_realloc_insert(end(), __x); 2024-02-12T21:20:59,287 | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ 2024-02-12T21:20:59,288 In member function ‘void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {}; _Tp = llama_model_kv_override; _Alloc = std::allocator]’, 2024-02-12T21:20:59,289 inlined from ‘bool gpt_params_parse_ex(int, char**, gpt_params&)’ at /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/common/common.cpp:895:41: 2024-02-12T21:20:59,290 /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator >’ changed in GCC 7.1 2024-02-12T21:20:59,291 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); 2024-02-12T21:20:59,292 | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:05,725 [13/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/ggml.c 2024-02-12T21:21:05,956 [14/22] : && /usr/bin/cc -fPIC -O3 -DNDEBUG -shared -Wl,-soname,libggml_shared.so -o vendor/llama.cpp/libggml_shared.so vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o && : 2024-02-12T21:21:06,075 [15/22] : && /usr/bin/cmake -E rm -f vendor/llama.cpp/libggml_static.a && /usr/bin/ar qc vendor/llama.cpp/libggml_static.a vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o && /usr/bin/ranlib vendor/llama.cpp/libggml_static.a && : 2024-02-12T21:21:23,137 [16/22] /usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/examples/llava/. -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/. 
-O3 -DNDEBUG -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -c /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/examples/llava/clip.cpp 2024-02-12T21:21:23,415 [17/22] : && /usr/bin/cmake -E rm -f vendor/llama.cpp/examples/llava/libllava_static.a && /usr/bin/ar qc vendor/llama.cpp/examples/llava/libllava_static.a vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o && /usr/bin/ranlib vendor/llama.cpp/examples/llava/libllava_static.a && : 2024-02-12T21:21:53,121 [18/22] /usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp 2024-02-12T21:21:53,121 In file included from /usr/include/c++/12/vector:64, 2024-02-12T21:21:53,122 from /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.h:908, 2024-02-12T21:21:53,122 from /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:2: 2024-02-12T21:21:53,123 /usr/include/c++/12/bits/stl_vector.h: In function ‘std::vector<_Tp, _Alloc>::vector(std::initializer_list<_Tp>, const allocator_type&) [with _Tp = long long int; _Alloc = std::allocator]’: 2024-02-12T21:21:53,123 /usr/include/c++/12/bits/stl_vector.h:673:7: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,124 673 | vector(initializer_list __l, 2024-02-12T21:21:53,125 | ^~~~~~ 2024-02-12T21:21:53,125 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp: In function ‘bool llm_load_tensors(llama_model_loader&, llama_model&, int, llama_split_mode, int, const float*, bool, llama_progress_callback, void*)’: 2024-02-12T21:21:53,126 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3528:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,126 3528 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,127 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,128 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3532:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,128 3532 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,128 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,129 
/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3533:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,130 3533 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,130 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,131 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3563:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,131 3563 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}); 2024-02-12T21:21:53,132 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,132 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3564:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,132 3564 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}); 2024-02-12T21:21:53,133 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,134 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3565:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,134 3565 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-12T21:21:53,135 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,135 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3581:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,136 3581 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,136 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,137 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3583:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,138 3583 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,138 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,139 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3584:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,140 3584 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,141 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,141 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3596:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,142 3596 | layer.wk = ml.create_tensor(ctx_split, 
tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}); 2024-02-12T21:21:53,142 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,143 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3604:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,143 3604 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-12T21:21:53,144 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,144 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3609:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,145 3609 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,146 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,146 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3613:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,147 3613 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,147 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,148 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3614:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,148 3614 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-12T21:21:53,149 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,150 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3616:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,150 3616 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,151 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,152 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3618:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,152 3618 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); // needs to be on GPU 2024-02-12T21:21:53,153 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,154 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3630:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,154 3630 | layer.attn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}); 2024-02-12T21:21:53,155 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,155 
/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3633:67: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,156 3633 | layer.attn_norm_2 = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM_2, "weight", i), {n_embd}); 2024-02-12T21:21:53,157 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,157 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3634:67: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,158 3634 | layer.attn_norm_2_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM_2, "bias", i), {n_embd}); 2024-02-12T21:21:53,158 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,159 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3637:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,159 3637 | layer.wqkv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}); 2024-02-12T21:21:53,160 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,160 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3638:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,161 3638 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-12T21:21:53,162 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,162 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3641:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,163 3641 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-12T21:21:53,164 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,164 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3646:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,165 3646 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,166 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,166 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3647:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,167 3647 | model.pos_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_POS_EMBD, "weight"), {n_embd, hparams.n_ctx_train}); 2024-02-12T21:21:53,167 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,168 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3651:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,168 3651 | model.output_norm = 
ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,169 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,169 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3652:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,170 3652 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-12T21:21:53,170 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,171 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3653:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,171 3653 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,172 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,173 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3665:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,173 3665 | layer.wqkv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}); 2024-02-12T21:21:53,174 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,174 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3669:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,175 3669 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}); 2024-02-12T21:21:53,175 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,176 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3671:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,177 3671 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}); 2024-02-12T21:21:53,178 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,178 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3674:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,179 3674 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd}); 2024-02-12T21:21:53,179 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,180 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3683:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,180 3683 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,181 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,181 
/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3686:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,182 3686 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,182 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,183 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3687:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,183 3687 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-12T21:21:53,184 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,185 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3688:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,186 3688 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,186 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,187 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3698:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,187 3698 | layer.attn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}); 2024-02-12T21:21:53,188 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,189 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3703:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,189 3703 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-12T21:21:53,190 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,191 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3704:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,192 3704 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}); 2024-02-12T21:21:53,193 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,193 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3707:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,194 3707 | layer.ffn_down_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}); 2024-02-12T21:21:53,194 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,195 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3709:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,195 3709 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 
2024-02-12T21:21:53,196 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,196 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3712:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,197 3712 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}); 2024-02-12T21:21:53,198 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,198 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3715:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,199 3715 | layer.attn_q_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_Q_NORM, "weight", i), {64}); 2024-02-12T21:21:53,199 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,200 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3716:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,200 3716 | layer.attn_q_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_Q_NORM, "bias", i), {64}); 2024-02-12T21:21:53,201 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,202 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3719:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,202 3719 | layer.attn_k_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_K_NORM, "bias", i), {64}); 2024-02-12T21:21:53,203 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,203 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3724:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,204 3724 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,205 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,205 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3725:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,206 3725 | model.tok_norm = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,206 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,207 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3726:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,207 3726 | model.tok_norm_b = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD_NORM, "bias"), {n_embd}); 2024-02-12T21:21:53,208 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,208 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3730:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,209 
3730 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,210 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,211 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3731:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,211 3731 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-12T21:21:53,212 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,213 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3732:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,214 3732 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,214 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,215 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3741:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,216 3741 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-12T21:21:53,217 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,217 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3742:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,218 3742 | layer.attn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}); 2024-02-12T21:21:53,219 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,220 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3744:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,221 3744 | layer.wqkv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}); 2024-02-12T21:21:53,221 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,222 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3745:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,222 3745 | layer.bqkv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_QKV, "bias", i), {n_embd + 2*n_embd_gqa}); 2024-02-12T21:21:53,223 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,223 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3748:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,224 3748 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}); 2024-02-12T21:21:53,225 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,226 
/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3750:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,226 3750 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}); 2024-02-12T21:21:53,227 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,228 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3754:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,229 3754 | layer.ffn_down_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}); 2024-02-12T21:21:53,230 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,230 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3762:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,231 3762 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,232 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,233 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3766:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,234 3766 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,235 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,236 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3767:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,236 3767 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,237 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,237 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3776:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,238 3776 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-12T21:21:53,239 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,239 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3778:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,240 3778 | layer.wqkv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}); 2024-02-12T21:21:53,240 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,241 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3779:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,241 3779 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), 
{n_embd, n_embd}); 2024-02-12T21:21:53,242 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,243 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3786:57: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,243 3786 | layer.ffn_act = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_ACT, "scales", i), {n_ff}, false); 2024-02-12T21:21:53,244 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,245 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3791:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,245 3791 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,246 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,247 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3795:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,248 3795 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-12T21:21:53,248 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,249 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3796:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,249 3796 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,250 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,250 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3797:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,251 3797 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,251 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,252 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3806:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,252 3806 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-12T21:21:53,253 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,253 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3807:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,254 3807 | layer.attn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}); 2024-02-12T21:21:53,255 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,255 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3809:52: note: parameter passing for argument of type 
‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,256 3809 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}); 2024-02-12T21:21:53,257 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,262 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3810:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,262 3810 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}); 2024-02-12T21:21:53,263 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,264 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3816:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,264 3816 | layer.bk = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_K, "bias", i), {n_embd_gqa}, false); 2024-02-12T21:21:53,265 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,266 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3820:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,267 3820 | layer.ffn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd}); 2024-02-12T21:21:53,268 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,268 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3824:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,269 3824 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-12T21:21:53,269 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,270 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3829:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,270 3829 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,271 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,271 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3833:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,272 3833 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,272 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,273 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3834:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,273 3834 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,274 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,275 
/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3843:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,275 3843 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-12T21:21:53,276 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,277 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3846:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,278 3846 | layer.bqkv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_QKV, "bias", i), {n_embd*3}); 2024-02-12T21:21:53,278 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,279 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3851:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,280 3851 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff/2}); 2024-02-12T21:21:53,281 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,281 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3853:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,282 3853 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff/2}); 2024-02-12T21:21:53,283 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,283 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3858:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,284 3858 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,285 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,285 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3862:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,286 3862 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,287 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,287 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3863:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,288 3863 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,289 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,290 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3872:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,291 3872 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), 
{n_embd}); 2024-02-12T21:21:53,292 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,293 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3876:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,294 3876 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}); 2024-02-12T21:21:53,295 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,296 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3877:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,297 3877 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-12T21:21:53,298 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,299 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3881:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,299 3881 | layer.bk = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_K, "bias", i), {n_embd_gqa}); 2024-02-12T21:21:53,300 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,301 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3882:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,301 3882 | layer.bv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_V, "bias", i), {n_embd_gqa}); 2024-02-12T21:21:53,302 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,304 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3887:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,305 3887 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}); 2024-02-12T21:21:53,306 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,307 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3893:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,308 3893 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,309 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,311 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3897:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,312 3897 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,313 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,314 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3898:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 
2024-02-12T21:21:53,315 3898 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-12T21:21:53,315 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,316 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3899:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,317 3899 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,318 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,318 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3900:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,319 3900 | model.output_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT, "bias"), {n_vocab}); 2024-02-12T21:21:53,320 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,320 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3909:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,321 3909 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-12T21:21:53,322 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,322 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3910:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,323 3910 | layer.attn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}); 2024-02-12T21:21:53,323 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,324 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3912:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,325 3912 | layer.wqkv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}, false); 2024-02-12T21:21:53,326 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,326 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3916:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,327 3916 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}); 2024-02-12T21:21:53,328 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,329 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3917:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,329 3917 | layer.bq = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_Q, "bias", i), {n_embd}); 2024-02-12T21:21:53,330 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,331 
/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3919:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,332 3919 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}); 2024-02-12T21:21:53,332 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,333 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3920:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,334 3920 | layer.bk = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_K, "bias", i), {n_embd_gqa}); 2024-02-12T21:21:53,336 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,337 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3922:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,338 3922 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}); 2024-02-12T21:21:53,339 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,340 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3923:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,341 3923 | layer.bv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_V, "bias", i), {n_embd_gqa}); 2024-02-12T21:21:53,343 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,344 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3926:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,344 3926 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-12T21:21:53,345 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,346 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3929:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,346 3929 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd}); 2024-02-12T21:21:53,347 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,348 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3930:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,348 3930 | layer.ffn_down_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}); 2024-02-12T21:21:53,349 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,350 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3933:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,350 3933 | layer.ffn_up_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}); 2024-02-12T21:21:53,351 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,352 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3938:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,352 3938 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,353 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,354 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3942:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,354 3942 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,355 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,355 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3943:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,356 3943 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,356 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,357 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3952:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,357 3952 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-12T21:21:53,357 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,358 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3954:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,358 3954 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}); 2024-02-12T21:21:53,359 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,360 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3955:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,360 3955 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}); 2024-02-12T21:21:53,361 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,361 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3959:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,362 3959 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}); 2024-02-12T21:21:53,363 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,363 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3966:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 
2024-02-12T21:21:53,364 3966 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,365 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,365 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3967:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,366 3967 | model.pos_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_POS_EMBD, "weight"), {n_embd, hparams.n_ctx_train}); 2024-02-12T21:21:53,367 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,367 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3971:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,368 3971 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,368 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,369 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3972:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,369 3972 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-12T21:21:53,370 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,371 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3973:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,371 3973 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,372 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,372 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3988:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,373 3988 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-12T21:21:53,374 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,375 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:3998:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,375 3998 | layer.ffn_up_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}); 2024-02-12T21:21:53,376 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,377 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4003:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,377 4003 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,378 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,379 
/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4007:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,379 4007 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,380 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,381 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4008:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,382 4008 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-12T21:21:53,382 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,383 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4009:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,384 4009 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,385 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,385 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4019:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,386 4019 | layer.attn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}); 2024-02-12T21:21:53,386 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,387 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4024:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,388 4024 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-12T21:21:53,389 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,389 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4031:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,390 4031 | layer.ffn_down_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}); 2024-02-12T21:21:53,391 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,392 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4039:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,392 4039 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,393 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,394 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4041:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,395 4041 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 
2024-02-12T21:21:53,395 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,396 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4042:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,397 4042 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-12T21:21:53,397 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,398 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4043:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,398 4043 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,399 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,400 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4055:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,400 4055 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}); 2024-02-12T21:21:53,401 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,402 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4069:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,402 4069 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,403 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,404 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4073:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,404 4073 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-12T21:21:53,405 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,406 /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/vendor/llama.cpp/llama.cpp:4074:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-12T21:21:53,406 4074 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-12T21:21:53,407 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,408 In file included from /usr/include/c++/12/vector:70: 2024-02-12T21:21:53,409 /usr/include/c++/12/bits/vector.tcc: In member function ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {const double&}; _Tp = double; _Alloc = std::allocator]’: 2024-02-12T21:21:53,410 /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type ‘std::vector::iterator’ changed in GCC 7.1 2024-02-12T21:21:53,410 439 | vector<_Tp, _Alloc>:: 2024-02-12T21:21:53,411 | ^~~~~~~~~~~~~~~~~~~ 2024-02-12T21:21:53,411 In member function ‘void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = double; _Alloc = std::allocator]’, 2024-02-12T21:21:53,412 inlined from ‘std::back_insert_iterator<_Container>& std::back_insert_iterator<_Container>::operator=(const typename _Container::value_type&) [with _Container = std::vector]’ at /usr/include/c++/12/bits/stl_iterator.h:735:22, 2024-02-12T21:21:53,413 inlined from ‘_OutputIterator std::partial_sum(_InputIterator, _InputIterator, _OutputIterator) [with _InputIterator = __gnu_cxx::__normal_iterator >; _OutputIterator = back_insert_iterator >]’ at /usr/include/c++/12/bits/stl_numeric.h:270:17, 2024-02-12T21:21:53,413 inlined from ‘void std::discrete_distribution<_IntType>::param_type::_M_initialize() [with _IntType = int]’ at /usr/include/c++/12/bits/random.tcc:2679:23: 2024-02-12T21:21:53,414 /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator >’ changed in GCC 7.1 2024-02-12T21:21:53,415 1287 | _M_realloc_insert(end(), __x); 2024-02-12T21:21:53,415 | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ 2024-02-12T21:21:53,416 In member function ‘void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = double; _Alloc = std::allocator]’, 2024-02-12T21:21:53,416 inlined from ‘std::back_insert_iterator<_Container>& std::back_insert_iterator<_Container>::operator=(const typename _Container::value_type&) [with _Container = std::vector]’ at /usr/include/c++/12/bits/stl_iterator.h:735:22, 2024-02-12T21:21:53,417 inlined from ‘_OutputIterator std::partial_sum(_InputIterator, _InputIterator, _OutputIterator) [with _InputIterator = __gnu_cxx::__normal_iterator >; _OutputIterator = back_insert_iterator >]’ at /usr/include/c++/12/bits/stl_numeric.h:274:16, 2024-02-12T21:21:53,418 inlined from ‘void std::discrete_distribution<_IntType>::param_type::_M_initialize() [with _IntType = int]’ at /usr/include/c++/12/bits/random.tcc:2679:23: 2024-02-12T21:21:53,418 /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator >’ changed in GCC 7.1 2024-02-12T21:21:53,419 1287 | _M_realloc_insert(end(), __x); 2024-02-12T21:21:53,420 | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ 2024-02-12T21:21:53,503 [19/22] : && /usr/bin/c++ -fPIC -O3 -DNDEBUG -shared -Wl,-soname,libllama.so -o vendor/llama.cpp/libllama.so vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o && : 2024-02-12T21:21:53,817 [20/22] : && /usr/bin/cmake -E rm -f vendor/llama.cpp/common/libcommon.a && /usr/bin/ar qc vendor/llama.cpp/common/libcommon.a vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o && /usr/bin/ranlib 
vendor/llama.cpp/common/libcommon.a && :
2024-02-12T21:21:53,882 [21/22] : && /usr/bin/c++ -fPIC -O3 -DNDEBUG -shared -Wl,-soname,libllava.so -o vendor/llama.cpp/examples/llava/libllava.so vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -Wl,-rpath,/tmp/tmpedjk528w/build/vendor/llama.cpp: vendor/llama.cpp/libllama.so && :
2024-02-12T21:21:54,135 [22/22] : && /usr/bin/c++ -O3 -DNDEBUG vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava-cli.dir/llava-cli.cpp.o -o vendor/llama.cpp/examples/llava/llava-cli -Wl,-rpath,/tmp/tmpedjk528w/build/vendor/llama.cpp: vendor/llama.cpp/common/libcommon.a vendor/llama.cpp/libllama.so && :
2024-02-12T21:21:54,140 *** Installing project into wheel...
2024-02-12T21:21:54,177 -- Install configuration: "Release"
2024-02-12T21:21:54,182 -- Installing: /tmp/tmpedjk528w/wheel/platlib/lib/libggml_shared.so
2024-02-12T21:21:54,234 -- Installing: /tmp/tmpedjk528w/wheel/platlib/lib/cmake/Llama/LlamaConfig.cmake
2024-02-12T21:21:54,237 -- Installing: /tmp/tmpedjk528w/wheel/platlib/lib/cmake/Llama/LlamaConfigVersion.cmake
2024-02-12T21:21:54,240 -- Installing: /tmp/tmpedjk528w/wheel/platlib/include/ggml.h
2024-02-12T21:21:54,244 -- Installing: /tmp/tmpedjk528w/wheel/platlib/include/ggml-alloc.h
2024-02-12T21:21:54,247 -- Installing: /tmp/tmpedjk528w/wheel/platlib/include/ggml-backend.h
2024-02-12T21:21:54,250 -- Installing: /tmp/tmpedjk528w/wheel/platlib/lib/libllama.so
2024-02-12T21:21:54,320 -- Installing: /tmp/tmpedjk528w/wheel/platlib/include/llama.h
2024-02-12T21:21:54,325 -- Installing: /tmp/tmpedjk528w/wheel/platlib/bin/convert.py
2024-02-12T21:21:54,330 -- Installing: /tmp/tmpedjk528w/wheel/platlib/bin/convert-lora-to-ggml.py
2024-02-12T21:21:54,336 -- Installing: /tmp/tmpedjk528w/wheel/platlib/ctransformer_core/libllama.so
2024-02-12T21:21:54,403 -- Installing: /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/ctransformer_core/libllama.so
2024-02-12T21:21:54,475 -- Installing: /tmp/tmpedjk528w/wheel/platlib/lib/libllava.so
2024-02-12T21:21:54,501 -- Set runtime path of "/tmp/tmpedjk528w/wheel/platlib/lib/libllava.so" to ""
2024-02-12T21:21:54,557 -- Installing: /tmp/tmpedjk528w/wheel/platlib/bin/llava-cli
2024-02-12T21:21:54,570 -- Set runtime path of "/tmp/tmpedjk528w/wheel/platlib/bin/llava-cli" to ""
2024-02-12T21:21:54,597 -- Installing: /tmp/tmpedjk528w/wheel/platlib/ctransformer_core/libllava.so
2024-02-12T21:21:54,616 -- Set runtime path of "/tmp/tmpedjk528w/wheel/platlib/ctransformer_core/libllava.so" to ""
2024-02-12T21:21:54,657 -- Installing: /tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/ctransformer_core/libllava.so
2024-02-12T21:21:54,677 -- Set runtime path of "/tmp/pip-wheel-59yyi7nt/ctransformer-core_fd90308d070c4acf9464ef7c5eb8b114/ctransformer_core/libllava.so" to ""
2024-02-12T21:21:54,724 *** Making wheel...
2024-02-12T21:21:56,027 *** Created ctransformer_core-0.2.28-cp311-cp311-manylinux_2_36_armv7l.whl...
2024-02-12T21:21:56,084 Building wheel for ctransformer-core (pyproject.toml): finished with status 'done'
2024-02-12T21:21:56,113 Created wheel for ctransformer-core: filename=ctransformer_core-0.2.28-cp311-cp311-manylinux_2_36_armv7l.whl size=2356709 sha256=c1052d1f6071940a47c33877937130702a3895acae70119528ab28533e31795d
2024-02-12T21:21:56,114 Stored in directory: /tmp/pip-ephem-wheel-cache-qb1yhzvq/wheels/8b/04/d2/e7f6830b06542fabebe14ff7a67449e15260b3b006ebaf41a3
2024-02-12T21:21:56,126 Successfully built ctransformer-core
2024-02-12T21:21:56,184 Removed build tracker: '/tmp/pip-build-tracker-9ldbb1u6'
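For reference, the finished wheel can be checked against the size and sha256 printed in the "Created wheel" line above. The sketch below is not part of the build log: it only reuses values already reported by pip, and it assumes the wheel file is still present in the ephemeral cache directory named in the "Stored in directory" line (pip may have cleaned it up already).

import hashlib
import zipfile
from pathlib import Path

# Path, size and checksum are copied from the log lines above; the cache
# directory is ephemeral, so its continued existence is an assumption.
wheel = Path(
    "/tmp/pip-ephem-wheel-cache-qb1yhzvq/wheels/8b/04/d2/"
    "e7f6830b06542fabebe14ff7a67449e15260b3b006ebaf41a3"
) / "ctransformer_core-0.2.28-cp311-cp311-manylinux_2_36_armv7l.whl"
expected_sha256 = "c1052d1f6071940a47c33877937130702a3895acae70119528ab28533e31795d"

data = wheel.read_bytes()
print("size matches :", len(data) == 2356709)
print("sha256 matches:", hashlib.sha256(data).hexdigest() == expected_sha256)

# A wheel is an ordinary zip archive, so its payload can be listed directly,
# e.g. to confirm the bundled shared libraries made it into the package.
with zipfile.ZipFile(wheel) as whl:
    for name in whl.namelist():
        if name.endswith((".so", "WHEEL", "RECORD")):
            print(name)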