2024-02-28T07:53:53,948 Created temporary directory: /tmp/pip-build-tracker-tlab4r61
2024-02-28T07:53:53,949 Initialized build tracking at /tmp/pip-build-tracker-tlab4r61
2024-02-28T07:53:53,949 Created build tracker: /tmp/pip-build-tracker-tlab4r61
2024-02-28T07:53:53,950 Entered build tracker: /tmp/pip-build-tracker-tlab4r61
2024-02-28T07:53:53,950 Created temporary directory: /tmp/pip-wheel-1e0qhhza
2024-02-28T07:53:53,954 Created temporary directory: /tmp/pip-ephem-wheel-cache-rhbxdfpo
2024-02-28T07:53:53,975 Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
2024-02-28T07:53:53,979 2 location(s) to search for versions of llama-cpp-python:
2024-02-28T07:53:53,979 * https://pypi.org/simple/llama-cpp-python/
2024-02-28T07:53:53,979 * https://www.piwheels.org/simple/llama-cpp-python/
2024-02-28T07:53:53,980 Fetching project page and analyzing links: https://pypi.org/simple/llama-cpp-python/
2024-02-28T07:53:53,981 Getting page https://pypi.org/simple/llama-cpp-python/
2024-02-28T07:53:53,982 Found index url https://pypi.org/simple/
2024-02-28T07:53:54,133 Fetched page https://pypi.org/simple/llama-cpp-python/ as application/vnd.pypi.simple.v1+json
2024-02-28T07:53:54,156 Found link https://files.pythonhosted.org/packages/17/9c/813d8c83d81cb9ab42e5ee66657f8d3670bacdcd67df4aa7728e8dccbcfd/llama_cpp_python-0.1.1.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.1
2024-02-28T07:53:54,157 Found link https://files.pythonhosted.org/packages/42/22/07711b8fc85ed188182c923aa424254a451ee23a58d6c45a033e05e57f9a/llama_cpp_python-0.1.2.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.2
2024-02-28T07:53:54,157 Found link https://files.pythonhosted.org/packages/13/a2/a3a6e665905992e2ed2c79b7af2dce4a36f23c5147959f0f56d9bd72543c/llama_cpp_python-0.1.3.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.3
2024-02-28T07:53:54,158 Found link https://files.pythonhosted.org/packages/00/b6/3069b31e8cd0073685aa059e161e4b8dc3a4e3c77c4f8f433fa5ebc01655/llama_cpp_python-0.1.4.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.4
2024-02-28T07:53:54,159 Found link https://files.pythonhosted.org/packages/cd/32/e2380800128e64542f719c3d7287b2818e7234e268298b95273164cb0a3d/llama_cpp_python-0.1.5.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.5
2024-02-28T07:53:54,160 Found link https://files.pythonhosted.org/packages/9f/d3/9904d8616a5af9515b8852c441472c930b780db1879f13cae240bd4eb05f/llama_cpp_python-0.1.6.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.6
2024-02-28T07:53:54,160 Found link https://files.pythonhosted.org/packages/20/ff/c192e4469e14be86d3b11fdee4b56aca486033e4256174e2cf8425840e54/llama_cpp_python-0.1.7.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.7
2024-02-28T07:53:54,161 Found link https://files.pythonhosted.org/packages/7e/3b/b5f7e1ec5f43a4e980733c63bd4f05e1b7e14fd3b7aa72d9ca91f2415323/llama_cpp_python-0.1.8.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.8
2024-02-28T07:53:54,162 Found link https://files.pythonhosted.org/packages/9f/24/45a5a3beee1354f668d916eb1a2146835a0eda4dbad0da45252170e105a6/llama_cpp_python-0.1.9.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.9
2024-02-28T07:53:54,163 Found link https://files.pythonhosted.org/packages/71/ad/e3f373300efdfbcd67dc3909512a5b80dd6c5f2092102cbea66bad75ec4d/llama_cpp_python-0.1.10.tar.gz (from https://pypi.org/simple/llama-cpp-python/),
version: 0.1.10 2024-02-28T07:53:54,164 Found link https://files.pythonhosted.org/packages/bb/5e/c15d23176dd5783b1f62fd1b89c38fa655c9c1b524451e34a240fabffca8/llama_cpp_python-0.1.11.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.11 2024-02-28T07:53:54,165 Found link https://files.pythonhosted.org/packages/ad/61/91b0c968596bcca9b09c6e40a38852500d31ed5f8649e25cfab293dc9af0/llama_cpp_python-0.1.12.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.12 2024-02-28T07:53:54,166 Found link https://files.pythonhosted.org/packages/63/8f/1bb0a901a1be8c243e741a17ece1588615a1c5c4b9578ce80f12ce809d14/llama_cpp_python-0.1.13.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.13 2024-02-28T07:53:54,167 Found link https://files.pythonhosted.org/packages/25/bc/83364cb8c3fff7da82fadd10e0d1ec221278a5403ab4222dd0745bfa6709/llama_cpp_python-0.1.14.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.14 2024-02-28T07:53:54,168 Found link https://files.pythonhosted.org/packages/d8/6b/0b89436a26c2a7a5e1b57809d6f692c4f0afd87b19c31fe5425ddb19f54b/llama_cpp_python-0.1.15.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.15 2024-02-28T07:53:54,169 Found link https://files.pythonhosted.org/packages/7f/ef/aa0d2e4ef92173bf7e3539b5fa3338e7f9f88a66e7a90cb2f00052b7a9cb/llama_cpp_python-0.1.16.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.16 2024-02-28T07:53:54,169 Found link https://files.pythonhosted.org/packages/71/d6/bb0a4bb92abf16dee92a933b45ba16f0e6c0a1b63ee8877c678a54c373a8/llama_cpp_python-0.1.17.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.17 2024-02-28T07:53:54,170 Found link https://files.pythonhosted.org/packages/c2/08/7c12856cbe4523e518e280914674f4b65f5f62076408a7984b69d9771494/llama_cpp_python-0.1.18.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.18 2024-02-28T07:53:54,171 Found link https://files.pythonhosted.org/packages/63/48/977cd0ffdbfb9446e758c8c69aa49025a7477058d42bd30bef67f42c556c/llama_cpp_python-0.1.19.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.19 2024-02-28T07:53:54,172 Found link https://files.pythonhosted.org/packages/dc/2e/730cc405e0227ce6f49dd2bab4d6ce69963cb65bc3452fd33a552c9b8630/llama_cpp_python-0.1.20.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.20 2024-02-28T07:53:54,173 Found link https://files.pythonhosted.org/packages/52/1a/d122abc9571e09e17ad8909d2f8710ea0abe26ced1287ae82828fc80aaa3/llama_cpp_python-0.1.21.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.21 2024-02-28T07:53:54,174 Found link https://files.pythonhosted.org/packages/cf/94/4c35d7e3011ce86f063e3c754afd71f3a6f1f2a0ec9616deb55e8f3743a1/llama_cpp_python-0.1.22.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.22 2024-02-28T07:53:54,175 Found link https://files.pythonhosted.org/packages/03/6e/3e0768c396be6807b9e835c223ce37385d574eaf9e4d0ac80116325f6775/llama_cpp_python-0.1.23.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.23 2024-02-28T07:53:54,176 Found link 
https://files.pythonhosted.org/packages/bc/8b/618c42fdfa078a3cec9ed871b9c1bb6cca65b66e4e3ce0bf690f8109eaa1/llama_cpp_python-0.1.24.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.24 2024-02-28T07:53:54,177 Found link https://files.pythonhosted.org/packages/6c/64/bd9d98588aa8b6c49c0cfa1d0b4ef4ec5a1a05e4d8d67c1aed3587ae2e1a/llama_cpp_python-0.1.25.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.25 2024-02-28T07:53:54,178 Found link https://files.pythonhosted.org/packages/c1/cf/c81b3ba5340398820cc12c247e33f3f1ee15c4043794596968dc31ebac9c/llama_cpp_python-0.1.26.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.26 2024-02-28T07:53:54,178 Found link https://files.pythonhosted.org/packages/fa/b8/0a6fafae31b2c40997c282cd9220743c419dd8b372f09c57e551792bb899/llama_cpp_python-0.1.27.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.27 2024-02-28T07:53:54,179 Found link https://files.pythonhosted.org/packages/fb/6a/0c7421119d6e536ee1ca02ad5555dbbda7a38189333b0ac67f582cd5a84f/llama_cpp_python-0.1.28.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.28 2024-02-28T07:53:54,180 Found link https://files.pythonhosted.org/packages/fa/e3/3a12c770007f9a3c5903f7e2904aff4af5fa7d36cb06843c65cfaadccdd2/llama_cpp_python-0.1.29.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.29 2024-02-28T07:53:54,182 Found link https://files.pythonhosted.org/packages/e5/8e/b8dfcb10fdb1b2556a688cb23fd3d1b7b60c2b24ddc1cb9fc61a915c94d0/llama_cpp_python-0.1.30.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.30 2024-02-28T07:53:54,182 Found link https://files.pythonhosted.org/packages/c9/46/e37f0120bf5996b644c373c8fea9d2bf31ceb30e18724f2ae0876cb25b96/llama_cpp_python-0.1.31.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.31 2024-02-28T07:53:54,183 Found link https://files.pythonhosted.org/packages/39/f2/9d9c98ccb9ffe2ca7c9aeef235d5e45a4694f3148dfc9559e672c346f6ea/llama_cpp_python-0.1.32.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.32 2024-02-28T07:53:54,184 Found link https://files.pythonhosted.org/packages/70/b3/a1497e783b921cc8cd0d2f7fabe9d0b5c2bf95ab9fd56503d282862ce720/llama_cpp_python-0.1.33.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.33 2024-02-28T07:53:54,185 Found link https://files.pythonhosted.org/packages/b3/f0/82690e424b3fdb0d1738f312095a7a88cbe06cb910be9c5f5d4c7e3bdde8/llama_cpp_python-0.1.34.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.34 2024-02-28T07:53:54,186 Found link https://files.pythonhosted.org/packages/e9/47/013240af1272400ad49422f8ebfc47476a4d82e3375dd05dbd1440da3c50/llama_cpp_python-0.1.35.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.35 2024-02-28T07:53:54,187 Found link https://files.pythonhosted.org/packages/1b/ea/3f2aff10fd7195c6bc8c52375d9ff027a551151569c50e0d47581b14b7c1/llama_cpp_python-0.1.36.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.36 2024-02-28T07:53:54,188 Found link 
https://files.pythonhosted.org/packages/5d/10/e037dc290ed7435dd6f5fa5dcce2453f1cf145b84f1e8e40d0a63ac62aa2/llama_cpp_python-0.1.37.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.37 2024-02-28T07:53:54,189 Found link https://files.pythonhosted.org/packages/e6/2a/d898551013b9f0863b8134dbcb5863a306f5d9c2ad4a394c68a2988a77a0/llama_cpp_python-0.1.38.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.38 2024-02-28T07:53:54,190 Found link https://files.pythonhosted.org/packages/5a/41/955ac2e592949ca95a29efc5f544afcbc9ca3fc5484cb0272837d98c6b5a/llama_cpp_python-0.1.39.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.39 2024-02-28T07:53:54,190 Found link https://files.pythonhosted.org/packages/fc/2c/62c5ce16f88348f928320565cf6c0dfe8220a03615bff14e47e4f3b4e439/llama_cpp_python-0.1.40.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.40 2024-02-28T07:53:54,191 Found link https://files.pythonhosted.org/packages/d1/fe/852d447828bdcdfe1c8aa88061517b5de9e5c12389dd852076d5c913936a/llama_cpp_python-0.1.41.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.41 2024-02-28T07:53:54,192 Found link https://files.pythonhosted.org/packages/8d/bb/48129f3696fcc125fac1c91a5a6df5ab472e561d74ed5818e6fca748a432/llama_cpp_python-0.1.42.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.42 2024-02-28T07:53:54,193 Found link https://files.pythonhosted.org/packages/eb/43/ac841dc1a3f5f618e4546ce69fe7da0d976cb141c92b8d1f735f2baf0b85/llama_cpp_python-0.1.43.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.43 2024-02-28T07:53:54,194 Found link https://files.pythonhosted.org/packages/29/69/b73ae145d6f40683656f537b8526ca27e8348c7ff9af9c014a6a723fda5f/llama_cpp_python-0.1.44.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.44 2024-02-28T07:53:54,195 Found link https://files.pythonhosted.org/packages/62/b7/299b9d537037a95d4433498c73c1a8024de230a26d0c94b3e889364038d4/llama_cpp_python-0.1.45.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.45 2024-02-28T07:53:54,196 Found link https://files.pythonhosted.org/packages/c2/12/450986c9506525096cc77fcb6584ee02ec7d0017df0d34e6c79b9dba5a58/llama_cpp_python-0.1.46.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.46 2024-02-28T07:53:54,197 Found link https://files.pythonhosted.org/packages/28/95/11fcced0778cb9b82a81cd61c93760a379527ef13d90a66254fdc2e982df/llama_cpp_python-0.1.47.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.47 2024-02-28T07:53:54,198 Found link https://files.pythonhosted.org/packages/35/04/63f43ff24bd8948abbe2d7c9c3e3d235c0e7501ec8b1e72d01676051f75d/llama_cpp_python-0.1.48.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.48 2024-02-28T07:53:54,199 Found link https://files.pythonhosted.org/packages/1b/60/be610e7e95eb53e949ac74024b30d5fa763244928b07a16815d16643b7ab/llama_cpp_python-0.1.49.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.49 2024-02-28T07:53:54,200 Found link 
https://files.pythonhosted.org/packages/82/2c/9614ef76422168fde5326095559f271a22b1926185add8ae739901e113b9/llama_cpp_python-0.1.50.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.50 2024-02-28T07:53:54,201 Found link https://files.pythonhosted.org/packages/f9/65/78748102cca92fb148e111c41827433ecc2cb79eed9de0a72a4d7a4361c0/llama_cpp_python-0.1.51.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.51 2024-02-28T07:53:54,202 Found link https://files.pythonhosted.org/packages/87/cb/21c00f6f5b3a680671cb9c7e7ec5e07a6c03df70e28cd54f6197744c1f12/llama_cpp_python-0.1.52.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.52 2024-02-28T07:53:54,202 Found link https://files.pythonhosted.org/packages/d6/8d/d1700e37bd9b8965154e12008620e3bd3ed7ed585ad86650294074577629/llama_cpp_python-0.1.53.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.53 2024-02-28T07:53:54,203 Found link https://files.pythonhosted.org/packages/24/a7/e2904574d326e24338aab2e5fd618f007ef8b51c2a29618791f9c24269e2/llama_cpp_python-0.1.54.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.54 2024-02-28T07:53:54,204 Found link https://files.pythonhosted.org/packages/b2/9b/15a40971444775d7aa5aee934991fa97eee285ae3a77c98c70c382f2ed60/llama_cpp_python-0.1.55.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.55 2024-02-28T07:53:54,206 Found link https://files.pythonhosted.org/packages/2e/d7/36eccf10a611e2f3040cec775b9734ea51cf9938b2d911e30cbf71dd321b/llama_cpp_python-0.1.56.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.56 2024-02-28T07:53:54,207 Found link https://files.pythonhosted.org/packages/4d/e5/b337c9e7330695eb5efa2329d25b2d964fe10364429698c89140729ebaaf/llama_cpp_python-0.1.57.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.57 2024-02-28T07:53:54,207 Found link https://files.pythonhosted.org/packages/91/0f/8156d3f1b6bbbea68f28df5e325a2863ed736362b0f93f7936acba424e70/llama_cpp_python-0.1.59.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.59 2024-02-28T07:53:54,209 Found link https://files.pythonhosted.org/packages/e9/18/9531e94f7a4cd402cf200a9e6257fc08d162b8a8d57adf6f4049f60ba05b/llama_cpp_python-0.1.61.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.61 2024-02-28T07:53:54,210 Found link https://files.pythonhosted.org/packages/cc/ed/fe9bbe6c4f2156fc5e887d9e669872bc1722f80a2932a78a8166d7a82877/llama_cpp_python-0.1.62.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.62 2024-02-28T07:53:54,211 Found link https://files.pythonhosted.org/packages/a8/01/7e39377ad0d20d2379b01b7019aad9b3595ea21ced1705ccc49c78936088/llama_cpp_python-0.1.63.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.63 2024-02-28T07:53:54,212 Found link https://files.pythonhosted.org/packages/ad/c1/4083e90a0b31e1abb72d3f00f8d1403bdc9384301e1e370d0915f73519f5/llama_cpp_python-0.1.64.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.64 2024-02-28T07:53:54,213 Found link 
https://files.pythonhosted.org/packages/84/7d/a659b65132db354147654bf2b6b2c8820b25aa10833b4849ec6b66e69117/llama_cpp_python-0.1.65.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.65 2024-02-28T07:53:54,214 Found link https://files.pythonhosted.org/packages/59/43/6dfbaed1f70ef013279b03e436b8f58f9f2ab0835e04034927fc31bb8fc9/llama_cpp_python-0.1.66.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.66 2024-02-28T07:53:54,215 Found link https://files.pythonhosted.org/packages/96/79/3dbc78c1a6e14d088673d21549a736aa27ca69ef1734541a07c36f349cf7/llama_cpp_python-0.1.67.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.67 2024-02-28T07:53:54,216 Found link https://files.pythonhosted.org/packages/87/0a/f99cdd3befe25e414f9a758fb89bf70ca5278d68430af140391fc262bb55/llama_cpp_python-0.1.68.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.68 2024-02-28T07:53:54,217 Found link https://files.pythonhosted.org/packages/e6/a2/86200ff91d374311fbb704079d95927edacfc47592ae34c3c48a47863eea/llama_cpp_python-0.1.69.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.69 2024-02-28T07:53:54,218 Found link https://files.pythonhosted.org/packages/78/60/5cfb3842ef25db4ee1555dc2a70b99c569ad27c0438e7d9704c1672828b8/llama_cpp_python-0.1.70.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.70 2024-02-28T07:53:54,219 Found link https://files.pythonhosted.org/packages/4b/d1/24602670353e3f08f07c9bf36dca5ef5466ac3c0d27b5d5be0685e8032a7/llama_cpp_python-0.1.71.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.71 2024-02-28T07:53:54,220 Found link https://files.pythonhosted.org/packages/7f/59/b17486fa68bd3bce14fad89e049ea2700cf9ca36e7710d9380e2facbe182/llama_cpp_python-0.1.72.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.72 2024-02-28T07:53:54,221 Found link https://files.pythonhosted.org/packages/c5/c5/3bcee8d4fa2a3faef625dd1223e945ab15aa7d2f180158f30762eaa597b1/llama_cpp_python-0.1.73.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.73 2024-02-28T07:53:54,222 Found link https://files.pythonhosted.org/packages/73/09/99e6bf5d56e96a15a67628b15b705afbddf27279e6738018c4d7866d05c7/llama_cpp_python-0.1.74.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.74 2024-02-28T07:53:54,223 Found link https://files.pythonhosted.org/packages/b3/61/85c4defcdd3157004611feff6c95e8b4776d8671ca754ff2ed91fbc85154/llama_cpp_python-0.1.76.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.76 2024-02-28T07:53:54,224 Found link https://files.pythonhosted.org/packages/28/57/6db0db4582e31ced78487c6f28a4ee127fe38a22a85c573c39c7e5a03e2f/llama_cpp_python-0.1.77.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.77 2024-02-28T07:53:54,225 Found link https://files.pythonhosted.org/packages/dd/98/3d2382ac0b462b175519de360c57d514fbe5d33a5e67e42e82dc03bfb0f9/llama_cpp_python-0.1.78.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.78 2024-02-28T07:53:54,226 Found link 
https://files.pythonhosted.org/packages/f2/85/39c90a6b2306fbf91fc9dd2346bb4599c57e5c29aec15981fe5d662cef34/llama_cpp_python-0.1.79.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.79 2024-02-28T07:53:54,227 Found link https://files.pythonhosted.org/packages/af/c7/e3cee337dc44024bece8faf7683e40d015bae55b0dfaddd1a97ab4d1b432/llama_cpp_python-0.1.80.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.80 2024-02-28T07:53:54,227 Found link https://files.pythonhosted.org/packages/ae/92/c10ee59095bc1336edbecc8f6eea98d9d2f4df1d944b9df9b4484ea268ae/llama_cpp_python-0.1.81.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.81 2024-02-28T07:53:54,228 Found link https://files.pythonhosted.org/packages/81/b5/b63dbe0b799b9063208543a84b0e99b622f8a8d19de9564fc1d2877e1c9e/llama_cpp_python-0.1.82.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.82 2024-02-28T07:53:54,229 Found link https://files.pythonhosted.org/packages/6e/c7/651fa47b77d2189a46b00caa44627d17476bf41bcbeb0b72906295d6de79/llama_cpp_python-0.1.83.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.83 2024-02-28T07:53:54,230 Found link https://files.pythonhosted.org/packages/39/f2/a64d37bdaecb2ad66cfc2faab95201acf66b537affbd042656b27dc135f4/llama_cpp_python-0.1.84.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.84 2024-02-28T07:53:54,231 Found link https://files.pythonhosted.org/packages/ed/f2/2fb3b4c3886de5d1bcfbd258932159e374d1d9a0d52d6850805e26cc9fc2/llama_cpp_python-0.1.85.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.85 2024-02-28T07:53:54,232 Found link https://files.pythonhosted.org/packages/5b/a6/a49b40d4c0ac9aa703bf11e5783d38beb3924a6ba5165a393518646894c9/llama_cpp_python-0.2.0.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.0 2024-02-28T07:53:54,233 Found link https://files.pythonhosted.org/packages/e4/3a/7c65dbed3913086ec0a84549acdd4002ef4e1ef9fbb1d31596a4c1fd64a3/llama_cpp_python-0.2.1.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.1 2024-02-28T07:53:54,234 Found link https://files.pythonhosted.org/packages/d0/28/ef9e91c4ed9e96a2a0bcd6a8327f2d039745b59946eccc6ccb1a9ee2dedf/llama_cpp_python-0.2.2.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.2 2024-02-28T07:53:54,235 Found link https://files.pythonhosted.org/packages/99/e6/19d9c978dc634d91b05416c8fc502171af6b27a20683669048afa5738b74/llama_cpp_python-0.2.3.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.3 2024-02-28T07:53:54,236 Found link https://files.pythonhosted.org/packages/7b/26/be5c224560ccbe64592afbdbe0710ae5b0a8413e1416cc8c2c0b093b713b/llama_cpp_python-0.2.4.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.4 2024-02-28T07:53:54,237 Found link https://files.pythonhosted.org/packages/04/9d/1f8fe06199b5fda5a691f23ef5622b32d5fe717da748f4fc2c9cbde60223/llama_cpp_python-0.2.5.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.5 2024-02-28T07:53:54,238 Found link https://files.pythonhosted.org/packages/ff/ca/8c45e45abb21069f6274efe3f1cf0aca29a1fd089fec6acf924ee4a67c46/llama_cpp_python-0.2.6.tar.gz (from 
https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.6 2024-02-28T07:53:54,239 Found link https://files.pythonhosted.org/packages/b1/78/bd5e6653102ea16ce53a044cec606f257811da99c9c2a760af6a93cdfef3/llama_cpp_python-0.2.7.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.7 2024-02-28T07:53:54,240 Found link https://files.pythonhosted.org/packages/6d/60/edbd982673a71c6c27fa6818914ad61c6171d165de4e777d489539f1d959/llama_cpp_python-0.2.8.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.8 2024-02-28T07:53:54,241 Found link https://files.pythonhosted.org/packages/98/2e/357d936ff7418591c56a27b9472e2b3581bd9eeb90c4221580fae5e00588/llama_cpp_python-0.2.9.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.9 2024-02-28T07:53:54,242 Found link https://files.pythonhosted.org/packages/d4/a2/ff96c80f91d7d534a6b65517247c09680b1bbf064d6388feda9aac3201dd/llama_cpp_python-0.2.10.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.10 2024-02-28T07:53:54,243 Found link https://files.pythonhosted.org/packages/5b/b9/1ea446f1dcccb13313ea1e651c73bd5cc4db2aabf6cae1894064bddf1fc4/llama_cpp_python-0.2.11.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.11 2024-02-28T07:53:54,244 Found link https://files.pythonhosted.org/packages/11/35/0185e28cfcdb59ab17e09a6cc6e19c7271db236ee1c9d41143a082b463b7/llama_cpp_python-0.2.12.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.12 2024-02-28T07:53:54,245 Found link https://files.pythonhosted.org/packages/da/58/55a26595009d76237273b340d718e04d9a33c5afd440e45552f45a16b1d9/llama_cpp_python-0.2.13.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.13 2024-02-28T07:53:54,245 Found link https://files.pythonhosted.org/packages/82/2c/e742d611024256b5540380e7a62cd1fdc3cc1b47f5d2b86610f545804acd/llama_cpp_python-0.2.14.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.14 2024-02-28T07:53:54,246 Found link https://files.pythonhosted.org/packages/0c/e9/0d48a445430bed484791f76a4ab1d7950e57468127a3ee6a6ec494f46ae5/llama_cpp_python-0.2.15.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.15 2024-02-28T07:53:54,247 Found link https://files.pythonhosted.org/packages/a8/3e/b0bd26d0d0d0dd9187a6e4e46c2744c1d7d52cc2834b35db61776af00219/llama_cpp_python-0.2.16.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.16 2024-02-28T07:53:54,248 Found link https://files.pythonhosted.org/packages/d1/2c/e75e2e5b08b805d23066f1c1f8dbb1777a5bd3b43f057d16d4b2634d9ae1/llama_cpp_python-0.2.17.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.17 2024-02-28T07:53:54,249 Found link https://files.pythonhosted.org/packages/1b/be/3ce85cdf2f3b7c035ca52e0158b98d244d4ce40a51908b22e0b45c3ef75f/llama_cpp_python-0.2.18.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.18 2024-02-28T07:53:54,250 Found link https://files.pythonhosted.org/packages/9d/1a/f74ce61893791530a9af61fe8925bd569d8fb087545dc1973d617c03ce11/llama_cpp_python-0.2.19.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.19 2024-02-28T07:53:54,251 Found link 
https://files.pythonhosted.org/packages/f0/6a/3e161b68097fe2f9901e01dc7ec2afb4753699495004a37d2abdc3b1fd07/llama_cpp_python-0.2.20.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.20 2024-02-28T07:53:54,252 Found link https://files.pythonhosted.org/packages/15/7a/49906adb90113f628c1f07dc746ca0978b8aa99a8f7325a8d961ce2a1919/llama_cpp_python-0.2.22.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.22 2024-02-28T07:53:54,253 Found link https://files.pythonhosted.org/packages/9b/30/fb7cd2d9a395d64f39b25eb36ba86163fd5bbb3c1427b9f2381b7d798d3a/llama_cpp_python-0.2.23.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.23 2024-02-28T07:53:54,254 Found link https://files.pythonhosted.org/packages/fe/fd/498415767be24e802135c409922c0072947adc5d73ea85ce6c98c42f2e63/llama_cpp_python-0.2.24.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.24 2024-02-28T07:53:54,255 Found link https://files.pythonhosted.org/packages/f7/3f/e21c6af55661e7499133245ab622871e375b716af5a96d83770f2ad6d602/llama_cpp_python-0.2.25.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.25 2024-02-28T07:53:54,256 Found link https://files.pythonhosted.org/packages/ce/64/16a6bbae31c24d07d1ef6f488b81d13e0eb009147f583d9047371216b7a0/llama_cpp_python-0.2.26.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.26 2024-02-28T07:53:54,257 Found link https://files.pythonhosted.org/packages/a9/83/e3b7405f36b2f3dd4ae76c32e9331232c5692078deda7f84c1f0ede071ab/llama_cpp_python-0.2.27.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.27 2024-02-28T07:53:54,258 Found link https://files.pythonhosted.org/packages/1b/7c/ebe6be46264fad03bf3490fdd48d03608c5e5f10656ffc0155f23b7872a9/llama_cpp_python-0.2.28.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.28 2024-02-28T07:53:54,259 Found link https://files.pythonhosted.org/packages/12/b6/91ec62d6b2b9648f013d77350446e0351b5685bd89129f188dae60157032/llama_cpp_python-0.2.29.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.29 2024-02-28T07:53:54,260 Found link https://files.pythonhosted.org/packages/04/fb/13c99d504497ab63833600f8ae2196e28c04ad2a1cb43987cc9b51dc0a56/llama_cpp_python-0.2.30.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.30 2024-02-28T07:53:54,261 Found link https://files.pythonhosted.org/packages/a1/c8/7831d0908b23670112663913b1789a7adb47dc70e28318ee889afc7fc3be/llama_cpp_python-0.2.31.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.31 2024-02-28T07:53:54,262 Found link https://files.pythonhosted.org/packages/80/65/01fd26598cdd3cd09b6ce006cca2290bb762a4cc9f76e1a2c9c5a00b8cff/llama_cpp_python-0.2.32.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.32 2024-02-28T07:53:54,263 Found link https://files.pythonhosted.org/packages/d4/5e/c544cd520169f55e6cad63d3b8dec9c4e47326b1cb4095a91dce942be1a7/llama_cpp_python-0.2.33.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.33 2024-02-28T07:53:54,263 Found link 
https://files.pythonhosted.org/packages/78/5f/d46a72081d6e0e77e44abf092b11517267e4d290a3f20cf3b9a9faab7705/llama_cpp_python-0.2.34.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.34 2024-02-28T07:53:54,264 Found link https://files.pythonhosted.org/packages/45/3e/c5eb7a5a2689c15657beb08d0c6915cc61a9a20311ff00a567fc7a70a530/llama_cpp_python-0.2.35.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.35 2024-02-28T07:53:54,265 Found link https://files.pythonhosted.org/packages/6a/25/02e865aee5472e28ec65ee0994ed9fce179ee106b41a9783e7e1816c557a/llama_cpp_python-0.2.36.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.36 2024-02-28T07:53:54,266 Found link https://files.pythonhosted.org/packages/ee/82/ce00de6b3b2adde8d59791ec986992b4e736da592cfafb22ccbdac14a049/llama_cpp_python-0.2.37.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.37 2024-02-28T07:53:54,267 Found link https://files.pythonhosted.org/packages/90/41/7774fb44546685c88193629f95e20adad3a3078a0bdb9aeacb174a6ee9ca/llama_cpp_python-0.2.38.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.38 2024-02-28T07:53:54,268 Found link https://files.pythonhosted.org/packages/af/a6/6b836876620823551650db19d217118b9ef0983a936aa7895ed5d05df9c0/llama_cpp_python-0.2.39.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.39 2024-02-28T07:53:54,269 Found link https://files.pythonhosted.org/packages/1a/d2/dbf69d882517a534c5640e7b7f1cca360882cbd53c8c5c25ff0a7a854e07/llama_cpp_python-0.2.40.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.40 2024-02-28T07:53:54,270 Found link https://files.pythonhosted.org/packages/35/73/b2abe489ae7a7fbe096266457a00a8f801b83c6929c9ee7a2fd0c43baff0/llama_cpp_python-0.2.41.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.41 2024-02-28T07:53:54,271 Found link https://files.pythonhosted.org/packages/71/71/d5acd94964c599b348e81714aac9e75a578f51d224ac0343e27e6d9c38fc/llama_cpp_python-0.2.42.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.42 2024-02-28T07:53:54,272 Found link https://files.pythonhosted.org/packages/2c/07/b2bbd5e826d5910be3fd96eb639ba717349b3c2b0cc1360b13c63c50338a/llama_cpp_python-0.2.43.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.43 2024-02-28T07:53:54,273 Found link https://files.pythonhosted.org/packages/a3/1d/fc000e07680831b074446f059611b02844fd9d949d70146b1ae7b2df9ccc/llama_cpp_python-0.2.44.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.44 2024-02-28T07:53:54,274 Found link https://files.pythonhosted.org/packages/7a/cb/3e958c169fabb2df7ffaeb170a5d2b2cc8370ff31621e23b778ebcd8ab24/llama_cpp_python-0.2.45.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.45 2024-02-28T07:53:54,275 Found link https://files.pythonhosted.org/packages/25/b0/1df28f6ec4d14432dddc56e04bb05c0e78c40bc5611c1a54132fe2244d1a/llama_cpp_python-0.2.46.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.46 2024-02-28T07:53:54,276 Found link 
https://files.pythonhosted.org/packages/b9/af/30371683d30a0485080448f0382ceec2272d1bce1a711904bb6a3cf3b38b/llama_cpp_python-0.2.47.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.47 2024-02-28T07:53:54,277 Found link https://files.pythonhosted.org/packages/2e/cf/ab532896aa3837755dca592962552ae5c9114b71590bee2d959c57e97710/llama_cpp_python-0.2.48.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.48 2024-02-28T07:53:54,278 Found link https://files.pythonhosted.org/packages/21/e9/71ceed04be64ca9ae36214ba94a8d271817ad83196af003db6435b9ca333/llama_cpp_python-0.2.49.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.49 2024-02-28T07:53:54,279 Found link https://files.pythonhosted.org/packages/e8/ff/492c54a6dde08db51fc4ae0b4c9f3e4c7bc5036eeab223ebdd51bc34a146/llama_cpp_python-0.2.50.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.50 2024-02-28T07:53:54,280 Found link https://files.pythonhosted.org/packages/9d/3a/5476da33c736830b73393f05851c8eccea6f5a54ec2a0e35fc1297d1b219/llama_cpp_python-0.2.51.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.51 2024-02-28T07:53:54,281 Found link https://files.pythonhosted.org/packages/4c/09/a1fefdac604d70b211918a0dbe47d65573368db8988a5fa4f0777e950f12/llama_cpp_python-0.2.52.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.52 2024-02-28T07:53:54,282 Found link https://files.pythonhosted.org/packages/61/a1/6a4f3df444ddd3903d07d35f3ef7a2a2f2711ced64944fd5ee3f0ed1ef39/llama_cpp_python-0.2.53.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.53 2024-02-28T07:53:54,283 Fetching project page and analyzing links: https://www.piwheels.org/simple/llama-cpp-python/ 2024-02-28T07:53:54,283 Getting page https://www.piwheels.org/simple/llama-cpp-python/ 2024-02-28T07:53:54,285 Found index url https://www.piwheels.org/simple/ 2024-02-28T07:53:54,714 Fetched page https://www.piwheels.org/simple/llama-cpp-python/ as text/html 2024-02-28T07:53:54,749 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.50-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=ae6213e1a296eb7773de5ace4c9709bb0c6c26b569f698401f02ec8c0b1f70e1 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,750 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.50-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=e64328caf961eb1349ba103303b224160d9ec1d905b360b8776f05ef8a56548c (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,750 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.49-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=7ee779b157b2285faa8f687ddfc37a70add93387f65c6d369c6f55af00e7991e (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,751 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.49-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=bebac0c1a654b69ac3ad5b033f47aefdf9c5cffaf561177fafd1f9137cd6a109 (from https://www.piwheels.org/simple/llama-cpp-python/) 
(requires-python:>=3.8) 2024-02-28T07:53:54,751 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.48-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=a770ff0c49d7b89225f276e004c0365dd2e2a29aade3813f181bf849bcfae172 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,752 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.48-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=44c1145db5af2f0e64f7dfeabbccad1b8a8f3f8191ac6d772942bc9707617774 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,752 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.44-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=bbed4880ea24c9048cd079bf375e248fcb287c58b4da1fa55db699f50498fc47 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,753 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.44-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=77f7353d0548dfc077df72accec9e446eb41d08531135f33e912e777f3709b7d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,754 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.43-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=74bf2edb0d26028c31a8ddb255ca8cbf09754f32f11ec4ba92b3c38f28c49689 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,754 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.43-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=902b3d68b16f3592a31bfab67f2743b6c880983d2388a86c794f1ff70dd0fbd5 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,755 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.42-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=0725254705f761711ae588c6ec374610b02c2be4fb42cef43c613f94b24e09fd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,755 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.42-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=eeb01d04e368a69d9e72b3f624bdb8e5ba1c7fd5bb05b867cb7db9aa2ecef712 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,756 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.41-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=605a1f8ad4a71ffaa08d9d81c061583f888c98daa92f90ae6702a0d132379ec5 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,757 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.41-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=b591c74c6abce1fb0130a7c5e1fcb00d995376bc5aa50bc57591cf3f0d78fcfd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,758 Skipping link: No binaries permitted for 
llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.40-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=32e6eb85956ea95b1392b657da6eb22968aacb7fae0f1bf9d11c07cd4c2cef33 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,758 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.40-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=704761bc81d1deb8abe84f1a14ddc3cfc3d8d34ae64f74d4dfc6494552728619 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,759 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.39-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=150fa80b5bbf8b0d4e185d3f9dba6a0f955442d434c1aba5102d8813fff0e97b (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,759 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.39-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=48c95175d440fe2d7c0988c258ce439668b4c601e3fe03f7b1600fdec5f1c381 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,760 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.38-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=5f74ae2118cfc667cd8c6fae15341e0465f3aa5f607a4639c4cd20abcad4c0ef (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,760 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.38-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=5666d2bc6ca576a5b1db7ba9eb18f098f6dd45561e96ddc465ee68bb3242af7f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,761 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.37-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=52bf796d479f4e9df37e1828dabb987f8ce1d7c34bf968d86a08fac609b01d7d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,762 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.37-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=2c82f345afce240e7b7d385551145d1ce5ebab4461e943bb6024e4e9f9dffdf4 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,762 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.36-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=bb505332fcf1e70bfc1814f6159c487915a5df69dfab10907053a070e3b53449 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,763 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.36-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=eb2eabd471c1af5c77a8b7d69061e8f0dc3e56e10a5a6c4a77acd033f6183cd8 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,764 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.35-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=ab8f24a87f39bab7655d052963692ccd334b9eb78f2f2e9fb19d02616a6d647c (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,764 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.35-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=b49f9efa08a556a02732d0a22ed36e2e879ae2789ca9f10ec8ad9daf5c8b2f3f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,765 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.34-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=da6a7d8c431275720252bac217874d7eb9b8e97294e237c3053f01ba2659e611 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,765 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.34-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=b1389b56552f7b5a058bb447510394fe93e11aca6be95177dfdfd72ea58ef4d7 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,766 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.33-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=00eb7f4a087a39240cfc70aa2f1611a76ecc52c4c1d46e2b6fb8d88ea642bae3 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,766 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.33-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=96fff83d6f39dd9a71ea1da8da5bd4fe22561795a33d06b6cc01fdf0bff85ccf (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,767 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.32-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=7930ee195d390635aac4b7af0fd7cd490fc6f93e3922585e4cb156979b4e1660 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,767 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.32-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=78c4efbe665a8e60cc63cf23d37500de80fa88f33d62b4ee476a2957aecac4fa (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,768 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.31-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=fd6731ded8988f781e89c5a096a015772f9b4d8af17447ccb542030169c315c0 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,769 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.31-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=dd1044d26325193b7d26c03a05ba125774c659b32a2160bcb2763c1686aedab4 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,770 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.30-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=a58864e906f616d77145c2e9dd5cfc6e6db1ac219be004ff3f6b39c25a56a2a3 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,770 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.30-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=3cbff75b267c242ab801da2dc7e15a0fe32c029fa269e8628aff3469dade6e70 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,771 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.29-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=fc5f5cf3982532109d89e596be5659aece0c248a39a81bb5264fa0652d04d24e (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,772 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.29-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=04b881d5d2ce4e66256f858b4ac3ff037f96eb221d5bc558a73f4807bc2fd426 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,772 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.28-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=2c53b3e5679c7ee69fae7304e592e567dfd20adeaf161dada7a34fdb94b5fc5f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,773 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.27-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=02bd20ff69c48b49fedb4901900bc34e42031c1e155f14e077b61de5389a776d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,773 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.27-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=300e823a56d1ab95cfcbbca1a3dacc3217a9eadc208b203867b9f79e9ede9b89 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,774 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.26-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=ddca4eda48f63369879ef1bf209013469154af220d4c87677cba833316d255fb (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,775 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.26-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=8af06b93145405d9da8ec41db07fac77f9b7295b3da925d96b7cc360d135f9cd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,775 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.25-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=01d603b372965f4d2c44a69560535cbd9e1927f3a914f47b4aa5015a8fe2358a (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,776 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.25-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=89c1f7bd5208ad77af971a36a94812302a08e26d5c68dc5823dd56144600736d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,777 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.24-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=a10093f712345a2248e8b1b3516ab1683da2d5ea888bc2879d3a2c60d5d7f303 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,777 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.24-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=8a7693c022ebbcfa411231c816b64816996dcc6841d7d67301b115af9187d8a9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,778 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.23-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=8b3ce6db762da6790eb010227b523953e11bb6e2c38110ccdf57ec80a8609bb6 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,779 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.23-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=88148b8856335f05b52be26b07bc639db37be9e5162ecacdc481c0f1b658dcf1 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,779 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.22-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=d63553eca926a129319ccbc2f586b1a1cd3b5eb4aca1f18180142b3e3a27f72d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,780 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.22-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=11403337141805f35ae8325c881e9ce3741f2578f4cdb45b724a307dd635f829 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,781 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.20-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=da14ce17a2476a706a8e8b7489a303536550d7c8cdff9db42cb4b56985c7688f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,781 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.20-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=5e05d20b69f7e652531141ef60200d9351118692a28d8b87a8a8a7d527928e9a (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,782 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.19-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=c1833281926198d9276c3c08ac7cb0f49630c164ce8f29bab9c41e00d55e721f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,783 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.19-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=dd16bdc23237ef0e4cc1c9c4c29f6624c9b510052a3dfdaba483957289ac48d5 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,784 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.18-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=7f066f10c0560b76776560045941383e9b8627d7696362b387fd4e652db00dad (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,784 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.18-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=885911c08b103762c507be6075b91de5ecb5b5422f913d7b9f844dcf3ab9b6ae (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-02-28T07:53:54,785 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp37-cp37m-linux_armv6l.whl#sha256=c46f12906971196ab3fa8250c23e5ae1f72581c00d910fadf491a710a97cb3d7 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,785 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp37-cp37m-linux_armv7l.whl#sha256=c46f12906971196ab3fa8250c23e5ae1f72581c00d910fadf491a710a97cb3d7 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,786 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp39-cp39-linux_armv6l.whl#sha256=888f3796690ccb21c9fac07b1ff83afa7b56fedfaa70ce1568572f1b7fdb3f27 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,787 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp39-cp39-linux_armv7l.whl#sha256=888f3796690ccb21c9fac07b1ff83afa7b56fedfaa70ce1568572f1b7fdb3f27 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,787 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp311-cp311-linux_armv6l.whl#sha256=caf38ff85ab251e84b4f951438454931514bde01dc36643d034d340ed14736d9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,788 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp311-cp311-linux_armv7l.whl#sha256=caf38ff85ab251e84b4f951438454931514bde01dc36643d034d340ed14736d9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,788 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.56-cp37-cp37m-linux_armv6l.whl#sha256=6ca6e31293dbf909df09e8c0ff119a6706c3b279bbf05716bdd04e99b6ff1665 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,789 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.56-cp37-cp37m-linux_armv7l.whl#sha256=6ca6e31293dbf909df09e8c0ff119a6706c3b279bbf05716bdd04e99b6ff1665 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,790 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.56-cp39-cp39-linux_armv6l.whl#sha256=2e5b18a3b1b32ea7c1ec0205c6d65ab42b07e29da5bea6c6fc8d17cdd9ee22bd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,790 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.56-cp39-cp39-linux_armv7l.whl#sha256=2e5b18a3b1b32ea7c1ec0205c6d65ab42b07e29da5bea6c6fc8d17cdd9ee22bd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,791 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp39-cp39-linux_armv6l.whl#sha256=d9a4ec585cfc04a6b43e815fefdf6c493a08b569cace3fd7c9bbe4d2ebd97fc5 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,792 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp39-cp39-linux_armv7l.whl#sha256=d9a4ec585cfc04a6b43e815fefdf6c493a08b569cace3fd7c9bbe4d2ebd97fc5 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,792 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp37-cp37m-linux_armv6l.whl#sha256=edb85834fe2145fc906e933a5643471bdebf3f3e376675ab3e914785fd1ec21d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,793 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp37-cp37m-linux_armv7l.whl#sha256=edb85834fe2145fc906e933a5643471bdebf3f3e376675ab3e914785fd1ec21d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,794 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp311-cp311-linux_armv6l.whl#sha256=6abbec187e6c40b192040ba4dee145ae69de4aaa65c3a350fe0cea85bf6aa197 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,795 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp311-cp311-linux_armv7l.whl#sha256=6abbec187e6c40b192040ba4dee145ae69de4aaa65c3a350fe0cea85bf6aa197 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,795 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp37-cp37m-linux_armv6l.whl#sha256=221d6012bf80f402d83593047359ddb7c767e83147d2c8445d24e58466b050bc (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,795 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp37-cp37m-linux_armv7l.whl#sha256=221d6012bf80f402d83593047359ddb7c767e83147d2c8445d24e58466b050bc (from 
https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,796 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp39-cp39-linux_armv6l.whl#sha256=064c3e8c4f3dd76aef301720241909048b2b4da4b0d1564b0436693e6efd1ddd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,796 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp39-cp39-linux_armv7l.whl#sha256=064c3e8c4f3dd76aef301720241909048b2b4da4b0d1564b0436693e6efd1ddd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,797 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp311-cp311-linux_armv6l.whl#sha256=2552020ab6570979cc92527cce3acf131f181b466944034d0ae4bc4674934989 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,798 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp311-cp311-linux_armv7l.whl#sha256=2552020ab6570979cc92527cce3acf131f181b466944034d0ae4bc4674934989 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,798 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp37-cp37m-linux_armv6l.whl#sha256=b177a40248c14829c96942ccdc570d96cf86f94d2ccd1fba2440ca7f496432b1 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,799 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp37-cp37m-linux_armv7l.whl#sha256=b177a40248c14829c96942ccdc570d96cf86f94d2ccd1fba2440ca7f496432b1 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,800 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp39-cp39-linux_armv6l.whl#sha256=fb5ba2f1e57a03c0d0e587577ab280d1a4a4fed295f1fcd4e19a3313a2ef07ac (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,800 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp39-cp39-linux_armv7l.whl#sha256=fb5ba2f1e57a03c0d0e587577ab280d1a4a4fed295f1fcd4e19a3313a2ef07ac (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,801 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp311-cp311-linux_armv6l.whl#sha256=5afe359d7635ee4081bf60ef6cdbc35860b635d75699f7185b4d637c55ac2572 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,802 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp311-cp311-linux_armv7l.whl#sha256=5afe359d7635ee4081bf60ef6cdbc35860b635d75699f7185b4d637c55ac2572 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,802 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp37-cp37m-linux_armv6l.whl#sha256=bf3bc680532ad36080ca0e375cdb349c91ba90a6880c0c9090b83bb41463aacc (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,803 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp37-cp37m-linux_armv7l.whl#sha256=bf3bc680532ad36080ca0e375cdb349c91ba90a6880c0c9090b83bb41463aacc (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,803 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp39-cp39-linux_armv6l.whl#sha256=2d8f9447a21a804a90f8adcee77052587ab9ace32dbe36eca23c72cb2ce20fac (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,804 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp39-cp39-linux_armv7l.whl#sha256=2d8f9447a21a804a90f8adcee77052587ab9ace32dbe36eca23c72cb2ce20fac (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,804 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp311-cp311-linux_armv6l.whl#sha256=fefadcad700a08bdc860fb3a5f45f54d635e130484bbadf9015da2268f57cb44 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,805 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp311-cp311-linux_armv7l.whl#sha256=fefadcad700a08bdc860fb3a5f45f54d635e130484bbadf9015da2268f57cb44 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,805 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp37-cp37m-linux_armv6l.whl#sha256=23d1e81835a4f9d2cd07c25dfe46adb3541bc7e7104c92b9e4ce40d8042f40e0 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,806 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp37-cp37m-linux_armv7l.whl#sha256=23d1e81835a4f9d2cd07c25dfe46adb3541bc7e7104c92b9e4ce40d8042f40e0 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,806 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp39-cp39-linux_armv6l.whl#sha256=b7620dc9874978dd791e463c32bcd526f5eb3eb53b8b4221b9eaec21eabd7958 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,807 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp39-cp39-linux_armv7l.whl#sha256=b7620dc9874978dd791e463c32bcd526f5eb3eb53b8b4221b9eaec21eabd7958 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,808 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp311-cp311-linux_armv6l.whl#sha256=6b45a1fb53ff22631be2564cb7274fe170f2982b28a5476e8ff905770eec557e (from 
https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,808 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp311-cp311-linux_armv7l.whl#sha256=6b45a1fb53ff22631be2564cb7274fe170f2982b28a5476e8ff905770eec557e (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,809 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.50-cp37-cp37m-linux_armv6l.whl#sha256=5b64a8dc60df2396aa83907b89ccc8e6db4ab43e017b3b3a26c091714099da12 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,809 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.50-cp37-cp37m-linux_armv7l.whl#sha256=5b64a8dc60df2396aa83907b89ccc8e6db4ab43e017b3b3a26c091714099da12 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,810 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.50-cp311-cp311-linux_armv6l.whl#sha256=715122f66811a350122cd555ac8a883fd243f0008711c03138d76742162d1e63 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,810 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.50-cp311-cp311-linux_armv7l.whl#sha256=715122f66811a350122cd555ac8a883fd243f0008711c03138d76742162d1e63 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,811 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.49-cp37-cp37m-linux_armv6l.whl#sha256=6bb78e03dfe2c72307aede0cb78e223e8d69e29948c964f2a0651654c6d62d55 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,811 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.49-cp37-cp37m-linux_armv7l.whl#sha256=6bb78e03dfe2c72307aede0cb78e223e8d69e29948c964f2a0651654c6d62d55 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,812 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.48-cp37-cp37m-linux_armv6l.whl#sha256=67af3df96f6ba459ca0a542bf8ec23e3cafefef3b7f6ed6ec7fe5b2ac6be3a2f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,813 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.48-cp37-cp37m-linux_armv7l.whl#sha256=67af3df96f6ba459ca0a542bf8ec23e3cafefef3b7f6ed6ec7fe5b2ac6be3a2f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,813 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.48-cp311-cp311-linux_armv6l.whl#sha256=7b4ce590d0f5b3f1b5c967a79a59067684631fb424510f80bed3f109f74a2d43 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,814 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.48-cp311-cp311-linux_armv7l.whl#sha256=7b4ce590d0f5b3f1b5c967a79a59067684631fb424510f80bed3f109f74a2d43 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,815 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.47-cp37-cp37m-linux_armv6l.whl#sha256=c4b404a9a588ba34c86302ea053359619a9aa93f844a933a08637cc65dfdb6e4 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,816 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.47-cp37-cp37m-linux_armv7l.whl#sha256=c4b404a9a588ba34c86302ea053359619a9aa93f844a933a08637cc65dfdb6e4 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,816 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.47-cp311-cp311-linux_armv6l.whl#sha256=a2ac92bd0de7e00a32f9fa9b5a1a22ead4031f5711ad81e191a2ebc8e9df3dcf (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,817 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.47-cp311-cp311-linux_armv7l.whl#sha256=a2ac92bd0de7e00a32f9fa9b5a1a22ead4031f5711ad81e191a2ebc8e9df3dcf (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,817 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.46-cp37-cp37m-linux_armv6l.whl#sha256=851936642b661501ebdd692b9cc1a9f420b54d4b6c1568a0b5561c0c313c3375 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,818 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.46-cp37-cp37m-linux_armv7l.whl#sha256=851936642b661501ebdd692b9cc1a9f420b54d4b6c1568a0b5561c0c313c3375 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,819 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.45-cp37-cp37m-linux_armv6l.whl#sha256=b01d7783f853028706cb7cd4833e1040430f089a3770cc4dcf88af128329b3e9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,820 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.45-cp37-cp37m-linux_armv7l.whl#sha256=b01d7783f853028706cb7cd4833e1040430f089a3770cc4dcf88af128329b3e9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,820 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.44-cp37-cp37m-linux_armv6l.whl#sha256=35dc305c6d40fbbc0ef489c18521a842192156419573134f83bfbb4ec4bfd3d9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,821 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.44-cp37-cp37m-linux_armv7l.whl#sha256=35dc305c6d40fbbc0ef489c18521a842192156419573134f83bfbb4ec4bfd3d9 (from 
https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,822 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.43-cp37-cp37m-linux_armv6l.whl#sha256=49440356659a24d945119356b9c6e352e9dacf9e873e4d0d2167a501a7050592 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,822 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.43-cp37-cp37m-linux_armv7l.whl#sha256=49440356659a24d945119356b9c6e352e9dacf9e873e4d0d2167a501a7050592 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,823 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.42-cp37-cp37m-linux_armv6l.whl#sha256=48b89ad5d0e3274b6b637c58a8067672586596856f146a2e9c580c6f9ca285ef (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,824 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.42-cp37-cp37m-linux_armv7l.whl#sha256=48b89ad5d0e3274b6b637c58a8067672586596856f146a2e9c580c6f9ca285ef (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,824 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.41-cp37-cp37m-linux_armv6l.whl#sha256=eb37310c71596893c50ba3e1e2b45a236b54c86025f4e61e369e0e916e3ea927 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,825 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.41-cp37-cp37m-linux_armv7l.whl#sha256=eb37310c71596893c50ba3e1e2b45a236b54c86025f4e61e369e0e916e3ea927 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,826 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.40-cp37-cp37m-linux_armv6l.whl#sha256=438062696c2aa9e624eba548d48c7e72e677f8514be6641dd76cad874124d08b (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,826 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.40-cp37-cp37m-linux_armv7l.whl#sha256=438062696c2aa9e624eba548d48c7e72e677f8514be6641dd76cad874124d08b (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-02-28T07:53:54,827 Skipping link: not a file: https://www.piwheels.org/simple/llama-cpp-python/ 2024-02-28T07:53:54,828 Skipping link: not a file: https://pypi.org/simple/llama-cpp-python/ 2024-02-28T07:53:54,862 Given no hashes to check 1 links for project 'llama-cpp-python': discarding no candidates 2024-02-28T07:53:54,881 Collecting llama-cpp-python==0.2.53 2024-02-28T07:53:54,883 Created temporary directory: /tmp/pip-unpack-vr4wtame 2024-02-28T07:53:55,171 Downloading llama_cpp_python-0.2.53.tar.gz (36.8 MB) 2024-02-28T07:54:03,916 Added llama-cpp-python==0.2.53 from https://files.pythonhosted.org/packages/61/a1/6a4f3df444ddd3903d07d35f3ef7a2a2f2711ced64944fd5ee3f0ed1ef39/llama_cpp_python-0.2.53.tar.gz to build tracker '/tmp/pip-build-tracker-tlab4r61' 
2024-02-28T07:54:03,922 Created temporary directory: /tmp/pip-build-env-pgsw12_s 2024-02-28T07:54:03,926 Installing build dependencies: started 2024-02-28T07:54:03,927 Running command pip subprocess to install build dependencies 2024-02-28T07:54:05,090 Using pip 23.3.1 from /usr/local/lib/python3.11/dist-packages/pip (python 3.11) 2024-02-28T07:54:05,606 Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple 2024-02-28T07:54:06,403 Collecting scikit-build-core>=0.5.1 (from scikit-build-core[pyproject]>=0.5.1) 2024-02-28T07:54:06,427 Using cached https://www.piwheels.org/simple/scikit-build-core/scikit_build_core-0.8.1-py3-none-any.whl (139 kB) 2024-02-28T07:54:06,811 Collecting packaging>=20.9 (from scikit-build-core>=0.5.1->scikit-build-core[pyproject]>=0.5.1) 2024-02-28T07:54:06,829 Using cached https://www.piwheels.org/simple/packaging/packaging-23.2-py3-none-any.whl (53 kB) 2024-02-28T07:54:07,074 Collecting pathspec>=0.10.1 (from scikit-build-core[pyproject]>=0.5.1) 2024-02-28T07:54:07,091 Using cached https://www.piwheels.org/simple/pathspec/pathspec-0.12.1-py3-none-any.whl (31 kB) 2024-02-28T07:54:07,210 Collecting pyproject-metadata>=0.5 (from scikit-build-core[pyproject]>=0.5.1) 2024-02-28T07:54:07,227 Using cached https://www.piwheels.org/simple/pyproject-metadata/pyproject_metadata-0.7.1-py3-none-any.whl (7.4 kB) 2024-02-28T07:54:09,710 Installing collected packages: pathspec, packaging, scikit-build-core, pyproject-metadata 2024-02-28T07:54:10,424 Successfully installed packaging-23.2 pathspec-0.12.1 pyproject-metadata-0.7.1 scikit-build-core-0.8.1 2024-02-28T07:54:10,720 [notice] A new release of pip is available: 23.3.1 -> 24.0 2024-02-28T07:54:10,721 [notice] To update, run: python3 -m pip install --upgrade pip 2024-02-28T07:54:10,959 Installing build dependencies: finished with status 'done' 2024-02-28T07:54:10,962 Getting requirements to build wheel: started 2024-02-28T07:54:10,963 Running command Getting requirements to build wheel 2024-02-28T07:54:11,383 Getting requirements to build wheel: finished with status 'done' 2024-02-28T07:54:11,406 Created temporary directory: /tmp/pip-modern-metadata-60ls8a2g 2024-02-28T07:54:11,408 Preparing metadata (pyproject.toml): started 2024-02-28T07:54:11,409 Running command Preparing metadata (pyproject.toml) 2024-02-28T07:54:11,914 *** scikit-build-core 0.8.1 using CMake 3.25.1 (metadata_wheel) 2024-02-28T07:54:12,009 Preparing metadata (pyproject.toml): finished with status 'done' 2024-02-28T07:54:12,015 Source in /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617 has version 0.2.53, which satisfies requirement llama-cpp-python==0.2.53 from https://files.pythonhosted.org/packages/61/a1/6a4f3df444ddd3903d07d35f3ef7a2a2f2711ced64944fd5ee3f0ed1ef39/llama_cpp_python-0.2.53.tar.gz 2024-02-28T07:54:12,016 Removed llama-cpp-python==0.2.53 from https://files.pythonhosted.org/packages/61/a1/6a4f3df444ddd3903d07d35f3ef7a2a2f2711ced64944fd5ee3f0ed1ef39/llama_cpp_python-0.2.53.tar.gz from build tracker '/tmp/pip-build-tracker-tlab4r61' 2024-02-28T07:54:12,025 Created temporary directory: /tmp/pip-unpack-sotvvrlj 2024-02-28T07:54:12,026 Created temporary directory: /tmp/pip-unpack-i39gi_u0 2024-02-28T07:54:12,084 Building wheels for collected packages: llama-cpp-python 2024-02-28T07:54:12,088 Created temporary directory: /tmp/pip-wheel-gw0yl5zl 2024-02-28T07:54:12,089 Destination directory: /tmp/pip-wheel-gw0yl5zl 2024-02-28T07:54:12,091 Building wheel for llama-cpp-python (pyproject.toml): started 
2024-02-28T07:54:12,092 Running command Building wheel for llama-cpp-python (pyproject.toml) 2024-02-28T07:54:12,588 *** scikit-build-core 0.8.1 using CMake 3.25.1 (wheel) 2024-02-28T07:54:12,607 *** Configuring CMake... 2024-02-28T07:54:12,701 loading initial cache file /tmp/tmp67mfmdk4/build/CMakeInit.txt 2024-02-28T07:54:12,975 -- The C compiler identification is GNU 12.2.0 2024-02-28T07:54:13,277 -- The CXX compiler identification is GNU 12.2.0 2024-02-28T07:54:13,327 -- Detecting C compiler ABI info 2024-02-28T07:54:13,586 -- Detecting C compiler ABI info - done 2024-02-28T07:54:13,642 -- Check for working C compiler: /usr/bin/cc - skipped 2024-02-28T07:54:13,645 -- Detecting C compile features 2024-02-28T07:54:13,648 -- Detecting C compile features - done 2024-02-28T07:54:13,672 -- Detecting CXX compiler ABI info 2024-02-28T07:54:13,990 -- Detecting CXX compiler ABI info - done 2024-02-28T07:54:14,028 -- Check for working CXX compiler: /usr/bin/c++ - skipped 2024-02-28T07:54:14,030 -- Detecting CXX compile features 2024-02-28T07:54:14,032 -- Detecting CXX compile features - done 2024-02-28T07:54:14,064 -- Found Git: /usr/bin/git (found version "2.39.2") 2024-02-28T07:54:14,115 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD 2024-02-28T07:54:14,413 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success 2024-02-28T07:54:14,420 -- Found Threads: TRUE 2024-02-28T07:54:14,432 -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF 2024-02-28T07:54:14,577 -- CMAKE_SYSTEM_PROCESSOR: armv7l 2024-02-28T07:54:14,578 -- ARM detected 2024-02-28T07:54:14,582 -- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E 2024-02-28T07:54:14,897 -- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Success 2024-02-28T07:54:14,926 CMake Warning (dev) at CMakeLists.txt:21 (install): 2024-02-28T07:54:14,927 Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION. 2024-02-28T07:54:14,927 This warning is for project developers. Use -Wno-dev to suppress it. 2024-02-28T07:54:14,928 CMake Warning (dev) at CMakeLists.txt:30 (install): 2024-02-28T07:54:14,928 Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION. 2024-02-28T07:54:14,929 This warning is for project developers. Use -Wno-dev to suppress it. 2024-02-28T07:54:14,935 -- Configuring done 2024-02-28T07:54:15,008 -- Generating done 2024-02-28T07:54:15,026 -- Build files have been written to: /tmp/tmp67mfmdk4/build 2024-02-28T07:54:15,038 *** Building project with Ninja... 
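The Ninja build that follows fails while compiling vendor/llama.cpp/ggml-quants.c for armv7l: this toolchain's <arm_neon.h> evidently does not declare vld1q_u8_x2 on 32-bit ARM, so the call at ggml-quants.c:9668 is treated as an implicit declaration, which is fatal because the file is built with -Werror=implicit-function-declaration; the subsequent "invalid initializer" and int8x16_t-vs-uint8x16_t diagnostics cascade from that missing intrinsic, and the ggml_vqtbl1q_u8 compatibility wrapper appears to return a signed vector (int8x16_t) in this release, which vandq_u8 rejects. A minimal fallback sketch follows; it assumes only the plain vld1q_u8 intrinsic, and the helper name ggml_vld1q_u8_x2_compat is hypothetical, not the fix upstream llama.cpp actually shipped.

    /* Hypothetical fallback for 32-bit ARM toolchains whose <arm_neon.h>
     * lacks vld1q_u8_x2; illustrative sketch only, not the upstream patch. */
    #include <arm_neon.h>
    #include <stdint.h>

    static inline uint8x16x2_t ggml_vld1q_u8_x2_compat(const uint8_t *p) {
        uint8x16x2_t r;
        r.val[0] = vld1q_u8(p);        /* first 16 bytes  */
        r.val[1] = vld1q_u8(p + 16);   /* second 16 bytes */
        return r;
    }

If a helper along these lines (or a newer toolchain that provides the *_x2 load intrinsics) were used at the failing call sites, the implicit-declaration error and most of the follow-on type errors reported at ggml-quants.c:9668 and 10006 below would be expected to go away.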
2024-02-28T07:54:15,318 [1/22] cd /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp && /usr/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=12.2.0 -DCMAKE_C_COMPILER_ID=GNU -DCMAKE_VS_PLATFORM_NAME= -DCMAKE_C_COMPILER=/usr/bin/cc -P /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/common/../scripts/gen-build-info-cpp.cmake 2024-02-28T07:54:15,319 -- Found Git: /usr/bin/git (found version "2.39.2") 2024-02-28T07:54:15,533 [2/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o -c /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/common/build-info.cpp 2024-02-28T07:54:18,203 [3/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -c /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c 2024-02-28T07:54:18,204 FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o 2024-02-28T07:54:18,205 /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -c /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c 2024-02-28T07:54:18,206 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c: In function ‘ggml_vec_dot_iq2_s_q8_K’: 2024-02-28T07:54:18,207 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c:9668:32: error: implicit declaration of function ‘vld1q_u8_x2’; did you mean ‘vld1q_u32’? 
[-Werror=implicit-function-declaration] 2024-02-28T07:54:18,208 9668 | const uint8x16x2_t mask1 = vld1q_u8_x2(k_mask1); 2024-02-28T07:54:18,209 | ^~~~~~~~~~~ 2024-02-28T07:54:18,210 | vld1q_u32 2024-02-28T07:54:18,211 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c:9668:32: error: invalid initializer 2024-02-28T07:54:18,212 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c:9701:13: note: use ‘-flax-vector-conversions’ to permit conversions between vectors with differing element types or numbers of subparts 2024-02-28T07:54:18,213 9701 | vs.val[1] = vandq_u8(ggml_vqtbl1q_u8(vs.val[0], mask1.val[1]), mask2); 2024-02-28T07:54:18,214 | ^~ 2024-02-28T07:54:18,214 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c:9701:34: error: incompatible type for argument 1 of ‘vandq_u8’ 2024-02-28T07:54:18,215 9701 | vs.val[1] = vandq_u8(ggml_vqtbl1q_u8(vs.val[0], mask1.val[1]), mask2); 2024-02-28T07:54:18,216 | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:54:18,217 | | 2024-02-28T07:54:18,218 | int8x16_t 2024-02-28T07:54:18,219 In file included from /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-impl.h:54, 2024-02-28T07:54:18,231 from /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.h:3, 2024-02-28T07:54:18,241 from /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c:1: 2024-02-28T07:54:18,242 /usr/lib/gcc/arm-linux-gnueabihf/12/include/arm_neon.h:13874:22: note: expected ‘uint8x16_t’ but argument is of type ‘int8x16_t’ 2024-02-28T07:54:18,243 13874 | vandq_u8 (uint8x16_t __a, uint8x16_t __b) 2024-02-28T07:54:18,244 | ~~~~~~~~~~~^~~ 2024-02-28T07:54:18,245 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c:9702:34: error: incompatible type for argument 1 of ‘vandq_u8’ 2024-02-28T07:54:18,246 9702 | vs.val[0] = vandq_u8(ggml_vqtbl1q_u8(vs.val[0], mask1.val[0]), mask2); 2024-02-28T07:54:18,247 | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:54:18,248 | | 2024-02-28T07:54:18,249 | int8x16_t 2024-02-28T07:54:18,250 /usr/lib/gcc/arm-linux-gnueabihf/12/include/arm_neon.h:13874:22: note: expected ‘uint8x16_t’ but argument is of type ‘int8x16_t’ 2024-02-28T07:54:18,251 13874 | vandq_u8 (uint8x16_t __a, uint8x16_t __b) 2024-02-28T07:54:18,252 | ~~~~~~~~~~~^~~ 2024-02-28T07:54:18,253 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c:9710:34: error: incompatible type for argument 1 of ‘vandq_u8’ 2024-02-28T07:54:18,254 9710 | vs.val[1] = vandq_u8(ggml_vqtbl1q_u8(vs.val[0], mask1.val[1]), mask2); 2024-02-28T07:54:18,255 | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:54:18,256 | | 2024-02-28T07:54:18,257 | int8x16_t 2024-02-28T07:54:18,258 /usr/lib/gcc/arm-linux-gnueabihf/12/include/arm_neon.h:13874:22: note: expected ‘uint8x16_t’ but argument is of type ‘int8x16_t’ 2024-02-28T07:54:18,259 13874 | vandq_u8 (uint8x16_t __a, uint8x16_t __b) 2024-02-28T07:54:18,260 | ~~~~~~~~~~~^~~ 2024-02-28T07:54:18,261 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c:9711:34: error: incompatible type for argument 1 of ‘vandq_u8’ 2024-02-28T07:54:18,262 9711 | vs.val[0] = vandq_u8(ggml_vqtbl1q_u8(vs.val[0], mask1.val[0]), mask2); 
2024-02-28T07:54:18,263 | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:54:18,263 | | 2024-02-28T07:54:18,264 | int8x16_t 2024-02-28T07:54:18,265 /usr/lib/gcc/arm-linux-gnueabihf/12/include/arm_neon.h:13874:22: note: expected ‘uint8x16_t’ but argument is of type ‘int8x16_t’ 2024-02-28T07:54:18,266 13874 | vandq_u8 (uint8x16_t __a, uint8x16_t __b) 2024-02-28T07:54:18,267 | ~~~~~~~~~~~^~~ 2024-02-28T07:54:18,267 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c: In function ‘ggml_vec_dot_iq3_s_q8_K’: 2024-02-28T07:54:18,268 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c:10006:32: error: invalid initializer 2024-02-28T07:54:18,269 10006 | const uint8x16x2_t mask1 = vld1q_u8_x2(k_mask1); 2024-02-28T07:54:18,270 | ^~~~~~~~~~~ 2024-02-28T07:54:18,271 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c:10034:34: error: incompatible type for argument 1 of ‘vandq_u8’ 2024-02-28T07:54:18,272 10034 | vs.val[1] = vandq_u8(ggml_vqtbl1q_u8(vs.val[0], mask1.val[1]), mask2); 2024-02-28T07:54:18,273 | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:54:18,274 | | 2024-02-28T07:54:18,275 | int8x16_t 2024-02-28T07:54:18,276 /usr/lib/gcc/arm-linux-gnueabihf/12/include/arm_neon.h:13874:22: note: expected ‘uint8x16_t’ but argument is of type ‘int8x16_t’ 2024-02-28T07:54:18,291 13874 | vandq_u8 (uint8x16_t __a, uint8x16_t __b) 2024-02-28T07:54:18,292 | ~~~~~~~~~~~^~~ 2024-02-28T07:54:18,293 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c:10035:34: error: incompatible type for argument 1 of ‘vandq_u8’ 2024-02-28T07:54:18,294 10035 | vs.val[0] = vandq_u8(ggml_vqtbl1q_u8(vs.val[0], mask1.val[0]), mask2); 2024-02-28T07:54:18,295 | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:54:18,296 | | 2024-02-28T07:54:18,297 | int8x16_t 2024-02-28T07:54:18,297 /usr/lib/gcc/arm-linux-gnueabihf/12/include/arm_neon.h:13874:22: note: expected ‘uint8x16_t’ but argument is of type ‘int8x16_t’ 2024-02-28T07:54:18,298 13874 | vandq_u8 (uint8x16_t __a, uint8x16_t __b) 2024-02-28T07:54:18,299 | ~~~~~~~~~~~^~~ 2024-02-28T07:54:18,300 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c:10043:34: error: incompatible type for argument 1 of ‘vandq_u8’ 2024-02-28T07:54:18,301 10043 | vs.val[1] = vandq_u8(ggml_vqtbl1q_u8(vs.val[0], mask1.val[1]), mask2); 2024-02-28T07:54:18,302 | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:54:18,303 | | 2024-02-28T07:54:18,304 | int8x16_t 2024-02-28T07:54:18,305 /usr/lib/gcc/arm-linux-gnueabihf/12/include/arm_neon.h:13874:22: note: expected ‘uint8x16_t’ but argument is of type ‘int8x16_t’ 2024-02-28T07:54:18,306 13874 | vandq_u8 (uint8x16_t __a, uint8x16_t __b) 2024-02-28T07:54:18,307 | ~~~~~~~~~~~^~~ 2024-02-28T07:54:18,308 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-quants.c:10044:34: error: incompatible type for argument 1 of ‘vandq_u8’ 2024-02-28T07:54:18,309 10044 | vs.val[0] = vandq_u8(ggml_vqtbl1q_u8(vs.val[0], mask1.val[0]), mask2); 2024-02-28T07:54:18,310 | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:54:18,310 | | 2024-02-28T07:54:18,311 | int8x16_t 2024-02-28T07:54:18,312 /usr/lib/gcc/arm-linux-gnueabihf/12/include/arm_neon.h:13874:22: note: expected ‘uint8x16_t’ but argument is of type ‘int8x16_t’ 2024-02-28T07:54:18,313 13874 | 
vandq_u8 (uint8x16_t __a, uint8x16_t __b) 2024-02-28T07:54:18,314 | ~~~~~~~~~~~^~~ 2024-02-28T07:54:18,315 cc1: some warnings being treated as errors 2024-02-28T07:54:18,899 [4/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-alloc.c 2024-02-28T07:54:19,580 [5/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -c /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml-backend.c 2024-02-28T07:54:54,698 [6/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/common/. -I/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -c /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/common/common.cpp 2024-02-28T07:54:54,699 In file included from /usr/include/c++/12/vector:70, 2024-02-28T07:54:54,700 from /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/common/grammar-parser.h:14, 2024-02-28T07:54:54,701 from /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/common/sampling.h:5, 2024-02-28T07:54:54,702 from /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/common/common.h:7, 2024-02-28T07:54:54,702 from /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/common/common.cpp:1: 2024-02-28T07:54:54,703 /usr/include/c++/12/bits/vector.tcc: In member function ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {const llama_model_kv_override&}; _Tp = llama_model_kv_override; _Alloc = std::allocator]’: 2024-02-28T07:54:54,704 /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type ‘std::vector::iterator’ changed in GCC 7.1 2024-02-28T07:54:54,705 439 | vector<_Tp, _Alloc>:: 2024-02-28T07:54:54,705 | ^~~~~~~~~~~~~~~~~~~ 2024-02-28T07:54:54,706 /usr/include/c++/12/bits/vector.tcc: In member function ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {}; _Tp = llama_model_kv_override; _Alloc = std::allocator]’: 2024-02-28T07:54:54,707 /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type ‘std::vector::iterator’ changed in GCC 7.1 2024-02-28T07:54:54,707 In file included from /usr/include/c++/12/vector:64: 2024-02-28T07:54:54,708 In member function ‘void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = llama_model_kv_override; _Alloc = std::allocator]’, 2024-02-28T07:54:54,709 inlined from ‘bool gpt_params_parse_ex(int, char**, gpt_params&)’ at /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/common/common.cpp:869:42: 2024-02-28T07:54:54,709 /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator >’ changed in GCC 7.1 2024-02-28T07:54:54,710 1287 | _M_realloc_insert(end(), __x); 2024-02-28T07:54:54,711 | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ 2024-02-28T07:54:54,711 In member function ‘void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {}; _Tp = llama_model_kv_override; _Alloc = std::allocator]’, 2024-02-28T07:54:54,712 inlined from ‘bool gpt_params_parse_ex(int, char**, gpt_params&)’ at /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/common/common.cpp:914:41: 2024-02-28T07:54:54,713 /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator >’ changed in GCC 7.1 2024-02-28T07:54:54,714 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); 2024-02-28T07:54:54,715 | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:54:59,651 [7/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/ggml.c 2024-02-28T07:55:54,085 [8/22] /usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/. 
-O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp 2024-02-28T07:55:54,086 In file included from /usr/include/c++/12/vector:64, 2024-02-28T07:55:54,087 from /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.h:984, 2024-02-28T07:55:54,087 from /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:2: 2024-02-28T07:55:54,087 /usr/include/c++/12/bits/stl_vector.h: In function ‘std::vector<_Tp, _Alloc>::vector(std::initializer_list<_Tp>, const allocator_type&) [with _Tp = long long int; _Alloc = std::allocator]’: 2024-02-28T07:55:54,088 /usr/include/c++/12/bits/stl_vector.h:673:7: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,088 673 | vector(initializer_list __l, 2024-02-28T07:55:54,089 | ^~~~~~ 2024-02-28T07:55:54,089 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp: In function ‘bool llm_load_tensors(llama_model_loader&, llama_model&, int, llama_split_mode, int, const float*, bool, llama_progress_callback, void*)’: 2024-02-28T07:55:54,090 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3821:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,090 3821 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,090 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,091 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3825:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,091 3825 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,092 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,092 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3827:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,093 3827 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,094 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,102 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3858:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,102 3858 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,103 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,103 
/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3859:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,103 3859 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}); 2024-02-28T07:55:54,104 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,104 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3860:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,105 3860 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,106 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,106 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3876:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,107 3876 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,107 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,107 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3878:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,108 3878 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,108 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,109 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3879:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,109 3879 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,110 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,110 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3888:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,111 3888 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,111 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,111 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3890:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,112 3890 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,112 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,113 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3891:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,114 3891 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, 
n_embd_gqa}); 2024-02-28T07:55:54,114 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,115 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3892:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,115 3892 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}); 2024-02-28T07:55:54,116 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,116 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3893:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,117 3893 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,117 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,118 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3895:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,118 3895 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,118 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,119 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3897:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,119 3897 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,119 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,120 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3898:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,121 3898 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}); 2024-02-28T07:55:54,121 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,122 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3899:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,122 3899 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,123 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,123 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3904:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,124 3904 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,124 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,125 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3908:63: note: parameter passing for argument of type ‘std::initializer_list’ 
changed in GCC 7.1 2024-02-28T07:55:54,126 3908 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,126 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,127 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3909:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,128 3909 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-28T07:55:54,128 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,128 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3911:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,129 3911 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,129 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,130 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3913:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,130 3913 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); // needs to be on GPU 2024-02-28T07:55:54,131 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,131 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3926:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,132 3926 | layer.attn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}); 2024-02-28T07:55:54,132 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,133 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3929:67: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,133 3929 | layer.attn_norm_2 = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM_2, "weight", i), {n_embd}); 2024-02-28T07:55:54,134 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,134 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3930:67: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,135 3930 | layer.attn_norm_2_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM_2, "bias", i), {n_embd}); 2024-02-28T07:55:54,135 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,136 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3933:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,136 3933 | layer.wqkv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}); 2024-02-28T07:55:54,136 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
2024-02-28T07:55:54,137 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3937:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,138 3937 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,138 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,138 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3942:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,139 3942 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,139 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,140 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3943:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,140 3943 | model.pos_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_POS_EMBD, "weight"), {n_embd, hparams.n_ctx_train}); 2024-02-28T07:55:54,140 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,141 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3947:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,142 3947 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,142 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,142 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3948:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,143 3948 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-28T07:55:54,143 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,144 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3949:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,144 3949 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,145 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,145 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3958:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,146 3958 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,146 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,147 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3961:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,147 3961 | layer.wqkv = ml.create_tensor(ctx_split, 
tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}); 2024-02-28T07:55:54,148 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,148 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3964:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,149 3964 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,149 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,149 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3965:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,150 3965 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}); 2024-02-28T07:55:54,150 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,151 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3968:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,151 3968 | layer.ffn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd}); 2024-02-28T07:55:54,152 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,152 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3970:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,152 3970 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd}); 2024-02-28T07:55:54,153 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,153 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3974:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,154 3974 | layer.ffn_up_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}); 2024-02-28T07:55:54,154 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,155 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3979:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,155 3979 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,156 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,156 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3982:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,157 3982 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,158 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,158 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3983:64: note: parameter passing for argument of type 
‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,159 3983 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-28T07:55:54,159 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,159 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3984:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,160 3984 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,160 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,161 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3993:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,161 3993 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,162 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,162 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3994:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,162 3994 | layer.attn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}); 2024-02-28T07:55:54,163 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,163 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:3997:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,164 3997 | layer.bqkv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_QKV, "bias", i), {n_embd + 2*n_embd_gqa}); 2024-02-28T07:55:54,165 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,165 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4000:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,166 4000 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}); 2024-02-28T07:55:54,166 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,167 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4002:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,167 4002 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd}); 2024-02-28T07:55:54,168 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,169 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4003:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,169 4003 | layer.ffn_down_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}); 2024-02-28T07:55:54,169 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,170 
/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4005:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,170 4005 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,171 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,171 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4006:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,171 4006 | layer.ffn_up_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}); 2024-02-28T07:55:54,172 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,172 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4008:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,173 4008 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,173 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,174 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4009:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,174 4009 | layer.ffn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd}); 2024-02-28T07:55:54,174 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,175 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4012:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,175 4012 | layer.attn_q_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_Q_NORM, "bias", i), {64}); 2024-02-28T07:55:54,176 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,176 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4014:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,177 4014 | layer.attn_k_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_K_NORM, "weight", i), {64}); 2024-02-28T07:55:54,178 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,178 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4015:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,179 4015 | layer.attn_k_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_K_NORM, "bias", i), {64}); 2024-02-28T07:55:54,179 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,180 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4021:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,180 4021 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,180 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,181 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4022:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,181 4022 | model.type_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_TYPES, "weight"), {n_embd, n_vocab_type}); 2024-02-28T07:55:54,182 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,182 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4024:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,183 4024 | model.pos_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_POS_EMBD, "weight"), {n_embd, hparams.n_ctx_train}); 2024-02-28T07:55:54,183 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,184 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4027:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,184 4027 | model.tok_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,185 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,185 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4028:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,186 4028 | model.tok_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD_NORM, "bias"), {n_embd}); 2024-02-28T07:55:54,186 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,187 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4037:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,187 4037 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,188 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,188 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4038:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,189 4038 | layer.bq = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_Q, "bias", i), {n_embd}); 2024-02-28T07:55:54,190 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,190 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4040:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,190 4040 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}); 2024-02-28T07:55:54,191 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,191 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4041:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 
2024-02-28T07:55:54,191 4041 | layer.bk = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_K, "bias", i), {n_embd_gqa}); 2024-02-28T07:55:54,192 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,192 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4043:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,193 4043 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}); 2024-02-28T07:55:54,193 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,194 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4044:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,194 4044 | layer.bv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_V, "bias", i), {n_embd_gqa}); 2024-02-28T07:55:54,195 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,195 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4046:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,196 4046 | layer.wqkv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}); 2024-02-28T07:55:54,196 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,197 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4049:65: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,197 4049 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,198 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,198 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4051:65: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,199 4051 | layer.attn_out_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,200 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,200 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4052:65: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,200 4052 | layer.attn_out_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT_NORM, "bias", i), {n_embd}); 2024-02-28T07:55:54,201 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,201 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4054:65: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,201 4054 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,202 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,202 
/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4055:65: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,203 4055 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd}); 2024-02-28T07:55:54,203 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,204 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4058:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,204 4058 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}); 2024-02-28T07:55:54,205 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,205 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4059:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,206 4059 | layer.ffn_up_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}); 2024-02-28T07:55:54,206 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,207 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4061:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,207 4061 | layer.ffn_down_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}); 2024-02-28T07:55:54,208 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,208 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4063:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,209 4063 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,209 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,210 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4066:66: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,210 4066 | layer.layer_out_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_LAYER_OUT_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,211 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,211 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4067:66: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,211 4067 | layer.layer_out_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_LAYER_OUT_NORM, "bias", i), {n_embd}); 2024-02-28T07:55:54,212 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,212 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4072:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,212 4072 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,213 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,213 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4073:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,214 4073 | model.tok_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,214 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,216 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4074:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,217 4074 | model.tok_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD_NORM, "bias"), {n_embd}); 2024-02-28T07:55:54,217 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,218 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4078:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,219 4078 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,219 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,220 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4079:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,220 4079 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-28T07:55:54,221 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,222 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4080:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,223 4080 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,223 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,224 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4089:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,224 4089 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,225 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,226 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4090:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,226 4090 | layer.attn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}); 2024-02-28T07:55:54,227 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,227 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4093:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,228 
4093 | layer.bqkv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_QKV, "bias", i), {n_embd + 2*n_embd_gqa}); 2024-02-28T07:55:54,228 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,229 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4095:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,229 4095 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,230 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,231 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4096:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,231 4096 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}); 2024-02-28T07:55:54,232 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,232 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4099:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,233 4099 | layer.ffn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd}); 2024-02-28T07:55:54,233 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,234 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4101:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,234 4101 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd}); 2024-02-28T07:55:54,235 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,236 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4102:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,236 4102 | layer.ffn_down_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}); 2024-02-28T07:55:54,237 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,237 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4104:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,238 4104 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,238 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,239 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4105:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,240 4105 | layer.ffn_up_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}); 2024-02-28T07:55:54,240 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,241 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4110:54: note: parameter passing for argument 
of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,242 4110 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,242 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,243 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4114:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,243 4114 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,244 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,244 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4115:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,245 4115 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}, false); 2024-02-28T07:55:54,245 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,246 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4118:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,246 4118 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,247 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,247 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4129:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,248 4129 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,248 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,249 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4130:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,249 4130 | layer.attn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}, false); 2024-02-28T07:55:54,250 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,250 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4132:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,250 4132 | layer.wqkv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}); 2024-02-28T07:55:54,251 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,251 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4133:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,252 4133 | layer.bqkv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_QKV, "bias", i), {n_embd + 2*n_embd_gqa}, false); 2024-02-28T07:55:54,252 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,253 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4135:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,253 4135 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,254 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,254 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4136:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,255 4136 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}, false); 2024-02-28T07:55:54,255 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,256 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4138:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,256 4138 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,257 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,257 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4142:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,258 4142 | layer.ffn_down_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}, false); 2024-02-28T07:55:54,258 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,258 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4145:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,259 4145 | layer.ffn_up_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}, false); 2024-02-28T07:55:54,259 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,260 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4148:57: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,260 4148 | layer.ffn_act = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_ACT, "scales", i), {n_ff}, false); 2024-02-28T07:55:54,261 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,261 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4153:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,262 4153 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,263 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,264 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4157:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,264 4157 | 
model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-28T07:55:54,265 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,266 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4158:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,267 4158 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,267 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,268 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4159:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,269 4159 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,270 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,271 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4168:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,272 4168 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,272 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,273 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4173:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,274 4173 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}); 2024-02-28T07:55:54,274 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,275 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4177:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,276 4177 | layer.bq = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_Q, "bias", i), {n_embd}, false); 2024-02-28T07:55:54,277 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,277 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4178:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,278 4178 | layer.bk = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_K, "bias", i), {n_embd_gqa}, false); 2024-02-28T07:55:54,279 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,279 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4179:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,280 4179 | layer.bv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_V, "bias", i), {n_embd_gqa}, false); 2024-02-28T07:55:54,281 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,282 
/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4181:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,283 4181 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,284 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,285 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4182:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,286 4182 | layer.ffn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd}); 2024-02-28T07:55:54,287 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,287 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4184:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,288 4184 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,289 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,290 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4185:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,291 4185 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}); 2024-02-28T07:55:54,292 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,292 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4186:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,293 4186 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,294 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,294 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4191:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,295 4191 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,296 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,297 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4195:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,298 4195 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,299 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,300 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4196:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,301 4196 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 
2024-02-28T07:55:54,302 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,302 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4205:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,303 4205 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,304 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,305 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4207:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,305 4207 | layer.wqkv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd*3}); 2024-02-28T07:55:54,306 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,307 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4213:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,308 4213 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff/2}); 2024-02-28T07:55:54,308 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,309 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4214:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,309 4214 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff/2, n_embd}); 2024-02-28T07:55:54,310 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,311 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4220:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,311 4220 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,312 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,312 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4224:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,313 4224 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,314 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,315 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4225:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,316 4225 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,317 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,318 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4236:52: note: parameter passing for argument of type 
‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,319 4236 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,320 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,321 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4237:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,321 4237 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}); 2024-02-28T07:55:54,322 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,323 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4238:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,324 4238 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}); 2024-02-28T07:55:54,325 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,325 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4243:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,326 4243 | layer.bk = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_K, "bias", i), {n_embd_gqa}); 2024-02-28T07:55:54,327 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,327 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4244:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,328 4244 | layer.bv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_V, "bias", i), {n_embd_gqa}); 2024-02-28T07:55:54,328 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,329 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4249:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,330 4249 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}); 2024-02-28T07:55:54,330 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,331 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4250:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,332 4250 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,333 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,334 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4255:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,335 4255 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,335 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,336 
/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4259:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,336 4259 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,337 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,337 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4260:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,338 4260 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-28T07:55:54,338 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,339 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4261:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,339 4261 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,340 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,340 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4262:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,340 4262 | model.output_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT, "bias"), {n_vocab}); 2024-02-28T07:55:54,341 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,341 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4272:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,342 4272 | layer.attn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}); 2024-02-28T07:55:54,342 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,342 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4274:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,343 4274 | layer.wqkv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}, false); 2024-02-28T07:55:54,343 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,344 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4275:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,344 4275 | layer.bqkv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_QKV, "bias", i), {n_embd + 2*n_embd_gqa}, false); 2024-02-28T07:55:54,345 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,345 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4278:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,346 4278 | layer.wq = 
ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,346 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,347 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4279:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,347 4279 | layer.bq = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_Q, "bias", i), {n_embd}); 2024-02-28T07:55:54,348 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,349 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4281:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,349 4281 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}); 2024-02-28T07:55:54,350 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,350 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4282:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,350 4282 | layer.bk = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_K, "bias", i), {n_embd_gqa}); 2024-02-28T07:55:54,351 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,351 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4284:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,352 4284 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}); 2024-02-28T07:55:54,352 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,352 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4285:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,353 4285 | layer.bv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_V, "bias", i), {n_embd_gqa}); 2024-02-28T07:55:54,353 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,354 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4289:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,354 4289 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}); 2024-02-28T07:55:54,355 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,355 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4291:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,355 4291 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd}); 2024-02-28T07:55:54,356 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,356 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4292:60: note: parameter passing for argument of type ‘std::initializer_list’ 
changed in GCC 7.1 2024-02-28T07:55:54,357 4292 | layer.ffn_down_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}); 2024-02-28T07:55:54,357 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,358 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4294:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,359 4294 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,359 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,360 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4295:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,360 4295 | layer.ffn_up_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}); 2024-02-28T07:55:54,360 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,361 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4300:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,361 4300 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,362 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,362 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4304:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,362 4304 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,363 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,363 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4305:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,364 4305 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,364 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,365 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4314:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,366 4314 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,366 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,367 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4316:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,367 4316 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,368 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,368 
/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4317:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,369 4317 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}); 2024-02-28T07:55:54,372 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,373 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4318:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,373 4318 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}); 2024-02-28T07:55:54,374 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,374 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4319:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,375 4319 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,375 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,376 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4321:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,376 4321 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,376 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,377 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4322:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,377 4322 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}); 2024-02-28T07:55:54,378 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,378 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4323:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,379 4323 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,380 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,380 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4328:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,381 4328 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,381 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,382 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4329:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,382 4329 | model.pos_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_POS_EMBD, "weight"), 
{n_embd, hparams.n_ctx_train}); 2024-02-28T07:55:54,383 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,384 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4333:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,384 4333 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,384 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,385 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4334:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,385 4334 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-28T07:55:54,385 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,386 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4335:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,386 4335 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,387 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,387 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4344:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,388 4344 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,388 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,389 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4347:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,389 4347 | layer.wqkv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}); 2024-02-28T07:55:54,390 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,390 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4348:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,391 4348 | layer.bqkv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_QKV, "bias", i), {n_embd + 2*n_embd_gqa}); 2024-02-28T07:55:54,391 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,392 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4350:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,392 4350 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,393 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,393 
/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4351:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,394 4351 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}); 2024-02-28T07:55:54,394 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,394 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4353:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,395 4353 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,395 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,395 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4354:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,396 4354 | layer.ffn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd}); 2024-02-28T07:55:54,396 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,397 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4356:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,397 4356 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd}); 2024-02-28T07:55:54,398 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,398 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4360:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,399 4360 | layer.ffn_up_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}); 2024-02-28T07:55:54,400 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,400 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4365:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,401 4365 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,401 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,402 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4369:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,402 4369 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,403 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,403 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4370:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,404 4370 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-28T07:55:54,404 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,405 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4371:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,405 4371 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,405 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,406 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4380:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,406 4380 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,407 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,407 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4381:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,407 4381 | layer.attn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}); 2024-02-28T07:55:54,408 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,409 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4384:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,409 4384 | layer.bqkv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_QKV, "bias", i), {n_embd + 2*n_embd_gqa}); 2024-02-28T07:55:54,410 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,410 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4386:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,411 4386 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,411 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,412 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4387:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,412 4387 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}); 2024-02-28T07:55:54,413 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,413 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4389:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,414 4389 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,415 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,415 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4390:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,415 4390 | layer.ffn_norm_b = 
ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd}); 2024-02-28T07:55:54,416 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,416 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4393:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,416 4393 | layer.ffn_down_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}); 2024-02-28T07:55:54,417 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,417 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4395:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,418 4395 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,418 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,418 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4396:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,419 4396 | layer.ffn_up_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}); 2024-02-28T07:55:54,420 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,420 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4401:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,421 4401 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,421 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,422 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4403:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,422 4403 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,423 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,423 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4404:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,424 4404 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-02-28T07:55:54,424 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,425 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4405:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,426 4405 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,426 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,426 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4413:61: note: parameter passing for argument 
of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,427 4413 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,427 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,427 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4416:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,428 4416 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,428 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,429 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4419:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,429 4419 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,430 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,431 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4421:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,431 4421 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,432 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,432 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4422:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,433 4422 | layer.ffn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd}); 2024-02-28T07:55:54,433 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,434 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4424:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,434 4424 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,434 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,435 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4425:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,435 4425 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}); 2024-02-28T07:55:54,436 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,437 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4431:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,437 4431 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,437 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,438 
/tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4435:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,438 4435 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,438 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,439 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4436:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,439 4436 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,440 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,440 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4445:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,441 4445 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-02-28T07:55:54,441 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,442 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4447:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,442 4447 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,443 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,443 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4448:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,444 4448 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}); 2024-02-28T07:55:54,444 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,445 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4449:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,445 4449 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}); 2024-02-28T07:55:54,446 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,446 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4451:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,447 4451 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-02-28T07:55:54,447 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,447 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4452:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,448 4452 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}); 
2024-02-28T07:55:54,448 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,449 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4453:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,449 4453 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,450 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,450 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4454:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,451 4454 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}); 2024-02-28T07:55:54,451 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,452 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4460:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,452 4460 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-02-28T07:55:54,453 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,454 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4463:57: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,454 4463 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-02-28T07:55:54,455 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,455 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4464:57: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,455 4464 | model.output = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); // same as tok_embd, duplicated to allow offloading 2024-02-28T07:55:54,456 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,457 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4487:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,457 4487 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,458 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,458 /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617/vendor/llama.cpp/llama.cpp:4488:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-02-28T07:55:54,458 4488 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-02-28T07:55:54,459 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,459 In file included from /usr/include/c++/12/vector:70: 2024-02-28T07:55:54,459 /usr/include/c++/12/bits/vector.tcc: In member function ‘void 
std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {const double&}; _Tp = double; _Alloc = std::allocator]’: 2024-02-28T07:55:54,460 /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type ‘std::vector::iterator’ changed in GCC 7.1 2024-02-28T07:55:54,460 439 | vector<_Tp, _Alloc>:: 2024-02-28T07:55:54,461 | ^~~~~~~~~~~~~~~~~~~ 2024-02-28T07:55:54,461 In member function ‘void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = double; _Alloc = std::allocator]’, 2024-02-28T07:55:54,462 inlined from ‘std::back_insert_iterator<_Container>& std::back_insert_iterator<_Container>::operator=(const typename _Container::value_type&) [with _Container = std::vector]’ at /usr/include/c++/12/bits/stl_iterator.h:735:22, 2024-02-28T07:55:54,462 inlined from ‘_OutputIterator std::partial_sum(_InputIterator, _InputIterator, _OutputIterator) [with _InputIterator = __gnu_cxx::__normal_iterator >; _OutputIterator = back_insert_iterator >]’ at /usr/include/c++/12/bits/stl_numeric.h:270:17, 2024-02-28T07:55:54,463 inlined from ‘void std::discrete_distribution<_IntType>::param_type::_M_initialize() [with _IntType = int]’ at /usr/include/c++/12/bits/random.tcc:2679:23: 2024-02-28T07:55:54,463 /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator >’ changed in GCC 7.1 2024-02-28T07:55:54,464 1287 | _M_realloc_insert(end(), __x); 2024-02-28T07:55:54,464 | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ 2024-02-28T07:55:54,465 In member function ‘void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = double; _Alloc = std::allocator]’, 2024-02-28T07:55:54,465 inlined from ‘std::back_insert_iterator<_Container>& std::back_insert_iterator<_Container>::operator=(const typename _Container::value_type&) [with _Container = std::vector]’ at /usr/include/c++/12/bits/stl_iterator.h:735:22, 2024-02-28T07:55:54,466 inlined from ‘_OutputIterator std::partial_sum(_InputIterator, _InputIterator, _OutputIterator) [with _InputIterator = __gnu_cxx::__normal_iterator >; _OutputIterator = back_insert_iterator >]’ at /usr/include/c++/12/bits/stl_numeric.h:274:16, 2024-02-28T07:55:54,466 inlined from ‘void std::discrete_distribution<_IntType>::param_type::_M_initialize() [with _IntType = int]’ at /usr/include/c++/12/bits/random.tcc:2679:23: 2024-02-28T07:55:54,467 /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator >’ changed in GCC 7.1 2024-02-28T07:55:54,467 1287 | _M_realloc_insert(end(), __x); 2024-02-28T07:55:54,468 | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ 2024-02-28T07:55:54,468 ninja: build stopped: subcommand failed. 
2024-02-28T07:55:54,469 *** CMake build failed
2024-02-28T07:55:54,470 ERROR: Building wheel for llama-cpp-python (pyproject.toml) exited with 1
2024-02-28T07:55:54,484 full command: /usr/bin/python3 /usr/local/lib/python3.11/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /tmp/tmpgcqw2vop
2024-02-28T07:55:54,484 cwd: /tmp/pip-wheel-1e0qhhza/llama-cpp-python_5906e59816184d0d863e6ff77b057617
2024-02-28T07:55:54,485 Building wheel for llama-cpp-python (pyproject.toml): finished with status 'error'
2024-02-28T07:55:54,487 ERROR: Failed building wheel for llama-cpp-python
2024-02-28T07:55:54,490 Failed to build llama-cpp-python
2024-02-28T07:55:54,491 ERROR: Failed to build one or more wheels
2024-02-28T07:55:54,492 Exception information:
2024-02-28T07:55:54,492 Traceback (most recent call last):
2024-02-28T07:55:54,492   File "/usr/local/lib/python3.11/dist-packages/pip/_internal/cli/base_command.py", line 180, in exc_logging_wrapper
2024-02-28T07:55:54,492     status = run_func(*args)
2024-02-28T07:55:54,492              ^^^^^^^^^^^^^^^
2024-02-28T07:55:54,492   File "/usr/local/lib/python3.11/dist-packages/pip/_internal/cli/req_command.py", line 245, in wrapper
2024-02-28T07:55:54,492     return func(self, options, args)
2024-02-28T07:55:54,492            ^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-28T07:55:54,492   File "/usr/local/lib/python3.11/dist-packages/pip/_internal/commands/wheel.py", line 181, in run
2024-02-28T07:55:54,492     raise CommandError("Failed to build one or more wheels")
2024-02-28T07:55:54,492 pip._internal.exceptions.CommandError: Failed to build one or more wheels
2024-02-28T07:55:54,496 Removed build tracker: '/tmp/pip-build-tracker-tlab4r61'
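
A minimal sketch of how the failing step could be retried interactively on a comparable host to surface the actual compiler error (the GCC 7.1 parameter-passing notes above are informational ABI notes, not the fatal error). CMAKE_ARGS is the documented way llama-cpp-python forwards options to its CMake build; the specific flag and host below are illustrative assumptions, not taken from this log:

    # Assumed host: the same Debian / Python 3.11 environment as in this log.
    # Re-run the wheel build verbosely so the first real compile or link error is visible.
    CMAKE_ARGS="-DCMAKE_BUILD_TYPE=Release" \
        python3 -m pip wheel --verbose --no-cache-dir llama-cpp-python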