2023-12-13T21:37:05,797 Created temporary directory: /tmp/pip-build-tracker-mdfijr_4
2023-12-13T21:37:05,799 Initialized build tracking at /tmp/pip-build-tracker-mdfijr_4
2023-12-13T21:37:05,800 Created build tracker: /tmp/pip-build-tracker-mdfijr_4
2023-12-13T21:37:05,801 Entered build tracker: /tmp/pip-build-tracker-mdfijr_4
2023-12-13T21:37:05,802 Created temporary directory: /tmp/pip-wheel-h5vnrp6_
2023-12-13T21:37:05,806 Created temporary directory: /tmp/pip-ephem-wheel-cache-2e7fc67n
2023-12-13T21:37:05,829 Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
2023-12-13T21:37:05,833 2 location(s) to search for versions of llama-cpp-python:
2023-12-13T21:37:05,833 * https://pypi.org/simple/llama-cpp-python/
2023-12-13T21:37:05,833 * https://www.piwheels.org/simple/llama-cpp-python/
2023-12-13T21:37:05,833 Fetching project page and analyzing links: https://pypi.org/simple/llama-cpp-python/
2023-12-13T21:37:05,834 Getting page https://pypi.org/simple/llama-cpp-python/
2023-12-13T21:37:05,836 Found index url https://pypi.org/simple/
2023-12-13T21:37:05,975 Fetched page https://pypi.org/simple/llama-cpp-python/ as application/vnd.pypi.simple.v1+json
2023-12-13T21:37:05,993 Found link https://files.pythonhosted.org/packages/17/9c/813d8c83d81cb9ab42e5ee66657f8d3670bacdcd67df4aa7728e8dccbcfd/llama_cpp_python-0.1.1.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.1
2023-12-13T21:37:05,994 Found link https://files.pythonhosted.org/packages/42/22/07711b8fc85ed188182c923aa424254a451ee23a58d6c45a033e05e57f9a/llama_cpp_python-0.1.2.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.2
2023-12-13T21:37:05,994 Found link https://files.pythonhosted.org/packages/13/a2/a3a6e665905992e2ed2c79b7af2dce4a36f23c5147959f0f56d9bd72543c/llama_cpp_python-0.1.3.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.3
2023-12-13T21:37:05,995 Found link https://files.pythonhosted.org/packages/00/b6/3069b31e8cd0073685aa059e161e4b8dc3a4e3c77c4f8f433fa5ebc01655/llama_cpp_python-0.1.4.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.4
2023-12-13T21:37:05,996 Found link https://files.pythonhosted.org/packages/cd/32/e2380800128e64542f719c3d7287b2818e7234e268298b95273164cb0a3d/llama_cpp_python-0.1.5.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.5
2023-12-13T21:37:05,997 Found link https://files.pythonhosted.org/packages/9f/d3/9904d8616a5af9515b8852c441472c930b780db1879f13cae240bd4eb05f/llama_cpp_python-0.1.6.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.6
2023-12-13T21:37:05,997 Found link https://files.pythonhosted.org/packages/20/ff/c192e4469e14be86d3b11fdee4b56aca486033e4256174e2cf8425840e54/llama_cpp_python-0.1.7.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.7
2023-12-13T21:37:05,998 Found link https://files.pythonhosted.org/packages/7e/3b/b5f7e1ec5f43a4e980733c63bd4f05e1b7e14fd3b7aa72d9ca91f2415323/llama_cpp_python-0.1.8.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.8
2023-12-13T21:37:05,999 Found link https://files.pythonhosted.org/packages/9f/24/45a5a3beee1354f668d916eb1a2146835a0eda4dbad0da45252170e105a6/llama_cpp_python-0.1.9.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.9
2023-12-13T21:37:06,000 Found link https://files.pythonhosted.org/packages/71/ad/e3f373300efdfbcd67dc3909512a5b80dd6c5f2092102cbea66bad75ec4d/llama_cpp_python-0.1.10.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.10
2023-12-13T21:37:06,001 Found link https://files.pythonhosted.org/packages/bb/5e/c15d23176dd5783b1f62fd1b89c38fa655c9c1b524451e34a240fabffca8/llama_cpp_python-0.1.11.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.11
2023-12-13T21:37:06,002 Found link https://files.pythonhosted.org/packages/ad/61/91b0c968596bcca9b09c6e40a38852500d31ed5f8649e25cfab293dc9af0/llama_cpp_python-0.1.12.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.12
2023-12-13T21:37:06,003 Found link https://files.pythonhosted.org/packages/63/8f/1bb0a901a1be8c243e741a17ece1588615a1c5c4b9578ce80f12ce809d14/llama_cpp_python-0.1.13.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.13
2023-12-13T21:37:06,004 Found link https://files.pythonhosted.org/packages/25/bc/83364cb8c3fff7da82fadd10e0d1ec221278a5403ab4222dd0745bfa6709/llama_cpp_python-0.1.14.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.14
2023-12-13T21:37:06,005 Found link https://files.pythonhosted.org/packages/d8/6b/0b89436a26c2a7a5e1b57809d6f692c4f0afd87b19c31fe5425ddb19f54b/llama_cpp_python-0.1.15.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.15
2023-12-13T21:37:06,006 Found link https://files.pythonhosted.org/packages/7f/ef/aa0d2e4ef92173bf7e3539b5fa3338e7f9f88a66e7a90cb2f00052b7a9cb/llama_cpp_python-0.1.16.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.16
2023-12-13T21:37:06,007 Found link https://files.pythonhosted.org/packages/71/d6/bb0a4bb92abf16dee92a933b45ba16f0e6c0a1b63ee8877c678a54c373a8/llama_cpp_python-0.1.17.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.17
2023-12-13T21:37:06,007 Found link https://files.pythonhosted.org/packages/c2/08/7c12856cbe4523e518e280914674f4b65f5f62076408a7984b69d9771494/llama_cpp_python-0.1.18.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.18
2023-12-13T21:37:06,008 Found link https://files.pythonhosted.org/packages/63/48/977cd0ffdbfb9446e758c8c69aa49025a7477058d42bd30bef67f42c556c/llama_cpp_python-0.1.19.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.19
2023-12-13T21:37:06,009 Found link https://files.pythonhosted.org/packages/dc/2e/730cc405e0227ce6f49dd2bab4d6ce69963cb65bc3452fd33a552c9b8630/llama_cpp_python-0.1.20.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.20
2023-12-13T21:37:06,010 Found link https://files.pythonhosted.org/packages/52/1a/d122abc9571e09e17ad8909d2f8710ea0abe26ced1287ae82828fc80aaa3/llama_cpp_python-0.1.21.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.21
2023-12-13T21:37:06,011 Found link https://files.pythonhosted.org/packages/cf/94/4c35d7e3011ce86f063e3c754afd71f3a6f1f2a0ec9616deb55e8f3743a1/llama_cpp_python-0.1.22.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.22
2023-12-13T21:37:06,012 Found link https://files.pythonhosted.org/packages/03/6e/3e0768c396be6807b9e835c223ce37385d574eaf9e4d0ac80116325f6775/llama_cpp_python-0.1.23.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.23
2023-12-13T21:37:06,013 Found link https://files.pythonhosted.org/packages/bc/8b/618c42fdfa078a3cec9ed871b9c1bb6cca65b66e4e3ce0bf690f8109eaa1/llama_cpp_python-0.1.24.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.24
2023-12-13T21:37:06,014 Found link https://files.pythonhosted.org/packages/6c/64/bd9d98588aa8b6c49c0cfa1d0b4ef4ec5a1a05e4d8d67c1aed3587ae2e1a/llama_cpp_python-0.1.25.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.25
2023-12-13T21:37:06,015 Found link https://files.pythonhosted.org/packages/c1/cf/c81b3ba5340398820cc12c247e33f3f1ee15c4043794596968dc31ebac9c/llama_cpp_python-0.1.26.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.26
2023-12-13T21:37:06,016 Found link https://files.pythonhosted.org/packages/fa/b8/0a6fafae31b2c40997c282cd9220743c419dd8b372f09c57e551792bb899/llama_cpp_python-0.1.27.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.27
2023-12-13T21:37:06,017 Found link https://files.pythonhosted.org/packages/fb/6a/0c7421119d6e536ee1ca02ad5555dbbda7a38189333b0ac67f582cd5a84f/llama_cpp_python-0.1.28.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.28
2023-12-13T21:37:06,018 Found link https://files.pythonhosted.org/packages/fa/e3/3a12c770007f9a3c5903f7e2904aff4af5fa7d36cb06843c65cfaadccdd2/llama_cpp_python-0.1.29.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.29
2023-12-13T21:37:06,019 Found link https://files.pythonhosted.org/packages/e5/8e/b8dfcb10fdb1b2556a688cb23fd3d1b7b60c2b24ddc1cb9fc61a915c94d0/llama_cpp_python-0.1.30.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.30
2023-12-13T21:37:06,020 Found link https://files.pythonhosted.org/packages/c9/46/e37f0120bf5996b644c373c8fea9d2bf31ceb30e18724f2ae0876cb25b96/llama_cpp_python-0.1.31.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.31
2023-12-13T21:37:06,021 Found link https://files.pythonhosted.org/packages/39/f2/9d9c98ccb9ffe2ca7c9aeef235d5e45a4694f3148dfc9559e672c346f6ea/llama_cpp_python-0.1.32.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.32
2023-12-13T21:37:06,022 Found link https://files.pythonhosted.org/packages/70/b3/a1497e783b921cc8cd0d2f7fabe9d0b5c2bf95ab9fd56503d282862ce720/llama_cpp_python-0.1.33.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.33
2023-12-13T21:37:06,023 Found link https://files.pythonhosted.org/packages/b3/f0/82690e424b3fdb0d1738f312095a7a88cbe06cb910be9c5f5d4c7e3bdde8/llama_cpp_python-0.1.34.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.34
2023-12-13T21:37:06,024 Found link https://files.pythonhosted.org/packages/e9/47/013240af1272400ad49422f8ebfc47476a4d82e3375dd05dbd1440da3c50/llama_cpp_python-0.1.35.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.35
2023-12-13T21:37:06,025 Found link https://files.pythonhosted.org/packages/1b/ea/3f2aff10fd7195c6bc8c52375d9ff027a551151569c50e0d47581b14b7c1/llama_cpp_python-0.1.36.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.36
2023-12-13T21:37:06,026 Found link https://files.pythonhosted.org/packages/5d/10/e037dc290ed7435dd6f5fa5dcce2453f1cf145b84f1e8e40d0a63ac62aa2/llama_cpp_python-0.1.37.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.37
2023-12-13T21:37:06,027 Found link https://files.pythonhosted.org/packages/e6/2a/d898551013b9f0863b8134dbcb5863a306f5d9c2ad4a394c68a2988a77a0/llama_cpp_python-0.1.38.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.38
2023-12-13T21:37:06,028 Found link https://files.pythonhosted.org/packages/5a/41/955ac2e592949ca95a29efc5f544afcbc9ca3fc5484cb0272837d98c6b5a/llama_cpp_python-0.1.39.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.39
2023-12-13T21:37:06,029 Found link https://files.pythonhosted.org/packages/fc/2c/62c5ce16f88348f928320565cf6c0dfe8220a03615bff14e47e4f3b4e439/llama_cpp_python-0.1.40.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.40
2023-12-13T21:37:06,030 Found link https://files.pythonhosted.org/packages/d1/fe/852d447828bdcdfe1c8aa88061517b5de9e5c12389dd852076d5c913936a/llama_cpp_python-0.1.41.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.41
2023-12-13T21:37:06,031 Found link https://files.pythonhosted.org/packages/8d/bb/48129f3696fcc125fac1c91a5a6df5ab472e561d74ed5818e6fca748a432/llama_cpp_python-0.1.42.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.42
2023-12-13T21:37:06,032 Found link https://files.pythonhosted.org/packages/eb/43/ac841dc1a3f5f618e4546ce69fe7da0d976cb141c92b8d1f735f2baf0b85/llama_cpp_python-0.1.43.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.43
2023-12-13T21:37:06,033 Found link https://files.pythonhosted.org/packages/29/69/b73ae145d6f40683656f537b8526ca27e8348c7ff9af9c014a6a723fda5f/llama_cpp_python-0.1.44.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.44
2023-12-13T21:37:06,034 Found link https://files.pythonhosted.org/packages/62/b7/299b9d537037a95d4433498c73c1a8024de230a26d0c94b3e889364038d4/llama_cpp_python-0.1.45.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.45
2023-12-13T21:37:06,035 Found link https://files.pythonhosted.org/packages/c2/12/450986c9506525096cc77fcb6584ee02ec7d0017df0d34e6c79b9dba5a58/llama_cpp_python-0.1.46.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.46
2023-12-13T21:37:06,036 Found link https://files.pythonhosted.org/packages/28/95/11fcced0778cb9b82a81cd61c93760a379527ef13d90a66254fdc2e982df/llama_cpp_python-0.1.47.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.47
2023-12-13T21:37:06,036 Found link https://files.pythonhosted.org/packages/35/04/63f43ff24bd8948abbe2d7c9c3e3d235c0e7501ec8b1e72d01676051f75d/llama_cpp_python-0.1.48.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.48
2023-12-13T21:37:06,037 Found link https://files.pythonhosted.org/packages/1b/60/be610e7e95eb53e949ac74024b30d5fa763244928b07a16815d16643b7ab/llama_cpp_python-0.1.49.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.49
2023-12-13T21:37:06,038 Found link https://files.pythonhosted.org/packages/82/2c/9614ef76422168fde5326095559f271a22b1926185add8ae739901e113b9/llama_cpp_python-0.1.50.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.50
2023-12-13T21:37:06,039 Found link https://files.pythonhosted.org/packages/f9/65/78748102cca92fb148e111c41827433ecc2cb79eed9de0a72a4d7a4361c0/llama_cpp_python-0.1.51.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.51
2023-12-13T21:37:06,040 Found link https://files.pythonhosted.org/packages/87/cb/21c00f6f5b3a680671cb9c7e7ec5e07a6c03df70e28cd54f6197744c1f12/llama_cpp_python-0.1.52.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.52
2023-12-13T21:37:06,041 Found link https://files.pythonhosted.org/packages/d6/8d/d1700e37bd9b8965154e12008620e3bd3ed7ed585ad86650294074577629/llama_cpp_python-0.1.53.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.53
2023-12-13T21:37:06,042 Found link https://files.pythonhosted.org/packages/24/a7/e2904574d326e24338aab2e5fd618f007ef8b51c2a29618791f9c24269e2/llama_cpp_python-0.1.54.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.54
2023-12-13T21:37:06,043 Found link https://files.pythonhosted.org/packages/b2/9b/15a40971444775d7aa5aee934991fa97eee285ae3a77c98c70c382f2ed60/llama_cpp_python-0.1.55.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.55
2023-12-13T21:37:06,044 Found link https://files.pythonhosted.org/packages/2e/d7/36eccf10a611e2f3040cec775b9734ea51cf9938b2d911e30cbf71dd321b/llama_cpp_python-0.1.56.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.56
2023-12-13T21:37:06,045 Found link https://files.pythonhosted.org/packages/4d/e5/b337c9e7330695eb5efa2329d25b2d964fe10364429698c89140729ebaaf/llama_cpp_python-0.1.57.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.57
2023-12-13T21:37:06,046 Found link https://files.pythonhosted.org/packages/91/0f/8156d3f1b6bbbea68f28df5e325a2863ed736362b0f93f7936acba424e70/llama_cpp_python-0.1.59.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.59
2023-12-13T21:37:06,047 Found link https://files.pythonhosted.org/packages/e9/18/9531e94f7a4cd402cf200a9e6257fc08d162b8a8d57adf6f4049f60ba05b/llama_cpp_python-0.1.61.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.61
2023-12-13T21:37:06,049 Found link https://files.pythonhosted.org/packages/cc/ed/fe9bbe6c4f2156fc5e887d9e669872bc1722f80a2932a78a8166d7a82877/llama_cpp_python-0.1.62.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.62
2023-12-13T21:37:06,050 Found link https://files.pythonhosted.org/packages/a8/01/7e39377ad0d20d2379b01b7019aad9b3595ea21ced1705ccc49c78936088/llama_cpp_python-0.1.63.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.63
2023-12-13T21:37:06,051 Found link https://files.pythonhosted.org/packages/ad/c1/4083e90a0b31e1abb72d3f00f8d1403bdc9384301e1e370d0915f73519f5/llama_cpp_python-0.1.64.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.64
2023-12-13T21:37:06,052 Found link https://files.pythonhosted.org/packages/84/7d/a659b65132db354147654bf2b6b2c8820b25aa10833b4849ec6b66e69117/llama_cpp_python-0.1.65.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.65
2023-12-13T21:37:06,052 Found link https://files.pythonhosted.org/packages/59/43/6dfbaed1f70ef013279b03e436b8f58f9f2ab0835e04034927fc31bb8fc9/llama_cpp_python-0.1.66.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.66
2023-12-13T21:37:06,053 Found link https://files.pythonhosted.org/packages/96/79/3dbc78c1a6e14d088673d21549a736aa27ca69ef1734541a07c36f349cf7/llama_cpp_python-0.1.67.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.67
2023-12-13T21:37:06,054 Found link https://files.pythonhosted.org/packages/87/0a/f99cdd3befe25e414f9a758fb89bf70ca5278d68430af140391fc262bb55/llama_cpp_python-0.1.68.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.68
2023-12-13T21:37:06,055 Found link https://files.pythonhosted.org/packages/e6/a2/86200ff91d374311fbb704079d95927edacfc47592ae34c3c48a47863eea/llama_cpp_python-0.1.69.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.69
2023-12-13T21:37:06,056 Found link https://files.pythonhosted.org/packages/78/60/5cfb3842ef25db4ee1555dc2a70b99c569ad27c0438e7d9704c1672828b8/llama_cpp_python-0.1.70.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.70
2023-12-13T21:37:06,057 Found link https://files.pythonhosted.org/packages/4b/d1/24602670353e3f08f07c9bf36dca5ef5466ac3c0d27b5d5be0685e8032a7/llama_cpp_python-0.1.71.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.71
2023-12-13T21:37:06,058 Found link https://files.pythonhosted.org/packages/7f/59/b17486fa68bd3bce14fad89e049ea2700cf9ca36e7710d9380e2facbe182/llama_cpp_python-0.1.72.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.72
2023-12-13T21:37:06,059 Found link https://files.pythonhosted.org/packages/c5/c5/3bcee8d4fa2a3faef625dd1223e945ab15aa7d2f180158f30762eaa597b1/llama_cpp_python-0.1.73.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.73
2023-12-13T21:37:06,061 Found link https://files.pythonhosted.org/packages/73/09/99e6bf5d56e96a15a67628b15b705afbddf27279e6738018c4d7866d05c7/llama_cpp_python-0.1.74.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.74
2023-12-13T21:37:06,062 Found link https://files.pythonhosted.org/packages/b3/61/85c4defcdd3157004611feff6c95e8b4776d8671ca754ff2ed91fbc85154/llama_cpp_python-0.1.76.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.76
2023-12-13T21:37:06,063 Found link https://files.pythonhosted.org/packages/28/57/6db0db4582e31ced78487c6f28a4ee127fe38a22a85c573c39c7e5a03e2f/llama_cpp_python-0.1.77.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.77
2023-12-13T21:37:06,064 Found link https://files.pythonhosted.org/packages/dd/98/3d2382ac0b462b175519de360c57d514fbe5d33a5e67e42e82dc03bfb0f9/llama_cpp_python-0.1.78.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.78
2023-12-13T21:37:06,065 Found link https://files.pythonhosted.org/packages/f2/85/39c90a6b2306fbf91fc9dd2346bb4599c57e5c29aec15981fe5d662cef34/llama_cpp_python-0.1.79.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.79
2023-12-13T21:37:06,066 Found link https://files.pythonhosted.org/packages/af/c7/e3cee337dc44024bece8faf7683e40d015bae55b0dfaddd1a97ab4d1b432/llama_cpp_python-0.1.80.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.80
2023-12-13T21:37:06,067 Found link https://files.pythonhosted.org/packages/ae/92/c10ee59095bc1336edbecc8f6eea98d9d2f4df1d944b9df9b4484ea268ae/llama_cpp_python-0.1.81.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.81
2023-12-13T21:37:06,068 Found link https://files.pythonhosted.org/packages/81/b5/b63dbe0b799b9063208543a84b0e99b622f8a8d19de9564fc1d2877e1c9e/llama_cpp_python-0.1.82.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.82
2023-12-13T21:37:06,069 Found link https://files.pythonhosted.org/packages/6e/c7/651fa47b77d2189a46b00caa44627d17476bf41bcbeb0b72906295d6de79/llama_cpp_python-0.1.83.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.83
2023-12-13T21:37:06,070 Found link https://files.pythonhosted.org/packages/39/f2/a64d37bdaecb2ad66cfc2faab95201acf66b537affbd042656b27dc135f4/llama_cpp_python-0.1.84.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.84
2023-12-13T21:37:06,070 Found link https://files.pythonhosted.org/packages/ed/f2/2fb3b4c3886de5d1bcfbd258932159e374d1d9a0d52d6850805e26cc9fc2/llama_cpp_python-0.1.85.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.85
2023-12-13T21:37:06,071 Found link https://files.pythonhosted.org/packages/5b/a6/a49b40d4c0ac9aa703bf11e5783d38beb3924a6ba5165a393518646894c9/llama_cpp_python-0.2.0.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.0
2023-12-13T21:37:06,072 Found link https://files.pythonhosted.org/packages/e4/3a/7c65dbed3913086ec0a84549acdd4002ef4e1ef9fbb1d31596a4c1fd64a3/llama_cpp_python-0.2.1.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.1
2023-12-13T21:37:06,073 Found link https://files.pythonhosted.org/packages/d0/28/ef9e91c4ed9e96a2a0bcd6a8327f2d039745b59946eccc6ccb1a9ee2dedf/llama_cpp_python-0.2.2.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.2
2023-12-13T21:37:06,074 Found link https://files.pythonhosted.org/packages/99/e6/19d9c978dc634d91b05416c8fc502171af6b27a20683669048afa5738b74/llama_cpp_python-0.2.3.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.3
2023-12-13T21:37:06,075 Found link https://files.pythonhosted.org/packages/7b/26/be5c224560ccbe64592afbdbe0710ae5b0a8413e1416cc8c2c0b093b713b/llama_cpp_python-0.2.4.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.4
2023-12-13T21:37:06,076 Found link https://files.pythonhosted.org/packages/04/9d/1f8fe06199b5fda5a691f23ef5622b32d5fe717da748f4fc2c9cbde60223/llama_cpp_python-0.2.5.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.5
2023-12-13T21:37:06,077 Found link https://files.pythonhosted.org/packages/ff/ca/8c45e45abb21069f6274efe3f1cf0aca29a1fd089fec6acf924ee4a67c46/llama_cpp_python-0.2.6.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.6
2023-12-13T21:37:06,078 Found link https://files.pythonhosted.org/packages/b1/78/bd5e6653102ea16ce53a044cec606f257811da99c9c2a760af6a93cdfef3/llama_cpp_python-0.2.7.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.7
2023-12-13T21:37:06,079 Found link https://files.pythonhosted.org/packages/6d/60/edbd982673a71c6c27fa6818914ad61c6171d165de4e777d489539f1d959/llama_cpp_python-0.2.8.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.8
2023-12-13T21:37:06,081 Found link https://files.pythonhosted.org/packages/98/2e/357d936ff7418591c56a27b9472e2b3581bd9eeb90c4221580fae5e00588/llama_cpp_python-0.2.9.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.9
2023-12-13T21:37:06,082 Found link https://files.pythonhosted.org/packages/d4/a2/ff96c80f91d7d534a6b65517247c09680b1bbf064d6388feda9aac3201dd/llama_cpp_python-0.2.10.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.10
2023-12-13T21:37:06,082 Found link https://files.pythonhosted.org/packages/5b/b9/1ea446f1dcccb13313ea1e651c73bd5cc4db2aabf6cae1894064bddf1fc4/llama_cpp_python-0.2.11.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.11
2023-12-13T21:37:06,083 Found link https://files.pythonhosted.org/packages/11/35/0185e28cfcdb59ab17e09a6cc6e19c7271db236ee1c9d41143a082b463b7/llama_cpp_python-0.2.12.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.12
2023-12-13T21:37:06,084 Found link https://files.pythonhosted.org/packages/da/58/55a26595009d76237273b340d718e04d9a33c5afd440e45552f45a16b1d9/llama_cpp_python-0.2.13.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.13
2023-12-13T21:37:06,085 Found link https://files.pythonhosted.org/packages/82/2c/e742d611024256b5540380e7a62cd1fdc3cc1b47f5d2b86610f545804acd/llama_cpp_python-0.2.14.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.14
2023-12-13T21:37:06,086 Found link https://files.pythonhosted.org/packages/0c/e9/0d48a445430bed484791f76a4ab1d7950e57468127a3ee6a6ec494f46ae5/llama_cpp_python-0.2.15.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.15
2023-12-13T21:37:06,087 Found link https://files.pythonhosted.org/packages/a8/3e/b0bd26d0d0d0dd9187a6e4e46c2744c1d7d52cc2834b35db61776af00219/llama_cpp_python-0.2.16.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.16
2023-12-13T21:37:06,088 Found link https://files.pythonhosted.org/packages/d1/2c/e75e2e5b08b805d23066f1c1f8dbb1777a5bd3b43f057d16d4b2634d9ae1/llama_cpp_python-0.2.17.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.17
2023-12-13T21:37:06,089 Found link https://files.pythonhosted.org/packages/1b/be/3ce85cdf2f3b7c035ca52e0158b98d244d4ce40a51908b22e0b45c3ef75f/llama_cpp_python-0.2.18.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.18
2023-12-13T21:37:06,090 Found link https://files.pythonhosted.org/packages/9d/1a/f74ce61893791530a9af61fe8925bd569d8fb087545dc1973d617c03ce11/llama_cpp_python-0.2.19.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.19
2023-12-13T21:37:06,091 Found link https://files.pythonhosted.org/packages/f0/6a/3e161b68097fe2f9901e01dc7ec2afb4753699495004a37d2abdc3b1fd07/llama_cpp_python-0.2.20.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.20
2023-12-13T21:37:06,092 Found link https://files.pythonhosted.org/packages/15/7a/49906adb90113f628c1f07dc746ca0978b8aa99a8f7325a8d961ce2a1919/llama_cpp_python-0.2.22.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.22
2023-12-13T21:37:06,093 Fetching project page and analyzing links: https://www.piwheels.org/simple/llama-cpp-python/
2023-12-13T21:37:06,093 Getting page https://www.piwheels.org/simple/llama-cpp-python/
2023-12-13T21:37:06,095 Found index url https://www.piwheels.org/simple/
2023-12-13T21:37:06,251 Fetched page https://www.piwheels.org/simple/llama-cpp-python/ as text/html
2023-12-13T21:37:06,271 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.20-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=da14ce17a2476a706a8e8b7489a303536550d7c8cdff9db42cb4b56985c7688f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8)
2023-12-13T21:37:06,272 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.20-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=5e05d20b69f7e652531141ef60200d9351118692a28d8b87a8a8a7d527928e9a (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8)
2023-12-13T21:37:06,272 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.19-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=c1833281926198d9276c3c08ac7cb0f49630c164ce8f29bab9c41e00d55e721f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8)
2023-12-13T21:37:06,273 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.19-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=dd16bdc23237ef0e4cc1c9c4c29f6624c9b510052a3dfdaba483957289ac48d5 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8)
2023-12-13T21:37:06,273 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.18-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=7f066f10c0560b76776560045941383e9b8627d7696362b387fd4e652db00dad (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8)
2023-12-13T21:37:06,274 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.18-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=885911c08b103762c507be6075b91de5ecb5b5422f913d7b9f844dcf3ab9b6ae (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8)
2023-12-13T21:37:06,274 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp37-cp37m-linux_armv6l.whl#sha256=c46f12906971196ab3fa8250c23e5ae1f72581c00d910fadf491a710a97cb3d7 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,275 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp37-cp37m-linux_armv7l.whl#sha256=c46f12906971196ab3fa8250c23e5ae1f72581c00d910fadf491a710a97cb3d7 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,275 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp39-cp39-linux_armv6l.whl#sha256=888f3796690ccb21c9fac07b1ff83afa7b56fedfaa70ce1568572f1b7fdb3f27 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,276 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp39-cp39-linux_armv7l.whl#sha256=888f3796690ccb21c9fac07b1ff83afa7b56fedfaa70ce1568572f1b7fdb3f27 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,277 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp311-cp311-linux_armv6l.whl#sha256=caf38ff85ab251e84b4f951438454931514bde01dc36643d034d340ed14736d9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,277 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp311-cp311-linux_armv7l.whl#sha256=caf38ff85ab251e84b4f951438454931514bde01dc36643d034d340ed14736d9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,278 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.56-cp37-cp37m-linux_armv6l.whl#sha256=6ca6e31293dbf909df09e8c0ff119a6706c3b279bbf05716bdd04e99b6ff1665 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,279 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.56-cp37-cp37m-linux_armv7l.whl#sha256=6ca6e31293dbf909df09e8c0ff119a6706c3b279bbf05716bdd04e99b6ff1665 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,279 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.56-cp39-cp39-linux_armv6l.whl#sha256=2e5b18a3b1b32ea7c1ec0205c6d65ab42b07e29da5bea6c6fc8d17cdd9ee22bd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,280 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.56-cp39-cp39-linux_armv7l.whl#sha256=2e5b18a3b1b32ea7c1ec0205c6d65ab42b07e29da5bea6c6fc8d17cdd9ee22bd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,280 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp39-cp39-linux_armv6l.whl#sha256=d9a4ec585cfc04a6b43e815fefdf6c493a08b569cace3fd7c9bbe4d2ebd97fc5 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,281 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp39-cp39-linux_armv7l.whl#sha256=d9a4ec585cfc04a6b43e815fefdf6c493a08b569cace3fd7c9bbe4d2ebd97fc5 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,281 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp37-cp37m-linux_armv6l.whl#sha256=edb85834fe2145fc906e933a5643471bdebf3f3e376675ab3e914785fd1ec21d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,282 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp37-cp37m-linux_armv7l.whl#sha256=edb85834fe2145fc906e933a5643471bdebf3f3e376675ab3e914785fd1ec21d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,283 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp311-cp311-linux_armv6l.whl#sha256=6abbec187e6c40b192040ba4dee145ae69de4aaa65c3a350fe0cea85bf6aa197 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,283 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp311-cp311-linux_armv7l.whl#sha256=6abbec187e6c40b192040ba4dee145ae69de4aaa65c3a350fe0cea85bf6aa197 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,284 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp37-cp37m-linux_armv6l.whl#sha256=221d6012bf80f402d83593047359ddb7c767e83147d2c8445d24e58466b050bc (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,284 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp37-cp37m-linux_armv7l.whl#sha256=221d6012bf80f402d83593047359ddb7c767e83147d2c8445d24e58466b050bc (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,285 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp39-cp39-linux_armv6l.whl#sha256=064c3e8c4f3dd76aef301720241909048b2b4da4b0d1564b0436693e6efd1ddd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,286 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp39-cp39-linux_armv7l.whl#sha256=064c3e8c4f3dd76aef301720241909048b2b4da4b0d1564b0436693e6efd1ddd (from
https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,287 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp311-cp311-linux_armv6l.whl#sha256=2552020ab6570979cc92527cce3acf131f181b466944034d0ae4bc4674934989 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,287 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp311-cp311-linux_armv7l.whl#sha256=2552020ab6570979cc92527cce3acf131f181b466944034d0ae4bc4674934989 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,287 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp37-cp37m-linux_armv6l.whl#sha256=b177a40248c14829c96942ccdc570d96cf86f94d2ccd1fba2440ca7f496432b1 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,288 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp37-cp37m-linux_armv7l.whl#sha256=b177a40248c14829c96942ccdc570d96cf86f94d2ccd1fba2440ca7f496432b1 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,289 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp39-cp39-linux_armv6l.whl#sha256=fb5ba2f1e57a03c0d0e587577ab280d1a4a4fed295f1fcd4e19a3313a2ef07ac (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,289 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp39-cp39-linux_armv7l.whl#sha256=fb5ba2f1e57a03c0d0e587577ab280d1a4a4fed295f1fcd4e19a3313a2ef07ac (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,290 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp311-cp311-linux_armv6l.whl#sha256=5afe359d7635ee4081bf60ef6cdbc35860b635d75699f7185b4d637c55ac2572 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,291 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp311-cp311-linux_armv7l.whl#sha256=5afe359d7635ee4081bf60ef6cdbc35860b635d75699f7185b4d637c55ac2572 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,291 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp37-cp37m-linux_armv6l.whl#sha256=bf3bc680532ad36080ca0e375cdb349c91ba90a6880c0c9090b83bb41463aacc (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,292 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp37-cp37m-linux_armv7l.whl#sha256=bf3bc680532ad36080ca0e375cdb349c91ba90a6880c0c9090b83bb41463aacc (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,292 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp39-cp39-linux_armv6l.whl#sha256=2d8f9447a21a804a90f8adcee77052587ab9ace32dbe36eca23c72cb2ce20fac (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,293 Skipping link: No binaries permitted 
for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp39-cp39-linux_armv7l.whl#sha256=2d8f9447a21a804a90f8adcee77052587ab9ace32dbe36eca23c72cb2ce20fac (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,294 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp311-cp311-linux_armv6l.whl#sha256=fefadcad700a08bdc860fb3a5f45f54d635e130484bbadf9015da2268f57cb44 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,294 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp311-cp311-linux_armv7l.whl#sha256=fefadcad700a08bdc860fb3a5f45f54d635e130484bbadf9015da2268f57cb44 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,295 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp37-cp37m-linux_armv6l.whl#sha256=23d1e81835a4f9d2cd07c25dfe46adb3541bc7e7104c92b9e4ce40d8042f40e0 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,295 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp37-cp37m-linux_armv7l.whl#sha256=23d1e81835a4f9d2cd07c25dfe46adb3541bc7e7104c92b9e4ce40d8042f40e0 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,296 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp39-cp39-linux_armv6l.whl#sha256=b7620dc9874978dd791e463c32bcd526f5eb3eb53b8b4221b9eaec21eabd7958 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,296 Skipping link: 
No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp39-cp39-linux_armv7l.whl#sha256=b7620dc9874978dd791e463c32bcd526f5eb3eb53b8b4221b9eaec21eabd7958 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,297 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp311-cp311-linux_armv6l.whl#sha256=6b45a1fb53ff22631be2564cb7274fe170f2982b28a5476e8ff905770eec557e (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,298 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp311-cp311-linux_armv7l.whl#sha256=6b45a1fb53ff22631be2564cb7274fe170f2982b28a5476e8ff905770eec557e (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,298 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.50-cp37-cp37m-linux_armv6l.whl#sha256=5b64a8dc60df2396aa83907b89ccc8e6db4ab43e017b3b3a26c091714099da12 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,299 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.50-cp37-cp37m-linux_armv7l.whl#sha256=5b64a8dc60df2396aa83907b89ccc8e6db4ab43e017b3b3a26c091714099da12 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,299 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.50-cp311-cp311-linux_armv6l.whl#sha256=715122f66811a350122cd555ac8a883fd243f0008711c03138d76742162d1e63 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 
2023-12-13T21:37:06,300 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.50-cp311-cp311-linux_armv7l.whl#sha256=715122f66811a350122cd555ac8a883fd243f0008711c03138d76742162d1e63 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,301 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.49-cp37-cp37m-linux_armv6l.whl#sha256=6bb78e03dfe2c72307aede0cb78e223e8d69e29948c964f2a0651654c6d62d55 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,301 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.49-cp37-cp37m-linux_armv7l.whl#sha256=6bb78e03dfe2c72307aede0cb78e223e8d69e29948c964f2a0651654c6d62d55 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,301 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.48-cp37-cp37m-linux_armv6l.whl#sha256=67af3df96f6ba459ca0a542bf8ec23e3cafefef3b7f6ed6ec7fe5b2ac6be3a2f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,302 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.48-cp37-cp37m-linux_armv7l.whl#sha256=67af3df96f6ba459ca0a542bf8ec23e3cafefef3b7f6ed6ec7fe5b2ac6be3a2f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,302 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.48-cp311-cp311-linux_armv6l.whl#sha256=7b4ce590d0f5b3f1b5c967a79a59067684631fb424510f80bed3f109f74a2d43 (from https://www.piwheels.org/simple/llama-cpp-python/) 
(requires-python:>=3.7) 2023-12-13T21:37:06,303 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.48-cp311-cp311-linux_armv7l.whl#sha256=7b4ce590d0f5b3f1b5c967a79a59067684631fb424510f80bed3f109f74a2d43 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,303 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.47-cp37-cp37m-linux_armv6l.whl#sha256=c4b404a9a588ba34c86302ea053359619a9aa93f844a933a08637cc65dfdb6e4 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,304 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.47-cp37-cp37m-linux_armv7l.whl#sha256=c4b404a9a588ba34c86302ea053359619a9aa93f844a933a08637cc65dfdb6e4 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,305 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.47-cp311-cp311-linux_armv6l.whl#sha256=a2ac92bd0de7e00a32f9fa9b5a1a22ead4031f5711ad81e191a2ebc8e9df3dcf (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,305 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.47-cp311-cp311-linux_armv7l.whl#sha256=a2ac92bd0de7e00a32f9fa9b5a1a22ead4031f5711ad81e191a2ebc8e9df3dcf (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,306 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.46-cp37-cp37m-linux_armv6l.whl#sha256=851936642b661501ebdd692b9cc1a9f420b54d4b6c1568a0b5561c0c313c3375 (from 
https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,307 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.46-cp37-cp37m-linux_armv7l.whl#sha256=851936642b661501ebdd692b9cc1a9f420b54d4b6c1568a0b5561c0c313c3375 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,307 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.45-cp37-cp37m-linux_armv6l.whl#sha256=b01d7783f853028706cb7cd4833e1040430f089a3770cc4dcf88af128329b3e9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,307 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.45-cp37-cp37m-linux_armv7l.whl#sha256=b01d7783f853028706cb7cd4833e1040430f089a3770cc4dcf88af128329b3e9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,308 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.44-cp37-cp37m-linux_armv6l.whl#sha256=35dc305c6d40fbbc0ef489c18521a842192156419573134f83bfbb4ec4bfd3d9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,308 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.44-cp37-cp37m-linux_armv7l.whl#sha256=35dc305c6d40fbbc0ef489c18521a842192156419573134f83bfbb4ec4bfd3d9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,309 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.43-cp37-cp37m-linux_armv6l.whl#sha256=49440356659a24d945119356b9c6e352e9dacf9e873e4d0d2167a501a7050592 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,309 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.43-cp37-cp37m-linux_armv7l.whl#sha256=49440356659a24d945119356b9c6e352e9dacf9e873e4d0d2167a501a7050592 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,310 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.42-cp37-cp37m-linux_armv6l.whl#sha256=48b89ad5d0e3274b6b637c58a8067672586596856f146a2e9c580c6f9ca285ef (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,310 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.42-cp37-cp37m-linux_armv7l.whl#sha256=48b89ad5d0e3274b6b637c58a8067672586596856f146a2e9c580c6f9ca285ef (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,311 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.41-cp37-cp37m-linux_armv6l.whl#sha256=eb37310c71596893c50ba3e1e2b45a236b54c86025f4e61e369e0e916e3ea927 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,311 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.41-cp37-cp37m-linux_armv7l.whl#sha256=eb37310c71596893c50ba3e1e2b45a236b54c86025f4e61e369e0e916e3ea927 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2023-12-13T21:37:06,312 Skipping link: No binaries permitted 
for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.40-cp37-cp37m-linux_armv6l.whl#sha256=438062696c2aa9e624eba548d48c7e72e677f8514be6641dd76cad874124d08b (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,313 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.40-cp37-cp37m-linux_armv7l.whl#sha256=438062696c2aa9e624eba548d48c7e72e677f8514be6641dd76cad874124d08b (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7)
2023-12-13T21:37:06,314 Skipping link: not a file: https://www.piwheels.org/simple/llama-cpp-python/
2023-12-13T21:37:06,314 Skipping link: not a file: https://pypi.org/simple/llama-cpp-python/
2023-12-13T21:37:06,346 Given no hashes to check 1 links for project 'llama-cpp-python': discarding no candidates
2023-12-13T21:37:06,366 Collecting llama-cpp-python==0.2.22
2023-12-13T21:37:06,368 Created temporary directory: /tmp/pip-unpack-nyrnitwr
2023-12-13T21:37:06,502 Downloading llama_cpp_python-0.2.22.tar.gz (8.7 MB)
2023-12-13T21:37:08,333 Added llama-cpp-python==0.2.22 from https://files.pythonhosted.org/packages/15/7a/49906adb90113f628c1f07dc746ca0978b8aa99a8f7325a8d961ce2a1919/llama_cpp_python-0.2.22.tar.gz to build tracker '/tmp/pip-build-tracker-mdfijr_4'
2023-12-13T21:37:08,339 Created temporary directory: /tmp/pip-build-env-x1fx15eo
2023-12-13T21:37:08,344 Installing build dependencies: started
2023-12-13T21:37:08,345 Running command pip subprocess to install build dependencies
2023-12-13T21:37:09,526 Using pip 23.3.1 from /home/piwheels/.local/lib/python3.11/site-packages/pip (python 3.11)
2023-12-13T21:37:10,074 Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
2023-12-13T21:37:10,496 Collecting scikit-build-core>=0.5.1 (from scikit-build-core[pyproject]>=0.5.1)
2023-12-13T21:37:10,513 Using cached https://www.piwheels.org/simple/scikit-build-core/scikit_build_core-0.7.0-py3-none-any.whl (136 kB)
2023-12-13T21:37:10,797 Collecting packaging>=20.9 (from scikit-build-core>=0.5.1->scikit-build-core[pyproject]>=0.5.1)
2023-12-13T21:37:10,815 Using cached https://www.piwheels.org/simple/packaging/packaging-23.2-py3-none-any.whl (53 kB)
2023-12-13T21:37:11,041 Collecting pathspec>=0.10.1 (from scikit-build-core[pyproject]>=0.5.1)
2023-12-13T21:37:11,056 Using cached https://www.piwheels.org/simple/pathspec/pathspec-0.12.1-py3-none-any.whl (31 kB)
2023-12-13T21:37:11,133 Collecting pyproject-metadata>=0.5 (from scikit-build-core[pyproject]>=0.5.1)
2023-12-13T21:37:11,149 Using cached https://www.piwheels.org/simple/pyproject-metadata/pyproject_metadata-0.7.1-py3-none-any.whl (7.4 kB)
2023-12-13T21:37:13,753 Installing collected packages: pathspec, packaging, scikit-build-core, pyproject-metadata
2023-12-13T21:37:14,472 Successfully installed packaging-23.2 pathspec-0.12.1 pyproject-metadata-0.7.1 scikit-build-core-0.7.0
2023-12-13T21:37:14,964 Installing build dependencies: finished with status 'done'
2023-12-13T21:37:14,967 Getting requirements to build wheel: started
2023-12-13T21:37:14,969 Running command Getting requirements to build wheel
2023-12-13T21:37:15,390 Getting requirements to build wheel: finished with status 'done'
2023-12-13T21:37:15,412 Created temporary directory: /tmp/pip-modern-metadata-822suf_6
2023-12-13T21:37:15,414 Preparing metadata (pyproject.toml): started
2023-12-13T21:37:15,415 Running command Preparing metadata (pyproject.toml)
2023-12-13T21:37:15,917 *** scikit-build-core 0.7.0 using CMake 3.25.1 (metadata_wheel)
2023-12-13T21:37:16,011 Preparing metadata (pyproject.toml): finished with status 'done'
2023-12-13T21:37:16,017 Source in /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482 has version 0.2.22, which satisfies requirement llama-cpp-python==0.2.22 from https://files.pythonhosted.org/packages/15/7a/49906adb90113f628c1f07dc746ca0978b8aa99a8f7325a8d961ce2a1919/llama_cpp_python-0.2.22.tar.gz
2023-12-13T21:37:16,018 Removed llama-cpp-python==0.2.22 from https://files.pythonhosted.org/packages/15/7a/49906adb90113f628c1f07dc746ca0978b8aa99a8f7325a8d961ce2a1919/llama_cpp_python-0.2.22.tar.gz from build tracker '/tmp/pip-build-tracker-mdfijr_4'
2023-12-13T21:37:16,026 Created temporary directory: /tmp/pip-unpack-1_fvs8ef
2023-12-13T21:37:16,027 Created temporary directory: /tmp/pip-unpack-vprz6zjb
2023-12-13T21:37:16,079 Building wheels for collected packages: llama-cpp-python
2023-12-13T21:37:16,083 Created temporary directory: /tmp/pip-wheel-gda8k6di
2023-12-13T21:37:16,084 Destination directory: /tmp/pip-wheel-gda8k6di
2023-12-13T21:37:16,086 Building wheel for llama-cpp-python (pyproject.toml): started
2023-12-13T21:37:16,087 Running command Building wheel for llama-cpp-python (pyproject.toml)
2023-12-13T21:37:16,584 *** scikit-build-core 0.7.0 using CMake 3.25.1 (wheel)
2023-12-13T21:37:16,604 *** Configuring CMake...
2023-12-13T21:37:16,699 loading initial cache file /tmp/tmpcx65fafl/build/CMakeInit.txt
2023-12-13T21:37:16,973 -- The C compiler identification is GNU 12.2.0
2023-12-13T21:37:17,295 -- The CXX compiler identification is GNU 12.2.0
2023-12-13T21:37:17,355 -- Detecting C compiler ABI info
2023-12-13T21:37:17,621 -- Detecting C compiler ABI info - done
2023-12-13T21:37:17,659 -- Check for working C compiler: /usr/bin/cc - skipped
2023-12-13T21:37:17,661 -- Detecting C compile features
2023-12-13T21:37:17,663 -- Detecting C compile features - done
2023-12-13T21:37:17,682 -- Detecting CXX compiler ABI info
2023-12-13T21:37:18,006 -- Detecting CXX compiler ABI info - done
2023-12-13T21:37:18,048 -- Check for working CXX compiler: /usr/bin/c++ - skipped
2023-12-13T21:37:18,050 -- Detecting CXX compile features
2023-12-13T21:37:18,054 -- Detecting CXX compile features - done
2023-12-13T21:37:18,097 -- Found Git: /usr/bin/git (found version "2.39.2")
2023-12-13T21:37:18,167 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
2023-12-13T21:37:18,461 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
2023-12-13T21:37:18,466 -- Found Threads: TRUE
2023-12-13T21:37:18,565 GNU ld (GNU Binutils for Raspbian) 2.40
2023-12-13T21:37:18,572 -- CMAKE_SYSTEM_PROCESSOR: armv7l
2023-12-13T21:37:18,573 -- ARM detected
2023-12-13T21:37:18,576 -- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E
2023-12-13T21:37:18,904 -- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Success
2023-12-13T21:37:18,946 CMake Warning (dev) at CMakeLists.txt:21 (install):
2023-12-13T21:37:18,947 Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
2023-12-13T21:37:18,948 This warning is for project developers. Use -Wno-dev to suppress it.
2023-12-13T21:37:18,950 CMake Warning (dev) at CMakeLists.txt:30 (install):
2023-12-13T21:37:18,950 Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
2023-12-13T21:37:18,951 This warning is for project developers. Use -Wno-dev to suppress it.
2023-12-13T21:37:18,959 -- Configuring done
2023-12-13T21:37:19,056 -- Generating done
2023-12-13T21:37:19,075 -- Build files have been written to: /tmp/tmpcx65fafl/build
2023-12-13T21:37:19,086 *** Building project with Ninja...
2023-12-13T21:37:19,416 [1/22] cd /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp && /usr/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=12.2.0 -DCMAKE_C_COMPILER_ID=GNU -DCMAKE_VS_PLATFORM_NAME= -DCMAKE_C_COMPILER=/usr/bin/cc -P /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/../scripts/gen-build-info-cpp.cmake
2023-12-13T21:37:19,417 -- Found Git: /usr/bin/git (found version "2.39.2")
2023-12-13T21:37:19,598 [2/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o -c /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/build-info.cpp
2023-12-13T21:37:21,665 [3/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/ggml-alloc.c
2023-12-13T21:37:24,296 [4/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -c /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/ggml-backend.c
2023-12-13T21:37:28,603 [5/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/. -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -c /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/console.cpp
2023-12-13T21:37:31,927 [6/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/. -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o -c /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/sampling.cpp
2023-12-13T21:37:33,030 [7/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -c /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/ggml-quants.c
2023-12-13T21:37:33,031 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/ggml-quants.c: In function ‘ggml_vec_dot_q2_K_q8_K’:
2023-12-13T21:37:33,032 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/ggml-quants.c:3680:41: warning: missing braces around initializer [-Wmissing-braces]
2023-12-13T21:37:33,033 3680 | const ggml_int16x8x2_t mins16 = {vreinterpretq_s16_u16(vmovl_u8(vget_low_u8(mins))), vreinterpretq_s16_u16(vmovl_u8(vget_high_u8(mins)))};
2023-12-13T21:37:33,034 | ^
2023-12-13T21:37:33,035 | { }
2023-12-13T21:37:33,036 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/ggml-quants.c: In function ‘ggml_vec_dot_q6_K_q8_K’:
2023-12-13T21:37:33,037 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/ggml-quants.c:6629:43: warning: missing braces around initializer [-Wmissing-braces]
2023-12-13T21:37:33,038 6629 | const ggml_int16x8x2_t q6scales = {vmovl_s8(vget_low_s8(scales)), vmovl_s8(vget_high_s8(scales))};
2023-12-13T21:37:33,039 | ^
2023-12-13T21:37:33,040 | { }
2023-12-13T21:37:39,148 [8/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/. -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o -c /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/grammar-parser.cpp
2023-12-13T21:37:43,098 [9/22] /usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/examples/llava/. -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -c /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/examples/llava/llava.cpp
2023-12-13T21:37:49,786 [10/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/. -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o -c /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/train.cpp
2023-12-13T21:37:54,502 [11/22] /usr/bin/c++ -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/. -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/. -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/examples/llava/. -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/examples/llava/../../common -O3 -DNDEBUG -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava-cli.dir/llava-cli.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava-cli.dir/llava-cli.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava-cli.dir/llava-cli.cpp.o -c /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/examples/llava/llava-cli.cpp
2023-12-13T21:38:10,159 [12/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/. -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/.
-O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -c /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/common.cpp 2023-12-13T21:38:10,159 In file included from /usr/include/c++/12/vector:70, 2023-12-13T21:38:10,160 from /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/grammar-parser.h:14, 2023-12-13T21:38:10,161 from /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/sampling.h:5, 2023-12-13T21:38:10,162 from /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/common.h:7, 2023-12-13T21:38:10,163 from /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/common.cpp:1: 2023-12-13T21:38:10,164 /usr/include/c++/12/bits/vector.tcc: In member function ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {const llama_model_kv_override&}; _Tp = llama_model_kv_override; _Alloc = std::allocator<llama_model_kv_override>]’: 2023-12-13T21:38:10,164 /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type ‘std::vector<llama_model_kv_override>::iterator’ changed in GCC 7.1 2023-12-13T21:38:10,165 439 | vector<_Tp, _Alloc>:: 2023-12-13T21:38:10,166 | ^~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:10,167 /usr/include/c++/12/bits/vector.tcc: In member function ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {llama_model_kv_override}; _Tp = llama_model_kv_override; _Alloc = std::allocator<llama_model_kv_override>]’: 2023-12-13T21:38:10,168 /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type ‘std::vector<llama_model_kv_override>::iterator’ changed in GCC 7.1 2023-12-13T21:38:10,170 In file included from /usr/include/c++/12/vector:64: 2023-12-13T21:38:10,171 In member function ‘void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = llama_model_kv_override; _Alloc = std::allocator<llama_model_kv_override>]’, 2023-12-13T21:38:10,173 inlined from ‘bool gpt_params_parse_ex(int, char**, gpt_params&)’ at /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/common.cpp:737:42: 2023-12-13T21:38:10,174 /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator<llama_model_kv_override*, std::vector<llama_model_kv_override> >’ changed in GCC 7.1 2023-12-13T21:38:10,175 1287 | _M_realloc_insert(end(), __x); 2023-12-13T21:38:10,176 | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ 2023-12-13T21:38:10,177 In member function ‘void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {llama_model_kv_override}; _Tp = llama_model_kv_override; _Alloc = std::allocator<llama_model_kv_override>]’, 2023-12-13T21:38:10,178 inlined from ‘bool gpt_params_parse_ex(int, char**, gpt_params&)’ at /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/common/common.cpp:782:41: 2023-12-13T21:38:10,179 /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator<llama_model_kv_override*, std::vector<llama_model_kv_override> >’ changed in GCC 7.1 2023-12-13T21:38:10,181 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); 2023-12-13T21:38:10,182 | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:18,643 [13/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/. 
-O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/ggml.c 2023-12-13T21:38:18,868 [14/22] : && /usr/bin/cc -fPIC -O3 -DNDEBUG -shared -Wl,-soname,libggml_shared.so -o vendor/llama.cpp/libggml_shared.so vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o && : 2023-12-13T21:38:19,997 [15/22] : && /usr/bin/cmake -E rm -f vendor/llama.cpp/libggml_static.a && /usr/bin/ar qc vendor/llama.cpp/libggml_static.a vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o && /usr/bin/ranlib vendor/llama.cpp/libggml_static.a && : 2023-12-13T21:38:28,282 [16/22] /usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/examples/llava/. -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/. 
-O3 -DNDEBUG -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -c /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/examples/llava/clip.cpp 2023-12-13T21:38:28,546 [17/22] : && /usr/bin/cmake -E rm -f vendor/llama.cpp/examples/llava/libllava_static.a && /usr/bin/ar qc vendor/llama.cpp/examples/llava/libllava_static.a vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o && /usr/bin/ranlib vendor/llama.cpp/examples/llava/libllava_static.a && : 2023-12-13T21:38:45,475 [18/22] /usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -std=gnu++11 -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp 2023-12-13T21:38:45,476 In file included from /usr/include/c++/12/vector:64, 2023-12-13T21:38:45,476 from /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.h:859, 2023-12-13T21:38:45,477 from /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:2: 2023-12-13T21:38:45,477 /usr/include/c++/12/bits/stl_vector.h: In function ‘std::vector<_Tp, _Alloc>::vector(std::initializer_list<_Tp>, const 
allocator_type&) [with _Tp = long long int; _Alloc = std::allocator<long long int>]’: 2023-12-13T21:38:45,478 /usr/include/c++/12/bits/stl_vector.h:673:7: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,479 673 | vector(initializer_list<value_type> __l, 2023-12-13T21:38:45,479 | ^~~~~~ 2023-12-13T21:38:45,480 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp: In function ‘void llm_load_tensors(llama_model_loader&, llama_model&, int, int, const float*, bool, llama_progress_callback, void*)’: 2023-12-13T21:38:45,480 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:2975:54: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,481 2975 | model.tok_embd = ml.create_tensor(ctx, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, GGML_BACKEND_CPU); 2023-12-13T21:38:45,481 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,482 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:2990:61: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,483 2990 | model.output_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}, backend_norm); 2023-12-13T21:38:45,483 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,484 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:2991:61: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,484 2991 | model.output = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, backend_output); 2023-12-13T21:38:45,485 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,486 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3013:59: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,487 3013 | layer.attn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}, backend); 2023-12-13T21:38:45,487 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,488 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3015:52: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,489 3015 | layer.wq = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}, backend_split); 2023-12-13T21:38:45,490 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,490 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3016:52: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,491 3016 | layer.wk = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}, backend_split); 2023-12-13T21:38:45,492 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,492 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3017:52: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,493 3017 | layer.wv = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}, backend_split); 2023-12-13T21:38:45,494 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
2023-12-13T21:38:45,494 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3018:52: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,495 3018 | layer.wo = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}, backend_split); 2023-12-13T21:38:45,496 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,496 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3021:52: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,497 3021 | layer.bq = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_Q, "bias", i), {n_embd}, backend, false); 2023-12-13T21:38:45,497 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,498 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3022:52: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,498 3022 | layer.bk = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_K, "bias", i), {n_embd_gqa}, backend, false); 2023-12-13T21:38:45,499 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,500 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3023:52: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,501 3023 | layer.bv = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_V, "bias", i), {n_embd_gqa}, backend, false); 2023-12-13T21:38:45,501 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,501 
/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3024:52: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,502 3024 | layer.bo = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}, backend, false); 2023-12-13T21:38:45,502 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,503 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3026:58: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,503 3026 | layer.ffn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}, backend); 2023-12-13T21:38:45,504 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,504 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3028:58: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,505 3028 | layer.ffn_gate = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}, backend_split); 2023-12-13T21:38:45,505 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,506 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3029:58: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,507 3029 | layer.ffn_down = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}, backend_split); 2023-12-13T21:38:45,507 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,508 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3030:58: note: 
parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,509 3030 | layer.ffn_up = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}, backend_split); 2023-12-13T21:38:45,509 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,510 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3047:54: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,510 3047 | model.tok_embd = ml.create_tensor(ctx, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, GGML_BACKEND_CPU); 2023-12-13T21:38:45,511 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,511 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3060:61: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,512 3060 | model.output_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}, backend_norm); 2023-12-13T21:38:45,513 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,513 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3061:61: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,514 3061 | model.output = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, backend_output); 2023-12-13T21:38:45,514 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,516 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3083:59: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in 
GCC 7.1 2023-12-13T21:38:45,516 3083 | layer.attn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}, backend); 2023-12-13T21:38:45,517 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,517 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3085:52: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,518 3085 | layer.wq = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}, backend_split); 2023-12-13T21:38:45,518 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,519 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3086:52: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,519 3086 | layer.wk = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}, backend_split); 2023-12-13T21:38:45,520 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,520 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3087:52: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,521 3087 | layer.wv = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}, backend_split); 2023-12-13T21:38:45,521 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,522 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3088:52: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,522 3088 | layer.wo = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_OUT, 
"weight", i), {n_embd, n_embd}, backend_split); 2023-12-13T21:38:45,523 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,524 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3090:58: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,524 3090 | layer.ffn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}, backend); 2023-12-13T21:38:45,525 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,526 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3092:58: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,526 3092 | layer.ffn_gate = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}, backend_split); 2023-12-13T21:38:45,526 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,527 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3093:58: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,528 3093 | layer.ffn_down = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}, backend_split); 2023-12-13T21:38:45,528 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,528 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3094:58: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,529 3094 | layer.ffn_up = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}, backend_split); 2023-12-13T21:38:45,530 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,530 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3108:54: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,531 3108 | model.tok_embd = ml.create_tensor(ctx, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, GGML_BACKEND_CPU); 2023-12-13T21:38:45,532 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,532 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3123:63: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,533 3123 | model.output_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}, backend_norm); 2023-12-13T21:38:45,534 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,534 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3124:63: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,535 3124 | model.output_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}, backend_norm); 2023-12-13T21:38:45,535 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,536 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3125:63: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,537 3125 | model.output = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, backend_output); 2023-12-13T21:38:45,538 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,538 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3148:61: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,539 3148 | layer.attn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}, backend); 2023-12-13T21:38:45,539 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,540 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3149:61: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,540 3149 | layer.attn_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}, backend); 2023-12-13T21:38:45,541 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,541 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3152:67: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,542 3152 | layer.attn_norm_2 = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM_2, "weight", i), {n_embd}, backend); 2023-12-13T21:38:45,542 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,543 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3153:67: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,544 3153 | layer.attn_norm_2_b = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM_2, "bias", i), {n_embd}, backend); 2023-12-13T21:38:45,544 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,545 
/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3161:54: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,545 3161 | layer.wqkv = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}, backend_split); 2023-12-13T21:38:45,546 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,546 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3162:54: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,547 3162 | layer.wo = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}, backend_split); 2023-12-13T21:38:45,547 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,548 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3164:58: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,549 3164 | layer.ffn_down = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}, backend_split); 2023-12-13T21:38:45,550 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,550 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3165:58: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,551 3165 | layer.ffn_up = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}, backend_split); 2023-12-13T21:38:45,551 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,551 
/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3177:54: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,552 3177 | model.tok_embd = ml.create_tensor(ctx, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, GGML_BACKEND_CPU); 2023-12-13T21:38:45,552 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,553 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3178:54: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,553 3178 | model.pos_embd = ml.create_tensor(ctx, tn(LLM_TENSOR_POS_EMBD, "weight"), {n_embd, hparams.n_ctx_train}, GGML_BACKEND_CPU); 2023-12-13T21:38:45,554 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,554 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3193:63: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,555 3193 | model.output_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}, backend_norm); 2023-12-13T21:38:45,556 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,556 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3194:63: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,557 3194 | model.output_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}, backend_norm); 2023-12-13T21:38:45,558 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,558 
/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3195:63: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,559 3195 | model.output = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, backend_output); 2023-12-13T21:38:45,560 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,560 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3218:61: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,561 3218 | layer.attn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}, backend); 2023-12-13T21:38:45,562 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,562 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3219:61: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,563 3219 | layer.attn_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}, backend); 2023-12-13T21:38:45,563 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,564 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3221:54: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,564 3221 | layer.wqkv = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}, backend_split); 2023-12-13T21:38:45,565 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,565 
/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3222:54: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,566 3222 | layer.bqkv = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_QKV, "bias", i), {n_embd + 2*n_embd_gqa}, backend); 2023-12-13T21:38:45,566 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,567 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3224:54: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,567 3224 | layer.wo = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}, backend_split); 2023-12-13T21:38:45,568 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,568 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3225:54: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,569 3225 | layer.bo = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}, backend); 2023-12-13T21:38:45,569 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,570 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3227:60: note: parameter passing for argument of type ‘std::initializer_list<long long int>’ changed in GCC 7.1 2023-12-13T21:38:45,570 3227 | layer.ffn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}, backend); 2023-12-13T21:38:45,571 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,572 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3228:60: note: 
parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,572 3228 | layer.ffn_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd}, backend); 2023-12-13T21:38:45,573 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,574 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3230:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,574 3230 | layer.ffn_down = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd}, backend_split); 2023-12-13T21:38:45,574 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,575 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3231:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,575 3231 | layer.ffn_down_b = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}, backend); 2023-12-13T21:38:45,576 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,576 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3233:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,577 3233 | layer.ffn_up = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}, backend_split); 2023-12-13T21:38:45,577 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,578 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3234:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,578 3234 | layer.ffn_up_b = 
ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}, backend); 2023-12-13T21:38:45,579 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,579 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3249:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,580 3249 | model.tok_embd = ml.create_tensor(ctx, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, GGML_BACKEND_CPU); 2023-12-13T21:38:45,581 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,581 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3263:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,582 3263 | model.output_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}, backend_norm); 2023-12-13T21:38:45,582 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,583 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3264:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,583 3264 | model.output_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}, backend_norm); 2023-12-13T21:38:45,584 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,585 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3265:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,586 3265 | model.output = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, backend_output); 
2023-12-13T21:38:45,586 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,587 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3283:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,587 3283 | layer.attn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}, backend); 2023-12-13T21:38:45,588 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,588 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3284:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,589 3284 | layer.attn_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}, backend); 2023-12-13T21:38:45,589 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,590 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3285:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,590 3285 | layer.wqkv = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}, backend_split); 2023-12-13T21:38:45,591 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,591 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3286:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,592 3286 | layer.bqkv = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_QKV, "bias", i), {n_embd + 2*n_embd_gqa}, backend); 2023-12-13T21:38:45,593 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,593 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3287:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,594 3287 | layer.wo = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}, backend_split); 2023-12-13T21:38:45,594 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,595 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3288:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,596 3288 | layer.bo = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}, backend); 2023-12-13T21:38:45,596 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,597 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3289:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,598 3289 | layer.ffn_down = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd}, backend_split); 2023-12-13T21:38:45,598 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,598 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3290:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,599 3290 | layer.ffn_down_b = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}, backend); 2023-12-13T21:38:45,599 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
2023-12-13T21:38:45,600 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3291:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,600 3291 | layer.ffn_up = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}, backend_split); 2023-12-13T21:38:45,601 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,601 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3292:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,602 3292 | layer.ffn_up_b = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}, backend); 2023-12-13T21:38:45,602 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,603 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3293:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,604 3293 | layer.ffn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}, backend); 2023-12-13T21:38:45,604 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,605 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3294:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,605 3294 | layer.ffn_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd}, backend); 2023-12-13T21:38:45,606 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,606 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3295:63: note: 
parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,607 3295 | layer.attn_q_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_Q_NORM, "weight", i), {64}, backend); 2023-12-13T21:38:45,608 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,609 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3296:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,609 3296 | layer.attn_q_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_Q_NORM, "bias", i), {64}, backend); 2023-12-13T21:38:45,610 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,610 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3297:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,611 3297 | layer.attn_k_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_K_NORM, "weight", i), {64}, backend); 2023-12-13T21:38:45,611 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,612 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3298:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,612 3298 | layer.attn_k_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_K_NORM, "bias", i), {64}, backend); 2023-12-13T21:38:45,613 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,613 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3305:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,614 3305 | model.tok_embd = ml.create_tensor(ctx, 
tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, GGML_BACKEND_CPU); 2023-12-13T21:38:45,614 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,615 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3306:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,616 3306 | model.tok_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_TOKEN_EMBD_NORM, "weight"), {n_embd}, GGML_BACKEND_CPU); 2023-12-13T21:38:45,617 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,617 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3307:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,618 3307 | model.tok_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_TOKEN_EMBD_NORM, "bias"), {n_embd}, GGML_BACKEND_CPU); 2023-12-13T21:38:45,619 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,619 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3322:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,620 3322 | model.output_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}, backend_norm); 2023-12-13T21:38:45,621 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,622 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3323:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,622 3323 | model.output_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), 
{n_embd}, backend_norm); 2023-12-13T21:38:45,623 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,623 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3324:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,624 3324 | model.output = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, backend_output); 2023-12-13T21:38:45,624 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,625 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3347:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,625 3347 | layer.attn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}, backend); 2023-12-13T21:38:45,626 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,626 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3348:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,627 3348 | layer.attn_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}, backend); 2023-12-13T21:38:45,628 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,628 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3350:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,629 3350 | layer.wqkv = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}, backend_split); 2023-12-13T21:38:45,630 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,630 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3351:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,631 3351 | layer.bqkv = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_QKV, "bias", i), {n_embd + 2*n_embd_gqa}, backend); 2023-12-13T21:38:45,632 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,632 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3353:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,633 3353 | layer.wo = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}, backend_split); 2023-12-13T21:38:45,634 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,635 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3354:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,635 3354 | layer.bo = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}, backend); 2023-12-13T21:38:45,635 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,636 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3356:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,636 3356 | layer.ffn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}, backend); 2023-12-13T21:38:45,637 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,637 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3357:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,638 3357 | layer.ffn_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd}, backend); 2023-12-13T21:38:45,638 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,639 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3359:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,639 3359 | layer.ffn_down = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd}, backend_split); 2023-12-13T21:38:45,640 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,641 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3360:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,641 3360 | layer.ffn_down_b = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}, backend); 2023-12-13T21:38:45,642 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,642 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3362:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,643 3362 | layer.ffn_up = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}, backend_split); 2023-12-13T21:38:45,643 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,644 
/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3363:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,644 3363 | layer.ffn_up_b = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}, backend); 2023-12-13T21:38:45,645 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,646 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3378:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,646 3378 | model.tok_embd = ml.create_tensor(ctx, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, GGML_BACKEND_CPU); 2023-12-13T21:38:45,647 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,647 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3393:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,648 3393 | model.output_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}, backend_norm); 2023-12-13T21:38:45,648 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,649 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3394:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,650 3394 | model.output = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, backend_output); 2023-12-13T21:38:45,650 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,651 
/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3416:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,652 3416 | layer.attn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}, backend); 2023-12-13T21:38:45,652 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,653 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3417:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,654 3417 | layer.wqkv = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}, backend_split); 2023-12-13T21:38:45,655 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,656 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3418:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,656 3418 | layer.wo = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}, backend_split); 2023-12-13T21:38:45,657 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,658 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3420:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,659 3420 | layer.ffn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}, backend); 2023-12-13T21:38:45,660 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,660 
/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3422:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,661 3422 | layer.ffn_down = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}, backend_split); 2023-12-13T21:38:45,662 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,662 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3423:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,663 3423 | layer.ffn_up = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}, backend_split); 2023-12-13T21:38:45,663 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,664 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3453:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,664 3453 | model.output_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}, backend_norm); 2023-12-13T21:38:45,665 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,665 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3454:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2023-12-13T21:38:45,666 3454 | model.output_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}, backend_norm); 2023-12-13T21:38:45,666 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2023-12-13T21:38:45,667 
/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3455:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,668  3455 | model.output = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, backend_output);
2023-12-13T21:38:45,668       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,669 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3480:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,669  3480 | layer.attn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}, backend);
2023-12-13T21:38:45,670       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,671 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3481:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,671  3481 | layer.attn_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd}, backend);
2023-12-13T21:38:45,672       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,672 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3483:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,673  3483 | layer.wq = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}, backend_split);
2023-12-13T21:38:45,674       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,674 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3484:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,675  3484 | layer.wk = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}, backend_split);
2023-12-13T21:38:45,675       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,676 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3485:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,677  3485 | layer.wv = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}, backend_split);
2023-12-13T21:38:45,677       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,678 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3486:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,679  3486 | layer.wo = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}, backend_split);
2023-12-13T21:38:45,679       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,680 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3488:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,681  3488 | layer.ffn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}, backend);
2023-12-13T21:38:45,682       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,682 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3489:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,683  3489 | layer.ffn_norm_b = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd}, backend);
2023-12-13T21:38:45,684       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,684 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3491:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,685  3491 | layer.ffn_gate = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}, backend_split);
2023-12-13T21:38:45,686       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,687 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3492:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,687  3492 | layer.ffn_down = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}, backend_split);
2023-12-13T21:38:45,689       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,689 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3493:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,690  3493 | layer.ffn_up = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}, backend_split);
2023-12-13T21:38:45,690       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,691 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3518:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,692  3518 | model.output_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}, backend_norm);
2023-12-13T21:38:45,692       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,693 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3541:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,693  3541 | layer.attn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}, backend);
2023-12-13T21:38:45,694       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,695 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3544:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,696  3544 | layer.bqkv = ml.create_tensor(ctx, tn(LLM_TENSOR_ATTN_QKV, "bias", i), {n_embd * 3}, backend);
2023-12-13T21:38:45,696       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,698 /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/vendor/llama.cpp/llama.cpp:3547:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2023-12-13T21:38:45,698  3547 | layer.ffn_norm = ml.create_tensor(ctx, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}, backend);
2023-12-13T21:38:45,699       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,700 In file included from /usr/include/c++/12/vector:70:
2023-12-13T21:38:45,700 /usr/include/c++/12/bits/vector.tcc: In member function ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {const double&}; _Tp = double; _Alloc = std::allocator]’:
2023-12-13T21:38:45,701 /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type ‘std::vector::iterator’ changed in GCC 7.1
2023-12-13T21:38:45,701   439 | vector<_Tp, _Alloc>::
2023-12-13T21:38:45,702       | ^~~~~~~~~~~~~~~~~~~
2023-12-13T21:38:45,703 In member function ‘void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = double; _Alloc = std::allocator]’,
2023-12-13T21:38:45,703     inlined from ‘std::back_insert_iterator<_Container>& std::back_insert_iterator<_Container>::operator=(const typename _Container::value_type&) [with _Container = std::vector]’ at /usr/include/c++/12/bits/stl_iterator.h:735:22,
2023-12-13T21:38:45,704     inlined from ‘_OutputIterator std::partial_sum(_InputIterator, _InputIterator, _OutputIterator) [with _InputIterator = __gnu_cxx::__normal_iterator >; _OutputIterator = back_insert_iterator >]’ at /usr/include/c++/12/bits/stl_numeric.h:270:17,
2023-12-13T21:38:45,705     inlined from ‘void std::discrete_distribution<_IntType>::param_type::_M_initialize() [with _IntType = int]’ at /usr/include/c++/12/bits/random.tcc:2679:23:
2023-12-13T21:38:45,705 /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator >’ changed in GCC 7.1
2023-12-13T21:38:45,706  1287 | _M_realloc_insert(end(), __x);
2023-12-13T21:38:45,706       | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
2023-12-13T21:38:45,707 In member function ‘void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = double; _Alloc = std::allocator]’,
2023-12-13T21:38:45,707     inlined from ‘std::back_insert_iterator<_Container>& std::back_insert_iterator<_Container>::operator=(const typename _Container::value_type&) [with _Container = std::vector]’ at /usr/include/c++/12/bits/stl_iterator.h:735:22,
2023-12-13T21:38:45,708     inlined from ‘_OutputIterator std::partial_sum(_InputIterator, _InputIterator, _OutputIterator) [with _InputIterator = __gnu_cxx::__normal_iterator >; _OutputIterator = back_insert_iterator >]’ at /usr/include/c++/12/bits/stl_numeric.h:274:16,
2023-12-13T21:38:45,708     inlined from ‘void std::discrete_distribution<_IntType>::param_type::_M_initialize() [with _IntType = int]’ at /usr/include/c++/12/bits/random.tcc:2679:23:
2023-12-13T21:38:45,709 /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator >’ changed in GCC 7.1
2023-12-13T21:38:45,709  1287 | _M_realloc_insert(end(), __x);
2023-12-13T21:38:45,710       | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
2023-12-13T21:38:45,912 [19/22] : && /usr/bin/c++ -fPIC -O3 -DNDEBUG -shared -Wl,-soname,libllama.so -o vendor/llama.cpp/libllama.so vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o && :
2023-12-13T21:38:46,214 [20/22] : && /usr/bin/cmake -E rm -f vendor/llama.cpp/common/libcommon.a && /usr/bin/ar qc vendor/llama.cpp/common/libcommon.a vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o && /usr/bin/ranlib vendor/llama.cpp/common/libcommon.a && :
2023-12-13T21:38:46,256 [21/22] : && /usr/bin/c++ -fPIC -O3 -DNDEBUG -shared -Wl,-soname,libllava.so -o vendor/llama.cpp/examples/llava/libllava.so vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -Wl,-rpath,/tmp/tmpcx65fafl/build/vendor/llama.cpp: vendor/llama.cpp/libllama.so && :
2023-12-13T21:38:46,570 [22/22] : && /usr/bin/c++ -O3 -DNDEBUG vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava-cli.dir/llava-cli.cpp.o -o vendor/llama.cpp/examples/llava/llava-cli -Wl,-rpath,/tmp/tmpcx65fafl/build/vendor/llama.cpp: vendor/llama.cpp/common/libcommon.a vendor/llama.cpp/libllama.so && :
2023-12-13T21:38:46,574 *** Installing project into wheel...
2023-12-13T21:38:46,613 -- Install configuration: "Release"
2023-12-13T21:38:46,617 -- Installing: /tmp/tmpcx65fafl/wheel/platlib/lib/libggml_shared.so
2023-12-13T21:38:46,661 -- Installing: /tmp/tmpcx65fafl/wheel/platlib/lib/cmake/Llama/LlamaConfig.cmake
2023-12-13T21:38:46,664 -- Installing: /tmp/tmpcx65fafl/wheel/platlib/lib/cmake/Llama/LlamaConfigVersion.cmake
2023-12-13T21:38:46,668 -- Installing: /tmp/tmpcx65fafl/wheel/platlib/include/ggml.h
2023-12-13T21:38:46,672 -- Installing: /tmp/tmpcx65fafl/wheel/platlib/lib/libllama.so
2023-12-13T21:38:46,735 -- Installing: /tmp/tmpcx65fafl/wheel/platlib/include/llama.h
2023-12-13T21:38:46,740 -- Installing: /tmp/tmpcx65fafl/wheel/platlib/bin/convert.py
2023-12-13T21:38:46,744 -- Installing: /tmp/tmpcx65fafl/wheel/platlib/bin/convert-lora-to-ggml.py
2023-12-13T21:38:46,749 -- Installing: /tmp/tmpcx65fafl/wheel/platlib/llama_cpp/libllama.so
2023-12-13T21:38:46,806 -- Installing: /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/llama_cpp/libllama.so
2023-12-13T21:38:46,871 -- Installing: /tmp/tmpcx65fafl/wheel/platlib/lib/libllava.so
2023-12-13T21:38:46,889 -- Set runtime path of "/tmp/tmpcx65fafl/wheel/platlib/lib/libllava.so" to ""
2023-12-13T21:38:46,925 -- Installing: /tmp/tmpcx65fafl/wheel/platlib/bin/llava-cli
2023-12-13T21:38:46,939 -- Set runtime path of "/tmp/tmpcx65fafl/wheel/platlib/bin/llava-cli" to ""
2023-12-13T21:38:46,964 -- Installing: /tmp/tmpcx65fafl/wheel/platlib/llama_cpp/libllava.so
2023-12-13T21:38:46,982 -- Set runtime path of "/tmp/tmpcx65fafl/wheel/platlib/llama_cpp/libllava.so" to ""
2023-12-13T21:38:47,021 -- Installing: /tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/llama_cpp/libllava.so
2023-12-13T21:38:47,044 -- Set runtime path of "/tmp/pip-wheel-h5vnrp6_/llama-cpp-python_92f0ea9b67c04b939d2e7bae30081482/llama_cpp/libllava.so" to ""
2023-12-13T21:38:47,107 *** Making wheel...
2023-12-13T21:38:48,242 *** Created llama_cpp_python-0.2.22-cp311-cp311-manylinux_2_36_armv7l.whl...
2023-12-13T21:38:48,300 Building wheel for llama-cpp-python (pyproject.toml): finished with status 'done'
2023-12-13T21:38:48,324 Created wheel for llama-cpp-python: filename=llama_cpp_python-0.2.22-cp311-cp311-manylinux_2_36_armv7l.whl size=1906194 sha256=d63553eca926a129319ccbc2f586b1a1cd3b5eb4aca1f18180142b3e3a27f72d
2023-12-13T21:38:48,325 Stored in directory: /tmp/pip-ephem-wheel-cache-2e7fc67n/wheels/74/40/92/1bbcf88ff8a0f716e857edfa318052f723d89a6dbcbb8612ec
2023-12-13T21:38:48,340 Successfully built llama-cpp-python
2023-12-13T21:38:48,385 Removed build tracker: '/tmp/pip-build-tracker-mdfijr_4'
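The exact pip command that produced this log is not recorded in it. A build like the one above can be reproduced with a verbose invocation against the same two indexes the log shows pip searching (pypi.org and piwheels.org); the flags below are an assumption, not the original command line:

```shell
# Hypothetical reproduction of the logged build (original flags unknown).
# On an armv7l host with no matching binary wheel, pip falls back to a
# source build of llama-cpp-python, emitting timestamped logs like the above.
pip install --verbose \
    --index-url https://pypi.org/simple \
    --extra-index-url https://www.piwheels.org/simple \
    llama-cpp-python
```

With `--verbose`, pip prints the build tracker setup, index link analysis, the CMake/ninja compile steps, and the final wheel creation seen in this log.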