2024-07-02T16:50:38,827 Created temporary directory: /tmp/pip-build-tracker-vawopyc0
2024-07-02T16:50:38,828 Initialized build tracking at /tmp/pip-build-tracker-vawopyc0
2024-07-02T16:50:38,828 Created build tracker: /tmp/pip-build-tracker-vawopyc0
2024-07-02T16:50:38,829 Entered build tracker: /tmp/pip-build-tracker-vawopyc0
2024-07-02T16:50:38,829 Created temporary directory: /tmp/pip-wheel-i4vgucre
2024-07-02T16:50:38,833 Created temporary directory: /tmp/pip-ephem-wheel-cache-zpm724e3
2024-07-02T16:50:38,855 Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
2024-07-02T16:50:38,859 2 location(s) to search for versions of llama-cpp-python:
2024-07-02T16:50:38,859 * https://pypi.org/simple/llama-cpp-python/
2024-07-02T16:50:38,859 * https://www.piwheels.org/simple/llama-cpp-python/
2024-07-02T16:50:38,860 Fetching project page and analyzing links: https://pypi.org/simple/llama-cpp-python/
2024-07-02T16:50:38,860 Getting page https://pypi.org/simple/llama-cpp-python/
2024-07-02T16:50:38,862 Found index url https://pypi.org/simple/
2024-07-02T16:50:39,005 Fetched page https://pypi.org/simple/llama-cpp-python/ as application/vnd.pypi.simple.v1+json
2024-07-02T16:50:39,033 Found link https://files.pythonhosted.org/packages/17/9c/813d8c83d81cb9ab42e5ee66657f8d3670bacdcd67df4aa7728e8dccbcfd/llama_cpp_python-0.1.1.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.1
2024-07-02T16:50:39,034 Found link https://files.pythonhosted.org/packages/42/22/07711b8fc85ed188182c923aa424254a451ee23a58d6c45a033e05e57f9a/llama_cpp_python-0.1.2.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.2
2024-07-02T16:50:39,034 Found link https://files.pythonhosted.org/packages/13/a2/a3a6e665905992e2ed2c79b7af2dce4a36f23c5147959f0f56d9bd72543c/llama_cpp_python-0.1.3.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.3
2024-07-02T16:50:39,035 Found link https://files.pythonhosted.org/packages/00/b6/3069b31e8cd0073685aa059e161e4b8dc3a4e3c77c4f8f433fa5ebc01655/llama_cpp_python-0.1.4.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.4
2024-07-02T16:50:39,036 Found link https://files.pythonhosted.org/packages/cd/32/e2380800128e64542f719c3d7287b2818e7234e268298b95273164cb0a3d/llama_cpp_python-0.1.5.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.5
2024-07-02T16:50:39,036 Found link https://files.pythonhosted.org/packages/9f/d3/9904d8616a5af9515b8852c441472c930b780db1879f13cae240bd4eb05f/llama_cpp_python-0.1.6.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.6
2024-07-02T16:50:39,037 Found link https://files.pythonhosted.org/packages/20/ff/c192e4469e14be86d3b11fdee4b56aca486033e4256174e2cf8425840e54/llama_cpp_python-0.1.7.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.7
2024-07-02T16:50:39,038 Found link https://files.pythonhosted.org/packages/7e/3b/b5f7e1ec5f43a4e980733c63bd4f05e1b7e14fd3b7aa72d9ca91f2415323/llama_cpp_python-0.1.8.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.8
2024-07-02T16:50:39,039 Found link https://files.pythonhosted.org/packages/9f/24/45a5a3beee1354f668d916eb1a2146835a0eda4dbad0da45252170e105a6/llama_cpp_python-0.1.9.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.9
2024-07-02T16:50:39,039 Found link https://files.pythonhosted.org/packages/71/ad/e3f373300efdfbcd67dc3909512a5b80dd6c5f2092102cbea66bad75ec4d/llama_cpp_python-0.1.10.tar.gz (from https://pypi.org/simple/llama-cpp-python/), version: 0.1.10
2024-07-02T16:50:39,040 Found link https://files.pythonhosted.org/packages/bb/5e/c15d23176dd5783b1f62fd1b89c38fa655c9c1b524451e34a240fabffca8/llama_cpp_python-0.1.11.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.11
2024-07-02T16:50:39,041 Found link https://files.pythonhosted.org/packages/ad/61/91b0c968596bcca9b09c6e40a38852500d31ed5f8649e25cfab293dc9af0/llama_cpp_python-0.1.12.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.12
2024-07-02T16:50:39,042 Found link https://files.pythonhosted.org/packages/63/8f/1bb0a901a1be8c243e741a17ece1588615a1c5c4b9578ce80f12ce809d14/llama_cpp_python-0.1.13.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.13
2024-07-02T16:50:39,044 Found link https://files.pythonhosted.org/packages/25/bc/83364cb8c3fff7da82fadd10e0d1ec221278a5403ab4222dd0745bfa6709/llama_cpp_python-0.1.14.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.14
2024-07-02T16:50:39,044 Found link https://files.pythonhosted.org/packages/d8/6b/0b89436a26c2a7a5e1b57809d6f692c4f0afd87b19c31fe5425ddb19f54b/llama_cpp_python-0.1.15.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.15
2024-07-02T16:50:39,045 Found link https://files.pythonhosted.org/packages/7f/ef/aa0d2e4ef92173bf7e3539b5fa3338e7f9f88a66e7a90cb2f00052b7a9cb/llama_cpp_python-0.1.16.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.16
2024-07-02T16:50:39,046 Found link https://files.pythonhosted.org/packages/71/d6/bb0a4bb92abf16dee92a933b45ba16f0e6c0a1b63ee8877c678a54c373a8/llama_cpp_python-0.1.17.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.17
2024-07-02T16:50:39,047 Found link https://files.pythonhosted.org/packages/c2/08/7c12856cbe4523e518e280914674f4b65f5f62076408a7984b69d9771494/llama_cpp_python-0.1.18.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.18
2024-07-02T16:50:39,048 Found link https://files.pythonhosted.org/packages/63/48/977cd0ffdbfb9446e758c8c69aa49025a7477058d42bd30bef67f42c556c/llama_cpp_python-0.1.19.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.19
2024-07-02T16:50:39,049 Found link https://files.pythonhosted.org/packages/dc/2e/730cc405e0227ce6f49dd2bab4d6ce69963cb65bc3452fd33a552c9b8630/llama_cpp_python-0.1.20.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.20
2024-07-02T16:50:39,050 Found link https://files.pythonhosted.org/packages/52/1a/d122abc9571e09e17ad8909d2f8710ea0abe26ced1287ae82828fc80aaa3/llama_cpp_python-0.1.21.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.21
2024-07-02T16:50:39,051 Found link https://files.pythonhosted.org/packages/cf/94/4c35d7e3011ce86f063e3c754afd71f3a6f1f2a0ec9616deb55e8f3743a1/llama_cpp_python-0.1.22.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.22
2024-07-02T16:50:39,052 Found link https://files.pythonhosted.org/packages/03/6e/3e0768c396be6807b9e835c223ce37385d574eaf9e4d0ac80116325f6775/llama_cpp_python-0.1.23.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.23
2024-07-02T16:50:39,053 Found link https://files.pythonhosted.org/packages/bc/8b/618c42fdfa078a3cec9ed871b9c1bb6cca65b66e4e3ce0bf690f8109eaa1/llama_cpp_python-0.1.24.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.24
2024-07-02T16:50:39,054 Found link https://files.pythonhosted.org/packages/6c/64/bd9d98588aa8b6c49c0cfa1d0b4ef4ec5a1a05e4d8d67c1aed3587ae2e1a/llama_cpp_python-0.1.25.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.25
2024-07-02T16:50:39,055 Found link https://files.pythonhosted.org/packages/c1/cf/c81b3ba5340398820cc12c247e33f3f1ee15c4043794596968dc31ebac9c/llama_cpp_python-0.1.26.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.26
2024-07-02T16:50:39,056 Found link https://files.pythonhosted.org/packages/fa/b8/0a6fafae31b2c40997c282cd9220743c419dd8b372f09c57e551792bb899/llama_cpp_python-0.1.27.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.27
2024-07-02T16:50:39,057 Found link https://files.pythonhosted.org/packages/fb/6a/0c7421119d6e536ee1ca02ad5555dbbda7a38189333b0ac67f582cd5a84f/llama_cpp_python-0.1.28.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.28
2024-07-02T16:50:39,058 Found link https://files.pythonhosted.org/packages/fa/e3/3a12c770007f9a3c5903f7e2904aff4af5fa7d36cb06843c65cfaadccdd2/llama_cpp_python-0.1.29.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.29
2024-07-02T16:50:39,059 Found link https://files.pythonhosted.org/packages/e5/8e/b8dfcb10fdb1b2556a688cb23fd3d1b7b60c2b24ddc1cb9fc61a915c94d0/llama_cpp_python-0.1.30.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.30
2024-07-02T16:50:39,060 Found link https://files.pythonhosted.org/packages/c9/46/e37f0120bf5996b644c373c8fea9d2bf31ceb30e18724f2ae0876cb25b96/llama_cpp_python-0.1.31.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.31
2024-07-02T16:50:39,061 Found link https://files.pythonhosted.org/packages/39/f2/9d9c98ccb9ffe2ca7c9aeef235d5e45a4694f3148dfc9559e672c346f6ea/llama_cpp_python-0.1.32.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.32
2024-07-02T16:50:39,062 Found link https://files.pythonhosted.org/packages/70/b3/a1497e783b921cc8cd0d2f7fabe9d0b5c2bf95ab9fd56503d282862ce720/llama_cpp_python-0.1.33.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.33
2024-07-02T16:50:39,063 Found link https://files.pythonhosted.org/packages/b3/f0/82690e424b3fdb0d1738f312095a7a88cbe06cb910be9c5f5d4c7e3bdde8/llama_cpp_python-0.1.34.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.34
2024-07-02T16:50:39,064 Found link https://files.pythonhosted.org/packages/e9/47/013240af1272400ad49422f8ebfc47476a4d82e3375dd05dbd1440da3c50/llama_cpp_python-0.1.35.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.35
2024-07-02T16:50:39,065 Found link https://files.pythonhosted.org/packages/1b/ea/3f2aff10fd7195c6bc8c52375d9ff027a551151569c50e0d47581b14b7c1/llama_cpp_python-0.1.36.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.36
2024-07-02T16:50:39,066 Found link https://files.pythonhosted.org/packages/5d/10/e037dc290ed7435dd6f5fa5dcce2453f1cf145b84f1e8e40d0a63ac62aa2/llama_cpp_python-0.1.37.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.37
2024-07-02T16:50:39,067 Found link https://files.pythonhosted.org/packages/e6/2a/d898551013b9f0863b8134dbcb5863a306f5d9c2ad4a394c68a2988a77a0/llama_cpp_python-0.1.38.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.38
2024-07-02T16:50:39,068 Found link https://files.pythonhosted.org/packages/5a/41/955ac2e592949ca95a29efc5f544afcbc9ca3fc5484cb0272837d98c6b5a/llama_cpp_python-0.1.39.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.39
2024-07-02T16:50:39,070 Found link https://files.pythonhosted.org/packages/fc/2c/62c5ce16f88348f928320565cf6c0dfe8220a03615bff14e47e4f3b4e439/llama_cpp_python-0.1.40.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.40
2024-07-02T16:50:39,070 Found link https://files.pythonhosted.org/packages/d1/fe/852d447828bdcdfe1c8aa88061517b5de9e5c12389dd852076d5c913936a/llama_cpp_python-0.1.41.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.41
2024-07-02T16:50:39,071 Found link https://files.pythonhosted.org/packages/8d/bb/48129f3696fcc125fac1c91a5a6df5ab472e561d74ed5818e6fca748a432/llama_cpp_python-0.1.42.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.42
2024-07-02T16:50:39,072 Found link https://files.pythonhosted.org/packages/eb/43/ac841dc1a3f5f618e4546ce69fe7da0d976cb141c92b8d1f735f2baf0b85/llama_cpp_python-0.1.43.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.43
2024-07-02T16:50:39,074 Found link https://files.pythonhosted.org/packages/29/69/b73ae145d6f40683656f537b8526ca27e8348c7ff9af9c014a6a723fda5f/llama_cpp_python-0.1.44.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.44
2024-07-02T16:50:39,075 Found link https://files.pythonhosted.org/packages/62/b7/299b9d537037a95d4433498c73c1a8024de230a26d0c94b3e889364038d4/llama_cpp_python-0.1.45.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.45
2024-07-02T16:50:39,076 Found link https://files.pythonhosted.org/packages/c2/12/450986c9506525096cc77fcb6584ee02ec7d0017df0d34e6c79b9dba5a58/llama_cpp_python-0.1.46.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.46
2024-07-02T16:50:39,076 Found link https://files.pythonhosted.org/packages/28/95/11fcced0778cb9b82a81cd61c93760a379527ef13d90a66254fdc2e982df/llama_cpp_python-0.1.47.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.47
2024-07-02T16:50:39,077 Found link https://files.pythonhosted.org/packages/35/04/63f43ff24bd8948abbe2d7c9c3e3d235c0e7501ec8b1e72d01676051f75d/llama_cpp_python-0.1.48.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.48
2024-07-02T16:50:39,078 Found link https://files.pythonhosted.org/packages/1b/60/be610e7e95eb53e949ac74024b30d5fa763244928b07a16815d16643b7ab/llama_cpp_python-0.1.49.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.49
2024-07-02T16:50:39,079 Found link https://files.pythonhosted.org/packages/82/2c/9614ef76422168fde5326095559f271a22b1926185add8ae739901e113b9/llama_cpp_python-0.1.50.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.50
2024-07-02T16:50:39,080 Found link https://files.pythonhosted.org/packages/f9/65/78748102cca92fb148e111c41827433ecc2cb79eed9de0a72a4d7a4361c0/llama_cpp_python-0.1.51.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.51
2024-07-02T16:50:39,081 Found link https://files.pythonhosted.org/packages/87/cb/21c00f6f5b3a680671cb9c7e7ec5e07a6c03df70e28cd54f6197744c1f12/llama_cpp_python-0.1.52.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.52
2024-07-02T16:50:39,082 Found link https://files.pythonhosted.org/packages/d6/8d/d1700e37bd9b8965154e12008620e3bd3ed7ed585ad86650294074577629/llama_cpp_python-0.1.53.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.53
2024-07-02T16:50:39,083 Found link https://files.pythonhosted.org/packages/24/a7/e2904574d326e24338aab2e5fd618f007ef8b51c2a29618791f9c24269e2/llama_cpp_python-0.1.54.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.54
2024-07-02T16:50:39,084 Found link https://files.pythonhosted.org/packages/b2/9b/15a40971444775d7aa5aee934991fa97eee285ae3a77c98c70c382f2ed60/llama_cpp_python-0.1.55.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.55
2024-07-02T16:50:39,085 Found link https://files.pythonhosted.org/packages/2e/d7/36eccf10a611e2f3040cec775b9734ea51cf9938b2d911e30cbf71dd321b/llama_cpp_python-0.1.56.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.56
2024-07-02T16:50:39,086 Found link https://files.pythonhosted.org/packages/4d/e5/b337c9e7330695eb5efa2329d25b2d964fe10364429698c89140729ebaaf/llama_cpp_python-0.1.57.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.57
2024-07-02T16:50:39,087 Found link https://files.pythonhosted.org/packages/91/0f/8156d3f1b6bbbea68f28df5e325a2863ed736362b0f93f7936acba424e70/llama_cpp_python-0.1.59.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.59
2024-07-02T16:50:39,088 Found link https://files.pythonhosted.org/packages/e9/18/9531e94f7a4cd402cf200a9e6257fc08d162b8a8d57adf6f4049f60ba05b/llama_cpp_python-0.1.61.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.61
2024-07-02T16:50:39,089 Found link https://files.pythonhosted.org/packages/cc/ed/fe9bbe6c4f2156fc5e887d9e669872bc1722f80a2932a78a8166d7a82877/llama_cpp_python-0.1.62.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.62
2024-07-02T16:50:39,090 Found link https://files.pythonhosted.org/packages/a8/01/7e39377ad0d20d2379b01b7019aad9b3595ea21ced1705ccc49c78936088/llama_cpp_python-0.1.63.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.63
2024-07-02T16:50:39,091 Found link https://files.pythonhosted.org/packages/ad/c1/4083e90a0b31e1abb72d3f00f8d1403bdc9384301e1e370d0915f73519f5/llama_cpp_python-0.1.64.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.64
2024-07-02T16:50:39,092 Found link https://files.pythonhosted.org/packages/84/7d/a659b65132db354147654bf2b6b2c8820b25aa10833b4849ec6b66e69117/llama_cpp_python-0.1.65.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.65
2024-07-02T16:50:39,093 Found link https://files.pythonhosted.org/packages/59/43/6dfbaed1f70ef013279b03e436b8f58f9f2ab0835e04034927fc31bb8fc9/llama_cpp_python-0.1.66.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.66
2024-07-02T16:50:39,094 Found link https://files.pythonhosted.org/packages/96/79/3dbc78c1a6e14d088673d21549a736aa27ca69ef1734541a07c36f349cf7/llama_cpp_python-0.1.67.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.67
2024-07-02T16:50:39,094 Found link https://files.pythonhosted.org/packages/87/0a/f99cdd3befe25e414f9a758fb89bf70ca5278d68430af140391fc262bb55/llama_cpp_python-0.1.68.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.68
2024-07-02T16:50:39,095 Found link https://files.pythonhosted.org/packages/e6/a2/86200ff91d374311fbb704079d95927edacfc47592ae34c3c48a47863eea/llama_cpp_python-0.1.69.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.69
2024-07-02T16:50:39,096 Found link https://files.pythonhosted.org/packages/78/60/5cfb3842ef25db4ee1555dc2a70b99c569ad27c0438e7d9704c1672828b8/llama_cpp_python-0.1.70.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.70
2024-07-02T16:50:39,097 Found link https://files.pythonhosted.org/packages/4b/d1/24602670353e3f08f07c9bf36dca5ef5466ac3c0d27b5d5be0685e8032a7/llama_cpp_python-0.1.71.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.71
2024-07-02T16:50:39,098 Found link https://files.pythonhosted.org/packages/7f/59/b17486fa68bd3bce14fad89e049ea2700cf9ca36e7710d9380e2facbe182/llama_cpp_python-0.1.72.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.72
2024-07-02T16:50:39,099 Found link https://files.pythonhosted.org/packages/c5/c5/3bcee8d4fa2a3faef625dd1223e945ab15aa7d2f180158f30762eaa597b1/llama_cpp_python-0.1.73.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.73
2024-07-02T16:50:39,100 Found link https://files.pythonhosted.org/packages/73/09/99e6bf5d56e96a15a67628b15b705afbddf27279e6738018c4d7866d05c7/llama_cpp_python-0.1.74.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.74
2024-07-02T16:50:39,101 Found link https://files.pythonhosted.org/packages/b3/61/85c4defcdd3157004611feff6c95e8b4776d8671ca754ff2ed91fbc85154/llama_cpp_python-0.1.76.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.76
2024-07-02T16:50:39,102 Found link https://files.pythonhosted.org/packages/28/57/6db0db4582e31ced78487c6f28a4ee127fe38a22a85c573c39c7e5a03e2f/llama_cpp_python-0.1.77.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.77
2024-07-02T16:50:39,103 Found link https://files.pythonhosted.org/packages/dd/98/3d2382ac0b462b175519de360c57d514fbe5d33a5e67e42e82dc03bfb0f9/llama_cpp_python-0.1.78.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.78
2024-07-02T16:50:39,105 Found link https://files.pythonhosted.org/packages/f2/85/39c90a6b2306fbf91fc9dd2346bb4599c57e5c29aec15981fe5d662cef34/llama_cpp_python-0.1.79.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.79
2024-07-02T16:50:39,106 Found link https://files.pythonhosted.org/packages/af/c7/e3cee337dc44024bece8faf7683e40d015bae55b0dfaddd1a97ab4d1b432/llama_cpp_python-0.1.80.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.80
2024-07-02T16:50:39,106 Found link https://files.pythonhosted.org/packages/ae/92/c10ee59095bc1336edbecc8f6eea98d9d2f4df1d944b9df9b4484ea268ae/llama_cpp_python-0.1.81.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.81
2024-07-02T16:50:39,107 Found link https://files.pythonhosted.org/packages/81/b5/b63dbe0b799b9063208543a84b0e99b622f8a8d19de9564fc1d2877e1c9e/llama_cpp_python-0.1.82.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.82
2024-07-02T16:50:39,108 Found link https://files.pythonhosted.org/packages/6e/c7/651fa47b77d2189a46b00caa44627d17476bf41bcbeb0b72906295d6de79/llama_cpp_python-0.1.83.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.83
2024-07-02T16:50:39,109 Found link https://files.pythonhosted.org/packages/39/f2/a64d37bdaecb2ad66cfc2faab95201acf66b537affbd042656b27dc135f4/llama_cpp_python-0.1.84.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.84
2024-07-02T16:50:39,110 Found link https://files.pythonhosted.org/packages/ed/f2/2fb3b4c3886de5d1bcfbd258932159e374d1d9a0d52d6850805e26cc9fc2/llama_cpp_python-0.1.85.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.7), version: 0.1.85
2024-07-02T16:50:39,111 Found link https://files.pythonhosted.org/packages/5b/a6/a49b40d4c0ac9aa703bf11e5783d38beb3924a6ba5165a393518646894c9/llama_cpp_python-0.2.0.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.0
2024-07-02T16:50:39,112 Found link https://files.pythonhosted.org/packages/e4/3a/7c65dbed3913086ec0a84549acdd4002ef4e1ef9fbb1d31596a4c1fd64a3/llama_cpp_python-0.2.1.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.1
2024-07-02T16:50:39,113 Found link https://files.pythonhosted.org/packages/d0/28/ef9e91c4ed9e96a2a0bcd6a8327f2d039745b59946eccc6ccb1a9ee2dedf/llama_cpp_python-0.2.2.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.2
2024-07-02T16:50:39,114 Found link https://files.pythonhosted.org/packages/99/e6/19d9c978dc634d91b05416c8fc502171af6b27a20683669048afa5738b74/llama_cpp_python-0.2.3.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.3
2024-07-02T16:50:39,115 Found link https://files.pythonhosted.org/packages/7b/26/be5c224560ccbe64592afbdbe0710ae5b0a8413e1416cc8c2c0b093b713b/llama_cpp_python-0.2.4.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.4
2024-07-02T16:50:39,116 Found link https://files.pythonhosted.org/packages/04/9d/1f8fe06199b5fda5a691f23ef5622b32d5fe717da748f4fc2c9cbde60223/llama_cpp_python-0.2.5.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.5
2024-07-02T16:50:39,117 Found link https://files.pythonhosted.org/packages/ff/ca/8c45e45abb21069f6274efe3f1cf0aca29a1fd089fec6acf924ee4a67c46/llama_cpp_python-0.2.6.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.6
2024-07-02T16:50:39,118 Found link https://files.pythonhosted.org/packages/b1/78/bd5e6653102ea16ce53a044cec606f257811da99c9c2a760af6a93cdfef3/llama_cpp_python-0.2.7.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.7
2024-07-02T16:50:39,119 Found link https://files.pythonhosted.org/packages/6d/60/edbd982673a71c6c27fa6818914ad61c6171d165de4e777d489539f1d959/llama_cpp_python-0.2.8.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.8
2024-07-02T16:50:39,120 Found link https://files.pythonhosted.org/packages/98/2e/357d936ff7418591c56a27b9472e2b3581bd9eeb90c4221580fae5e00588/llama_cpp_python-0.2.9.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.9
2024-07-02T16:50:39,121 Found link https://files.pythonhosted.org/packages/d4/a2/ff96c80f91d7d534a6b65517247c09680b1bbf064d6388feda9aac3201dd/llama_cpp_python-0.2.10.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.10
2024-07-02T16:50:39,122 Found link https://files.pythonhosted.org/packages/5b/b9/1ea446f1dcccb13313ea1e651c73bd5cc4db2aabf6cae1894064bddf1fc4/llama_cpp_python-0.2.11.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.11
2024-07-02T16:50:39,123 Found link https://files.pythonhosted.org/packages/11/35/0185e28cfcdb59ab17e09a6cc6e19c7271db236ee1c9d41143a082b463b7/llama_cpp_python-0.2.12.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.12
2024-07-02T16:50:39,124 Found link https://files.pythonhosted.org/packages/da/58/55a26595009d76237273b340d718e04d9a33c5afd440e45552f45a16b1d9/llama_cpp_python-0.2.13.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.13
2024-07-02T16:50:39,125 Found link https://files.pythonhosted.org/packages/82/2c/e742d611024256b5540380e7a62cd1fdc3cc1b47f5d2b86610f545804acd/llama_cpp_python-0.2.14.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.14
2024-07-02T16:50:39,125 Found link https://files.pythonhosted.org/packages/0c/e9/0d48a445430bed484791f76a4ab1d7950e57468127a3ee6a6ec494f46ae5/llama_cpp_python-0.2.15.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.15
2024-07-02T16:50:39,126 Found link https://files.pythonhosted.org/packages/a8/3e/b0bd26d0d0d0dd9187a6e4e46c2744c1d7d52cc2834b35db61776af00219/llama_cpp_python-0.2.16.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.16
2024-07-02T16:50:39,128 Found link https://files.pythonhosted.org/packages/d1/2c/e75e2e5b08b805d23066f1c1f8dbb1777a5bd3b43f057d16d4b2634d9ae1/llama_cpp_python-0.2.17.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.17
2024-07-02T16:50:39,129 Found link https://files.pythonhosted.org/packages/1b/be/3ce85cdf2f3b7c035ca52e0158b98d244d4ce40a51908b22e0b45c3ef75f/llama_cpp_python-0.2.18.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.18
2024-07-02T16:50:39,130 Found link https://files.pythonhosted.org/packages/9d/1a/f74ce61893791530a9af61fe8925bd569d8fb087545dc1973d617c03ce11/llama_cpp_python-0.2.19.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.19
2024-07-02T16:50:39,131 Found link https://files.pythonhosted.org/packages/f0/6a/3e161b68097fe2f9901e01dc7ec2afb4753699495004a37d2abdc3b1fd07/llama_cpp_python-0.2.20.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.20
2024-07-02T16:50:39,132 Found link https://files.pythonhosted.org/packages/15/7a/49906adb90113f628c1f07dc746ca0978b8aa99a8f7325a8d961ce2a1919/llama_cpp_python-0.2.22.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.22
2024-07-02T16:50:39,133 Found link https://files.pythonhosted.org/packages/9b/30/fb7cd2d9a395d64f39b25eb36ba86163fd5bbb3c1427b9f2381b7d798d3a/llama_cpp_python-0.2.23.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.23
2024-07-02T16:50:39,133 Found link https://files.pythonhosted.org/packages/fe/fd/498415767be24e802135c409922c0072947adc5d73ea85ce6c98c42f2e63/llama_cpp_python-0.2.24.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.24
2024-07-02T16:50:39,135 Found link https://files.pythonhosted.org/packages/f7/3f/e21c6af55661e7499133245ab622871e375b716af5a96d83770f2ad6d602/llama_cpp_python-0.2.25.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.25
2024-07-02T16:50:39,136 Found link https://files.pythonhosted.org/packages/ce/64/16a6bbae31c24d07d1ef6f488b81d13e0eb009147f583d9047371216b7a0/llama_cpp_python-0.2.26.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.26
2024-07-02T16:50:39,137 Found link https://files.pythonhosted.org/packages/a9/83/e3b7405f36b2f3dd4ae76c32e9331232c5692078deda7f84c1f0ede071ab/llama_cpp_python-0.2.27.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.27
2024-07-02T16:50:39,138 Found link https://files.pythonhosted.org/packages/1b/7c/ebe6be46264fad03bf3490fdd48d03608c5e5f10656ffc0155f23b7872a9/llama_cpp_python-0.2.28.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.28
2024-07-02T16:50:39,138 Found link https://files.pythonhosted.org/packages/12/b6/91ec62d6b2b9648f013d77350446e0351b5685bd89129f188dae60157032/llama_cpp_python-0.2.29.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.29
2024-07-02T16:50:39,139 Found link https://files.pythonhosted.org/packages/04/fb/13c99d504497ab63833600f8ae2196e28c04ad2a1cb43987cc9b51dc0a56/llama_cpp_python-0.2.30.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.30
2024-07-02T16:50:39,140 Found link https://files.pythonhosted.org/packages/a1/c8/7831d0908b23670112663913b1789a7adb47dc70e28318ee889afc7fc3be/llama_cpp_python-0.2.31.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.31
2024-07-02T16:50:39,141 Found link https://files.pythonhosted.org/packages/80/65/01fd26598cdd3cd09b6ce006cca2290bb762a4cc9f76e1a2c9c5a00b8cff/llama_cpp_python-0.2.32.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.32
2024-07-02T16:50:39,142 Found link https://files.pythonhosted.org/packages/d4/5e/c544cd520169f55e6cad63d3b8dec9c4e47326b1cb4095a91dce942be1a7/llama_cpp_python-0.2.33.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.33
2024-07-02T16:50:39,143 Found link https://files.pythonhosted.org/packages/78/5f/d46a72081d6e0e77e44abf092b11517267e4d290a3f20cf3b9a9faab7705/llama_cpp_python-0.2.34.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.34
2024-07-02T16:50:39,144 Found link https://files.pythonhosted.org/packages/45/3e/c5eb7a5a2689c15657beb08d0c6915cc61a9a20311ff00a567fc7a70a530/llama_cpp_python-0.2.35.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.35
2024-07-02T16:50:39,145 Found link https://files.pythonhosted.org/packages/6a/25/02e865aee5472e28ec65ee0994ed9fce179ee106b41a9783e7e1816c557a/llama_cpp_python-0.2.36.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.36
2024-07-02T16:50:39,146 Found link https://files.pythonhosted.org/packages/ee/82/ce00de6b3b2adde8d59791ec986992b4e736da592cfafb22ccbdac14a049/llama_cpp_python-0.2.37.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.37
2024-07-02T16:50:39,147 Found link https://files.pythonhosted.org/packages/90/41/7774fb44546685c88193629f95e20adad3a3078a0bdb9aeacb174a6ee9ca/llama_cpp_python-0.2.38.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.38
2024-07-02T16:50:39,148 Found link https://files.pythonhosted.org/packages/af/a6/6b836876620823551650db19d217118b9ef0983a936aa7895ed5d05df9c0/llama_cpp_python-0.2.39.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.39
2024-07-02T16:50:39,149 Found link https://files.pythonhosted.org/packages/1a/d2/dbf69d882517a534c5640e7b7f1cca360882cbd53c8c5c25ff0a7a854e07/llama_cpp_python-0.2.40.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.40
2024-07-02T16:50:39,151 Found link https://files.pythonhosted.org/packages/35/73/b2abe489ae7a7fbe096266457a00a8f801b83c6929c9ee7a2fd0c43baff0/llama_cpp_python-0.2.41.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.41
2024-07-02T16:50:39,151 Found link https://files.pythonhosted.org/packages/71/71/d5acd94964c599b348e81714aac9e75a578f51d224ac0343e27e6d9c38fc/llama_cpp_python-0.2.42.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.42
2024-07-02T16:50:39,152 Found link https://files.pythonhosted.org/packages/2c/07/b2bbd5e826d5910be3fd96eb639ba717349b3c2b0cc1360b13c63c50338a/llama_cpp_python-0.2.43.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.43
2024-07-02T16:50:39,153 Found link https://files.pythonhosted.org/packages/a3/1d/fc000e07680831b074446f059611b02844fd9d949d70146b1ae7b2df9ccc/llama_cpp_python-0.2.44.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.44
2024-07-02T16:50:39,154 Found link https://files.pythonhosted.org/packages/7a/cb/3e958c169fabb2df7ffaeb170a5d2b2cc8370ff31621e23b778ebcd8ab24/llama_cpp_python-0.2.45.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.45
2024-07-02T16:50:39,155 Found link https://files.pythonhosted.org/packages/25/b0/1df28f6ec4d14432dddc56e04bb05c0e78c40bc5611c1a54132fe2244d1a/llama_cpp_python-0.2.46.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.46
2024-07-02T16:50:39,156 Found link https://files.pythonhosted.org/packages/b9/af/30371683d30a0485080448f0382ceec2272d1bce1a711904bb6a3cf3b38b/llama_cpp_python-0.2.47.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.47
2024-07-02T16:50:39,156 Found link https://files.pythonhosted.org/packages/2e/cf/ab532896aa3837755dca592962552ae5c9114b71590bee2d959c57e97710/llama_cpp_python-0.2.48.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.48
2024-07-02T16:50:39,158 Found link https://files.pythonhosted.org/packages/21/e9/71ceed04be64ca9ae36214ba94a8d271817ad83196af003db6435b9ca333/llama_cpp_python-0.2.49.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.49
2024-07-02T16:50:39,159 Found link https://files.pythonhosted.org/packages/e8/ff/492c54a6dde08db51fc4ae0b4c9f3e4c7bc5036eeab223ebdd51bc34a146/llama_cpp_python-0.2.50.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.50
2024-07-02T16:50:39,160 Found link https://files.pythonhosted.org/packages/9d/3a/5476da33c736830b73393f05851c8eccea6f5a54ec2a0e35fc1297d1b219/llama_cpp_python-0.2.51.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.51
2024-07-02T16:50:39,160 Found link https://files.pythonhosted.org/packages/4c/09/a1fefdac604d70b211918a0dbe47d65573368db8988a5fa4f0777e950f12/llama_cpp_python-0.2.52.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.52
2024-07-02T16:50:39,161 Found link https://files.pythonhosted.org/packages/61/a1/6a4f3df444ddd3903d07d35f3ef7a2a2f2711ced64944fd5ee3f0ed1ef39/llama_cpp_python-0.2.53.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.53
2024-07-02T16:50:39,162 Found link https://files.pythonhosted.org/packages/1e/e2/5227d3fdb81aa6d3db68f240a2f5a462f229ebac7535087f5040d253fca4/llama_cpp_python-0.2.54.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.54
2024-07-02T16:50:39,163 Found link https://files.pythonhosted.org/packages/73/18/7154fde7dfa9218f7f72784865d76cbfe2553adce0c35cfc8a9cbcd635b3/llama_cpp_python-0.2.55.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.55
2024-07-02T16:50:39,164 Found link https://files.pythonhosted.org/packages/ca/e7/a96c0405c73e9b86fde675c30456d231e4a6bc46a69642587318856bf2d4/llama_cpp_python-0.2.56.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.56
2024-07-02T16:50:39,165 Found link https://files.pythonhosted.org/packages/8e/ae/551f28037d9a49693f7b09b0e22912be4e839b1af5f4ae6ab721162a37a4/llama_cpp_python-0.2.57.tar.gz (from
https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.57 2024-07-02T16:50:39,166 Found link https://files.pythonhosted.org/packages/8e/8c/812402ef32432fbe9da1817b5f58e8ec2d8839c741fc374a8a9d5d78e300/llama_cpp_python-0.2.58.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.58 2024-07-02T16:50:39,167 Found link https://files.pythonhosted.org/packages/c4/b3/3e22b81dc89371aff82ed94a95208c57214b02dde9669546c7122fb28338/llama_cpp_python-0.2.59.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.59 2024-07-02T16:50:39,168 Found link https://files.pythonhosted.org/packages/fd/c7/d0fd42f15abca13448f7c6b8a0a1e82fb3ee1252fe589413805cc6219edb/llama_cpp_python-0.2.60.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.60 2024-07-02T16:50:39,169 Found link https://files.pythonhosted.org/packages/47/35/e9148ee3edbfabc151f84fec765703e5653ca00c6edb90ddb8ac958db620/llama_cpp_python-0.2.61.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.61 2024-07-02T16:50:39,170 Found link https://files.pythonhosted.org/packages/29/e6/bdf6e894b12fbf7ad88bf2aa77fdc4135be773910cd59944b0b254170793/llama_cpp_python-0.2.62.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.62 2024-07-02T16:50:39,171 Found link https://files.pythonhosted.org/packages/d9/38/1f7328b3b9f156246a91ed9a5902c64d8c7dd877b97977afdac95815bd9e/llama_cpp_python-0.2.63.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.63 2024-07-02T16:50:39,172 Found link https://files.pythonhosted.org/packages/6e/c8/9903231ca2f9279b1469b970e846f15aa0c287a96c5946148b65137b437c/llama_cpp_python-0.2.64.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.64 2024-07-02T16:50:39,173 Found link 
https://files.pythonhosted.org/packages/64/14/247b19217d7bbfca5aa0e7bae78db857eaeb779250919c38afb7efb509ad/llama_cpp_python-0.2.65.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.65 2024-07-02T16:50:39,174 Found link https://files.pythonhosted.org/packages/3f/67/58628b196fb39d055da7193203fc93c56212715a7c2ef4f428e64a8c07d4/llama_cpp_python-0.2.66.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.66 2024-07-02T16:50:39,175 Found link https://files.pythonhosted.org/packages/e2/c1/df80fbbaa2f91928b61336bac48057837d2ec36e30ce704cfe96d784cd55/llama_cpp_python-0.2.67.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.67 2024-07-02T16:50:39,176 Found link https://files.pythonhosted.org/packages/b9/0d/e44b55c3dd60daa566ac7bbc9b21a943926773aac8fad9bbb03b5ba38be0/llama_cpp_python-0.2.68.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.68 2024-07-02T16:50:39,177 Found link https://files.pythonhosted.org/packages/f5/ff/da2ef42c64e7716fa49c76933a8e551fba20dafec87f7040ec53d01d4f2d/llama_cpp_python-0.2.69.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.69 2024-07-02T16:50:39,178 Found link https://files.pythonhosted.org/packages/0f/6a/382c0bf11983cde1d6a6b5c79b5da3426792198ce3397eeecc042a8d559b/llama_cpp_python-0.2.70.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.70 2024-07-02T16:50:39,179 Found link https://files.pythonhosted.org/packages/c9/05/4623a2861963a42f66358fbdd38e3c202ff97784ec1d8c1fa7011e24064d/llama_cpp_python-0.2.71.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.71 2024-07-02T16:50:39,180 Found link https://files.pythonhosted.org/packages/71/4b/2a2d4e69d5e4600655713678ac95d19c5e882623721680921a2eda0921ce/llama_cpp_python-0.2.72.tar.gz (from 
https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.72 2024-07-02T16:50:39,181 Found link https://files.pythonhosted.org/packages/c0/c6/986827c26b1c746be0d5ac0ba545a5e76385f28be628ea4238cdb9e1cc73/llama_cpp_python-0.2.73.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.73 2024-07-02T16:50:39,182 Found link https://files.pythonhosted.org/packages/fe/7a/9c22611417bd8087bd709d51726af950b9587790903d0fa6f5b894e024c8/llama_cpp_python-0.2.74.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.74 2024-07-02T16:50:39,183 Found link https://files.pythonhosted.org/packages/d8/71/ea384e5dfad3875bbc936b56a39f6eb9216d84cbd637d07dd45a00815d9a/llama_cpp_python-0.2.75.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.75 2024-07-02T16:50:39,184 Found link https://files.pythonhosted.org/packages/39/bd/b115d123496f05ba7b6de938abaa0e83373b8c8706200ccb9dbb2ab8918a/llama_cpp_python-0.2.76.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.76 2024-07-02T16:50:39,184 Found link https://files.pythonhosted.org/packages/9e/60/227e89f9fe92856e8009a2246b82561e9f4b9bf58d8ac755e19bf5da6ac9/llama_cpp_python-0.2.77.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.77 2024-07-02T16:50:39,185 Found link https://files.pythonhosted.org/packages/f0/9a/d8f8075fa25fd5774cc4fb40059e63517871ff3c676a50c66151bb071b96/llama_cpp_python-0.2.78.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.78 2024-07-02T16:50:39,186 Found link https://files.pythonhosted.org/packages/b6/f2/cb93a90e0d4fdb9eeb3a1d20bdc22b3ff59e7ab303a9634b8a6bef82d3cb/llama_cpp_python-0.2.79.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.79 2024-07-02T16:50:39,187 Found link 
https://files.pythonhosted.org/packages/cf/a0/6db5f7db78eb63019d5fa81047998cb2ecff27b8bbf4ba70a3ca4a3c3053/llama_cpp_python-0.2.80.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.80 2024-07-02T16:50:39,188 Found link https://files.pythonhosted.org/packages/18/80/20834e766968ce923de7f24d57cdf06e21d15b569ff20cf98c46fc721f72/llama_cpp_python-0.2.81.tar.gz (from https://pypi.org/simple/llama-cpp-python/) (requires-python:>=3.8), version: 0.2.81 2024-07-02T16:50:39,189 Fetching project page and analyzing links: https://www.piwheels.org/simple/llama-cpp-python/ 2024-07-02T16:50:39,190 Getting page https://www.piwheels.org/simple/llama-cpp-python/ 2024-07-02T16:50:39,191 Found index url https://www.piwheels.org/simple/ 2024-07-02T16:50:39,345 Fetched page https://www.piwheels.org/simple/llama-cpp-python/ as text/html 2024-07-02T16:50:39,395 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.79-cp311-cp311-linux_armv6l.whl#sha256=a4c03906bc98d10edfa23c09d33ff485919ca8827f373caa5719c9f01fb88517 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,396 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.79-cp311-cp311-linux_armv7l.whl#sha256=a4c03906bc98d10edfa23c09d33ff485919ca8827f373caa5719c9f01fb88517 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,397 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.77-cp311-cp311-linux_armv6l.whl#sha256=ff57feb5d9893e25afaec9c7bd9c7dec61adec5dd7146fca836aa16527ef4df2 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,398 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.77-cp311-cp311-linux_armv7l.whl#sha256=ff57feb5d9893e25afaec9c7bd9c7dec61adec5dd7146fca836aa16527ef4df2 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,398 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.76-cp311-cp311-linux_armv6l.whl#sha256=73c27b12e346ab45224c1a85b0b211e64c461a575d75cd701dc8a5a7eaf7713d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,399 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.76-cp311-cp311-linux_armv7l.whl#sha256=73c27b12e346ab45224c1a85b0b211e64c461a575d75cd701dc8a5a7eaf7713d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,399 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.75-cp311-cp311-linux_armv6l.whl#sha256=01916881e7fcaca92e53bd1c4020739f34c43fd6c9ad3e8b1c2f9c79629b0a22 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,400 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.75-cp311-cp311-linux_armv7l.whl#sha256=01916881e7fcaca92e53bd1c4020739f34c43fd6c9ad3e8b1c2f9c79629b0a22 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,400 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.74-cp311-cp311-linux_armv6l.whl#sha256=02c7b6c65890c4b79418a0ffe8d88be144ea9f3165735e639142c623967607f3 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,401 Skipping link: No binaries 
permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.74-cp311-cp311-linux_armv7l.whl#sha256=02c7b6c65890c4b79418a0ffe8d88be144ea9f3165735e639142c623967607f3 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,402 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.73-cp311-cp311-linux_armv6l.whl#sha256=d3ab0a4bd2ce9bc0f22182b0952379cc5f94cda3a91e8eff972c3a14de5fcc5e (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,402 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.73-cp311-cp311-linux_armv7l.whl#sha256=d3ab0a4bd2ce9bc0f22182b0952379cc5f94cda3a91e8eff972c3a14de5fcc5e (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,403 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.73-cp39-cp39-linux_armv6l.whl#sha256=eb23227c954e78980ce6399954d56a2c2833ef282ceb6b55b3540b23f5e829b4 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,404 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.73-cp39-cp39-linux_armv7l.whl#sha256=eb23227c954e78980ce6399954d56a2c2833ef282ceb6b55b3540b23f5e829b4 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,404 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.72-cp311-cp311-linux_armv6l.whl#sha256=b0ed5eb0cddcd326056e15c1cc19afb5d71fc34208786314c39bb2410df4a798 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,405 
Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.72-cp311-cp311-linux_armv7l.whl#sha256=b0ed5eb0cddcd326056e15c1cc19afb5d71fc34208786314c39bb2410df4a798 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,406 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.72-cp39-cp39-linux_armv6l.whl#sha256=c29a4ec9df944ac921c69ba257aa5dbd6b324604bec34e6cd003c3c40dd2ec80 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,406 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.72-cp39-cp39-linux_armv7l.whl#sha256=c29a4ec9df944ac921c69ba257aa5dbd6b324604bec34e6cd003c3c40dd2ec80 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,407 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.71-cp311-cp311-linux_armv6l.whl#sha256=70f47939a9de849738467b932d3bc512b356dccd3be05293e13a73979c78eab1 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,407 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.71-cp311-cp311-linux_armv7l.whl#sha256=70f47939a9de849738467b932d3bc512b356dccd3be05293e13a73979c78eab1 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,408 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.71-cp39-cp39-linux_armv6l.whl#sha256=4413651ffcce9f2afb629f0aac47b2b1e6a3ace3607678c574da5a71497369da (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 
2024-07-02T16:50:39,409 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.71-cp39-cp39-linux_armv7l.whl#sha256=4413651ffcce9f2afb629f0aac47b2b1e6a3ace3607678c574da5a71497369da (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,409 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.70-cp311-cp311-linux_armv6l.whl#sha256=9b4d1110b1093b1ad081b6a009c9c8167c108c410355f25903f5d298703d692c (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,410 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.70-cp311-cp311-linux_armv7l.whl#sha256=9b4d1110b1093b1ad081b6a009c9c8167c108c410355f25903f5d298703d692c (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,411 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.70-cp39-cp39-linux_armv6l.whl#sha256=eb27b04cb9a08cd0052633f283406619f77c61e25d3d86647a8aad96b9a17276 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,411 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.70-cp39-cp39-linux_armv7l.whl#sha256=eb27b04cb9a08cd0052633f283406619f77c61e25d3d86647a8aad96b9a17276 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,412 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.69-cp311-cp311-linux_armv6l.whl#sha256=75c8a3f1ca931cf451b86b823d59300ae9c43e8732e06a1e4e30017404870578 (from https://www.piwheels.org/simple/llama-cpp-python/) 
(requires-python:>=3.8) 2024-07-02T16:50:39,413 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.69-cp311-cp311-linux_armv7l.whl#sha256=75c8a3f1ca931cf451b86b823d59300ae9c43e8732e06a1e4e30017404870578 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,413 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.69-cp39-cp39-linux_armv6l.whl#sha256=6938dbad4a9c3957ea261e6adc2a63e17cc7a976bffd42ecead424d7c49bfb1d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,414 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.69-cp39-cp39-linux_armv7l.whl#sha256=6938dbad4a9c3957ea261e6adc2a63e17cc7a976bffd42ecead424d7c49bfb1d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,414 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.68-cp311-cp311-linux_armv6l.whl#sha256=5f4bc83a9739aa49555b7800278ded57e1359a72a051db55ce7e820a17847407 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,415 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.68-cp311-cp311-linux_armv7l.whl#sha256=5f4bc83a9739aa49555b7800278ded57e1359a72a051db55ce7e820a17847407 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,415 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.68-cp39-cp39-linux_armv6l.whl#sha256=bcbda651ae29bdf8e52f891c77cf082ce3c44da8accfdc35d771adb7ce0423dd (from 
https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,416 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.68-cp39-cp39-linux_armv7l.whl#sha256=bcbda651ae29bdf8e52f891c77cf082ce3c44da8accfdc35d771adb7ce0423dd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,416 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.67-cp39-cp39-linux_armv6l.whl#sha256=0142f9f68cdfe82c584771b357fe812d3627102b74c630933ae272f302812ca4 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,417 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.67-cp39-cp39-linux_armv7l.whl#sha256=0142f9f68cdfe82c584771b357fe812d3627102b74c630933ae272f302812ca4 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,418 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.66-cp39-cp39-linux_armv6l.whl#sha256=7b5ea562be735d7d8bb5c37c1601b550fc975b26d7239de74b7286a885c965e7 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,418 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.66-cp39-cp39-linux_armv7l.whl#sha256=7b5ea562be735d7d8bb5c37c1601b550fc975b26d7239de74b7286a885c965e7 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,419 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.65-cp311-cp311-linux_armv6l.whl#sha256=6d91e653b3a4046a9549598ccf85d229a0c51d30baba03a2e60963e338d31c8a (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,420 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.65-cp311-cp311-linux_armv7l.whl#sha256=6d91e653b3a4046a9549598ccf85d229a0c51d30baba03a2e60963e338d31c8a (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,420 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.65-cp39-cp39-linux_armv6l.whl#sha256=d2347e62d54733f995c65fe6aa9857a228a6addf3c81cc58239bb949078ad593 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,421 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.65-cp39-cp39-linux_armv7l.whl#sha256=d2347e62d54733f995c65fe6aa9857a228a6addf3c81cc58239bb949078ad593 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,421 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.63-cp311-cp311-linux_armv6l.whl#sha256=1f023c52fc0eed18aae41c3d96bca7392549a41c7a2ac28d24545701f026cc2b (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,422 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.63-cp311-cp311-linux_armv7l.whl#sha256=1f023c52fc0eed18aae41c3d96bca7392549a41c7a2ac28d24545701f026cc2b (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,422 Skipping link: No binaries 
permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.63-cp39-cp39-linux_armv6l.whl#sha256=6e7c94cc181e497bdda80fea62726fb3a39aed554f69e07e46fa5aeab7a8b72a (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,423 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.63-cp39-cp39-linux_armv7l.whl#sha256=6e7c94cc181e497bdda80fea62726fb3a39aed554f69e07e46fa5aeab7a8b72a (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,424 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.62-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=dc56810ff7e6c880c5c7da9858cc6ed400730120d438b57871b41ccb9873b4d6 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,424 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.62-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=1f87bdb3757c75f74cc9d63852a2bbda4c84106bbe7b8920ade6c7c4893e90ab (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,425 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.61-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=1740adbc7023150610f0af94d90bb0ccd85ea6fbba8a8ec68cc42e2b8b2ee2da (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,426 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.61-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=27d20ba4a1b0e14b5c7b4e274e7a5e87795e463c64bfac26da64ceca1cdcce1f (from https://www.piwheels.org/simple/llama-cpp-python/) 
(requires-python:>=3.8) 2024-07-02T16:50:39,426 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.60-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=e9c9f62848e9ac11b4618b02b704bd3150068134b0ad2ed31c6357515e5fba0d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,427 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.60-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=d4877a38c0c01c6043a37f0303c2a47e7a4a89ae3a63bd730360a464ad3d4a5c (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,428 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.59-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=ba72e76ae2029debb3701f51d0d813b51147193d52f25e164e4fdb9c82f301c0 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,428 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.59-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=3bb0a0aacafdbb182f001af8d8d499ceb9935b8bcbec5411ee7600eba253d986 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,428 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.58-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=2a90b5232aa82a70b56e8dd3aec4238c7d78019d8bff0e26e5933ddbc699dbcc (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,429 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.58-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=4619ca5bfabd8edf60b1d9d8f7c181c57b2f41c82a743f5f7b4672877b97ff65 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,430 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.57-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=f06e210f2c44e3a9ccbae6f9b520bb2f64853cc7eefc71bdb2f5d692a2fef84e (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,430 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.57-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=72980d67009ba54744b8e7064ffaa96c7b3eec7a1e3f5880f30c84879163d7e6 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,431 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.50-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=ae6213e1a296eb7773de5ace4c9709bb0c6c26b569f698401f02ec8c0b1f70e1 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,432 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.50-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=e64328caf961eb1349ba103303b224160d9ec1d905b360b8776f05ef8a56548c (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,432 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.49-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=7ee779b157b2285faa8f687ddfc37a70add93387f65c6d369c6f55af00e7991e (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 
2024-07-02T16:50:39,433 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.49-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=bebac0c1a654b69ac3ad5b033f47aefdf9c5cffaf561177fafd1f9137cd6a109 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,433 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.48-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=a770ff0c49d7b89225f276e004c0365dd2e2a29aade3813f181bf849bcfae172 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,434 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.48-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=44c1145db5af2f0e64f7dfeabbccad1b8a8f3f8191ac6d772942bc9707617774 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,435 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.44-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=bbed4880ea24c9048cd079bf375e248fcb287c58b4da1fa55db699f50498fc47 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,435 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.44-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=77f7353d0548dfc077df72accec9e446eb41d08531135f33e912e777f3709b7d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,436 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.43-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=74bf2edb0d26028c31a8ddb255ca8cbf09754f32f11ec4ba92b3c38f28c49689 
(from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,436 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.43-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=902b3d68b16f3592a31bfab67f2743b6c880983d2388a86c794f1ff70dd0fbd5 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,437 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.42-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=0725254705f761711ae588c6ec374610b02c2be4fb42cef43c613f94b24e09fd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,437 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.42-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=eeb01d04e368a69d9e72b3f624bdb8e5ba1c7fd5bb05b867cb7db9aa2ecef712 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,438 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.41-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=605a1f8ad4a71ffaa08d9d81c061583f888c98daa92f90ae6702a0d132379ec5 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,439 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.41-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=b591c74c6abce1fb0130a7c5e1fcb00d995376bc5aa50bc57591cf3f0d78fcfd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,440 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.40-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=32e6eb85956ea95b1392b657da6eb22968aacb7fae0f1bf9d11c07cd4c2cef33 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,440 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.40-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=704761bc81d1deb8abe84f1a14ddc3cfc3d8d34ae64f74d4dfc6494552728619 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,441 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.39-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=150fa80b5bbf8b0d4e185d3f9dba6a0f955442d434c1aba5102d8813fff0e97b (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,442 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.39-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=48c95175d440fe2d7c0988c258ce439668b4c601e3fe03f7b1600fdec5f1c381 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,442 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.38-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=5f74ae2118cfc667cd8c6fae15341e0465f3aa5f607a4639c4cd20abcad4c0ef (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,443 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.38-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=5666d2bc6ca576a5b1db7ba9eb18f098f6dd45561e96ddc465ee68bb3242af7f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 
2024-07-02T16:50:39,443 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.37-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=52bf796d479f4e9df37e1828dabb987f8ce1d7c34bf968d86a08fac609b01d7d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,444 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.37-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=2c82f345afce240e7b7d385551145d1ce5ebab4461e943bb6024e4e9f9dffdf4 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,444 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.36-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=bb505332fcf1e70bfc1814f6159c487915a5df69dfab10907053a070e3b53449 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,445 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.36-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=eb2eabd471c1af5c77a8b7d69061e8f0dc3e56e10a5a6c4a77acd033f6183cd8 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,446 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.35-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=ab8f24a87f39bab7655d052963692ccd334b9eb78f2f2e9fb19d02616a6d647c (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,446 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.35-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=b49f9efa08a556a02732d0a22ed36e2e879ae2789ca9f10ec8ad9daf5c8b2f3f 
(from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,447 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.34-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=da6a7d8c431275720252bac217874d7eb9b8e97294e237c3053f01ba2659e611 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,448 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.34-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=b1389b56552f7b5a058bb447510394fe93e11aca6be95177dfdfd72ea58ef4d7 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,449 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.33-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=00eb7f4a087a39240cfc70aa2f1611a76ecc52c4c1d46e2b6fb8d88ea642bae3 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,449 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.33-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=96fff83d6f39dd9a71ea1da8da5bd4fe22561795a33d06b6cc01fdf0bff85ccf (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,450 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.32-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=7930ee195d390635aac4b7af0fd7cd490fc6f93e3922585e4cb156979b4e1660 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,451 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.32-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=78c4efbe665a8e60cc63cf23d37500de80fa88f33d62b4ee476a2957aecac4fa (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,451 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.31-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=fd6731ded8988f781e89c5a096a015772f9b4d8af17447ccb542030169c315c0 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,452 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.31-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=dd1044d26325193b7d26c03a05ba125774c659b32a2160bcb2763c1686aedab4 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,452 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.30-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=a58864e906f616d77145c2e9dd5cfc6e6db1ac219be004ff3f6b39c25a56a2a3 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,453 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.30-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=3cbff75b267c242ab801da2dc7e15a0fe32c029fa269e8628aff3469dade6e70 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,454 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.29-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=fc5f5cf3982532109d89e596be5659aece0c248a39a81bb5264fa0652d04d24e (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 
2024-07-02T16:50:39,454 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.29-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=04b881d5d2ce4e66256f858b4ac3ff037f96eb221d5bc558a73f4807bc2fd426 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,455 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.28-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=2c53b3e5679c7ee69fae7304e592e567dfd20adeaf161dada7a34fdb94b5fc5f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,455 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.28-cp39-cp39-linux_armv6l.whl#sha256=1ea67be8bf202834c230c5fabdc3b75728b45fb6eaee741d43fa64209daf86f3 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,456 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.28-cp39-cp39-linux_armv7l.whl#sha256=1ea67be8bf202834c230c5fabdc3b75728b45fb6eaee741d43fa64209daf86f3 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,457 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.27-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=02bd20ff69c48b49fedb4901900bc34e42031c1e155f14e077b61de5389a776d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,457 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.27-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=300e823a56d1ab95cfcbbca1a3dacc3217a9eadc208b203867b9f79e9ede9b89 (from 
https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,458 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.26-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=ddca4eda48f63369879ef1bf209013469154af220d4c87677cba833316d255fb (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,458 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.26-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=8af06b93145405d9da8ec41db07fac77f9b7295b3da925d96b7cc360d135f9cd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,459 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.25-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=01d603b372965f4d2c44a69560535cbd9e1927f3a914f47b4aa5015a8fe2358a (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,459 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.25-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=89c1f7bd5208ad77af971a36a94812302a08e26d5c68dc5823dd56144600736d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,460 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.24-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=a10093f712345a2248e8b1b3516ab1683da2d5ea888bc2879d3a2c60d5d7f303 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,461 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.24-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=8a7693c022ebbcfa411231c816b64816996dcc6841d7d67301b115af9187d8a9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,461 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.23-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=8b3ce6db762da6790eb010227b523953e11bb6e2c38110ccdf57ec80a8609bb6 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,462 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.23-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=88148b8856335f05b52be26b07bc639db37be9e5162ecacdc481c0f1b658dcf1 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,463 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.22-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=d63553eca926a129319ccbc2f586b1a1cd3b5eb4aca1f18180142b3e3a27f72d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,464 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.22-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=11403337141805f35ae8325c881e9ce3741f2578f4cdb45b724a307dd635f829 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,464 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.20-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=da14ce17a2476a706a8e8b7489a303536550d7c8cdff9db42cb4b56985c7688f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 
2024-07-02T16:50:39,465 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.20-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=5e05d20b69f7e652531141ef60200d9351118692a28d8b87a8a8a7d527928e9a (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,465 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.19-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=c1833281926198d9276c3c08ac7cb0f49630c164ce8f29bab9c41e00d55e721f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,466 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.19-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=dd16bdc23237ef0e4cc1c9c4c29f6624c9b510052a3dfdaba483957289ac48d5 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,466 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.18-cp311-cp311-manylinux_2_36_armv7l.whl#sha256=7f066f10c0560b76776560045941383e9b8627d7696362b387fd4e652db00dad (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,467 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.2.18-cp39-cp39-manylinux_2_31_armv7l.whl#sha256=885911c08b103762c507be6075b91de5ecb5b5422f913d7b9f844dcf3ab9b6ae (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.8) 2024-07-02T16:50:39,467 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp37-cp37m-linux_armv6l.whl#sha256=c46f12906971196ab3fa8250c23e5ae1f72581c00d910fadf491a710a97cb3d7 (from 
https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,468 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp37-cp37m-linux_armv7l.whl#sha256=c46f12906971196ab3fa8250c23e5ae1f72581c00d910fadf491a710a97cb3d7 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,469 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp39-cp39-linux_armv6l.whl#sha256=888f3796690ccb21c9fac07b1ff83afa7b56fedfaa70ce1568572f1b7fdb3f27 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,469 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp39-cp39-linux_armv7l.whl#sha256=888f3796690ccb21c9fac07b1ff83afa7b56fedfaa70ce1568572f1b7fdb3f27 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,470 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp311-cp311-linux_armv6l.whl#sha256=caf38ff85ab251e84b4f951438454931514bde01dc36643d034d340ed14736d9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,471 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.57-cp311-cp311-linux_armv7l.whl#sha256=caf38ff85ab251e84b4f951438454931514bde01dc36643d034d340ed14736d9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,471 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.56-cp37-cp37m-linux_armv6l.whl#sha256=6ca6e31293dbf909df09e8c0ff119a6706c3b279bbf05716bdd04e99b6ff1665 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,472 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.56-cp37-cp37m-linux_armv7l.whl#sha256=6ca6e31293dbf909df09e8c0ff119a6706c3b279bbf05716bdd04e99b6ff1665 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,472 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.56-cp39-cp39-linux_armv6l.whl#sha256=2e5b18a3b1b32ea7c1ec0205c6d65ab42b07e29da5bea6c6fc8d17cdd9ee22bd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,473 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.56-cp39-cp39-linux_armv7l.whl#sha256=2e5b18a3b1b32ea7c1ec0205c6d65ab42b07e29da5bea6c6fc8d17cdd9ee22bd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,473 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.56-cp311-cp311-linux_armv6l.whl#sha256=72b8f3e8d182491cda0e8e544ac083620ecd18e43787b3d9dfaf16175655fc7e (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,474 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.56-cp311-cp311-linux_armv7l.whl#sha256=72b8f3e8d182491cda0e8e544ac083620ecd18e43787b3d9dfaf16175655fc7e (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,474 Skipping link: No binaries permitted 
for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp39-cp39-linux_armv6l.whl#sha256=d9a4ec585cfc04a6b43e815fefdf6c493a08b569cace3fd7c9bbe4d2ebd97fc5 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,475 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp39-cp39-linux_armv7l.whl#sha256=d9a4ec585cfc04a6b43e815fefdf6c493a08b569cace3fd7c9bbe4d2ebd97fc5 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,476 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp37-cp37m-linux_armv6l.whl#sha256=edb85834fe2145fc906e933a5643471bdebf3f3e376675ab3e914785fd1ec21d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,476 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp37-cp37m-linux_armv7l.whl#sha256=edb85834fe2145fc906e933a5643471bdebf3f3e376675ab3e914785fd1ec21d (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,477 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp311-cp311-linux_armv6l.whl#sha256=6abbec187e6c40b192040ba4dee145ae69de4aaa65c3a350fe0cea85bf6aa197 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,478 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.55-cp311-cp311-linux_armv7l.whl#sha256=6abbec187e6c40b192040ba4dee145ae69de4aaa65c3a350fe0cea85bf6aa197 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,478 Skipping link: 
No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp37-cp37m-linux_armv6l.whl#sha256=221d6012bf80f402d83593047359ddb7c767e83147d2c8445d24e58466b050bc (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,479 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp37-cp37m-linux_armv7l.whl#sha256=221d6012bf80f402d83593047359ddb7c767e83147d2c8445d24e58466b050bc (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,479 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp39-cp39-linux_armv6l.whl#sha256=064c3e8c4f3dd76aef301720241909048b2b4da4b0d1564b0436693e6efd1ddd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,480 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp39-cp39-linux_armv7l.whl#sha256=064c3e8c4f3dd76aef301720241909048b2b4da4b0d1564b0436693e6efd1ddd (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,480 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp311-cp311-linux_armv6l.whl#sha256=2552020ab6570979cc92527cce3acf131f181b466944034d0ae4bc4674934989 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,481 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.54-cp311-cp311-linux_armv7l.whl#sha256=2552020ab6570979cc92527cce3acf131f181b466944034d0ae4bc4674934989 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 
2024-07-02T16:50:39,481 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp37-cp37m-linux_armv6l.whl#sha256=b177a40248c14829c96942ccdc570d96cf86f94d2ccd1fba2440ca7f496432b1 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,482 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp37-cp37m-linux_armv7l.whl#sha256=b177a40248c14829c96942ccdc570d96cf86f94d2ccd1fba2440ca7f496432b1 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,483 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp39-cp39-linux_armv6l.whl#sha256=fb5ba2f1e57a03c0d0e587577ab280d1a4a4fed295f1fcd4e19a3313a2ef07ac (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,483 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp39-cp39-linux_armv7l.whl#sha256=fb5ba2f1e57a03c0d0e587577ab280d1a4a4fed295f1fcd4e19a3313a2ef07ac (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,484 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp311-cp311-linux_armv6l.whl#sha256=5afe359d7635ee4081bf60ef6cdbc35860b635d75699f7185b4d637c55ac2572 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,485 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.53-cp311-cp311-linux_armv7l.whl#sha256=5afe359d7635ee4081bf60ef6cdbc35860b635d75699f7185b4d637c55ac2572 (from https://www.piwheels.org/simple/llama-cpp-python/) 
(requires-python:>=3.7) 2024-07-02T16:50:39,485 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp37-cp37m-linux_armv6l.whl#sha256=bf3bc680532ad36080ca0e375cdb349c91ba90a6880c0c9090b83bb41463aacc (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,486 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp37-cp37m-linux_armv7l.whl#sha256=bf3bc680532ad36080ca0e375cdb349c91ba90a6880c0c9090b83bb41463aacc (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,486 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp39-cp39-linux_armv6l.whl#sha256=2d8f9447a21a804a90f8adcee77052587ab9ace32dbe36eca23c72cb2ce20fac (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,487 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp39-cp39-linux_armv7l.whl#sha256=2d8f9447a21a804a90f8adcee77052587ab9ace32dbe36eca23c72cb2ce20fac (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,487 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp311-cp311-linux_armv6l.whl#sha256=fefadcad700a08bdc860fb3a5f45f54d635e130484bbadf9015da2268f57cb44 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,488 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.52-cp311-cp311-linux_armv7l.whl#sha256=fefadcad700a08bdc860fb3a5f45f54d635e130484bbadf9015da2268f57cb44 (from 
https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,488 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp37-cp37m-linux_armv6l.whl#sha256=23d1e81835a4f9d2cd07c25dfe46adb3541bc7e7104c92b9e4ce40d8042f40e0 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,489 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp37-cp37m-linux_armv7l.whl#sha256=23d1e81835a4f9d2cd07c25dfe46adb3541bc7e7104c92b9e4ce40d8042f40e0 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,490 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp39-cp39-linux_armv6l.whl#sha256=b7620dc9874978dd791e463c32bcd526f5eb3eb53b8b4221b9eaec21eabd7958 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,490 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp39-cp39-linux_armv7l.whl#sha256=b7620dc9874978dd791e463c32bcd526f5eb3eb53b8b4221b9eaec21eabd7958 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,491 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp311-cp311-linux_armv6l.whl#sha256=6b45a1fb53ff22631be2564cb7274fe170f2982b28a5476e8ff905770eec557e (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,492 Skipping link: No binaries permitted for llama-cpp-python: 
https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.51-cp311-cp311-linux_armv7l.whl#sha256=6b45a1fb53ff22631be2564cb7274fe170f2982b28a5476e8ff905770eec557e (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,492 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.50-cp37-cp37m-linux_armv6l.whl#sha256=5b64a8dc60df2396aa83907b89ccc8e6db4ab43e017b3b3a26c091714099da12 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,493 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.50-cp37-cp37m-linux_armv7l.whl#sha256=5b64a8dc60df2396aa83907b89ccc8e6db4ab43e017b3b3a26c091714099da12 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,493 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.50-cp311-cp311-linux_armv6l.whl#sha256=715122f66811a350122cd555ac8a883fd243f0008711c03138d76742162d1e63 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,493 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.50-cp311-cp311-linux_armv7l.whl#sha256=715122f66811a350122cd555ac8a883fd243f0008711c03138d76742162d1e63 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,494 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.49-cp37-cp37m-linux_armv6l.whl#sha256=6bb78e03dfe2c72307aede0cb78e223e8d69e29948c964f2a0651654c6d62d55 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,494 Skipping link: No binaries 
permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.49-cp37-cp37m-linux_armv7l.whl#sha256=6bb78e03dfe2c72307aede0cb78e223e8d69e29948c964f2a0651654c6d62d55 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,495 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.48-cp37-cp37m-linux_armv6l.whl#sha256=67af3df96f6ba459ca0a542bf8ec23e3cafefef3b7f6ed6ec7fe5b2ac6be3a2f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,496 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.48-cp37-cp37m-linux_armv7l.whl#sha256=67af3df96f6ba459ca0a542bf8ec23e3cafefef3b7f6ed6ec7fe5b2ac6be3a2f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,497 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.48-cp311-cp311-linux_armv6l.whl#sha256=7b4ce590d0f5b3f1b5c967a79a59067684631fb424510f80bed3f109f74a2d43 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,497 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.48-cp311-cp311-linux_armv7l.whl#sha256=7b4ce590d0f5b3f1b5c967a79a59067684631fb424510f80bed3f109f74a2d43 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,498 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.47-cp37-cp37m-linux_armv6l.whl#sha256=c4b404a9a588ba34c86302ea053359619a9aa93f844a933a08637cc65dfdb6e4 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,499 
Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.47-cp37-cp37m-linux_armv7l.whl#sha256=c4b404a9a588ba34c86302ea053359619a9aa93f844a933a08637cc65dfdb6e4 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,499 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.47-cp311-cp311-linux_armv6l.whl#sha256=a2ac92bd0de7e00a32f9fa9b5a1a22ead4031f5711ad81e191a2ebc8e9df3dcf (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,499 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.47-cp311-cp311-linux_armv7l.whl#sha256=a2ac92bd0de7e00a32f9fa9b5a1a22ead4031f5711ad81e191a2ebc8e9df3dcf (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,500 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.46-cp37-cp37m-linux_armv6l.whl#sha256=851936642b661501ebdd692b9cc1a9f420b54d4b6c1568a0b5561c0c313c3375 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,500 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.46-cp37-cp37m-linux_armv7l.whl#sha256=851936642b661501ebdd692b9cc1a9f420b54d4b6c1568a0b5561c0c313c3375 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,501 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.45-cp37-cp37m-linux_armv6l.whl#sha256=b01d7783f853028706cb7cd4833e1040430f089a3770cc4dcf88af128329b3e9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 
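Every piwheels wheel above is rejected with "Skipping link: No binaries permitted", the message pip emits when binary distributions have been disabled for a package. The exact invocation is not captured in this log, so the flag below is an assumption; a minimal sketch of a command that would produce these skips and force a build from the sdist:

```shell
# Hypothetical invocation (the actual command line is not shown in the log):
# disallowing wheels for llama-cpp-python makes pip skip every .whl link on
# both indexes and fall back to the source tarball, as seen above.
pip install --no-binary llama-cpp-python llama-cpp-python==0.2.81
```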
2024-07-02T16:50:39,501 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.45-cp37-cp37m-linux_armv7l.whl#sha256=b01d7783f853028706cb7cd4833e1040430f089a3770cc4dcf88af128329b3e9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,502 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.44-cp37-cp37m-linux_armv6l.whl#sha256=35dc305c6d40fbbc0ef489c18521a842192156419573134f83bfbb4ec4bfd3d9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,503 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.44-cp37-cp37m-linux_armv7l.whl#sha256=35dc305c6d40fbbc0ef489c18521a842192156419573134f83bfbb4ec4bfd3d9 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,503 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.43-cp37-cp37m-linux_armv6l.whl#sha256=49440356659a24d945119356b9c6e352e9dacf9e873e4d0d2167a501a7050592 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,504 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.43-cp37-cp37m-linux_armv7l.whl#sha256=49440356659a24d945119356b9c6e352e9dacf9e873e4d0d2167a501a7050592 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,505 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.42-cp37-cp37m-linux_armv6l.whl#sha256=48b89ad5d0e3274b6b637c58a8067672586596856f146a2e9c580c6f9ca285ef (from https://www.piwheels.org/simple/llama-cpp-python/) 
(requires-python:>=3.7) 2024-07-02T16:50:39,505 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.42-cp37-cp37m-linux_armv7l.whl#sha256=48b89ad5d0e3274b6b637c58a8067672586596856f146a2e9c580c6f9ca285ef (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,506 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.41-cp37-cp37m-linux_armv6l.whl#sha256=eb37310c71596893c50ba3e1e2b45a236b54c86025f4e61e369e0e916e3ea927 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,506 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.41-cp37-cp37m-linux_armv7l.whl#sha256=eb37310c71596893c50ba3e1e2b45a236b54c86025f4e61e369e0e916e3ea927 (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,507 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.41-cp311-cp311-linux_armv6l.whl#sha256=8ff76703666851bfdfad2bae84439d4b6771a31b4d7d81605ba4561e6edee27f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,507 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.41-cp311-cp311-linux_armv7l.whl#sha256=8ff76703666851bfdfad2bae84439d4b6771a31b4d7d81605ba4561e6edee27f (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,508 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.40-cp37-cp37m-linux_armv6l.whl#sha256=438062696c2aa9e624eba548d48c7e72e677f8514be6641dd76cad874124d08b (from 
https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,508 Skipping link: No binaries permitted for llama-cpp-python: https://www.piwheels.org/simple/llama-cpp-python/llama_cpp_python-0.1.40-cp37-cp37m-linux_armv7l.whl#sha256=438062696c2aa9e624eba548d48c7e72e677f8514be6641dd76cad874124d08b (from https://www.piwheels.org/simple/llama-cpp-python/) (requires-python:>=3.7) 2024-07-02T16:50:39,509 Skipping link: not a file: https://www.piwheels.org/simple/llama-cpp-python/ 2024-07-02T16:50:39,510 Skipping link: not a file: https://pypi.org/simple/llama-cpp-python/ 2024-07-02T16:50:39,549 Given no hashes to check 1 links for project 'llama-cpp-python': discarding no candidates 2024-07-02T16:50:39,572 Collecting llama-cpp-python==0.2.81 2024-07-02T16:50:39,574 Created temporary directory: /tmp/pip-unpack-7wcdozn1 2024-07-02T16:50:39,793 Downloading llama_cpp_python-0.2.81.tar.gz (50.4 MB) 2024-07-02T16:50:51,418 Added llama-cpp-python==0.2.81 from https://files.pythonhosted.org/packages/18/80/20834e766968ce923de7f24d57cdf06e21d15b569ff20cf98c46fc721f72/llama_cpp_python-0.2.81.tar.gz to build tracker '/tmp/pip-build-tracker-vawopyc0' 2024-07-02T16:50:51,424 Created temporary directory: /tmp/pip-build-env-4fnvlnzq 2024-07-02T16:50:51,428 Installing build dependencies: started 2024-07-02T16:50:51,429 Running command pip subprocess to install build dependencies 2024-07-02T16:50:52,591 Using pip 24.0 from /usr/local/lib/python3.11/dist-packages/pip (python 3.11) 2024-07-02T16:50:53,098 Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple 2024-07-02T16:50:53,551 Collecting scikit-build-core>=0.9.2 (from scikit-build-core[pyproject]>=0.9.2) 2024-07-02T16:50:53,568 Using cached https://www.piwheels.org/simple/scikit-build-core/scikit_build_core-0.9.8-py3-none-any.whl (152 kB) 2024-07-02T16:50:53,883 Collecting packaging>=21.3 (from scikit-build-core>=0.9.2->scikit-build-core[pyproject]>=0.9.2) 2024-07-02T16:50:53,900 
Using cached https://www.piwheels.org/simple/packaging/packaging-24.1-py3-none-any.whl (53 kB) 2024-07-02T16:50:54,005 Collecting pathspec>=0.10.1 (from scikit-build-core>=0.9.2->scikit-build-core[pyproject]>=0.9.2) 2024-07-02T16:50:54,019 Using cached https://www.piwheels.org/simple/pathspec/pathspec-0.12.1-py3-none-any.whl (31 kB) 2024-07-02T16:50:56,678 Installing collected packages: pathspec, packaging, scikit-build-core 2024-07-02T16:50:57,389 Successfully installed packaging-24.1 pathspec-0.12.1 scikit-build-core-0.9.8 2024-07-02T16:50:57,693 [notice] A new release of pip is available: 24.0 -> 24.1.1 2024-07-02T16:50:57,694 [notice] To update, run: python3 -m pip install --upgrade pip 2024-07-02T16:50:57,925 Installing build dependencies: finished with status 'done' 2024-07-02T16:50:57,928 Getting requirements to build wheel: started 2024-07-02T16:50:57,929 Running command Getting requirements to build wheel 2024-07-02T16:50:58,373 Getting requirements to build wheel: finished with status 'done' 2024-07-02T16:50:58,377 Created temporary directory: /tmp/pip-modern-metadata-naj9rn2s 2024-07-02T16:50:58,380 Preparing metadata (pyproject.toml): started 2024-07-02T16:50:58,382 Running command Preparing metadata (pyproject.toml) 2024-07-02T16:50:58,927 *** scikit-build-core 0.9.8 using CMake 3.25.1 (metadata_wheel) 2024-07-02T16:50:59,022 Preparing metadata (pyproject.toml): finished with status 'done' 2024-07-02T16:50:59,029 Source in /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac has version 0.2.81, which satisfies requirement llama-cpp-python==0.2.81 from https://files.pythonhosted.org/packages/18/80/20834e766968ce923de7f24d57cdf06e21d15b569ff20cf98c46fc721f72/llama_cpp_python-0.2.81.tar.gz 2024-07-02T16:50:59,030 Removed llama-cpp-python==0.2.81 from https://files.pythonhosted.org/packages/18/80/20834e766968ce923de7f24d57cdf06e21d15b569ff20cf98c46fc721f72/llama_cpp_python-0.2.81.tar.gz from build tracker 
'/tmp/pip-build-tracker-vawopyc0' 2024-07-02T16:50:59,040 Created temporary directory: /tmp/pip-unpack-jeiqtdxl 2024-07-02T16:50:59,041 Created temporary directory: /tmp/pip-unpack-vybopu__ 2024-07-02T16:50:59,103 Building wheels for collected packages: llama-cpp-python 2024-07-02T16:50:59,107 Created temporary directory: /tmp/pip-wheel-xnc5qxqj 2024-07-02T16:50:59,108 Destination directory: /tmp/pip-wheel-xnc5qxqj 2024-07-02T16:50:59,110 Building wheel for llama-cpp-python (pyproject.toml): started 2024-07-02T16:50:59,111 Running command Building wheel for llama-cpp-python (pyproject.toml) 2024-07-02T16:50:59,608 *** scikit-build-core 0.9.8 using CMake 3.25.1 (wheel) 2024-07-02T16:50:59,629 *** Configuring CMake... 2024-07-02T16:50:59,718 loading initial cache file /tmp/tmpe2ytxxrc/build/CMakeInit.txt 2024-07-02T16:50:59,994 -- The C compiler identification is GNU 12.2.0 2024-07-02T16:51:00,291 -- The CXX compiler identification is GNU 12.2.0 2024-07-02T16:51:00,343 -- Detecting C compiler ABI info 2024-07-02T16:51:00,612 -- Detecting C compiler ABI info - done 2024-07-02T16:51:00,673 -- Check for working C compiler: /usr/bin/arm-linux-gnueabihf-gcc - skipped 2024-07-02T16:51:00,676 -- Detecting C compile features 2024-07-02T16:51:00,679 -- Detecting C compile features - done 2024-07-02T16:51:00,698 -- Detecting CXX compiler ABI info 2024-07-02T16:51:01,013 -- Detecting CXX compiler ABI info - done 2024-07-02T16:51:01,053 -- Check for working CXX compiler: /usr/bin/arm-linux-gnueabihf-g++ - skipped 2024-07-02T16:51:01,055 -- Detecting CXX compile features 2024-07-02T16:51:01,057 -- Detecting CXX compile features - done 2024-07-02T16:51:01,084 -- Found Git: /usr/bin/git (found version "2.39.2") 2024-07-02T16:51:01,141 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD 2024-07-02T16:51:01,440 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success 2024-07-02T16:51:01,447 -- Found Threads: TRUE 2024-07-02T16:51:02,538 -- Found OpenMP_C: -fopenmp (found version "4.5") 
2024-07-02T16:51:02,901 -- Found OpenMP_CXX: -fopenmp (found version "4.5") 2024-07-02T16:51:02,902 -- Found OpenMP: TRUE (found version "4.5") 2024-07-02T16:51:02,903 -- OpenMP found 2024-07-02T16:51:02,903 -- Using ggml SGEMM 2024-07-02T16:51:02,906 -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF 2024-07-02T16:51:03,006 -- CMAKE_SYSTEM_PROCESSOR: armv7l 2024-07-02T16:51:03,007 -- ARM detected 2024-07-02T16:51:03,010 -- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E 2024-07-02T16:51:03,359 -- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Success 2024-07-02T16:51:03,422 CMake Warning (dev) at CMakeLists.txt:9 (install): 2024-07-02T16:51:03,422 Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION. 2024-07-02T16:51:03,423 Call Stack (most recent call first): 2024-07-02T16:51:03,423 CMakeLists.txt:73 (llama_cpp_python_install_target) 2024-07-02T16:51:03,424 This warning is for project developers. Use -Wno-dev to suppress it. 2024-07-02T16:51:03,425 CMake Warning (dev) at CMakeLists.txt:17 (install): 2024-07-02T16:51:03,425 Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION. 2024-07-02T16:51:03,426 Call Stack (most recent call first): 2024-07-02T16:51:03,426 CMakeLists.txt:73 (llama_cpp_python_install_target) 2024-07-02T16:51:03,427 This warning is for project developers. Use -Wno-dev to suppress it. 2024-07-02T16:51:03,428 CMake Warning (dev) at CMakeLists.txt:9 (install): 2024-07-02T16:51:03,428 Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION. 2024-07-02T16:51:03,429 Call Stack (most recent call first): 2024-07-02T16:51:03,429 CMakeLists.txt:74 (llama_cpp_python_install_target) 2024-07-02T16:51:03,430 This warning is for project developers. Use -Wno-dev to suppress it. 
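These repeated PUBLIC_HEADER warnings (and the one that follows) are CMake developer warnings from the project's install() rules; they do not affect the wheel build. A sketch of silencing them, assuming scikit-build-core forwards the `CMAKE_ARGS` environment variable to the CMake configure step:

```shell
# -Wno-dev suppresses CMake developer warnings such as the
# "PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION" messages;
# the build output is otherwise unchanged.
CMAKE_ARGS="-Wno-dev" pip install llama-cpp-python==0.2.81
```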
2024-07-02T16:51:03,431 CMake Warning (dev) at CMakeLists.txt:17 (install): 2024-07-02T16:51:03,432 Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION. 2024-07-02T16:51:03,432 Call Stack (most recent call first): 2024-07-02T16:51:03,433 CMakeLists.txt:74 (llama_cpp_python_install_target) 2024-07-02T16:51:03,433 This warning is for project developers. Use -Wno-dev to suppress it. 2024-07-02T16:51:03,435 -- Configuring done 2024-07-02T16:51:03,518 -- Generating done 2024-07-02T16:51:03,537 -- Build files have been written to: /tmp/tmpe2ytxxrc/build 2024-07-02T16:51:03,550 *** Building project with Ninja... 2024-07-02T16:51:07,001 [1/26] /usr/bin/arm-linux-gnueabihf-gcc -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/. 
-O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -fopenmp -std=gnu11 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/ggml-alloc.c 2024-07-02T16:51:10,531 [2/26] /usr/bin/arm-linux-gnueabihf-gcc -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/. 
-O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -fopenmp -std=gnu11 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/ggml-backend.c 2024-07-02T16:51:21,702 [3/26] /usr/bin/arm-linux-gnueabihf-g++ -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/. 
-O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -fopenmp -std=gnu++11 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/sgemm.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/sgemm.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/sgemm.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/sgemm.cpp 2024-07-02T16:51:22,045 [4/26] cd /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp && /usr/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=12.2.0 -DCMAKE_C_COMPILER_ID=GNU -DCMAKE_VS_PLATFORM_NAME= -DCMAKE_C_COMPILER=/usr/bin/arm-linux-gnueabihf-gcc -P /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/cmake/build-info-gen-cpp.cmake 2024-07-02T16:51:22,046 -- Found Git: /usr/bin/git (found version "2.39.2") 2024-07-02T16:51:22,268 [5/26] /usr/bin/arm-linux-gnueabihf-g++ -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/build-info.cpp 2024-07-02T16:52:09,748 [6/26] /usr/bin/arm-linux-gnueabihf-gcc -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/. 
-O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -fopenmp -std=gnu11 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/ggml-quants.c 2024-07-02T16:52:17,030 [7/26] /usr/bin/arm-linux-gnueabihf-g++ -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/unicode.cpp 2024-07-02T16:52:18,532 [8/26] /usr/bin/arm-linux-gnueabihf-gcc -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/. 
-O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -mfp16-format=ieee -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -fopenmp -std=gnu11 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml.c.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/ggml.c 2024-07-02T16:52:19,020 [9/26] : && /usr/bin/arm-linux-gnueabihf-g++ -fPIC -O3 -DNDEBUG -shared -Wl,-soname,libggml.so -o vendor/llama.cpp/ggml/src/libggml.so vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/sgemm.cpp.o -Wl,-rpath,"\$ORIGIN" /usr/lib/arm-linux-gnueabihf/libm.so /usr/lib/gcc/arm-linux-gnueabihf/12/libgomp.so /usr/lib/arm-linux-gnueabihf/libpthread.a && : 2024-07-02T16:52:21,761 [10/26] /usr/bin/arm-linux-gnueabihf-g++ -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/. 
-I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/console.cpp 2024-07-02T16:52:26,058 [11/26] /usr/bin/arm-linux-gnueabihf-g++ -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/sampling.cpp 2024-07-02T16:52:36,142 [12/26] /usr/bin/arm-linux-gnueabihf-g++ -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/. 
-I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/grammar-parser.cpp 2024-07-02T16:52:44,463 [13/26] /usr/bin/arm-linux-gnueabihf-g++ -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/train.cpp 2024-07-02T16:52:51,229 [14/26] /usr/bin/arm-linux-gnueabihf-g++ -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/. 
-I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/ngram-cache.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/ngram-cache.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/ngram-cache.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/ngram-cache.cpp 2024-07-02T16:52:54,232 [15/26] /usr/bin/arm-linux-gnueabihf-g++ -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/examples/llava/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include -O3 -DNDEBUG -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/examples/llava/llava.cpp 2024-07-02T16:53:07,458 [16/26] /usr/bin/arm-linux-gnueabihf-g++ -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/. 
-I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/examples/llava/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/examples/llava/../../common -O3 -DNDEBUG -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/examples/llava/llava-cli.cpp 2024-07-02T16:53:17,380 [17/26] /usr/bin/arm-linux-gnueabihf-g++ -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/. 
-I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/common.cpp 2024-07-02T16:53:17,380 In file included from /usr/include/c++/12/vector:70, 2024-07-02T16:53:17,381 from /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/grammar-parser.h:14, 2024-07-02T16:53:17,382 from /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/sampling.h:5, 2024-07-02T16:53:17,383 from /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/common.h:7, 2024-07-02T16:53:17,383 from /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/common.cpp:1: 2024-07-02T16:53:17,384 /usr/include/c++/12/bits/vector.tcc: In member function ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {}; _Tp = llama_model_kv_override; _Alloc = std::allocator<llama_model_kv_override>]’: 2024-07-02T16:53:17,385 /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type ‘std::vector<llama_model_kv_override>::iterator’ changed in GCC 7.1 2024-07-02T16:53:17,386 439 | vector<_Tp, _Alloc>:: 2024-07-02T16:53:17,387 | ^~~~~~~~~~~~~~~~~~~ 2024-07-02T16:53:17,388 /usr/include/c++/12/bits/vector.tcc: In member function ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {llama_model_kv_override}; _Tp = llama_model_kv_override; _Alloc = std::allocator]’: 2024-07-02T16:53:17,389 /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type ‘std::vector::iterator’ changed in GCC 7.1 2024-07-02T16:53:17,390 In member function ‘std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {llama_model_kv_override}; _Tp = llama_model_kv_override; _Alloc = std::allocator]’, 2024-07-02T16:53:17,391 inlined from ‘bool string_parse_kv_override(const char*, std::vector&)’ at /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/common.cpp:1802:27: 2024-07-02T16:53:17,392 /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator >’ changed in GCC 7.1 2024-07-02T16:53:17,392 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); 2024-07-02T16:53:17,393 | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:53:17,394 In member function ‘std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {}; _Tp = llama_model_kv_override; _Alloc = std::allocator]’, 2024-07-02T16:53:17,395 inlined from ‘bool gpt_params_parse_ex(int, char**, gpt_params&)’ at /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/common.cpp:251:41: 2024-07-02T16:53:17,396 /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator >’ changed in GCC 7.1 2024-07-02T16:53:17,397 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); 2024-07-02T16:53:17,398 | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:53:36,064 [18/26] /usr/bin/arm-linux-gnueabihf-g++ -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/unicode-data.cpp 2024-07-02T16:53:36,065 FAILED: vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o 2024-07-02T16:53:36,065 /usr/bin/arm-linux-gnueabihf-g++ -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/. 
-I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/unicode-data.cpp 2024-07-02T16:53:36,066 virtual memory exhausted: Cannot allocate memory 2024-07-02T16:53:41,481 [19/26] /usr/bin/arm-linux-gnueabihf-g++ -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/json-schema-to-grammar.cpp 2024-07-02T16:53:41,482 In file included from /usr/include/c++/12/vector:70, 2024-07-02T16:53:41,483 from /usr/include/c++/12/functional:62, 2024-07-02T16:53:41,484 from /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/json.hpp:23, 2024-07-02T16:53:41,484 from /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/json-schema-to-grammar.h:6, 2024-07-02T16:53:41,485 from 
/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/json-schema-to-grammar.cpp:1: 2024-07-02T16:53:41,486 /usr/include/c++/12/bits/vector.tcc: In member function ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {const std::__cxx11::basic_string, std::allocator >&, const nlohmann::json_abi_v3_11_3::basic_json, std::allocator >, bool, long long int, long long unsigned int, double, std::allocator, nlohmann::json_abi_v3_11_3::adl_serializer, std::vector >, void>&}; _Tp = std::pair, nlohmann::json_abi_v3_11_3::basic_json >; _Alloc = std::allocator, nlohmann::json_abi_v3_11_3::basic_json > >]’: 2024-07-02T16:53:41,487 /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type ‘std::vector, nlohmann::json_abi_v3_11_3::basic_json > >::iterator’ changed in GCC 7.1 2024-07-02T16:53:41,488 439 | vector<_Tp, _Alloc>:: 2024-07-02T16:53:41,489 | ^~~~~~~~~~~~~~~~~~~ 2024-07-02T16:53:41,490 In member function ‘std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {const std::__cxx11::basic_string, std::allocator >&, const nlohmann::json_abi_v3_11_3::basic_json, std::allocator >, bool, long long int, long long unsigned int, double, std::allocator, nlohmann::json_abi_v3_11_3::adl_serializer, std::vector >, void>&}; _Tp = std::pair, nlohmann::json_abi_v3_11_3::basic_json >; _Alloc = std::allocator, nlohmann::json_abi_v3_11_3::basic_json > >]’, 2024-07-02T16:53:41,490 inlined from ‘SchemaConverter::visit(const json&, const std::string&)::’ at /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/json-schema-to-grammar.cpp:939:48: 2024-07-02T16:53:41,491 /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator, nlohmann::json_abi_v3_11_3::basic_json >*, std::vector, nlohmann::json_abi_v3_11_3::basic_json > > >’ changed in GCC 7.1 2024-07-02T16:53:41,492 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); 2024-07-02T16:53:41,493 | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:53:41,494 In member function ‘std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {const std::__cxx11::basic_string, std::allocator >&, const nlohmann::json_abi_v3_11_3::basic_json, std::allocator >, bool, long long int, long long unsigned int, double, std::allocator, nlohmann::json_abi_v3_11_3::adl_serializer, std::vector >, void>&}; _Tp = std::pair, nlohmann::json_abi_v3_11_3::basic_json >; _Alloc = std::allocator, nlohmann::json_abi_v3_11_3::basic_json > >]’, 2024-07-02T16:53:41,495 inlined from ‘std::string SchemaConverter::visit(const json&, const std::string&)’ at /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/common/json-schema-to-grammar.cpp:923:44: 2024-07-02T16:53:41,496 /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator, nlohmann::json_abi_v3_11_3::basic_json >*, std::vector, nlohmann::json_abi_v3_11_3::basic_json > > >’ changed in GCC 7.1 2024-07-02T16:53:41,497 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); 2024-07-02T16:53:41,498 | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:53:51,846 [20/26] /usr/bin/arm-linux-gnueabihf-g++ -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/examples/llava/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/. 
-I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include -O3 -DNDEBUG -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/examples/llava/clip.cpp 2024-07-02T16:55:11,579 [21/26] /usr/bin/arm-linux-gnueabihf-g++ -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/. -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include -I/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o -c /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp 2024-07-02T16:55:11,579 In file included from /usr/include/c++/12/vector:64, 2024-07-02T16:55:11,580 from /usr/include/c++/12/bits/random.h:34, 2024-07-02T16:55:11,580 from /usr/include/c++/12/random:49, 2024-07-02T16:55:11,581 from /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/../include/llama.h:1112, 2024-07-02T16:55:11,581 from /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:2: 2024-07-02T16:55:11,582 /usr/include/c++/12/bits/stl_vector.h: In function ‘std::vector<_Tp, _Alloc>::vector(std::initializer_list<_Tp>, const allocator_type&) [with _Tp = long long int; _Alloc = std::allocator]’: 2024-07-02T16:55:11,582 /usr/include/c++/12/bits/stl_vector.h:673:7: note: parameter 
passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,583 673 | vector(initializer_list __l, 2024-07-02T16:55:11,583 | ^~~~~~ 2024-07-02T16:55:11,584 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp: In function ‘bool llm_load_tensors(llama_model_loader&, llama_model&, int, llama_split_mode, int, const float*, bool, llama_progress_callback, void*)’: 2024-07-02T16:55:11,584 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5710:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,585 5710 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,585 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,586 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5714:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,586 5714 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,587 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,587 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5715:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,587 5715 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,588 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,588 
/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5718:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,589 5718 | model.output = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_DUPLICATED); 2024-07-02T16:55:11,589 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,590 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5730:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,590 5730 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}); 2024-07-02T16:55:11,591 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,591 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5731:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,592 5731 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}); 2024-07-02T16:55:11,593 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,593 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5732:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,593 5732 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}); 2024-07-02T16:55:11,594 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,594 
/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5733:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,595 5733 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-07-02T16:55:11,596 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,596 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5736:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,597 5736 | layer.bq = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_Q, "bias", i), {n_embd}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,597 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,597 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5737:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,598 5737 | layer.bk = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_K, "bias", i), {n_embd_gqa}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,598 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,598 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5738:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,599 5738 | layer.bv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_V, "bias", i), {n_embd_gqa}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,599 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,600 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5739:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,600 5739 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,601 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,601 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5741:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,602 5741 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}); 2024-07-02T16:55:11,603 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,603 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5744:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,604 5744 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}); 2024-07-02T16:55:11,604 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,605 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5745:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,605 5745 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}); 2024-07-02T16:55:11,606 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,606 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5746:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,607 5746 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-07-02T16:55:11,607 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,607 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5749:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,608 5749 | layer.ffn_gate_b = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "bias", i), {n_ff}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,608 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,608 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5750:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,609 5750 | layer.ffn_down_b = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,609 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,610 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5751:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,610 5751 | layer.ffn_up_b = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}, llama_model_loader::TENSOR_NOT_REQUIRED); 
2024-07-02T16:55:11,611 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,611 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5753:66: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,612 5753 | layer.ffn_gate_inp = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_GATE_INP, "weight", i), {n_embd, n_expert}); 2024-07-02T16:55:11,612 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,613 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5755:67: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,613 5755 | layer.ffn_gate_exps = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE_EXPS, "weight", i), {n_embd, n_ff, n_expert}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,614 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,614 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5757:71: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,615 5757 | layer.ffn_down_exps = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN_EXPS, "weight", i), { n_ff, n_embd, n_expert}); 2024-07-02T16:55:11,616 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,616 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5758:71: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,616 5758 | layer.ffn_up_exps = 
ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP_EXPS, "weight", i), {n_embd, n_ff, n_expert}); 2024-07-02T16:55:11,617 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,617 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5778:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,618 5778 | ml.create_tensor_as_view(ctx_split, layer.ffn_gate_exps, tn(LLM_TENSOR_FFN_GATE_EXP, "weight", i, x), { n_embd, n_ff }, layer.ffn_gate_exps->nb[2]*x); 2024-07-02T16:55:11,618 | ~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,618 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5779:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,619 5779 | ml.create_tensor_as_view(ctx_split, layer.ffn_down_exps, tn(LLM_TENSOR_FFN_DOWN_EXP, "weight", i, x), { n_ff, n_embd }, layer.ffn_down_exps->nb[2]*x); 2024-07-02T16:55:11,620 | ~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,620 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5780:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,621 5780 | ml.create_tensor_as_view(ctx_split, layer.ffn_up_exps, tn(LLM_TENSOR_FFN_UP_EXP, "weight", i, x), { n_embd, n_ff }, layer.ffn_up_exps->nb[2]*x); 2024-07-02T16:55:11,621 | ~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,622 
/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5792:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,622 5792 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,623 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,623 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5796:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,624 5796 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,625 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,625 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5797:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,625 5797 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,626 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,626 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5800:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,627 5800 | model.output = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_DUPLICATED); 2024-07-02T16:55:11,627 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,627 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5817:65: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,628 5817 | layer.attn_out_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT_NORM, "weight", i), {n_embd}); 2024-07-02T16:55:11,628 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,629 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5823:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,629 5823 | layer.ffn_gate_exps = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE_EXPS, "weight", i), {n_embd, n_ff, n_expert}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,630 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,631 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5825:67: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,631 5825 | layer.ffn_down_exps = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN_EXPS, "weight", i), { n_ff, n_embd, n_expert}); 2024-07-02T16:55:11,632 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,632 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5826:67: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,633 5826 | layer.ffn_up_exps = ml.create_tensor(ctx_split, 
tn(LLM_TENSOR_FFN_UP_EXPS, "weight", i), {n_embd, n_ff, n_expert}); 2024-07-02T16:55:11,633 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,634 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5861:50: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,635 5861 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,635 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,635 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5865:57: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,636 5865 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,636 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,637 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5866:57: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,637 5866 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,637 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,638 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5880:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,638 5880 | layer.attn_out_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT_NORM, "weight", i), {n_embd}); 2024-07-02T16:55:11,639 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,639 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5890:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,640 5890 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,640 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,641 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5892:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,642 5892 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,642 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,643 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5893:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,643 5893 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,643 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,644 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5918:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,645 5918 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,646 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,646 
/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5922:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,646 5922 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,647 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,647 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5923:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,648 5923 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-07-02T16:55:11,648 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,648 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5925:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,649 5925 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,649 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,650 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5927:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,650 5927 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_DUPLICATED); // needs to be on GPU 2024-07-02T16:55:11,651 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,652 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5952:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,652 5952 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,653 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,653 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5953:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,654 5953 | model.pos_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_POS_EMBD, "weight"), {n_embd, hparams.n_ctx_train}); 2024-07-02T16:55:11,654 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,655 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5957:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,655 5957 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,656 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,656 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5958:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,657 5958 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-07-02T16:55:11,657 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,657 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5959:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,658 5959 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,658 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,658 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5962:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,659 5962 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_DUPLICATED); 2024-07-02T16:55:11,659 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,660 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5980:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,660 5980 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}); 2024-07-02T16:55:11,661 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,661 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5995:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,662 5995 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, 
n_vocab}); 2024-07-02T16:55:11,662 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,663 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5996:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,663 5996 | model.type_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_TYPES, "weight"), {n_embd, n_vocab_type}); 2024-07-02T16:55:11,664 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,665 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:5998:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,665 5998 | model.pos_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_POS_EMBD, "weight"), {n_embd, hparams.n_ctx_train}); 2024-07-02T16:55:11,666 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,666 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6001:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,666 6001 | model.tok_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,667 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,667 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6002:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,667 6002 | model.tok_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD_NORM, "bias"), {n_embd}); 2024-07-02T16:55:11,668 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,668 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6011:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,669 6011 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}); 2024-07-02T16:55:11,669 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,670 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6012:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,671 6012 | layer.bq = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_Q, "bias", i), {n_embd}); 2024-07-02T16:55:11,671 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,672 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6014:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,672 6014 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}); 2024-07-02T16:55:11,673 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,673 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6015:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,673 6015 | layer.bk = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_K, "bias", i), {n_embd_gqa}); 2024-07-02T16:55:11,674 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,675 
/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6017:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,675 6017 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}); 2024-07-02T16:55:11,676 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,676 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6018:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,677 6018 | layer.bv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_V, "bias", i), {n_embd_gqa}); 2024-07-02T16:55:11,677 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,677 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6020:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,678 6020 | layer.wqkv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}); 2024-07-02T16:55:11,678 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,678 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6032:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,679 6032 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd}); 2024-07-02T16:55:11,679 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,680 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6033:64: note: parameter passing 
for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,680 6033 | layer.ffn_up_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff}); 2024-07-02T16:55:11,681 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,681 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6035:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,682 6035 | layer.ffn_down_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd}); 2024-07-02T16:55:11,682 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,683 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6037:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,683 6037 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}); 2024-07-02T16:55:11,684 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,684 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6046:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,685 6046 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); // word_embeddings 2024-07-02T16:55:11,686 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,686 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6047:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,687 6047 | model.type_embd = ml.create_tensor(ctx_input, 
tn(LLM_TENSOR_TOKEN_TYPES, "weight"), {n_embd, n_vocab_type}); //token_type_embeddings 2024-07-02T16:55:11,687 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,687 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6048:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,688 6048 | model.tok_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD_NORM, "weight"), {n_embd}); // LayerNorm 2024-07-02T16:55:11,688 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,688 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6049:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,689 6049 | model.tok_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD_NORM, "bias"), {n_embd}); //LayerNorm bias 2024-07-02T16:55:11,689 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,690 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6093:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,690 6093 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,691 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,691 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6094:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,692 6094 | model.tok_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD_NORM, "weight"), {n_embd}); 
2024-07-02T16:55:11,692 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,693 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6095:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,693 6095 | model.tok_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD_NORM, "bias"), {n_embd}); 2024-07-02T16:55:11,694 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,694 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6099:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,695 6099 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,695 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,696 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6100:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,696 6100 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-07-02T16:55:11,696 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,697 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6101:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,697 6101 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,698 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,698 
/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6120:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,698 6120 | layer.ffn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd}); 2024-07-02T16:55:11,699 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,699 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6131:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,700 6131 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,700 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,701 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6132:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,701 6132 | model.pos_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_POS_EMBD, "weight"), {n_embd, hparams.n_ctx_train}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,702 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,702 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6136:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,703 6136 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,703 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,704 
/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6137:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,704 6137 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,705 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,705 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6139:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,706 6139 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,706 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,706 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6141:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,707 6141 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_DUPLICATED); // needs to be on GPU 2024-07-02T16:55:11,707 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,707 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6163:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,708 6163 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", 
i), {n_ff, n_embd}); 2024-07-02T16:55:11,709 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,709 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6181:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,710 6181 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,710 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,711 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6185:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,711 6185 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-07-02T16:55:11,712 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,712 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6186:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,713 6186 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,713 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,714 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6187:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,714 6187 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,715 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,715 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6210:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,716 6210 | layer.attn_q_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_Q_NORM, "weight", i), {hparams.n_embd_head_k, hparams.n_head}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,716 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,717 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6215:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,717 6215 | layer.ffn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,717 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,718 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6217:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,718 6217 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff}); 2024-07-02T16:55:11,719 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,719 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6224:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,720 6224 | model.tok_embd = 
ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,721 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,721 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6228:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,721 6228 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,722 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,722 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6229:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,723 6229 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,723 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,724 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6238:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,725 6238 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-07-02T16:55:11,725 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,726 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6244:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,726 6244 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}); 2024-07-02T16:55:11,726 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,727 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6253:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,727 6253 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,727 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,728 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6257:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,729 6257 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,729 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,730 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6258:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,730 6258 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,731 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,731 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6261:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,732 6261 | model.output = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_DUPLICATED); 2024-07-02T16:55:11,733 | 
2024-07-02T16:55:11,733 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6275:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,734  6275 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa});
2024-07-02T16:55:11,735 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6279:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,735  6279 | layer.bq = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_Q, "bias", i), {n_embd});
2024-07-02T16:55:11,736 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6283:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,737  6283 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd});
2024-07-02T16:55:11,737 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6292:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,738  6292 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,738 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6296:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,739  6296 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd});
2024-07-02T16:55:11,740 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6297:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,741  6297 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,741 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6341:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,742  6341 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,743 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6345:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,744  6345 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd});
2024-07-02T16:55:11,744 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6346:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,745  6346 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd});
2024-07-02T16:55:11,746 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6347:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,747  6347 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,748 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6348:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,748  6348 | model.output_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT, "bias"), {n_vocab});
2024-07-02T16:55:11,749 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6364:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,750  6364 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd});
2024-07-02T16:55:11,751 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6365:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,751  6365 | layer.bq = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_Q, "bias", i), {n_embd});
2024-07-02T16:55:11,752 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6367:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,753  6367 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa});
2024-07-02T16:55:11,753 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6368:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,754  6368 | layer.bk = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_K, "bias", i), {n_embd_gqa});
2024-07-02T16:55:11,755 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6370:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,756  6370 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa});
2024-07-02T16:55:11,757 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6371:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,757  6371 | layer.bv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_V, "bias", i), {n_embd_gqa});
2024-07-02T16:55:11,758 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6381:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,758  6381 | layer.ffn_up_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff});
2024-07-02T16:55:11,759 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6386:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,760  6386 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), { n_embd, n_vocab });
2024-07-02T16:55:11,761 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6390:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,761  6390 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), { n_embd });
2024-07-02T16:55:11,762 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6391:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,763  6391 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), { n_embd, n_vocab });
2024-07-02T16:55:11,764 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6416:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,765  6416 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,766 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6420:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,766  6420 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd});
2024-07-02T16:55:11,768 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6421:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,768  6421 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,769 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6434:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,769  6434 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa});
2024-07-02T16:55:11,770 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6435:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,770  6435 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd});
2024-07-02T16:55:11,771 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6439:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,772  6439 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff});
2024-07-02T16:55:11,772 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6444:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,773  6444 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,774 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6445:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,774  6445 | model.pos_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_POS_EMBD, "weight"), {n_embd, hparams.n_ctx_train});
2024-07-02T16:55:11,775 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6449:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,776  6449 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd});
2024-07-02T16:55:11,777 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6450:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,778  6450 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd});
2024-07-02T16:55:11,779 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6451:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,779  6451 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,780 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6460:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,780  6460 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd});
2024-07-02T16:55:11,781 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6461:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,782  6461 | layer.attn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd});
2024-07-02T16:55:11,783 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6472:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,783  6472 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd});
2024-07-02T16:55:11,785 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6481:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,785  6481 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,786 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6485:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,786  6485 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd});
2024-07-02T16:55:11,787 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6486:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,788  6486 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd});
2024-07-02T16:55:11,789 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6487:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,789  6487 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,790 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6503:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,790  6503 | layer.bo = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_OUT, "bias", i), {n_embd});
2024-07-02T16:55:11,791 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6506:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,792  6506 | layer.ffn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd});
2024-07-02T16:55:11,793 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6508:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,793  6508 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd});
2024-07-02T16:55:11,794 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6509:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,795  6509 | layer.ffn_down_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_DOWN, "bias", i), {n_embd});
2024-07-02T16:55:11,796 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6512:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,797  6512 | layer.ffn_up_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_UP, "bias", i), {n_ff});
2024-07-02T16:55:11,798 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6517:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,798  6517 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,799 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6519:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,800  6519 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd});
2024-07-02T16:55:11,800 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6520:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,801  6520 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd});
2024-07-02T16:55:11,802 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6521:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,802  6521 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,803 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6530:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,804  6530 | layer.attn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "bias", i), {n_embd});
2024-07-02T16:55:11,805 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6533:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,806  6533 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa});
2024-07-02T16:55:11,807 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6535:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,807  6535 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd});
2024-07-02T16:55:11,809 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6541:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,809  6541 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd});
2024-07-02T16:55:11,810 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6542:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,811  6542 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff});
2024-07-02T16:55:11,812 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6547:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,812  6547 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,813 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6551:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,813  6551 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd});
2024-07-02T16:55:11,815 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6552:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,816  6552 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,817 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6561:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,817  6561 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd});
2024-07-02T16:55:11,818 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6563:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,819  6563 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd});
2024-07-02T16:55:11,820 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6564:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,821  6564 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa});
2024-07-02T16:55:11,822 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6565:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,823  6565 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa});
2024-07-02T16:55:11,823 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6567:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,824  6567 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd});
2024-07-02T16:55:11,825 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6569:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,825  6569 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff});
2024-07-02T16:55:11,826 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6570:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,827  6570 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd});
2024-07-02T16:55:11,828 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6571:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,828  6571 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff});
2024-07-02T16:55:11,829 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6576:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,830  6576 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,830 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6579:57: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,831  6579 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd});
2024-07-02T16:55:11,832 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6580:57: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,832  6580 | model.output = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_DUPLICATED); // same as tok_embd, duplicated to allow offloading
2024-07-02T16:55:11,833 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6593:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,834  6593 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd});
2024-07-02T16:55:11,834 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6596:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,835  6596 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_k_gqa});
2024-07-02T16:55:11,836 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6597:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,836  6597 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_v_gqa});
2024-07-02T16:55:11,837 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6598:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,837  6598 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd_head_k * hparams.n_head, n_embd});
2024-07-02T16:55:11,838 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6600:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,839  6600 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd});
2024-07-02T16:55:11,840 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6601:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,841  6601 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff});
2024-07-02T16:55:11,842 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6602:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,842  6602 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff});
2024-07-02T16:55:11,843 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6603:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,844  6603 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd});
2024-07-02T16:55:11,845 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6608:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,845  6608 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,847 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6611:57: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,847  6611 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd});
2024-07-02T16:55:11,848 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6612:57: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,848  6612 | model.output = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_DUPLICATED); // same as tok_embd, duplicated to allow offloading
2024-07-02T16:55:11,850 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6627:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,850  6627 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd_head_k * hparams.n_head});
2024-07-02T16:55:11,851 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6628:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,852  6628 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_k_gqa});
2024-07-02T16:55:11,853 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6629:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,854  6629 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_v_gqa});
2024-07-02T16:55:11,855 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6630:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,855  6630 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd_head_k * hparams.n_head, n_embd});
2024-07-02T16:55:11,856 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6631:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,857  6631 | layer.attn_post_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_POST_NORM, "weight", i), {n_embd});
2024-07-02T16:55:11,857 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6633:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,858  6633 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd});
2024-07-02T16:55:11,859 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6634:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,859  6634 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff});
2024-07-02T16:55:11,860 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6635:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,861  6635 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff});
2024-07-02T16:55:11,862 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6636:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,862  6636 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd});
2024-07-02T16:55:11,863 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6637:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,864  6637 | layer.ffn_post_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_POST_NORM, "weight", i), {n_embd});
/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6642:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,865 6642 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,866 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,866 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6646:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,867 6646 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,867 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,868 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6647:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,868 6647 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-07-02T16:55:11,869 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,869 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6649:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,870 6649 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,870 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,871 
/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6652:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,871 6652 | model.output = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_DUPLICATED); 2024-07-02T16:55:11,872 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,872 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6669:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,873 6669 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-07-02T16:55:11,873 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,874 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6680:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,875 6680 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd}); 2024-07-02T16:55:11,877 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,877 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6697:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,878 6697 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,879 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,879 
/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6701:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,880 6701 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,881 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,881 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6703:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,882 6703 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,883 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,884 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6706:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,885 6706 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_DUPLICATED); 2024-07-02T16:55:11,886 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,887 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6717:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,888 6717 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-07-02T16:55:11,888 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,889 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6719:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,890 6719 | layer.ssm_in = ml.create_tensor(ctx_split, tn(LLM_TENSOR_SSM_IN, "weight", i), {n_embd, 2*d_inner}); 2024-07-02T16:55:11,891 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,892 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6721:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,893 6721 | layer.ssm_conv1d = ml.create_tensor(ctx_split, tn(LLM_TENSOR_SSM_CONV1D, "weight", i), {d_conv, d_inner}); 2024-07-02T16:55:11,894 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,895 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6722:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,895 6722 | layer.ssm_conv1d_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_SSM_CONV1D, "bias", i), {d_inner}); 2024-07-02T16:55:11,896 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,897 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6724:55: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,898 6724 | layer.ssm_x = ml.create_tensor(ctx_split, tn(LLM_TENSOR_SSM_X, "weight", i), {d_inner, dt_rank + 2*d_state}); 2024-07-02T16:55:11,898 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,899 
/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6726:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,900 6726 | layer.ssm_dt = ml.create_tensor(ctx_split, tn(LLM_TENSOR_SSM_DT, "weight", i), {dt_rank, d_inner}); 2024-07-02T16:55:11,901 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,902 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6727:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,903 6727 | layer.ssm_dt_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_SSM_DT, "bias", i), {d_inner}); 2024-07-02T16:55:11,904 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,905 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6730:55: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,905 6730 | layer.ssm_a = ml.create_tensor(ctx_split, tn(LLM_TENSOR_SSM_A, i), {d_state, d_inner}); 2024-07-02T16:55:11,906 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,907 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6731:55: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,908 6731 | layer.ssm_d = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_SSM_D, i), {d_inner}); 2024-07-02T16:55:11,909 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,910 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6734:57: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 
2024-07-02T16:55:11,911 6734 | layer.ssm_out = ml.create_tensor(ctx_split, tn(LLM_TENSOR_SSM_OUT, "weight", i), {d_inner, n_embd}); 2024-07-02T16:55:11,911 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,913 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6739:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,913 6739 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,914 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,914 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6741:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,915 6741 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,915 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,916 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6742:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,917 6742 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,917 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,918 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6755:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,919 6755 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { 
n_ff, n_embd}); 2024-07-02T16:55:11,919 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,920 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6761:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,921 6761 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,922 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,923 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6765:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,923 6765 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,924 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,925 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6767:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,926 6767 | model.output = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_DUPLICATED); 2024-07-02T16:55:11,927 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,928 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6779:65: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,928 6779 | layer.attn_q_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_Q_NORM, "weight", i), {hparams.n_embd_head_k, hparams.n_head}); 
2024-07-02T16:55:11,929 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,930 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6780:65: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,930 6780 | layer.attn_k_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_K_NORM, "weight", i), {hparams.n_embd_head_k, hparams.n_head_kv}); 2024-07-02T16:55:11,931 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,931 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6786:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,932 6786 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-07-02T16:55:11,933 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,934 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6795:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,934 6795 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,935 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,936 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6799:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,937 6799 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_NOT_REQUIRED); 
2024-07-02T16:55:11,938 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,938 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6802:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,939 6802 | model.output = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_DUPLICATED); 2024-07-02T16:55:11,940 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,940 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6814:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,941 6814 | layer.wo = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_OUT, "weight", i), {n_embd, n_embd}); 2024-07-02T16:55:11,942 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,942 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6819:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,943 6819 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff}); 2024-07-02T16:55:11,943 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,944 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6824:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,944 6824 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 
2024-07-02T16:55:11,945 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,945 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6827:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,945 6827 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,946 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,946 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6828:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,947 6828 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd}); 2024-07-02T16:55:11,947 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,948 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6829:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,949 6829 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,949 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,950 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6841:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,950 6841 | layer.wqkv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_QKV, "weight", i), {n_embd, n_embd + 2*n_embd_gqa}); 2024-07-02T16:55:11,951 | 
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,951 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6842:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,952 6842 | layer.bqkv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_QKV, "bias", i), {n_embd + 2*n_embd_gqa}); 2024-07-02T16:55:11,952 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,953 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6850:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,953 6850 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd}); 2024-07-02T16:55:11,954 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,954 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6859:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,954 6859 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}); 2024-07-02T16:55:11,955 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,955 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6863:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,955 6863 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd}); 2024-07-02T16:55:11,956 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,956 
/tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6864:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,957 6864 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_NOT_REQUIRED); 2024-07-02T16:55:11,957 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,958 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6867:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,958 6867 | model.output = ml.create_tensor(ctx_output, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab}, llama_model_loader::TENSOR_DUPLICATED); 2024-07-02T16:55:11,959 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,959 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6877:59: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,960 6877 | layer.attn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_NORM, "weight", i), {n_embd}); 2024-07-02T16:55:11,960 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,961 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6879:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,961 6879 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd}); 2024-07-02T16:55:11,962 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
2024-07-02T16:55:11,963 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6880:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,963 6880 | layer.wk = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_K, "weight", i), {n_embd, n_embd_gqa}); 2024-07-02T16:55:11,963 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,964 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6881:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,964 6881 | layer.wv = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_V, "weight", i), {n_embd, n_embd_gqa}); 2024-07-02T16:55:11,965 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,965 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6884:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,965 6884 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd}); 2024-07-02T16:55:11,966 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,966 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6887:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1 2024-07-02T16:55:11,967 6887 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_embd, n_embd}); 2024-07-02T16:55:11,967 | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2024-07-02T16:55:11,968 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6888:58: 
note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,969  6888 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_embd});
2024-07-02T16:55:11,969       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,970 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6890:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,970  6890 | layer.ffn_gate_inp = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_GATE_INP, "weight", i), {n_embd, n_expert});
2024-07-02T16:55:11,971       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,971 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6892:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,972  6892 | layer.ffn_gate_exps = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE_EXPS, "weight", i), {n_embd, n_ff, n_expert}, false);
2024-07-02T16:55:11,972       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,973 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6894:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,973  6894 | layer.ffn_up_exps = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP_EXPS, "weight", i), {n_embd, n_ff, n_expert});
2024-07-02T16:55:11,973       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,974 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6907:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,974  6907 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,974       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,975 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6911:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,975  6911 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd});
2024-07-02T16:55:11,976       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,976 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6912:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,977  6912 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,977       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,978 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6931:56: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,978  6931 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd_k_gqa});
2024-07-02T16:55:11,979       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,980 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6940:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,980  6940 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff});
2024-07-02T16:55:11,981       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,981 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6941:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,982  6941 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), { n_ff, n_embd});
2024-07-02T16:55:11,982       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,983 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6942:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,983  6942 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff});
2024-07-02T16:55:11,984       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,984 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6944:66: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,984  6944 | layer.ffn_gate_inp = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_GATE_INP, "weight", i), {n_embd, n_expert});
2024-07-02T16:55:11,985       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,985 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6950:67: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,986  6950 | layer.ffn_gate_exps = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE_EXPS, "weight", i), { n_embd, n_ff_exp, n_expert});
2024-07-02T16:55:11,986       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,986 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6951:67: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,987  6951 | layer.ffn_down_exps = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN_EXPS, "weight", i), {n_ff_exp, n_embd, n_expert});
2024-07-02T16:55:11,987       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,988 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6952:67: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,988  6952 | layer.ffn_up_exps = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP_EXPS, "weight", i), { n_embd, n_ff_exp, n_expert});
2024-07-02T16:55:11,989       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,989 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6955:68: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,990  6955 | layer.ffn_gate_shexp = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE_SHEXP, "weight", i), {n_embd, n_ff_exp * hparams.n_expert_shared});
2024-07-02T16:55:11,990       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,991 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6956:68: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,991  6956 | layer.ffn_down_shexp = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN_SHEXP, "weight", i), { n_ff_exp * hparams.n_expert_shared, n_embd});
2024-07-02T16:55:11,992       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,993 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6957:68: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,993  6957 | layer.ffn_up_shexp = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP_SHEXP, "weight", i), {n_embd, n_ff_exp * hparams.n_expert_shared});
2024-07-02T16:55:11,993       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,994 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6963:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,994  6963 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:11,995       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,995 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6967:61: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,995  6967 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd});
2024-07-02T16:55:11,996       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,997 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6979:52: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,997  6979 | layer.wq = ml.create_tensor(ctx_split, tn(LLM_TENSOR_ATTN_Q, "weight", i), {n_embd, n_embd});
2024-07-02T16:55:11,998       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:11,998 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6980:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:11,999  6980 | layer.wq_scale = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_Q, "scale", i), {1});
2024-07-02T16:55:11,999       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,000 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6984:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,000  6984 | layer.wv_scale = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_V, "scale", i), {1});
2024-07-02T16:55:12,001       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,001 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6989:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,002  6989 | layer.ffn_sub_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_SUB_NORM, "weight", i), {n_ff});
2024-07-02T16:55:12,002       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,003 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6991:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,003  6991 | layer.ffn_gate = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_GATE, "weight", i), {n_embd, n_ff});
2024-07-02T16:55:12,004       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,004 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6992:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,004  6992 | layer.ffn_gate_scale = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_GATE, "scale", i), {1});
2024-07-02T16:55:12,005       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,005 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6993:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,005  6993 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd});
2024-07-02T16:55:12,006       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,006 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6994:64: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,007  6994 | layer.ffn_down_scale = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_DOWN, "scale", i), {1});
2024-07-02T16:55:12,007       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,008 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6995:58: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,009  6995 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff});
2024-07-02T16:55:12,009       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,010 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:6996:62: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,010  6996 | layer.ffn_up_scale = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_UP, "scale", i), {1});
2024-07-02T16:55:12,011       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,011 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:7001:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,012  7001 | model.tok_embd = ml.create_tensor(ctx_input, tn(LLM_TENSOR_TOKEN_EMBD, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:12,012       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,013 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:7005:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,013  7005 | model.output_norm = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "weight"), {n_embd});
2024-07-02T16:55:12,014       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,014 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:7006:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,014  7006 | model.output_norm_b = ml.create_tensor(ctx_output, tn(LLM_TENSOR_OUTPUT_NORM, "bias"), {n_embd});
2024-07-02T16:55:12,015       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,015 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:7007:63: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,015  7007 | model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab});
2024-07-02T16:55:12,016       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,016 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:7019:54: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,017  7019 | layer.bqkv = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_ATTN_QKV, "bias", i), {n_embd + 2*n_embd_gqa});
2024-07-02T16:55:12,017       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,018 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:7024:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,018  7024 | layer.ffn_norm = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "weight", i), {n_embd});
2024-07-02T16:55:12,019       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,020 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:7025:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,020  7025 | layer.ffn_norm_b = ml.create_tensor(ctx_layer, tn(LLM_TENSOR_FFN_NORM, "bias", i), {n_embd});
2024-07-02T16:55:12,020       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,021 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:7027:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,021  7027 | layer.ffn_down = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_DOWN, "weight", i), {n_ff, n_embd});
2024-07-02T16:55:12,022       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,022 /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac/vendor/llama.cpp/src/llama.cpp:7033:60: note: parameter passing for argument of type ‘std::initializer_list’ changed in GCC 7.1
2024-07-02T16:55:12,023  7033 | layer.ffn_up = ml.create_tensor(ctx_split, tn(LLM_TENSOR_FFN_UP, "weight", i), {n_embd, n_ff});
2024-07-02T16:55:12,024       | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,024 In file included from /usr/include/c++/12/vector:70:
2024-07-02T16:55:12,025 /usr/include/c++/12/bits/vector.tcc: In member function ‘void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {const double&}; _Tp = double; _Alloc = std::allocator]’:
2024-07-02T16:55:12,025 /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type ‘std::vector::iterator’ changed in GCC 7.1
2024-07-02T16:55:12,025   439 | vector<_Tp, _Alloc>::
2024-07-02T16:55:12,026       | ^~~~~~~~~~~~~~~~~~~
2024-07-02T16:55:12,026 In member function ‘void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = double; _Alloc = std::allocator]’,
2024-07-02T16:55:12,026     inlined from ‘std::back_insert_iterator<_Container>& std::back_insert_iterator<_Container>::operator=(const typename _Container::value_type&) [with _Container = std::vector]’ at /usr/include/c++/12/bits/stl_iterator.h:735:22,
2024-07-02T16:55:12,027     inlined from ‘_OutputIterator std::partial_sum(_InputIterator, _InputIterator, _OutputIterator) [with _InputIterator = __gnu_cxx::__normal_iterator >; _OutputIterator = back_insert_iterator >]’ at /usr/include/c++/12/bits/stl_numeric.h:270:17,
2024-07-02T16:55:12,027     inlined from ‘void std::discrete_distribution<_IntType>::param_type::_M_initialize() [with _IntType = int]’ at /usr/include/c++/12/bits/random.tcc:2679:23:
2024-07-02T16:55:12,027 /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator >’ changed in GCC 7.1
2024-07-02T16:55:12,028  1287 | _M_realloc_insert(end(), __x);
2024-07-02T16:55:12,029       | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
2024-07-02T16:55:12,029 In member function ‘void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = double; _Alloc = std::allocator]’,
2024-07-02T16:55:12,030     inlined from ‘std::back_insert_iterator<_Container>& std::back_insert_iterator<_Container>::operator=(const typename _Container::value_type&) [with _Container = std::vector]’ at /usr/include/c++/12/bits/stl_iterator.h:735:22,
2024-07-02T16:55:12,030     inlined from ‘_OutputIterator std::partial_sum(_InputIterator, _InputIterator, _OutputIterator) [with _InputIterator = __gnu_cxx::__normal_iterator >; _OutputIterator = back_insert_iterator >]’ at /usr/include/c++/12/bits/stl_numeric.h:274:16,
2024-07-02T16:55:12,031     inlined from ‘void std::discrete_distribution<_IntType>::param_type::_M_initialize() [with _IntType = int]’ at /usr/include/c++/12/bits/random.tcc:2679:23:
2024-07-02T16:55:12,031 /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type ‘__gnu_cxx::__normal_iterator >’ changed in GCC 7.1
2024-07-02T16:55:12,032  1287 | _M_realloc_insert(end(), __x);
2024-07-02T16:55:12,032       | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
2024-07-02T16:55:12,033 ninja: build stopped: subcommand failed.
2024-07-02T16:55:12,034 *** CMake build failed
2024-07-02T16:55:12,036 ERROR: Building wheel for llama-cpp-python (pyproject.toml) exited with 1
2024-07-02T16:55:12,049 full command: /usr/bin/python3 /usr/local/lib/python3.11/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /tmp/tmp4ksg5ib1
2024-07-02T16:55:12,050 cwd: /tmp/pip-wheel-i4vgucre/llama-cpp-python_ac63946418344e25b39ca071585534ac
2024-07-02T16:55:12,050 Building wheel for llama-cpp-python (pyproject.toml): finished with status 'error'
2024-07-02T16:55:12,053 ERROR: Failed building wheel for llama-cpp-python
2024-07-02T16:55:12,055 Failed to build llama-cpp-python
2024-07-02T16:55:12,056 ERROR: Failed to build one or more wheels
2024-07-02T16:55:12,058 Exception information:
2024-07-02T16:55:12,058 Traceback (most recent call last):
2024-07-02T16:55:12,058   File "/usr/local/lib/python3.11/dist-packages/pip/_internal/cli/base_command.py", line 180, in exc_logging_wrapper
2024-07-02T16:55:12,058     status = run_func(*args)
2024-07-02T16:55:12,058              ^^^^^^^^^^^^^^^
2024-07-02T16:55:12,058   File "/usr/local/lib/python3.11/dist-packages/pip/_internal/cli/req_command.py", line 245, in wrapper
2024-07-02T16:55:12,058     return func(self, options, args)
2024-07-02T16:55:12,058            ^^^^^^^^^^^^^^^^^^^^^^^^^
2024-07-02T16:55:12,058   File "/usr/local/lib/python3.11/dist-packages/pip/_internal/commands/wheel.py", line 181, in run
2024-07-02T16:55:12,058     raise CommandError("Failed to build one or more wheels")
2024-07-02T16:55:12,058 pip._internal.exceptions.CommandError: Failed to build one or more wheels
2024-07-02T16:55:12,062 Removed build tracker: '/tmp/pip-build-tracker-vawopyc0'
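Editor's note on the failure above: the many "parameter passing … changed in GCC 7.1" lines are non-fatal ABI diagnostics that GCC emits on 32-bit ARM (armhf); the compile error that actually stopped ninja is not visible in this excerpt. A common way to retry such an install, sketched here and NOT confirmed by this log, is to prefer a prebuilt wheel from piwheels (the index is already in the log's search list) and, if a source build is still required, pass CMake options through the `CMAKE_ARGS` environment variable, which is the mechanism llama-cpp-python documents for its build. The specific flag value below is only an example, and the command is assembled into a variable and echoed rather than executed:

```shell
# Example CMake flag only; real values depend on the actual compile error.
export CMAKE_ARGS="-DCMAKE_BUILD_TYPE=Release"

# Prefer a prebuilt wheel over building from source; piwheels serves
# armhf wheels and is already configured as an extra index in this log.
RETRY_CMD="python3 -m pip install --prefer-binary --extra-index-url https://www.piwheels.org/simple llama-cpp-python"

# Print the command instead of running it, so the sketch has no side effects.
echo "$RETRY_CMD"
```

`--prefer-binary` tells pip to take a compatible wheel even when a newer source distribution exists, which sidesteps the failing CMake/ninja build entirely when piwheels has a matching wheel.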