Dataset schema (one record per issue):
- repo_name: string (length 9-75)
- topic: string (30 classes)
- issue_number: int64 (1-203k)
- title: string (length 1-976)
- body: string (length 0-254k)
- state: string (2 classes)
- created_at: string (length 20)
- updated_at: string (length 20)
- url: string (length 38-105)
- labels: list (length 0-9)
- user_login: string (length 1-39)
- comments_count: int64 (0-452)
jmcnamara/XlsxWriter
pandas
997
question: Is it possible to create a pie-of-pie chart?
### Question Is it possible to create a pie-of-pie or bar-of-pie chart with XlsxWriter? If not, is there a workaround or an alternative way to represent the same information?
closed
2023-07-01T01:16:58Z
2023-07-01T15:06:05Z
https://github.com/jmcnamara/XlsxWriter/issues/997
[ "question" ]
rmsapre
1
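The record above asks about pie-of-pie charts. As far as I know, XlsxWriter exposes plain `pie`, `doughnut`, and similar chart types but no native pie-of-pie/bar-of-pie subtype, so a common workaround is two adjacent pie charts: a main pie whose smallest categories collapse into an "Other" slice, plus a second pie breaking that slice out. A minimal sketch of the data-splitting step (the helper name and the `n_detail` parameter are hypothetical):

```python
def split_for_pie_of_pie(values, n_detail=3):
    """Split category totals into a main pie plus a detail pie.

    The n_detail smallest categories are collapsed into one 'Other'
    slice on the main pie and broken out on the second pie.
    """
    ordered = sorted(values.items(), key=lambda kv: kv[1], reverse=True)
    main = dict(ordered[:-n_detail])
    detail = dict(ordered[-n_detail:])
    main["Other"] = sum(detail.values())
    return main, detail

main, detail = split_for_pie_of_pie(
    {"A": 50, "B": 30, "C": 10, "D": 6, "E": 4}, n_detail=3
)
print(main)    # → {'A': 50, 'B': 30, 'Other': 20}
print(detail)  # → {'C': 10, 'D': 6, 'E': 4}
```

Each of the two dicts can then be written to its own worksheet range and charted separately with `workbook.add_chart({'type': 'pie'})`.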
microsoft/unilm
nlp
1,361
How to load a beit3 fine-tuned weight such as beit3_base_patch16_224_nlvr2.pth?
timm.create_model doesn't support beit3 yet, so how do I load a fine-tuned weight such as beit3_base_patch16_224_nlvr2.pth?
open
2023-11-08T13:16:14Z
2024-02-22T03:01:34Z
https://github.com/microsoft/unilm/issues/1361
[]
yangzhj53
6
pyeve/eve
flask
1,077
Time comparison: $gt returns the same query results as $gte
First of all, thanks for this project! I have a use case that queries for data newer than a certain time, using "_updated": {"$gt": "2017-08-26 09:12:59"}, but the query also returns a document equal to this time. The effect is equivalent to "$gte".
closed
2017-10-19T07:50:29Z
2018-05-18T16:19:48Z
https://github.com/pyeve/eve/issues/1077
[ "stale" ]
fantasykai
1
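One plausible cause for the $gt/$gte confusion above (an assumption, not confirmed against Eve's source): stored `_updated` values keep sub-second precision while the query string is truncated to whole seconds, so a document stamped a few hundred milliseconds into "09:12:59" satisfies `$gt` yet renders identically to the query value:

```python
from datetime import datetime

# Stored value keeps microseconds; the query string has second precision.
stored = datetime(2017, 8, 26, 9, 12, 59, 430000)
query = datetime.strptime("2017-08-26 09:12:59", "%Y-%m-%d %H:%M:%S")

print(stored > query)  # → True: $gt matches

# Rendered at second precision, the two look equal:
print(stored.strftime("%Y-%m-%d %H:%M:%S"))  # → 2017-08-26 09:12:59
```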
pallets/flask
python
4,594
Recommended link update for "Make the Project Installable" docs
The link to the "official packaging guide" in the "Make the Project Installable" tutorial section makes little reference to `setup.py` and no reference to the `MANIFEST.in` file. I recommend updating the link from the ["Packaging Python Projects"](https://packaging.python.org/en/latest/tutorials/packaging-projects/) tutorial to the ["Packaging and distributing projects"](https://packaging.python.org/en/latest/guides/distributing-packages-using-setuptools/) guide, which details usage of `setup.py` and the `MANIFEST.in` file. I'd be happy to make a PR for this update if the change is welcome.
closed
2022-05-12T19:18:01Z
2022-06-07T00:05:24Z
https://github.com/pallets/flask/issues/4594
[ "docs" ]
nkabrown
1
piskvorky/gensim
machine-learning
2,657
Tweak placeholders
Continued from https://github.com/RaRe-Technologies/gensim/pull/2654 Point each placeholder to its corresponding tutorial.
open
2019-10-29T08:43:15Z
2019-10-29T08:43:22Z
https://github.com/piskvorky/gensim/issues/2657
[ "documentation" ]
mpenkov
0
dask/dask
pandas
11,681
Unable to use scheduler 'external_address'
**Describe the issue**: I can't find in the docs how to specify the `external_address` for the scheduler when initiating the LocalCluster. By guesswork, I failed to "inject" the setting via `scheduler_kwargs` due to `RuntimeError(f"Cluster failed to start: {e}") from e`. **Minimal Complete Verifiable Example**: ```python cluster = LocalCluster( host="0.0.0.0", scheduler_kwargs={"external_address": "localhost"}, ) ``` The error stack: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) File ~/.pyenv/versions/3.12.2/envs/venv_3.12.2/lib/python3.12/site-packages/distributed/deploy/spec.py:324, in SpecCluster._start(self) 323 cls = import_term(cls) --> 324 self.scheduler = cls(**self.scheduler_spec.get("options", {})) 325 self.scheduler = await self.scheduler File ~/.pyenv/versions/3.12.2/envs/venv_3.12.2/lib/python3.12/site-packages/distributed/scheduler.py:4019, in Scheduler.__init__(self, loop, services, service_kwargs, allowed_failures, extensions, validate, scheduler_file, security, worker_ttl, idle_timeout, interface, host, port, protocol, dashboard_address, dashboard, http_prefix, preload, preload_argv, plugins, contact_address, transition_counter_max, jupyter, **kwargs) 4005 SchedulerState.__init__( 4006 self, 4007 aliases=aliases, (...) 
4017 transition_counter_max=transition_counter_max, 4018 ) -> 4019 ServerNode.__init__( 4020 self, 4021 handlers=self.handlers, 4022 stream_handlers=merge(worker_handlers, client_handlers), 4023 connection_limit=connection_limit, 4024 deserialize=False, 4025 connection_args=self.connection_args, 4026 **kwargs, 4027 ) 4029 if self.worker_ttl: TypeError: Server.__init__() got an unexpected keyword argument 'external_address' The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) Cell In[2], line 1 ----> 1 cluster = LocalCluster( 2 host="0.0.0.0", 3 scheduler_kwargs={"external_address": "localhost"}, 4 ) File ~/.pyenv/versions/3.12.2/envs/venv_3.12.2/lib/python3.12/site-packages/distributed/deploy/local.py:256, in LocalCluster.__init__(self, name, n_workers, threads_per_worker, processes, loop, start, host, ip, scheduler_port, silence_logs, dashboard_address, worker_dashboard_address, diagnostics_port, services, worker_services, service_kwargs, asynchronous, security, protocol, blocked_handlers, interface, worker_class, scheduler_kwargs, scheduler_sync_interval, **worker_kwargs) 253 worker = {"cls": worker_class, "options": worker_kwargs} 254 workers = {i: worker for i in range(n_workers)} --> 256 super().__init__( 257 name=name, 258 scheduler=scheduler, 259 workers=workers, 260 worker=worker, 261 loop=loop, 262 asynchronous=asynchronous, 263 silence_logs=silence_logs, 264 security=security, 265 scheduler_sync_interval=scheduler_sync_interval, 266 ) File ~/.pyenv/versions/3.12.2/envs/venv_3.12.2/lib/python3.12/site-packages/distributed/deploy/spec.py:284, in SpecCluster.__init__(self, workers, scheduler, worker, asynchronous, loop, security, silence_logs, name, shutdown_on_close, scheduler_sync_interval) 282 if not self.called_from_running_loop: 283 self._loop_runner.start() --> 284 self.sync(self._start) 285 try: 286 self.sync(self._correct_state) File 
~/.pyenv/versions/3.12.2/envs/venv_3.12.2/lib/python3.12/site-packages/distributed/utils.py:363, in SyncMethodMixin.sync(self, func, asynchronous, callback_timeout, *args, **kwargs) 361 return future 362 else: --> 363 return sync( 364 self.loop, func, *args, callback_timeout=callback_timeout, **kwargs 365 ) File ~/.pyenv/versions/3.12.2/envs/venv_3.12.2/lib/python3.12/site-packages/distributed/utils.py:439, in sync(loop, func, callback_timeout, *args, **kwargs) 436 wait(10) 438 if error is not None: --> 439 raise error 440 else: 441 return result File ~/.pyenv/versions/3.12.2/envs/venv_3.12.2/lib/python3.12/site-packages/distributed/utils.py:413, in sync.<locals>.f() 411 awaitable = wait_for(awaitable, timeout) 412 future = asyncio.ensure_future(awaitable) --> 413 result = yield future 414 except Exception as exception: 415 error = exception File ~/.pyenv/versions/3.12.2/envs/venv_3.12.2/lib/python3.12/site-packages/tornado/gen.py:767, in Runner.run(self) 765 try: 766 try: --> 767 value = future.result() 768 except Exception as e: 769 # Save the exception for later. It's important that 770 # gen.throw() not be called inside this try/except block 771 # because that makes sys.exc_info behave unexpectedly. 772 exc: Optional[Exception] = e File ~/.pyenv/versions/3.12.2/envs/venv_3.12.2/lib/python3.12/site-packages/distributed/deploy/spec.py:335, in SpecCluster._start(self) 333 self.status = Status.failed 334 await self._close() --> 335 raise RuntimeError(f"Cluster failed to start: {e}") from e RuntimeError: Cluster failed to start: Server.__init__() got an unexpected keyword argument 'external_address' ``` **Anything else we need to know?**: Nope **Environment**: - Dask version: 2024.12.1 - Python version:3.12.2 - Operating System:MacOS / Ubuntu 22.04 - Install method (conda, pip, source): pip
closed
2025-01-20T08:02:15Z
2025-02-07T11:41:57Z
https://github.com/dask/dask/issues/11681
[ "needs triage" ]
carusyte
3
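The traceback above shows `scheduler_kwargs` being forwarded verbatim into `Scheduler.__init__` and on into `Server.__init__`, which rejects unknown keys. The failure mode can be reproduced generically, along with a pre-filtering pattern using `inspect.signature` (the `Server` class and `filtered_kwargs` helper here are illustrative stand-ins, not dask APIs):

```python
import inspect

class Server:
    """Stand-in for a constructor that rejects unknown keyword arguments."""
    def __init__(self, host="0.0.0.0"):
        self.host = host

def filtered_kwargs(cls, **kwargs):
    """Keep only the kwargs that cls.__init__ actually accepts."""
    allowed = set(inspect.signature(cls.__init__).parameters) - {"self"}
    return {k: v for k, v in kwargs.items() if k in allowed}

kwargs = {"host": "0.0.0.0", "external_address": "localhost"}
try:
    Server(**kwargs)  # mirrors the reported failure
except TypeError as e:
    print(e)  # message names the unexpected keyword 'external_address'

server = Server(**filtered_kwargs(Server, **kwargs))  # unknown key dropped
```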
miguelgrinberg/flasky
flask
470
Cannot run docker container on VPS
I want to run the docker containers on my server. The services (database and app) can be started on my MacBook without problems, but when I start the same code on my server I get the following error message: ``` ERROR: for flasky-application Cannot start service app: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 0 caused \\\"error running hook: exit status 2, stdout: , stderr: runtime/cgo: pthread_create failed: Resource temporarily unavailable ERROR: for app Cannot start service app: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 0 caused \\\"error running hook: exit status 2, stdout: , stderr: runtime/cgo: pthread_create failed: Resource temporarily unavailable ERROR: Encountered errors while bringing up the project. ``` what can I do to make this work?
closed
2020-05-29T17:18:31Z
2020-05-29T19:02:19Z
https://github.com/miguelgrinberg/flasky/issues/470
[]
mobeit00
0
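The `pthread_create failed: Resource temporarily unavailable` error above is, in my experience, usually a host resource limit (for example the user's process/thread cap via `ulimit -u` / RLIMIT_NPROC, or a cgroup pids limit on the VPS) rather than anything in the compose file; that is an assumption worth verifying before touching the app. The current limit can be read with the stdlib (POSIX only):

```python
import resource

# RLIMIT_NPROC caps processes/threads per user; exhausting it makes
# pthread_create fail with EAGAIN ("Resource temporarily unavailable").
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("nproc limit: soft=%s hard=%s" % (soft, hard))
```

A tiny soft limit relative to what the containers spawn would explain why the same images run on a MacBook but not on the VPS.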
coleifer/sqlite-web
flask
63
Raspberry Pi installation
How do I install it on a Raspberry Pi?
closed
2019-09-14T05:08:33Z
2019-09-14T15:00:50Z
https://github.com/coleifer/sqlite-web/issues/63
[]
alanmilinovic
3
PokemonGoF/PokemonGo-Bot
automation
5,328
Starting bot error
Install By: ./setup.sh -i Run By: ./run.sh OS: Ubuntu 14.04.4 LTS Branch: master Python Version: Python 2.7.12 ![ss](https://cloud.githubusercontent.com/assets/21981334/18385958/384515c0-76c6-11e6-8f56-e7b02fca9a05.png)
closed
2016-09-09T11:48:16Z
2016-09-22T05:58:04Z
https://github.com/PokemonGoF/PokemonGo-Bot/issues/5328
[]
zZzWhoAmIzZz
1
gradio-app/gradio
python
10,084
tomlkit is out of date
### Describe the bug The latest version of tomlkit is 0.13.2. This project currently requires `tomlkit==0.12.0`. https://github.com/gradio-app/gradio/blob/main/requirements.txt#L24 I can work around it by creating a fork. ### Have you searched existing issues? 🔎 - [X] I have searched and found no existing issues ### Reproduction requirements.txt ```requirements gradio==5.7.1 tomlkit==0.13.2 ``` ```sh pip install -r requirements.txt ``` ### Screenshot ![image](https://github.com/user-attachments/assets/b45c6c51-18a1-41ab-981b-ce70f28c649a) ### Logs ```shell INFO: pip is looking at multiple versions of gradio to determine which version is compatible with other requirements. This could take a while. ERROR: Cannot install -r requirements.txt (line 13) and tomlkit==0.13.2 because these package versions have conflicting dependencies. The conflict is caused by: The user requested tomlkit==0.13.2 gradio 5.7.1 depends on tomlkit==0.12.0 To fix this you could try to: 1. loosen the range of package versions you've specified 2. remove package versions to allow pip to attempt to solve the dependency conflict ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts ``` ``` ### System Info ```shell ➜ ✗ poetry run gradio environment Launching in *reload mode* on: http://127.0.0.1:7860 (Press CTRL+C to quit) Watching: '/home/brian/work/Presence-AI/presence/livekit/agent/.venv/lib/python3.12/site-packages/gradio', '/home/brian/work/Presence-AI/presence/livekit/agent' ERROR: Error loading ASGI app. Could not import module "environment". ``` ### Severity I can work around it
closed
2024-11-29T23:45:01Z
2024-12-02T20:02:54Z
https://github.com/gradio-app/gradio/issues/10084
[ "bug" ]
btakita
0
ray-project/ray
machine-learning
51,276
[core][gpu-objects] Support collective operations
### Description Support collective operations of GPU objects such as gather / scatter / all-reduce. ### Use case _No response_
open
2025-03-11T22:43:35Z
2025-03-11T22:43:51Z
https://github.com/ray-project/ray/issues/51276
[ "enhancement", "P2", "core", "gpu-objects" ]
kevin85421
0
s3rius/FastAPI-template
asyncio
65
Question: where to put Mangum handler?
How do I add the Mangum handler into this template produced code? Intention: https://dwisulfahnur.medium.com/fastapi-deployment-to-aws-lambda-with-serverless-framework-b637b455142c ``` from fastapi import FastAPI from app.api.api_v1.api import router as api_router from mangum import Mangum app = FastAPI() @app.get("/") async def root(): return {"message": "Hello World!"} app.include_router(api_router, prefix="api/v1") handler = Mangum(app) ``` not sure how to wrap the handler to the app? ``` def main() -> None: """Entrypoint of the application.""" uvicorn.run( "<project>.web.application:get_app", workers=settings.workers_count, host=settings.host, port=settings.port, reload=settings.reload, factory=True, ) ``` ``` def get_app() -> FastAPI: """ Get FastAPI application. This is the main constructor of an application. :return: application. """ app = FastAPI( title="<project>", description="<project>", version=metadata.version("<project>"), docs_url=None, redoc_url=None, openapi_url="/api/openapi.json", default_response_class=UJSONResponse, ) app.on_event("startup")(startup(app)) app.on_event("shutdown")(shutdown(app)) app.include_router(router=api_router, prefix="/api") app.mount( "/static", StaticFiles(directory=APP_ROOT / "static"), name="static", ) return app ```
closed
2022-01-27T20:57:17Z
2023-12-12T23:51:25Z
https://github.com/s3rius/FastAPI-template/issues/65
[]
am1ru1
6
recommenders-team/recommenders
deep-learning
2,154
[BUG] Ranking Evaluation Metrics Exceed 1 with "by_threshold" Relevancy Method
### Description Hello! I encountered an issue while evaluating the BPR (Bayesian Personalized Ranking) model with basically the same code provided in the example, on a different dataset. Specifically, when using the "by_threshold" relevancy method with ranking metrics, the computed values for precision@k, ndcg@k, and map@k exceed 1, which seems incorrect. This issue does not occur when switching the relevancy method to "top_k." ### How do we replicate the issue? I use the following parameters for BPR (all using the default seed): ``` bpr = cornac.models.BPR( k=200, max_iter=100, learning_rate=0.01, lambda_reg=0.001, verbose=True ) ``` Using these evaluation calls: ``` TOP_K = 10 threshold = 50 eval_map = map_at_k(test, all_predictions, col_prediction="prediction", relevancy_method='by_threshold', threshold=threshold, k=TOP_K) eval_ndcg = ndcg_at_k(test, all_predictions, col_prediction="prediction", relevancy_method='by_threshold', threshold=threshold, k=TOP_K) eval_precision = precision_at_k( test, all_predictions, col_prediction="prediction", relevancy_method='by_threshold', threshold=threshold, k=TOP_K) ``` Here is the dataset I test on: https://github.com/mnhqut/rec_sys-dataset/blob/main/data.csv My result: MAP: 1.417529 NDCG: 1.359902 Precision@K: 2.256466 ### Willingness to contribute - [ ] Yes, I can contribute to this issue independently. - [x] Yes, I can contribute to this issue with guidance from the Recommenders community. - [ ] No, I cannot contribute at this time.
open
2024-08-23T15:01:27Z
2024-08-29T15:09:29Z
https://github.com/recommenders-team/recommenders/issues/2154
[ "bug" ]
mnhqut
4
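Precision@k, MAP, and NDCG are all defined on [0, 1], so the values reported above (e.g. Precision@K ≈ 2.26) point at the relevancy set rather than the metric formulas. A stdlib reference implementation of precision@k is handy as a sanity check (illustrative only, not the recommenders code path):

```python
def precision_at_k(relevant, ranked, k):
    """Fraction of the top-k ranked items that are relevant; always in [0, 1]."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / k

p = precision_at_k(relevant={"a", "b", "d"}, ranked=["a", "c", "b", "e"], k=2)
print(p)  # → 0.5
```

Values above 1 typically mean a hit is being counted more than once per rank position, for example via duplicate (user, item) rows in the relevancy frame; that is a guess, but it is consistent with only the "by_threshold" method being affected.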
widgetti/solara
flask
156
No clear way to define dark theme
When I look at the solara output, I see that it is possible to define "theme--dark" in most widgets (see below). However, I couldn't find a programmatic way to set the theme in solara. Is it possible to set theme of widgets in a Page with a single statement from Python? ![image](https://github.com/widgetti/solara/assets/2515171/45a173b6-5354-426c-b23c-86d47afe5266)
closed
2023-06-16T08:17:26Z
2023-06-16T14:40:46Z
https://github.com/widgetti/solara/issues/156
[ "documentation" ]
hkayabilisim
3
CorentinJ/Real-Time-Voice-Cloning
pytorch
474
Should we create a Slack Channel to communicate?
@blue-fish should we create a Slack Channel to communicate?
closed
2020-08-06T16:03:47Z
2022-09-14T18:03:50Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/474
[]
mbdash
12
pytorch/vision
machine-learning
8,016
Scheduled workflow failed
Oh no, something went wrong in the scheduled workflow tests/download. Please look into it: https://github.com/pytorch/vision/actions/runs/6403841200 Feel free to close this if this was just a one-off error. cc @pmeier
closed
2023-10-04T09:10:32Z
2023-10-12T07:19:39Z
https://github.com/pytorch/vision/issues/8016
[ "bug", "module: datasets" ]
github-actions[bot]
0
lucidrains/vit-pytorch
computer-vision
58
Error shows that ViT object has no attribute 'pool'
I am trying to load my saved model and then test it on some images. This is my code snippet:

```python
device = "cuda"
with torch.no_grad():
    epoch_val_accuracy = 0
    epoch_val_loss = 0
    for data, label in val_loader:
        data = data.to(device, dtype=torch.float)
        label = label.to(device, dtype=torch.long)
        val_output = model(data)
        val_loss = criterion(val_output, label)
        acc = (val_output.argmax(dim=1) == label).float().mean()
        epoch_val_accuracy += acc / len(val_loader)
        epoch_val_loss += val_loss / len(val_loader)
```

the screenshot is attached ![Screenshot from 2021-01-08 19-44-07](https://user-images.githubusercontent.com/67365559/104025030-0996b180-51ea-11eb-9bfc-33aa523ff2ea.png)
closed
2021-01-08T14:15:24Z
2021-01-11T17:15:57Z
https://github.com/lucidrains/vit-pytorch/issues/58
[]
Abhranta
4
autokey/autokey
automation
94
broken link in ABOUT
## Classification: none listed are relevant ## Summary The ABOUT file links to the autokey website: https://code.google.com/archive/p/autokey/ which is a 404. It also seems the description of what the program does was sidestepped by pointing to the autokey website instead (which is reasonable when the link works), so now there is no description at all.
closed
2017-07-27T09:57:38Z
2017-07-27T14:52:10Z
https://github.com/autokey/autokey/issues/94
[]
steanne
2
d2l-ai/d2l-en
data-science
2,107
Evaluation results of DenseNet TF look wrong
http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_convolutional-modern/densenet.html ![Screen Shot 2022-04-20 at 5 56 30 PM](https://user-images.githubusercontent.com/22279212/164350463-62864059-8537-4dbf-9009-6fc8215bce2f.png)
closed
2022-04-21T00:58:48Z
2022-12-15T23:56:03Z
https://github.com/d2l-ai/d2l-en/issues/2107
[]
astonzhang
2
pyeve/eve
flask
645
Restoring soft_delete using PATCH results in 422 unknown field “_deleted”
@nckpark [this](https://stackoverflow.com/questions/30632740/restoring-soft-delete-using-patch-results-in-422-unknown-field-deleted) soft-delete item was reported on Stack Overflow. Would you be willing to triage it? Might be a bug.
closed
2015-06-04T05:51:05Z
2015-06-09T06:55:50Z
https://github.com/pyeve/eve/issues/645
[]
nicolaiarocci
2
custom-components/pyscript
jupyter
81
task.wait_until ignores state_check_now
The task.wait_until() function always checks state on entry, ignoring the documented state_check_now argument. In fact the state_check_now argument of wait_until() in trigger.py is declared but never used. pyscript version: 0.32
closed
2020-11-05T08:12:52Z
2020-11-06T01:54:23Z
https://github.com/custom-components/pyscript/issues/81
[]
dadler
2
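The intended semantics of the unused flag, as the pyscript docs describe them, are roughly the following (a sketch of the contract, not pyscript's implementation; the return labels are illustrative):

```python
def wait_until(condition, state_check_now=True):
    """Return immediately only if state_check_now permits the initial
    check and the condition already holds; otherwise wait for the next
    state change (stubbed out here)."""
    if state_check_now and condition():
        return "already-satisfied"
    return "waiting-for-change"

print(wait_until(lambda: True, state_check_now=True))   # → already-satisfied
print(wait_until(lambda: True, state_check_now=False))  # → waiting-for-change
```

The reported bug is equivalent to dropping the `state_check_now and` guard, so the entry check always runs regardless of the argument.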
exaloop/codon
numpy
432
Segmentation fault with -release flag
on current develop, works fine without -release flag, segfaults with it, even on small examples --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=NULL} --- +++ killed by SIGSEGV (core dumped) +++ Segmentation fault (core dumped)
open
2023-07-27T14:06:11Z
2024-11-10T06:25:39Z
https://github.com/exaloop/codon/issues/432
[]
Miezhiko
4
mage-ai/mage-ai
data-science
5,460
Add Databend cloud data warehouse
**Is your feature request related to a problem? Please describe.** [Databend](https://www.databend.com/) is an economical alternative to Snowflake. **Describe the solution you'd like** Build an integration with Databend for IO and data integration pipelines. **Describe alternatives you've considered** Snowflake integration.
open
2024-10-02T18:00:19Z
2024-10-09T18:49:26Z
https://github.com/mage-ai/mage-ai/issues/5460
[ "data integration", "io" ]
TalaatHasanin
0
pytest-dev/pytest-html
pytest
144
add custom test results to report
Hi, I have a test function like this: ``` def test_score(): # load csv of one column x = PrettyTable() x.field_names = ['old','new'] for val in column: new_val = some_func(val) x.add_row([val, new_val]) assert val == new_val output = x.get_string() # i want to print this in report when test success ``` and many other functions using this pattern, I can't figure out a way to retrieve output value inside pytest_html_results_table_html hook. Do you have any suggestions ? Thank you
closed
2018-01-04T17:18:04Z
2023-06-20T08:07:12Z
https://github.com/pytest-dev/pytest-html/issues/144
[]
ohayak
5
qubvel-org/segmentation_models.pytorch
computer-vision
75
How to override input 3x3 param for Encoders
Hi, I see that it defaults to false for your encoders in the param dict. How do I override it when passing it to the segmentation models?
closed
2019-10-02T13:50:06Z
2022-02-09T01:53:31Z
https://github.com/qubvel-org/segmentation_models.pytorch/issues/75
[ "Stale" ]
jaideep11061982
5
koxudaxi/datamodel-code-generator
pydantic
1,740
Invalid definition of graphql requirement
**Describe the bug** Invalid definition of the graphql requirement. **To Reproduce** For a GraphQL schema, using the command line: ``` $ datamodel-codegen --input schema.graphql --output model.py --input-file-type graphql ``` we get ```bash Exception: Please run `$pip install 'datamodel-code-generator[graphql]`' to generate data-model from a GraphQL schema. ``` **Expected behavior** We get this behavior because we have **pyproject.toml** ```toml ... [tool.poetry.group.graphql.dependencies] graphql-core = "^3.2.3" ... ``` It must be like this **pyproject.toml** ```toml ... [tool.poetry.dependencies] ... graphql-core = { version = "^3.2.3", optional = true } ... ``` **Version:** - OS: [e.g. iOS] - Python version: 3.7-3.11 (I tested on 3.11) - datamodel-code-generator version: 0.25.0 I will create a fix.
closed
2023-11-26T13:32:41Z
2023-11-26T19:08:12Z
https://github.com/koxudaxi/datamodel-code-generator/issues/1740
[]
denisart
1
ets-labs/python-dependency-injector
flask
873
Making injections into class attributes can't work with `Resource` in nested container.
Reproducible project: https://github.com/YogiLiu/pdi_issue When I run `python -m pdi_issue.main`, an error was raised: ``` Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "/home/yogiliu/Workspace/YogiLiu/pdi_issue/src/pdi_issue/main.py", line 4, in <module> container = SrvContainer() ^^^^^^^^^^^^^^ File "src/dependency_injector/containers.pyx", line 727, in dependency_injector.containers.DeclarativeContainer.__new__ File "src/dependency_injector/providers.pyx", line 4916, in dependency_injector.providers.deepcopy File "src/dependency_injector/providers.pyx", line 4923, in dependency_injector.providers.deepcopy File "/home/yogiliu/.local/share/uv/python/cpython-3.11.10-linux-x86_64-gnu/lib/python3.11/copy.py", line 146, in deepcopy y = copier(x, memo) ^^^^^^^^^^^^^^^ File "/home/yogiliu/.local/share/uv/python/cpython-3.11.10-linux-x86_64-gnu/lib/python3.11/copy.py", line 231, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) ^^^^^^^^^^^^^^^^^^^^^ File "/home/yogiliu/.local/share/uv/python/cpython-3.11.10-linux-x86_64-gnu/lib/python3.11/copy.py", line 153, in deepcopy y = copier(memo) ^^^^^^^^^^^^ File "src/dependency_injector/providers.pyx", line 4024, in dependency_injector.providers.Container.__deepcopy__ File "src/dependency_injector/providers.pyx", line 4923, in dependency_injector.providers.deepcopy File "/home/yogiliu/.local/share/uv/python/cpython-3.11.10-linux-x86_64-gnu/lib/python3.11/copy.py", line 153, in deepcopy y = copier(memo) ^^^^^^^^^^^^ File "src/dependency_injector/containers.pyx", line 125, in dependency_injector.containers.DynamicContainer.__deepcopy__ File "src/dependency_injector/providers.pyx", line 4916, in dependency_injector.providers.deepcopy File "src/dependency_injector/providers.pyx", line 4923, in dependency_injector.providers.deepcopy File 
"/home/yogiliu/.local/share/uv/python/cpython-3.11.10-linux-x86_64-gnu/lib/python3.11/copy.py", line 146, in deepcopy y = copier(x, memo) ^^^^^^^^^^^^^^^ File "/home/yogiliu/.local/share/uv/python/cpython-3.11.10-linux-x86_64-gnu/lib/python3.11/copy.py", line 231, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) ^^^^^^^^^^^^^^^^^^^^^ File "/home/yogiliu/.local/share/uv/python/cpython-3.11.10-linux-x86_64-gnu/lib/python3.11/copy.py", line 153, in deepcopy y = copier(memo) ^^^^^^^^^^^^ File "src/dependency_injector/providers.pyx", line 3669, in dependency_injector.providers.Resource.__deepcopy__ dependency_injector.errors.Error: Can not copy initialized resource ```
open
2025-03-23T12:36:43Z
2025-03-23T13:33:14Z
https://github.com/ets-labs/python-dependency-injector/issues/873
[]
YogiLiu
2
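The traceback above bottoms out in `Resource.__deepcopy__` raising `Can not copy initialized resource`; since container instantiation deep-copies the whole provider tree, an already-initialized `Resource` inside a nested container trips that guard. The mechanism in isolation (a minimal stand-in, not the library's code):

```python
import copy

class Resource:
    """Stand-in provider that refuses deepcopy once initialized."""
    def __init__(self):
        self.initialized = False

    def init(self):
        self.initialized = True

    def __deepcopy__(self, memo):
        if self.initialized:
            raise RuntimeError("Can not copy initialized resource")
        new = Resource()
        memo[id(self)] = new
        return new

r = Resource()
copy.deepcopy(r)      # fine before initialization
r.init()
try:
    copy.deepcopy(r)  # mirrors the reported failure
except RuntimeError as e:
    print(e)          # → Can not copy initialized resource
```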
huggingface/datasets
numpy
7,433
`Dataset.map` ignores existing caches and remaps when ran with different `num_proc`
### Describe the bug If you `map` a dataset and save it to a specific `cache_file_name` with a specific `num_proc`, and then call map again with that same existing `cache_file_name` but a different `num_proc`, the dataset will be re-mapped. ### Steps to reproduce the bug 1. Download a dataset ```python import datasets dataset = datasets.load_dataset("ylecun/mnist") ``` ``` Generating train split: 100%|██████████| 60000/60000 [00:00<00:00, 116429.85 examples/s] Generating test split: 100%|██████████| 10000/10000 [00:00<00:00, 103310.27 examples/s] ``` 2. `map` and cache it with a specific `num_proc` ```python cache_file_name="./cache/train.map" dataset["train"].map(lambda x: x, cache_file_name=cache_file_name, num_proc=2) ``` ``` Map (num_proc=2): 100%|██████████| 60000/60000 [00:01<00:00, 53764.03 examples/s] ``` 3. `map` it with a different `num_proc` and the same `cache_file_name` as before ```python dataset["train"].map(lambda x: x, cache_file_name=cache_file_name, num_proc=3) ``` ``` Map (num_proc=3): 100%|██████████| 60000/60000 [00:00<00:00, 65377.12 examples/s] ``` ### Expected behavior If I specify an existing `cache_file_name`, I don't expect using a different `num_proc` than the one that was used to generate it to cause the dataset to have be be re-mapped. ### Environment info ```console $ datasets-cli env - `datasets` version: 3.3.2 - Platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35 - Python version: 3.10.16 - `huggingface_hub` version: 0.29.1 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0 ```
open
2025-03-03T05:51:26Z
2025-03-04T05:55:08Z
https://github.com/huggingface/datasets/issues/7433
[]
ringohoffman
2
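The behavior above is consistent with `num_proc` influencing the cache lookup: multiprocessing map writes one suffixed cache file per shard, so a different worker count looks for different files even under the same `cache_file_name` (an inference from the observed behavior, not a reading of the datasets source). A toy fingerprint shows the general pattern:

```python
import hashlib

def fingerprint(fn_src, num_proc=None, include_num_proc=True):
    """Toy cache key: when num_proc participates in the key, changing
    it invalidates the cache even for an identical map function."""
    parts = [fn_src]
    if include_num_proc:
        parts.append(str(num_proc))
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

same_fn = "lambda x: x"
print(fingerprint(same_fn, 2) == fingerprint(same_fn, 3))  # → False
print(fingerprint(same_fn, 2, include_num_proc=False)
      == fingerprint(same_fn, 3, include_num_proc=False))  # → True
```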
nolar/kopf
asyncio
631
Defining a pydantic model in a kopf handler file seems to break kopf startup
## Long story short If I define a pydantic class, kopf refuses to start ## Description Steps to repro: start out with a super-simple kopf handler file, such as: ```python import kopf @kopf.on.create('what.com', 'v1', 'stuffs') def create_fn(body, **kwargs): print(body) @kopf.on.update('what.com', 'v1', 'stuffs') def update_fn(body, **kwargs): print(body) ``` I'm using pipenv so I'd run this with: `pipenv run kopf run "src/crawler_manager/handler.py"`. All good. Now, expand the file with a simple pydantic model definition: ```python import kopf from pydantic import BaseModel, Field from typing import Dict class TestModel(BaseModel): name: str = Field(...) size: int size2: int @kopf.on.create('mohawkanalytics.com', 'v1', 'stuffs') def create_fn(body, **kwargs): print(json.dumps(body, indent=4, sort_keys=True)) @kopf.on.update('mohawkanalytics.com', 'v1', 'stuffs') def update_fn(body, **kwargs): print(json.dumps(body, indent=4, sort_keys=True)) ``` This code now throws: ``` File "<frozen importlib._bootstrap_external>", line 790, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "src/funnel_crawler_manager/handler.py", line 9, in <module> class TestModel(BaseModel): File "pydantic/main.py", line 247, in pydantic.main.ModelMetaclass.__new__ File "pydantic/typing.py", line 207, in pydantic.typing.resolve_annotations for p in parameters: KeyError: 'handler' ``` I have no idea why. The idea was to use a pydantic-defined model to define the CRD at if needed configure it during kopf startup (pydantic can generate openapi-compatible schemas, so this way we were hoping to maintain the CRD in code instead of in yaml). Testing done using `pydantic==1.7.3` I'm afraid I don't know more than this - I also don't know if the actual bug is in Pydantic or kopf. ## Environment * Kopf version: 0.28.3 * Kubernetes version: 3.9 * Python version: * OS/platform:
closed
2021-01-05T10:25:49Z
2021-01-22T20:04:59Z
https://github.com/nolar/kopf/issues/631
[ "bug" ]
trondhindenes
7
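`KeyError: 'handler'` surfaces in pydantic's `resolve_annotations`, which resolves a class's type hints by looking up `sys.modules[module_name]`. A plausible root cause (an assumption; the actual culprit may lie in either project) is that the handler file is executed as an ad-hoc module that never gets registered under its name in `sys.modules`, so that lookup fails. The mechanism in stdlib terms:

```python
import sys
import types
import typing

source = "from typing import Dict\nx: Dict[str, int] = {}"

# Execute a file's source as an ad-hoc module NOT registered in sys.modules:
mod = types.ModuleType("handler")
exec(source, mod.__dict__)

# A pydantic-style lookup by module name fails at this point:
try:
    sys.modules["handler"]
except KeyError as e:
    print("KeyError:", e)  # → KeyError: 'handler'

# Registering the module first lets annotation resolution succeed:
sys.modules["handler"] = mod
hints = typing.get_type_hints(mod)
print(hints)
```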
deepspeedai/DeepSpeed
deep-learning
7,037
[REQUEST] Why is the column linear layer with all-gather not implemented in DeepSpeed Inference?
**Is your feature request related to a problem? Please describe.** If there is no column-parallel linear layer with all-gather, we can't handle a single linear layer. I can see the row-parallel linear with all-reduce, aka [LinearAllreduce](https://github.com/deepspeedai/DeepSpeed/blob/e637677766e0a2063adc61ddd67b58abef74753e/deepspeed/module_inject/layers.py#L300), but there are no implementations of a column-parallel linear layer with all-gather. How can I set the linear type when running DiT models: ```python (transformer_blocks): ModuleList( (0-18): 19 x FluxTransformerBlock( (norm1): AdaLayerNormZero( (silu): SiLU() (linear): LinearLayer(in_features=3072, out_features=9216, bias=True, dtype=torch.bfloat16) (norm): LayerNorm((3072,), eps=1e-06, elementwise_affine=False) ) (norm1_context): AdaLayerNormZero( (silu): SiLU() (linear): LinearLayer(in_features=3072, out_features=9216, bias=True, dtype=torch.bfloat16) (norm): LayerNorm((3072,), eps=1e-06, elementwise_affine=False) ) (attn): Attention( (norm_q): RMSNorm() (norm_k): RMSNorm() (to_q): LinearLayer(in_features=3072, out_features=1536, bias=True, dtype=torch.bfloat16) (to_k): LinearLayer(in_features=3072, out_features=1536, bias=True, dtype=torch.bfloat16) (to_v): LinearLayer(in_features=3072, out_features=1536, bias=True, dtype=torch.bfloat16) (add_k_proj): LinearLayer(in_features=3072, out_features=1536, bias=True, dtype=torch.bfloat16) (add_v_proj): LinearLayer(in_features=3072, out_features=1536, bias=True, dtype=torch.bfloat16) (add_q_proj): LinearLayer(in_features=3072, out_features=1536, bias=True, dtype=torch.bfloat16) (to_out): ModuleList( (0): LinearLayer(in_features=3072, out_features=1536, bias=True, dtype=torch.bfloat16) (1): Dropout(p=0.0, inplace=False) ) (to_add_out): LinearLayer(in_features=3072, out_features=1536, bias=True, dtype=torch.bfloat16) (norm_added_q): RMSNorm() (norm_added_k): RMSNorm() ) (norm2): LayerNorm((3072,), eps=1e-06, elementwise_affine=False) (ff): FeedForward( (net): ModuleList( (0): GELU( 
(proj): LinearLayer(in_features=3072, out_features=6144, bias=True, dtype=torch.bfloat16) ) (1): Dropout(p=0.0, inplace=False) (2): LinearLayer(in_features=12288, out_features=1536, bias=True, dtype=torch.bfloat16) ) ) (norm2_context): LayerNorm((3072,), eps=1e-06, elementwise_affine=False) (ff_context): FeedForward( (net): ModuleList( (0): GELU( (proj): LinearLayer(in_features=3072, out_features=6144, bias=True, dtype=torch.bfloat16) ) (1): Dropout(p=0.0, inplace=False) (2): LinearLayer(in_features=12288, out_features=1536, bias=True, dtype=torch.bfloat16) ) ) ) ) ``` I can set `attn.to_out.0`, `attn.to_add_out`, `ff.net.2`, and `ff_context.net.2` to LinearAllreduce, but how do I deal with `norm1.linear` and `norm1_context.linear`? I need to all-gather the result of a single linear layer, or it will cause an error, because the inputs of both norm1 and norm1_context are the whole hidden_states.
open
2025-02-14T07:09:38Z
2025-02-14T07:09:38Z
https://github.com/deepspeedai/DeepSpeed/issues/7037
[ "enhancement" ]
zhangvia
0
ARM-DOE/pyart
data-visualization
810
Very outdated gcc in conda-forge causing CyLP to fail to build.
When I try and build @jjhelmus's branch I get the following error:

```
gcc -pthread -B /home/rjackson/anaconda3/envs/adi_env3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I./cylp/cpp -I./cylp/cy -I/home/rjackson/anaconda3/envs/adi_env3/include/coin -I/home/rjackson/anaconda3/envs/adi_env3/lib/python3.6/site-packages/numpy/core/include -I. -I/home/rjackson/anaconda3/envs/adi_env3/include/python3.6m -c cylp/cpp/IClpPrimalColumnPivotBase.cpp -o build/temp.linux-x86_64-3.6/cylp/cpp/IClpPrimalColumnPivotBase.o -w
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default]
In file included from /home/rjackson/anaconda3/envs/adi_env3/gcc/include/c++/cstdint:35:0,
                 from /home/rjackson/anaconda3/envs/adi_env3/include/coin/CoinTypes.hpp:15,
                 from /home/rjackson/anaconda3/envs/adi_env3/include/coin/CoinHelperFunctions.hpp:24,
                 from /home/rjackson/anaconda3/envs/adi_env3/include/coin/CoinIndexedVector.hpp:20,
                 from cylp/cpp/IClpPrimalColumnPivotBase.h:6,
                 from cylp/cpp/IClpPrimalColumnPivotBase.cpp:1:
/home/rjackson/anaconda3/envs/adi_env3/gcc/include/c++/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support is currently experimental, and must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
 #error This file requires compiler and library support for the \
```

I tracked this down to the very, very old version of gcc that is in conda-forge (4.8.5). I got it to build with gcc 6.3.0, so I would *not* use the conda-forge gcc to try and build CyLP.
closed
2019-02-06T19:56:11Z
2019-10-01T21:21:07Z
https://github.com/ARM-DOE/pyart/issues/810
[]
rcjackson
7
scrapy/scrapy
web-scraping
6,186
lxml parser gives back wrong parsing results, messes up html
### Description

Parsing with lxml gives wrong results.

### Steps to Reproduce

1. Fetch this html code: https://pastebin.com/durYf56c
2. Select a section: `section = response.css('main section section')[0]`

**Expected behavior:** section should be this: https://pastebin.com/3wxdHFqY

**Actual behavior:** section is this: https://pastebin.com/jzXZ0BRD

**Reproduces how often:** 100%

### Versions

Scrapy : 2.11.0
lxml : 4.9.4.0
libxml2 : 2.10.3
cssselect : 1.2.0
parsel : 1.8.1
w3lib : 2.1.2
Twisted : 22.10.0
Python : 3.10.4 (tags/v3.10.4:9d38120, Mar 23 2022, 23:13:41) [MSC v.1929 64 bit (AMD64)]
pyOpenSSL : 23.3.0 (OpenSSL 3.1.4 24 Oct 2023)
cryptography : 41.0.7
Platform : Windows-10-10.0.19045-SP0

### Additional context

The same error happens with Beautiful Soup using the `lxml` parser. Using the regular `html.parser` there is no error, so it must be lxml.
closed
2023-12-24T03:43:57Z
2023-12-24T15:47:30Z
https://github.com/scrapy/scrapy/issues/6186
[]
tombohub
3
vllm-project/vllm
pytorch
15,207
[Bug]: msgspec.DecodeError: MessagePack data is malformed: trailing characters (byte 13)
### Your current environment

<details> <summary>vllm==0.8.0, python=3.11</summary> </details>

### 🐛 Describe the bug

I keep getting this decode error when using vllm==0.8.0 for Qwen-2.5 inference. I see the same error on both the 1.5B and 7B models once I use apply-chat-template:

`msgspec.DecodeError: MessagePack data is malformed: trailing characters (byte 13)`

### Before submitting a new issue...

- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
open
2025-03-20T10:48:12Z
2025-03-24T14:47:15Z
https://github.com/vllm-project/vllm/issues/15207
[ "bug" ]
fangru-lin
1
KaiyangZhou/deep-person-reid
computer-vision
375
RuntimeError: cublas runtime error
I am getting this error ``` (torchreid) $ python scripts/main.py \ > --config-file configs/im_osnet_x1_0_softmax_256x128_amsgrad_cosine.yaml \ > --transforms random_flip random_erase \ > --root $PATH_TO_DATA Show configuration adam: beta1: 0.9 beta2: 0.999 cuhk03: classic_split: False labeled_images: False use_metric_cuhk03: False data: combineall: False height: 256 k_tfm: 1 load_train_targets: False norm_mean: [0.485, 0.456, 0.406] norm_std: [0.229, 0.224, 0.225] root: /home/me/code/tmp/deep-person-reid/data save_dir: log/osnet_x1_0_market1501_softmax_cosinelr sources: ['market1501'] split_id: 0 targets: ['market1501'] transforms: ['random_flip', 'random_erase'] type: image width: 128 workers: 4 loss: name: softmax softmax: label_smooth: True triplet: margin: 0.3 weight_t: 1.0 weight_x: 0.0 market1501: use_500k_distractors: False model: load_weights: name: osnet_x1_0 pretrained: True resume: rmsprop: alpha: 0.99 sampler: num_cams: 1 num_datasets: 1 num_instances: 4 train_sampler: RandomSampler train_sampler_t: RandomSampler sgd: dampening: 0.0 momentum: 0.9 nesterov: False test: batch_size: 300 dist_metric: euclidean eval_freq: -1 evaluate: False normalize_feature: False ranks: [1, 5, 10, 20] rerank: False start_eval: 0 visrank: False visrank_topk: 10 train: base_lr_mult: 0.1 batch_size: 64 fixbase_epoch: 10 gamma: 0.1 lr: 0.0015 lr_scheduler: cosine max_epoch: 250 new_layers: ['classifier'] open_layers: ['classifier'] optim: amsgrad print_freq: 20 seed: 1 staged_lr: False start_epoch: 0 stepsize: [20] weight_decay: 0.0005 use_gpu: True video: pooling_method: avg sample_method: evenly seq_len: 15 Collecting env info ... 
** System info ** PyTorch version: 1.1.0 Is debug build: No CUDA used to build PyTorch: 9.0.176 OS: Ubuntu 18.04.5 LTS GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 CMake version: version 3.10.2 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: Could not collect GPU models and configuration: GPU 0: GeForce RTX 2080 Ti Nvidia driver version: 440.33.01 cuDNN version: Could not collect Versions of relevant libraries: [pip3] numpy==1.19.2 [pip3] torch==1.1.0 [pip3] torchreid==1.3.3 [pip3] torchvision==0.3.0 [conda] blas 1.0 mkl [conda] mkl 2020.2 256 [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.2.0 py37h23d657b_0 [conda] mkl_random 1.1.1 py37h0573a6f_0 [conda] pytorch 1.1.0 py3.7_cuda9.0.176_cudnn7.5.1_0 pytorch [conda] torchreid 1.3.3 dev_0 <develop> [conda] torchvision 0.3.0 py37_cu9.0.176_1 pytorch Pillow (7.2.0) Building train transforms ... + resize to 256x128 + random flip + to torch tensor of range [0, 1] + normalization (mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) + random erase Building test transforms ... 
+ resize to 256x128 + to torch tensor of range [0, 1] + normalization (mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) => Loading train (source) dataset => Loaded Market1501 ---------------------------------------- subset | # ids | # images | # cameras ---------------------------------------- train | 751 | 12936 | 6 query | 750 | 3368 | 6 gallery | 751 | 15913 | 6 ---------------------------------------- => Loading test (target) dataset => Loaded Market1501 ---------------------------------------- subset | # ids | # images | # cameras ---------------------------------------- train | 751 | 12936 | 6 query | 750 | 3368 | 6 gallery | 751 | 15913 | 6 ---------------------------------------- **************** Summary **************** source : ['market1501'] # source datasets : 1 # source ids : 751 # source images : 12936 # source cameras : 6 target : ['market1501'] ***************************************** Building model: osnet_x1_0 Successfully loaded imagenet pretrained weights from "/home/me/.cache/torch/checkpoints/osnet_x1_0_imagenet.pth" ** The following layers are discarded due to unmatched keys or layer size: ['classifier.weight', 'classifier.bias'] Model complexity: params=2,193,616 flops=978,878,352 Building softmax-engine for image-reid => Start training * Only train ['classifier'] (epoch: 1/10) Traceback (most recent call last): File "scripts/main.py", line 191, in <module> main() File "scripts/main.py", line 187, in main engine.run(**engine_run_kwargs(cfg)) File "/home/me/code/deep-person-reid/torchreid/engine/engine.py", line 193, in run open_layers=open_layers File "/home/me/code/deep-person-reid/torchreid/engine/engine.py", line 245, in train loss_summary = self.forward_backward(data) File "/home/me/code/deep-person-reid/torchreid/engine/image/softmax.py", line 85, in forward_backward outputs = self.model(imgs) File "/home/me/.conda/envs/torchreid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__ result = 
self.forward(*input, **kwargs) File "/home/me/.conda/envs/torchreid/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward return self.module(*inputs[0], **kwargs[0]) File "/home/me/.conda/envs/torchreid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) File "/home/me/code/deep-person-reid/torchreid/models/osnet.py", line 429, in forward v = self.fc(v) File "/home/me/.conda/envs/torchreid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) File "/home/me/.conda/envs/torchreid/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward input = module(input) File "/home/me/.conda/envs/torchreid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) File "/home/me/.conda/envs/torchreid/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 92, in forward return F.linear(input, self.weight, self.bias) File "/home/me/.conda/envs/torchreid/lib/python3.7/site-packages/torch/nn/functional.py", line 1406, in linear ret = torch.addmm(bias, input, weight.t()) RuntimeError: cublas runtime error : the GPU program failed to execute at /opt/conda/conda-bld/pytorch_1556653215914/work/aten/src/THC/THCBlas.cu:259 ``` What worries me is that I did not install conda in `/opt/conda` but in `/usr/local/miniconda3/condabin/conda`. However conda works perfectly. 
To reproduce it, one just needs to install it following the README's instructions: ``` # cd to your preferred directory and clone this repo git clone https://github.com/KaiyangZhou/deep-person-reid.git # create environment cd deep-person-reid/ conda create --name torchreid python=3.7 conda activate torchreid # install dependencies # make sure `which python` and `which pip` point to the correct path pip install -r requirements.txt # install torch and torchvision (select the proper cuda version to suit your machine) conda install pytorch torchvision cudatoolkit=9.0 -c pytorch # install torchreid (don't need to re-build it if you modify the source code) python setup.py develop # added by me PATH_TO_DATA="/somewhere_accessible" mkdir -p $PATH_TO_DATA ``` Here's my conda environment ``` (torchreid) $ conda list # packages in environment at /home/me/.conda/envs/torchreid: # # Name Version Build Channel _libgcc_mutex 0.1 main absl-py 0.10.0 pypi_0 pypi blas 1.0 mkl ca-certificates 2020.7.22 0 cachetools 4.1.1 pypi_0 pypi certifi 2020.6.20 py37_0 cffi 1.14.3 py37he30daa8_0 chardet 3.0.4 pypi_0 pypi cudatoolkit 9.0 h13b8566_0 cycler 0.10.0 pypi_0 pypi cython 0.29.21 pypi_0 pypi filelock 3.0.12 pypi_0 pypi flake8 3.8.3 pypi_0 pypi freetype 2.10.2 h5ab3b9f_0 future 0.18.2 pypi_0 pypi gdown 3.12.2 pypi_0 pypi google-auth 1.21.3 pypi_0 pypi google-auth-oauthlib 0.4.1 pypi_0 pypi grpcio 1.32.0 pypi_0 pypi h5py 2.10.0 pypi_0 pypi idna 2.10 pypi_0 pypi imageio 2.9.0 pypi_0 pypi importlib-metadata 2.0.0 pypi_0 pypi intel-openmp 2020.2 254 jpeg 9b h024ee3a_2 kiwisolver 1.2.0 pypi_0 pypi lcms2 2.11 h396b838_0 ld_impl_linux-64 2.33.1 h53a641e_7 libedit 3.1.20191231 h14c3975_1 libffi 3.3 he6710b0_2 libgcc-ng 9.1.0 hdf63c60_0 libpng 1.6.37 hbc83047_0 libstdcxx-ng 9.1.0 hdf63c60_0 libtiff 4.1.0 h2733197_1 lz4-c 1.9.2 he6710b0_1 markdown 3.2.2 pypi_0 pypi matplotlib 3.3.2 pypi_0 pypi mkl 2020.2 256 mkl-service 2.3.0 py37he904b0f_0 mkl_fft 1.2.0 py37h23d657b_0 mkl_random 1.1.1 
py37h0573a6f_0 ncurses 6.2 he6710b0_1 ninja 1.10.1 py37hfd86e86_0 numpy 1.19.2 pypi_0 pypi numpy-base 1.19.1 py37hfa32c7d_0 oauthlib 3.1.0 pypi_0 pypi olefile 0.46 py37_0 opencv-python 4.4.0.44 pypi_0 pypi openssl 1.1.1h h7b6447c_0 pillow 7.2.0 py37hb39fc2d_0 pip 20.2.2 py37_0 protobuf 3.13.0 pypi_0 pypi pyasn1 0.4.8 pypi_0 pypi pyasn1-modules 0.2.8 pypi_0 pypi pycparser 2.20 py_2 pyflakes 2.2.0 pypi_0 pypi pyparsing 2.4.7 pypi_0 pypi pysocks 1.7.1 pypi_0 pypi python 3.7.9 h7579374_0 python-dateutil 2.8.1 pypi_0 pypi pytorch 1.1.0 py3.7_cuda9.0.176_cudnn7.5.1_0 pytorch pyyaml 5.3.1 pypi_0 pypi readline 8.0 h7b6447c_0 requests 2.24.0 pypi_0 pypi requests-oauthlib 1.3.0 pypi_0 pypi rsa 4.6 pypi_0 pypi scipy 1.5.2 pypi_0 pypi setuptools 49.6.0 py37_0 six 1.15.0 py_0 sqlite 3.33.0 h62c20be_0 tb-nightly 2.4.0a20200921 pypi_0 pypi tensorboard-plugin-wit 1.7.0 pypi_0 pypi tk 8.6.10 hbc83047_0 torchreid 1.3.3 dev_0 <develop> torchvision 0.3.0 py37_cu9.0.176_1 pytorch tqdm 4.49.0 pypi_0 pypi urllib3 1.25.10 pypi_0 pypi werkzeug 1.0.1 pypi_0 pypi wheel 0.35.1 pypi_0 pypi xz 5.2.5 h7b6447c_0 yacs 0.1.8 pypi_0 pypi yapf 0.30.0 pypi_0 pypi zipp 3.2.0 pypi_0 pypi zlib 1.2.11 h7b6447c_3 zstd 1.4.5 h9ceee32_0 ```
closed
2020-09-24T15:40:36Z
2020-09-24T21:56:23Z
https://github.com/KaiyangZhou/deep-person-reid/issues/375
[]
sopsos
2
pytorch/vision
machine-learning
8,845
`CocoDetection()` doesn't work using some train and validation images with some annotations
### 🚀 The feature

[CocoDetection()](https://pytorch.org/vision/stable/generated/torchvision.datasets.CocoDetection.html) doesn't work using `stuff_train2017_pixelmaps` with `stuff_train2017.json`, or `stuff_val2017_pixelmaps` with `stuff_val2017.json`, as shown below:

```python
from torchvision.datasets import CocoDetection

pms_stf_train2017_data = CocoDetection(
    root="data/coco/anns/stuff_trainval2017/stuff_train2017_pixelmaps",
    annFile="data/coco/anns/stuff_trainval2017/stuff_train2017.json"
)
pms_stf_val2017_data = CocoDetection(
    root="data/coco/anns/stuff_trainval2017/stuff_val2017_pixelmaps",
    annFile="data/coco/anns/stuff_trainval2017/stuff_val2017.json"
)

pms_stf_train2017_data[0]  # Error
pms_stf_val2017_data[0]  # Error
```

> FileNotFoundError: [Errno 2] No such file or directory: '/.../data/coco/anns/stuff_trainval2017/stuff_train2017_pixelmaps/000000000009.jpg'

> FileNotFoundError: [Errno 2] No such file or directory: '/../data/coco/anns/stuff_trainval2017/stuff_val2017_pixelmaps/000000000139.jpg'

And `CocoDetection()` doesn't work using `panoptic_train2017` with `panoptic_train2017.json`, or `panoptic_val2017` with `panoptic_val2017.json`, as shown below:

```python
from torchvision.datasets import CocoDetection

pan_train2017_data = CocoDetection(
    root="data/coco/anns/panoptic_trainval2017/panoptic_train2017",
    annFile="data/coco/anns/panoptic_trainval2017/panoptic_train2017.json"
)  # Error
pan_val2017_data = CocoDetection(
    root="data/coco/anns/panoptic_trainval2017/panoptic_val2017",
    annFile="data/coco/anns/panoptic_trainval2017/panoptic_val2017.json"
)  # Error
```

> KeyError: 'id'

### Motivation, pitch

So, `CocoDetection()` should support them.

### Alternatives

_No response_

### Additional context

_No response_
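For the stuff-pixelmaps case, the `FileNotFoundError` comes from the annotations listing `.jpg` file names while the pixelmaps directory contains `.png` masks. A possible workaround (a sketch, not official torchvision behavior) is to remap the extension before loading, e.g. in a `CocoDetection` subclass whose `_load_image` opens the remapped name:

```python
import os


def pixelmap_filename(file_name: str) -> str:
    """Map an annotation's image file_name (e.g. '000000000009.jpg')
    to the matching pixelmap mask name ('000000000009.png')."""
    stem, _ = os.path.splitext(file_name)
    return stem + ".png"


# Hypothetical subclass (untested): override _load_image so the dataset
# opens the .png mask that actually exists under root:
#
# class StuffPixelmaps(CocoDetection):
#     def _load_image(self, id):
#         path = self.coco.loadImgs(id)[0]["file_name"]
#         return Image.open(os.path.join(self.root, pixelmap_filename(path)))
```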
open
2025-01-10T15:36:41Z
2025-01-13T10:53:20Z
https://github.com/pytorch/vision/issues/8845
[]
hyperkai
1
roboflow/supervision
computer-vision
1,709
Object detection precision-ap
### Search before asking - [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests. ### Question Suppose we have a single class object detection problem. Shouldn't the average precision and the precision metrics across the different iou thresholds be the same? This does not seem to be the case here. ### Additional _No response_
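For context — even in a single-class problem, the precision at one IoU threshold (a single operating point) and the average precision at that threshold (an integral over the precision-recall curve) are different quantities, so they generally won't match. A toy example with made-up numbers, independent of supervision's internals:

```python
# Toy single-class ranking: three ground-truth objects, detections
# ranked by confidence as TP, FP, TP.
dets = [1, 0, 1]          # 1 = true positive, 0 = false positive
num_gt = 3

tp = fp = 0
prev_recall = 0.0
ap = 0.0
for is_tp in dets:
    tp += is_tp
    fp += 1 - is_tp
    p = tp / (tp + fp)
    r = tp / num_gt
    ap += p * (r - prev_recall)   # step-wise area under the PR curve
    prev_recall = r

precision = tp / (tp + fp)        # precision over all kept detections
# precision == 2/3, while ap == 1.0 * 1/3 + (2/3) * 1/3 == 5/9
```

So the two metrics coincide only in special cases, not by construction.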
open
2024-12-03T12:55:58Z
2024-12-04T10:16:47Z
https://github.com/roboflow/supervision/issues/1709
[ "question" ]
GiannisApost
1
mlfoundations/open_clip
computer-vision
549
build_cls_mask() in CoCa TextTransformer
TL;DR: the current implementation of `build_cls_mask()` produces a `cls_mask` for [CLS] being the first token, but in CoCa [CLS] is the end token.

In [Issue 312](https://github.com/mlfoundations/open_clip/pull/312), `build_cls_mask()` was introduced by @gpucce in `TextTransformer` in CoCa to "prevent the CLS token at the end of the sequence from attending to padded tokens".

```python
# https://github.com/mlfoundations/open_clip/blob/main/src/open_clip/transformer.py#L587
def build_cls_mask(self, text, cast_dtype: torch.dtype):
    cls_mask = (text != self.pad_id).unsqueeze(1)
    cls_mask = F.pad(cls_mask, (1, 0, cls_mask.shape[2], 0), value=1.0)
    additive_mask = torch.empty(cls_mask.shape, dtype=cast_dtype, device=cls_mask.device)
    additive_mask.fill_(0)
    additive_mask.masked_fill_(~cls_mask, float("-inf"))
    additive_mask = torch.repeat_interleave(additive_mask, self.heads, 0)
    return additive_mask
```

Taking `text = torch.tensor([[1,2,3,4,0,0,0]])` as an example,

```python
import torch
import torch.nn.functional as F

text = torch.tensor([[1,2,3,4,0,0,0]])  # batch size 1, sequence 4 with 3 padding (pad_id=0)
pad_id = 0

cls_mask = (text != pad_id).unsqueeze(1)
cls_mask = F.pad(cls_mask, (1, 0, cls_mask.shape[2], 0), value=1.0)
additive_mask = torch.empty(cls_mask.shape)
additive_mask.fill_(0)
additive_mask.masked_fill_(~cls_mask, float("-inf"))
print(additive_mask)
```

this outputs

```
tensor([[[0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., -inf, -inf, -inf]]])
```

In @lucidrains [implementation](https://github.com/lucidrains/CoCa-pytorch/blob/main/coca_pytorch/coca_pytorch.py#L384-L385)

```python
# https://github.com/lucidrains/CoCa-pytorch/blob/main/coca_pytorch/coca_pytorch.py#L384-L385
cls_mask = rearrange(text != self.pad_id, 'b j -> b 1 j')
attn_mask = F.pad(cls_mask, (0, 1, seq, 0), value=True)
```

taking the same `text` as the example

```python
import torch
import torch.nn.functional as F
from einops import rearrange

text = torch.tensor([[1,2,3,4,0,0,0]])
pad_id = 0
seq = text.shape[1]

cls_mask = rearrange(text != pad_id, 'b j -> b 1 j')
attn_mask = F.pad(cls_mask, (0, 1, seq, 0), value=True)
print(attn_mask)
```

it produces (which I believe is the desired outcome)

```
tensor([[[ True,  True,  True,  True,  True,  True,  True,  True],
         [ True,  True,  True,  True,  True,  True,  True,  True],
         [ True,  True,  True,  True,  True,  True,  True,  True],
         [ True,  True,  True,  True,  True,  True,  True,  True],
         [ True,  True,  True,  True,  True,  True,  True,  True],
         [ True,  True,  True,  True,  True,  True,  True,  True],
         [ True,  True,  True,  True,  True,  True,  True,  True],
         [ True,  True,  True,  True, False, False, False,  True]]])
```

Since the [CLS] token is appended at the end of a sequence,

```python
# https://github.com/mlfoundations/open_clip/blob/main/src/open_clip/transformer.py#L607
x = torch.cat([x, self._repeat(self.cls_emb, x.shape[0])], dim=1)
```

I feel that the current implementation in `open_clip` is wrong. Am I missing anything?
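A possible fix (a sketch, not the official open_clip code) is to mirror the lucidrains-style padding — so the [CLS] position appended at the end stays attendable — while still emitting the additive mask that `TextTransformer` expects:

```python
import torch
import torch.nn.functional as F


def build_cls_mask_cls_last(text, num_heads, pad_id=0, dtype=torch.float32):
    # [CLS] is appended at the END of the sequence, so pad the key
    # dimension on the RIGHT: the final column (the CLS position) stays
    # attendable while padded text positions are masked with -inf.
    seq = text.shape[1]
    cls_mask = (text != pad_id).unsqueeze(1)                # [b, 1, seq]
    cls_mask = F.pad(cls_mask, (0, 1, seq, 0), value=True)  # [b, seq+1, seq+1]
    additive = torch.zeros(cls_mask.shape, dtype=dtype, device=text.device)
    additive.masked_fill_(~cls_mask, float("-inf"))
    return torch.repeat_interleave(additive, num_heads, 0)
```

On `text = torch.tensor([[1,2,3,4,0,0,0]])` this masks only the three padded key positions in the last row, leaving the CLS column itself unmasked.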
open
2023-06-24T04:05:42Z
2023-09-20T17:31:30Z
https://github.com/mlfoundations/open_clip/issues/549
[]
yiren-jian
2
CatchTheTornado/text-extract-api
api
55
[feat] Add `markitdown` support
https://github.com/microsoft/markitdown It can be added as another OCR strategy
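A sketch of how markitdown could be wired in as a strategy — the class shape and `extract_text` interface here are assumptions about text-extract-api's strategy contract (not taken from the codebase), while the `MarkItDown().convert(...).text_content` call follows the markitdown README:

```python
class MarkItDownStrategy:
    """Hypothetical OCR-strategy wrapper around microsoft/markitdown."""

    name = "markitdown"

    def extract_text(self, file_path: str) -> str:
        # Lazy import so the extra dependency is only needed when this
        # strategy is actually selected.
        from markitdown import MarkItDown
        return MarkItDown().convert(file_path).text_content
```

Registering it next to the existing strategies would let users pass `strategy=markitdown` to the API.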
open
2025-01-08T10:32:31Z
2025-01-19T16:55:21Z
https://github.com/CatchTheTornado/text-extract-api/issues/55
[ "feature" ]
pkarw
0
deezer/spleeter
deep-learning
775
[Discussion] How can I make sure separate is running on the GPU?
The separate worked, I use --verbose and it shows some info, but i'm not sure it run on GPU. how can i make sure it? ``` C:\Users\Administrator\Desktop\testSound>python -m spleeter separate -p spleeter:5stems -o output --verbose audio_example.mp3 INFO:tensorflow:Using config: {'_model_dir': 'pretrained_models\\5stems', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': gpu_options { per_process_gpu_memory_fraction: 0.7 } , '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1} WARNING:tensorflow:From C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\spleeter\separator.py:146: calling DatasetV2.from_generator (from tensorflow.python.data.ops.dataset_ops) with output_types is deprecated and will be removed in a future version. Instructions for updating: Use output_signature instead WARNING:tensorflow:From C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\spleeter\separator.py:146: calling DatasetV2.from_generator (from tensorflow.python.data.ops.dataset_ops) with output_shapes is deprecated and will be removed in a future version. Instructions for updating: Use output_signature instead INFO:tensorflow:Calling model_fn. 
INFO:tensorflow:Apply unet for vocals_spectrogram WARNING:tensorflow:From C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\layers\normalization\batch_normalization.py:532: _colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer. INFO:tensorflow:Apply unet for piano_spectrogram INFO:tensorflow:Apply unet for drums_spectrogram INFO:tensorflow:Apply unet for bass_spectrogram INFO:tensorflow:Apply unet for other_spectrogram INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from pretrained_models\5stems\model INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:spleeter:File output\audio_example/piano.wav written succesfully INFO:spleeter:File output\audio_example/other.wav written succesfully INFO:spleeter:File output\audio_example/vocals.wav written succesfully INFO:spleeter:File output\audio_example/drums.wav written succesfully INFO:spleeter:File output\audio_example/bass.wav written succesfully ``` I don't think it worked on GPU, because it run 28s. I don't think this is the right speed. Am I right?
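The log alone doesn't show device placement. One way to check (a sketch; `tf.config.list_physical_devices` assumes TensorFlow 2.x, which recent spleeter builds use) is to ask TensorFlow directly which devices it sees:

```python
def gpu_report() -> str:
    """Best-effort check of whether TensorFlow can see a GPU; returns
    a message instead of raising, so it is safe in any environment."""
    try:
        import tensorflow as tf
    except ImportError:
        return "tensorflow is not installed"
    gpus = tf.config.list_physical_devices("GPU")  # TF 2.x API
    if not gpus:
        return "no GPU visible to TensorFlow"
    return f"{len(gpus)} GPU(s) visible: " + ", ".join(g.name for g in gpus)


print(gpu_report())
```

Watching `nvidia-smi` in another terminal while `separate` runs is another quick check — GPU utilization and a python process holding GPU memory should appear. Note that model loading dominates short clips, so wall-clock time alone is a weak signal.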
open
2022-06-24T04:23:56Z
2024-02-04T05:38:42Z
https://github.com/deezer/spleeter/issues/775
[ "question" ]
limengqilove
2
plotly/dash-core-components
dash
637
soft link to LICENSE.txt makes the github license invalid
![Screen Shot 2019-09-04 at 5 03 08 PM](https://user-images.githubusercontent.com/1394467/64291343-ed1fa080-cf35-11e9-98b7-35319322f852.png) @rpkyle your soft-link change makes the license tab on GitHub invalid. Can you see how to fix that?
closed
2019-09-04T21:04:48Z
2019-09-05T11:29:37Z
https://github.com/plotly/dash-core-components/issues/637
[]
byronz
1
BMW-InnovationLab/BMW-YOLOv4-Training-Automation
rest-api
10
Inference after training the model
Is there any way to do inference/predictions using the latest weights after the model is trained? I am able to do predictions during the training process using the Custom API at port 8099. However, that port is also closed after the training is finished. Thanks!
closed
2020-07-27T19:46:10Z
2021-03-12T18:38:05Z
https://github.com/BMW-InnovationLab/BMW-YOLOv4-Training-Automation/issues/10
[]
LSQI15
4
mckinsey/vizro
plotly
460
Multiple Series Line Chart Updates
### Description When creating a line chart with multiple series, I've noticed it's not possible to update the legend and y_axis titles (potentially more). If specifying a single series, the titles are updated and represented accordingly. ### Expected behavior I'd expect to be able to update the y_axis title and potentially also the legend titles when passing multiple column names (as a python list) to a px.line call, as can be done with dash. ### Which package? vizro ### Package version 0.1.16 ### Python version 3.10.14 ### OS Ubuntu 20.04 ### How to Reproduce Code to reproduce the behavior I'm observing ``` from vizro import Vizro import vizro.plotly.express as px import vizro.models as vm df = px.data.stocks() page = vm.Page( title="My first dashboard", components=[ vm.Graph(id="scatter_chart", figure=px.line(df, x="date", y=["GOOG", 'AAPL']).update_layout(yaxis_title='New Y-axis', legend_title='new legend')), ] ) dashboard = vm.Dashboard(pages=[page]) Vizro().build(dashboard).run() ``` ### Output Sharing here the resulting dashboard view, titles are not being updated even though updates are specified. ![image](https://github.com/mckinsey/vizro/assets/35462293/4bdcef27-785a-4092-aae8-2bf23f384bd0) ### Code of Conduct - [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md).
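For what it's worth, a possible workaround (an assumption, untested against vizro): with wide-form data, plotly express melts the listed columns into `value`/`variable` columns, and their display names can be set at figure-creation time via `labels=` instead of `update_layout()`:

```python
# Hypothetical alternative to update_layout() -- rename the melted
# wide-form columns ("value" -> y-axis title, "variable" -> legend title):
labels = {"value": "New Y-axis", "variable": "new legend"}
# figure=px.line(df, x="date", y=["GOOG", "AAPL"], labels=labels)
```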
closed
2024-05-07T01:22:16Z
2024-05-08T01:26:01Z
https://github.com/mckinsey/vizro/issues/460
[ "Docs :spiral_notepad:", "General Question :question:" ]
mkretsch327
2
jstrieb/github-stats
asyncio
15
Update cronjob schedule
Hi! Nice work, but I believe running the GitHub Action every hour is a bit overkill (over 3000 commits on just the generated image). Maybe every day or every month would be more reasonable. What do you think?
closed
2021-01-14T14:30:23Z
2021-01-19T22:46:07Z
https://github.com/jstrieb/github-stats/issues/15
[]
sylhare
1
zappa/Zappa
flask
842
[Migrated] 502 error while deploying
Originally from: https://github.com/Miserlou/Zappa/issues/2084 by [tomekbuszewski](https://github.com/tomekbuszewski) I am getting 502 error while trying to deploy the app. ## Context I have an app written with Django (3.0.4), works great locally. For dev purposes, I use sqlite only. I wanted to deploy it to AWS today, but I am getting 502 errors. Sorry, I cannot post an app here, since it's private. ## Expected Behavior It should be deployed without problems. ## Actual Behavior Results in 502 error. ## Your Environment Zappa version: 0.51.0 Django version: 3.0.4 Python: 3.7 Zappa config: ``` { "dev": { "aws_region": "eu-central-1", "django_settings": "hay.hay.settings", "profile_name": "default", "project_name": "hundred-a-year", "runtime": "python3.7", "s3_bucket": "hay-dev", "slim_handler": false } } ``` Error on AWS: ``` "{'message': 'An uncaught exception happened while servicing this request. You can investigate this with the `zappa tail` command.', 'traceback': ['Traceback (most recent call last):\\n', ' File \"/var/task/handler.py\", line 540, in handler\\n with Response.from_app(self.wsgi_app, environ) as response:\\n', ' File \"/var/task/werkzeug/wrappers/base_response.py\", line 287, in from_app\\n return cls(*_run_wsgi_app(app, environ, buffered))\\n', ' File \"/var/task/werkzeug/wrappers/base_response.py\", line 26, in _run_wsgi_app\\n return _run_wsgi_app(*args)\\n', ' File \"/var/task/werkzeug/test.py\", line 1119, in run_wsgi_app\\n app_rv = app(environ, start_response)\\n', \"TypeError: 'NoneType' object is not callable\\n\"]}" ``` Results of `zappa tail`: ``` [1587228323192] [DEBUG] 2020-04-18T16:45:23.192Z f41f3ef4-dfa2-46dc-95c2-99d6fd9f0852 Zappa Event: {'resource': '/', 'path': '/', 'httpMethod': 'GET', 'headers': {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 
'pl-PL,pl;q=0.9,en-US;q=0.8,en;q=0.7,ru;q=0.6,so;q=0.5', 'cache-control': 'max-age=0', 'CloudFront-Forwarded-Proto': 'https', 'CloudFront-Is-Desktop-Viewer': 'true', 'CloudFront-Is-Mobile-Viewer': 'false', 'CloudFront-Is-SmartTV-Viewer': 'false', 'CloudFront-Is-Tablet-Viewer': 'false', 'CloudFront-Viewer-Country': 'PL', 'Host': 'yiudwi21ux.execute-api.eu-central-1.amazonaws.com', 'Referer': 'https://eu-central-1.console.aws.amazon.com/apigateway/home?region=eu-central-1', 'sec-fetch-dest': 'document', 'sec-fetch-mode': 'navigate', 'sec-fetch-site': 'cross-site', 'sec-fetch-user': '?1', 'upgrade-insecure-requests': '1', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36', 'Via': '2.0 70d111e01220d4724cfea727fa9dfb91.cloudfront.net (CloudFront)', 'X-Amz-Cf-Id': 'VMUK9rPoyIcqWqOjowgpHrWsI-qW0OTnW1YAUIJ0BZHH-Vr7Z3rWCw==', 'X-Amzn-Trace-Id': 'Root=1-5e9b2ea3-5181626ee6404a71bf9a1db9', 'X-Forwarded-For': '89.64.74.188, 54.239.171.153', 'X-Forwarded-Port': '443', 'X-Forwarded-Proto': 'https'}, 'multiValueHeaders': {'Accept': ['text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9'], 'Accept-Encoding': ['gzip, deflate, br'], 'Accept-Language': ['pl-PL,pl;q=0.9,en-US;q=0.8,en;q=0.7,ru;q=0.6,so;q=0.5'], 'cache-control': ['max-age=0'], 'CloudFront-Forwarded-Proto': ['https'], 'CloudFront-Is-Desktop-Viewer': ['true'], 'CloudFront-Is-Mobile-Viewer': ['false'], 'CloudFront-Is-SmartTV-Viewer': ['false'], 'CloudFront-Is-Tablet-Viewer': ['false'], 'CloudFront-Viewer-Country': ['PL'], 'Host': ['yiudwi21ux.execute-api.eu-central-1.amazonaws.com'], 'Referer': ['https://eu-central-1.console.aws.amazon.com/apigateway/home?region=eu-central-1'], 'sec-fetch-dest': ['document'], 'sec-fetch-mode': ['navigate'], 'sec-fetch-site': ['cross-site'], 'sec-fetch-user': ['?1'], 'upgrade-insecure-requests': ['1'], 'User-Agent': ['Mozilla/5.0 
(Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36'], 'Via': ['2.0 70d111e01220d4724cfea727fa9dfb91.cloudfront.net (CloudFront)'], 'X-Amz-Cf-Id': ['VMUK9rPoyIcqWqOjowgpHrWsI-qW0OTnW1YAUIJ0BZHH-Vr7Z3rWCw=='], 'X-Amzn-Trace-Id': ['Root=1-5e9b2ea3-5181626ee6404a71bf9a1db9'], 'X-Forwarded-For': ['89.64.74.188, 54.239.171.153'], 'X-Forwarded-Port': ['443'], 'X-Forwarded-Proto': ['https']}, 'queryStringParameters': None, 'multiValueQueryStringParameters': None, 'pathParameters': None, 'stageVariables': None, 'requestContext': {'resourceId': '3achqb2vbg', 'resourcePath': '/', 'httpMethod': 'GET', 'extendedRequestId': 'LMQ5gG_WliAFbBw=', 'requestTime': '18/Apr/2020:16:45:23 +0000', 'path': '/dev', 'accountId': '536687225340', 'protocol': 'HTTP/1.1', 'stage': 'dev', 'domainPrefix': 'yiudwi21ux', 'requestTimeEpoch': 1587228323137, 'requestId': 'd3fd55d0-bec3-4424-877b-f6cf00c7affb', 'identity': {'cognitoIdentityPoolId': None, 'accountId': None, 'cognitoIdentityId': None, 'caller': None, 'sourceIp': '89.64.74.188', 'principalOrgId': None, 'accessKey': None, 'cognitoAuthenticationType': None, 'cognitoAuthenticationProvider': None, 'userArn': None, 'userAgent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36', 'user': None}, 'domainName': 'yiudwi21ux.execute-api.eu-central-1.amazonaws.com', 'apiId': 'yiudwi21ux'}, 'body': None, 'isBase64Encoded': False} [1587228323192] [DEBUG] 2020-04-18T16:45:23.192Z f41f3ef4-dfa2-46dc-95c2-99d6fd9f0852 host found: [yiudwi21ux.execute-api.eu-central-1.amazonaws.com] [1587228323192] [DEBUG] 2020-04-18T16:45:23.192Z f41f3ef4-dfa2-46dc-95c2-99d6fd9f0852 amazonaws found in host [1587228323192] 'NoneType' object is not callable ``` I've tried various solutions found on the internet, disabling `slim_handler`, removing `pyc` files, reinstalling venv. Nothing helped.
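The `'NoneType' object is not callable` raised inside `werkzeug.test.run_wsgi_app` means the WSGI app Zappa resolved is `None`. A common cause (an assumption here, not confirmed from these logs) is a `django_settings` dotted path that doesn't actually import inside the deployed package — worth a quick local check:

```python
import importlib


def settings_importable(dotted: str) -> bool:
    """Return True if the dotted module path (e.g. 'hay.hay.settings')
    imports cleanly; a broken path is one way the Django WSGI app can
    end up as None, which then 502s on every request."""
    try:
        importlib.import_module(dotted)
        return True
    except ImportError:
        return False
```

Running this from the project root with the configured `django_settings` value would quickly rule the import path in or out as the culprit.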
closed
2021-02-20T12:52:22Z
2022-07-16T05:45:41Z
https://github.com/zappa/Zappa/issues/842
[]
jneves
3
autogluon/autogluon
computer-vision
4,441
[BUG] feature_prune_kwargs={"force_prune": True} does not work when tuning_data is on for presets="medium_quality",
``` Verbosity: 4 (Maximum Logging) =================== System Info =================== AutoGluon Version: 1.1.1 Python Version: 3.11.9 Operating System: Linux Platform Machine: x86_64 Platform Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024 CPU Count: 8 GPU Count: 1 Memory Avail: 8.36 GB / 23.47 GB (35.6%) Disk Space Avail: 326.38 GB / 911.84 GB (35.8%) =================================================== Presets specified: ['medium_quality'] ============ fit kwarg info ============ User Specified kwargs: {'auto_stack': False, 'feature_prune_kwargs': {'force_prune': True}, 'verbosity': 4} Full kwargs: {'_feature_generator_kwargs': None, '_save_bag_folds': None, 'ag_args': None, 'ag_args_ensemble': None, 'ag_args_fit': None, 'auto_stack': False, 'calibrate': 'auto', 'ds_args': {'clean_up_fits': True, 'detection_time_frac': 0.25, 'enable_ray_logging': True, 'holdout_data': None, 'holdout_frac': 0.1111111111111111, 'memory_safe_fits': True, 'n_folds': 2, 'n_repeats': 1, 'validation_procedure': 'holdout'}, 'excluded_model_types': None, 'feature_generator': 'auto', 'feature_prune_kwargs': {'force_prune': True}, 'holdout_frac': None, 'hyperparameter_tune_kwargs': None, 'included_model_types': None, 'keep_only_best': False, 'name_suffix': None, 'num_bag_folds': None, 'num_bag_sets': None, 'num_stack_levels': None, 'pseudo_data': None, 'refit_full': False, 'save_bag_folds': None, 'save_space': False, 'set_best_to_refit_full': False, 'unlabeled_data': None, 'use_bag_holdout': False, 'verbosity': 4} ======================================== Warning: Training may take a very long time because `time_limit` was not specified and `train_data` is large (500000 samples, 88.0 MB). Consider setting `time_limit` to ensure training finishes within an expected duration or experiment with a small portion of `train_data` to identify an ideal `presets` and `hyperparameters` configuration. 
Saving /mnt/d/python_directory_2/models_temp/learner.pkl Saving /mnt/d/python_directory_2/models_temp/predictor.pkl Beginning AutoGluon training ... AutoGluon will save models to "/mnt/d/python_directory_2/models_temp" Train Data Rows: 500000 Train Data Columns: 41 Tuning Data Rows: 75000 Tuning Data Columns: 41 Label Column: target AutoGluon infers your prediction problem is: 'binary' (because only two unique label-values observed). 2 unique label values: [0, 1] If 'binary' is not the correct problem_type, please manually specify the problem_type parameter during Predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression', 'quantile']) Problem Type: binary Preprocessing data ... Selected class <--> label mapping: class 1 = 1, class 0 = 0 Using Feature Generators to preprocess the data ... Performing general data preprocessing with merged train & validation data, so validation performance may not accurately reflect performance on new test data Fitting AutoMLPipelineFeatureGenerator... Available Memory: 8580.40 MB Train Data (Original) Memory Usage: 92.13 MB (1.1% of available memory) Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features. Stage 1 Generators: Fitting AsTypeFeatureGenerator... Note: Converting 2 features to boolean dtype as they only contain 2 unique values. Original Features (exact raw dtype, raw dtype): ('datetime64[ns]', 'datetime') : 1 | ['time_date'] ('float32', 'float') : 37 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', ...] 
Types of features in original data (raw dtype, special dtypes): ('datetime', []) : 1 | ['time_date'] ('float', []) : 37 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', ...] Types of features in processed data (exact raw dtype, raw dtype): ('datetime64[ns]', 'datetime') : 1 | ['time_date'] ('float32', 'float') : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] ('int8', 'int') : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] Types of features in processed data (raw dtype, special dtypes): ('datetime', []) : 1 | ['time_date'] ('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] ('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] 0.2s = Fit runtime 38 features in original data used to generate 38 features in processed data. Stage 2 Generators: Fitting FillNaFeatureGenerator... Types of features in original data (raw dtype, special dtypes): ('datetime', []) : 1 | ['time_date'] ('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] 
('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] Types of features in processed data (exact raw dtype, raw dtype): ('datetime64[ns]', 'datetime') : 1 | ['time_date'] ('float32', 'float') : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] ('int8', 'int') : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] Types of features in processed data (raw dtype, special dtypes): ('datetime', []) : 1 | ['time_date'] ('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] ('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] 0.1s = Fit runtime 38 features in original data used to generate 38 features in processed data. Stage 3 Generators: Fitting IdentityFeatureGenerator... Types of features in original data (raw dtype, special dtypes): ('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] ('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] Types of features in processed data (exact raw dtype, raw dtype): ('float32', 'float') : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] 
('int8', 'int') : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] Types of features in processed data (raw dtype, special dtypes): ('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] ('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] 0.1s = Fit runtime 37 features in original data used to generate 37 features in processed data. Skipping CategoryFeatureGenerator: No input feature with required dtypes. Fitting DatetimeFeatureGenerator... Types of features in original data (raw dtype, special dtypes): ('datetime', []) : 1 | ['time_date'] Types of features in processed data (exact raw dtype, raw dtype): ('int64', 'int') : 5 | ['time_date', 'time_date.year', 'time_date.month', 'time_date.day', 'time_date.dayofweek'] Types of features in processed data (raw dtype, special dtypes): ('int', ['datetime_as_int']) : 5 | ['time_date', 'time_date.year', 'time_date.month', 'time_date.day', 'time_date.dayofweek'] 0.1s = Fit runtime 1 features in original data used to generate 5 features in processed data. Skipping TextSpecialFeatureGenerator: No input feature with required dtypes. Skipping TextNgramFeatureGenerator: No input feature with required dtypes. Skipping IdentityFeatureGenerator: No input feature with required dtypes. Skipping IsNanFeatureGenerator: No input feature with required dtypes. Stage 4 Generators: Fitting DropUniqueFeatureGenerator... Types of features in original data (raw dtype, special dtypes): ('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] 
('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] ('int', ['datetime_as_int']) : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek'] Types of features in processed data (exact raw dtype, raw dtype): ('float32', 'float') : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] ('int64', 'int') : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek'] ('int8', 'int') : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] Types of features in processed data (raw dtype, special dtypes): ('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] ('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] ('int', ['datetime_as_int']) : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek'] 0.2s = Fit runtime 41 features in original data used to generate 41 features in processed data. Stage 5 Generators: Fitting DropDuplicatesFeatureGenerator... Types of features in original data (raw dtype, special dtypes): ('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] 
('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] ('int', ['datetime_as_int']) : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek'] Types of features in processed data (exact raw dtype, raw dtype): ('float32', 'float') : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] ('int64', 'int') : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek'] ('int8', 'int') : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] Types of features in processed data (raw dtype, special dtypes): ('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] ('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] ('int', ['datetime_as_int']) : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek'] 0.1s = Fit runtime 41 features in original data used to generate 41 features in processed data. Useless Original Features (Count: 3): ['STDDEV_30_n_03s__SC_FluctAnal_2_rsrangefit_50_1_logi_prop_r1', 'STOCHRSI_fastk_3_n_03m__CO_HistogramAMI_even_2_5', 'TEMA_3_n_15m__SC_FluctAnal_2_dfa_50_1_2_logi_prop_r1'] These features carry no predictive signal and should be manually investigated. This is typically a feature which has the same value for all rows. These features do not need to be present at inference time. 
Types of features in original data (exact raw dtype, raw dtype): ('datetime64[ns]', 'datetime') : 1 | ['time_date'] ('float32', 'float') : 37 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', ...] Types of features in original data (raw dtype, special dtypes): ('datetime', []) : 1 | ['time_date'] ('float', []) : 37 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', ...] Types of features in processed data (exact raw dtype, raw dtype): ('float32', 'float') : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] ('int64', 'int') : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek'] ('int8', 'int') : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] Types of features in processed data (raw dtype, special dtypes): ('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...] ('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat'] ('int', ['datetime_as_int']) : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek'] 1.0s = Fit runtime 38 features in original data used to generate 41 features in processed data. Train Data (Processed) Memory Usage: 95.42 MB (1.1% of available memory) Data preprocessing and feature engineering runtime = 1.11s ... 
AutoGluon will gauge predictive performance using evaluation metric: 'roc_auc' This metric expects predicted probabilities rather than predicted class labels, so you'll need to use predict_proba() instead of predict() To change this, specify the eval_metric parameter of Predictor() Saving /mnt/d/python_directory_2/models_temp/learner.pkl User-specified model hyperparameters to be fit: { 'GBM': [{'extra_trees': True, 'ag_args': {'name_suffix': 'XT'}, 'learning_rate': 0.45}, {'learning_rate': 0.45}], 'CAT': {'iterations': 10000, 'learning_rate': 0.75, 'allow_writing_files': False, 'eval_metric': 'Logloss', 'thread_count': 8}, } Saving /mnt/d/python_directory_2/models_temp/utils/data/X.pkl Saving /mnt/d/python_directory_2/models_temp/utils/data/y.pkl Saving /mnt/d/python_directory_2/models_temp/utils/data/X_val.pkl Saving /mnt/d/python_directory_2/models_temp/utils/data/y_val.pkl Model configs that will be trained (in order): LightGBMXT: {'extra_trees': True, 'ag_args': {'name_suffix': 'XT', 'model_type': <class 'autogluon.tabular.models.lgb.lgb_model.LGBModel'>, 'priority': 90}, 'learning_rate': 0.45} LightGBM: {'learning_rate': 0.45, 'ag_args': {'model_type': <class 'autogluon.tabular.models.lgb.lgb_model.LGBModel'>, 'priority': 90}} CatBoost: {'iterations': 10000, 'learning_rate': 0.75, 'allow_writing_files': False, 'eval_metric': 'Logloss', 'thread_count': 8, 'ag_args': {'model_type': <class 'autogluon.tabular.models.catboost.catboost_model.CatBoostModel'>, 'priority': 70}} Fitting 3 L1 models ... Fitting model: LightGBMXT ... Dropped 0 of 41 features. Fitting LightGBMXT with 'num_gpus': 0, 'num_cpus': 8 Fitting 10000 rounds... 
Hyperparameters: {'learning_rate': 0.45, 'extra_trees': True} [1] valid_set's binary_logloss: 0.690857 [2] valid_set's binary_logloss: 0.68494 [3] valid_set's binary_logloss: 0.684353 [4] valid_set's binary_logloss: 0.686686 [5] valid_set's binary_logloss: 0.691233 [6] valid_set's binary_logloss: 0.691459 [7] valid_set's binary_logloss: 0.701168 [8] valid_set's binary_logloss: 0.712405 [9] valid_set's binary_logloss: 0.716977 [10] valid_set's binary_logloss: 0.717592 [11] valid_set's binary_logloss: 0.715944 [12] valid_set's binary_logloss: 0.700291 [13] valid_set's binary_logloss: 0.702016 [14] valid_set's binary_logloss: 0.706436 [15] valid_set's binary_logloss: 0.703444 [16] valid_set's binary_logloss: 0.702883 [17] valid_set's binary_logloss: 0.702753 [18] valid_set's binary_logloss: 0.70785 [19] valid_set's binary_logloss: 0.70979 [20] valid_set's binary_logloss: 0.711334 [21] valid_set's binary_logloss: 0.710002 [22] valid_set's binary_logloss: 0.709985 [23] valid_set's binary_logloss: 0.710674 [24] valid_set's binary_logloss: 0.711116 Saving /mnt/d/python_directory_2/models_temp/models/LightGBMXT/model.pkl Saving /mnt/d/python_directory_2/models_temp/utils/attr/LightGBMXT/y_pred_proba_val.pkl 0.541 = Validation score (roc_auc) 1.13s = Training runtime 0.02s = Validation runtime 3801989.4 = Inference throughput (rows/s | 75000 batch size) Saving /mnt/d/python_directory_2/models_temp/models/trainer.pkl Fitting model: LightGBM ... Dropped 0 of 41 features. Fitting LightGBM with 'num_gpus': 0, 'num_cpus': 8 Fitting 10000 rounds... 
Hyperparameters: {'learning_rate': 0.45} [1] valid_set's binary_logloss: 0.726226 [2] valid_set's binary_logloss: 0.741846 [3] valid_set's binary_logloss: 0.737792 [4] valid_set's binary_logloss: 0.791165 [5] valid_set's binary_logloss: 0.838001 [6] valid_set's binary_logloss: 0.838667 [7] valid_set's binary_logloss: 0.839008 [8] valid_set's binary_logloss: 0.861499 [9] valid_set's binary_logloss: 0.85956 [10] valid_set's binary_logloss: 0.847175 [11] valid_set's binary_logloss: 0.84703 [12] valid_set's binary_logloss: 0.846104 [13] valid_set's binary_logloss: 0.832776 [14] valid_set's binary_logloss: 0.833003 [15] valid_set's binary_logloss: 0.831728 [16] valid_set's binary_logloss: 0.834595 [17] valid_set's binary_logloss: 0.833374 [18] valid_set's binary_logloss: 0.833442 [19] valid_set's binary_logloss: 0.833311 [20] valid_set's binary_logloss: 0.824875 [21] valid_set's binary_logloss: 0.822567 Saving /mnt/d/python_directory_2/models_temp/models/LightGBM/model.pkl Saving /mnt/d/python_directory_2/models_temp/utils/attr/LightGBM/y_pred_proba_val.pkl 0.5 = Validation score (roc_auc) 1.01s = Training runtime 0.02s = Validation runtime 3795109.1 = Inference throughput (rows/s | 75000 batch size) Saving /mnt/d/python_directory_2/models_temp/models/trainer.pkl Fitting model: CatBoost ... Dropped 0 of 41 features. 
Fitting CatBoost with 'num_gpus': 0, 'num_cpus': 8 Catboost model hyperparameters: {'iterations': 10000, 'learning_rate': 0.75, 'random_seed': 0, 'allow_writing_files': False, 'eval_metric': 'Logloss', 'thread_count': 8} 0: learn: 0.6533180 test: 0.7174133 best: 0.7174133 (0) total: 44.1ms remaining: 7m 20s 1: learn: 0.6298120 test: 0.7229527 best: 0.7174133 (0) total: 82.8ms remaining: 6m 53s 2: learn: 0.6126701 test: 0.7287912 best: 0.7174133 (0) total: 122ms remaining: 6m 46s 3: learn: 0.6073046 test: 0.7166248 best: 0.7166248 (3) total: 158ms remaining: 6m 35s 4: learn: 0.5929676 test: 0.7188954 best: 0.7166248 (3) total: 194ms remaining: 6m 27s 5: learn: 0.5770597 test: 0.7453000 best: 0.7166248 (3) total: 227ms remaining: 6m 18s 6: learn: 0.5684997 test: 0.7506057 best: 0.7166248 (3) total: 271ms remaining: 6m 26s 7: learn: 0.5586667 test: 0.6851480 best: 0.6851480 (7) total: 313ms remaining: 6m 31s 8: learn: 0.5533758 test: 0.6924916 best: 0.6851480 (7) total: 356ms remaining: 6m 35s 9: learn: 0.5475293 test: 0.6908779 best: 0.6851480 (7) total: 398ms remaining: 6m 37s 10: learn: 0.5413246 test: 0.6897852 best: 0.6851480 (7) total: 431ms remaining: 6m 31s 11: learn: 0.5372513 test: 0.6887706 best: 0.6851480 (7) total: 466ms remaining: 6m 27s 12: learn: 0.5334703 test: 0.6926530 best: 0.6851480 (7) total: 500ms remaining: 6m 24s 13: learn: 0.5298752 test: 0.6958383 best: 0.6851480 (7) total: 541ms remaining: 6m 25s 14: learn: 0.5268030 test: 0.6941996 best: 0.6851480 (7) total: 581ms remaining: 6m 26s 15: learn: 0.5180547 test: 0.6958168 best: 0.6851480 (7) total: 631ms remaining: 6m 33s 16: learn: 0.5125751 test: 0.6883260 best: 0.6851480 (7) total: 670ms remaining: 6m 33s 17: learn: 0.5112367 test: 0.6874500 best: 0.6851480 (7) total: 719ms remaining: 6m 38s 18: learn: 0.5060452 test: 0.6895496 best: 0.6851480 (7) total: 765ms remaining: 6m 42s 19: learn: 0.5014992 test: 0.6867924 best: 0.6851480 (7) total: 808ms remaining: 6m 43s 20: learn: 0.4988651 test: 
0.6899766 best: 0.6851480 (7) total: 848ms remaining: 6m 43s 21: learn: 0.4955378 test: 0.6872165 best: 0.6851480 (7) total: 894ms remaining: 6m 45s 22: learn: 0.4904387 test: 0.6821777 best: 0.6821777 (22) total: 944ms remaining: 6m 49s 23: learn: 0.4883393 test: 0.6846079 best: 0.6821777 (22) total: 985ms remaining: 6m 49s 24: learn: 0.4851253 test: 0.6906374 best: 0.6821777 (22) total: 1.02s remaining: 6m 47s 25: learn: 0.4807459 test: 0.6861663 best: 0.6821777 (22) total: 1.07s remaining: 6m 49s 26: learn: 0.4782397 test: 0.6860673 best: 0.6821777 (22) total: 1.12s remaining: 6m 52s 27: learn: 0.4775118 test: 0.6907217 best: 0.6821777 (22) total: 1.16s remaining: 6m 54s 28: learn: 0.4745040 test: 0.7047771 best: 0.6821777 (22) total: 1.2s remaining: 6m 53s 29: learn: 0.4728311 test: 0.7234329 best: 0.6821777 (22) total: 1.24s remaining: 6m 53s 30: learn: 0.4691995 test: 0.7221445 best: 0.6821777 (22) total: 1.29s remaining: 6m 54s 31: learn: 0.4675475 test: 0.7224027 best: 0.6821777 (22) total: 1.33s remaining: 6m 53s 32: learn: 0.4658990 test: 0.7221766 best: 0.6821777 (22) total: 1.37s remaining: 6m 54s 33: learn: 0.4650485 test: 0.7219495 best: 0.6821777 (22) total: 1.4s remaining: 6m 51s 34: learn: 0.4638509 test: 0.7243998 best: 0.6821777 (22) total: 1.45s remaining: 6m 52s 35: learn: 0.4621719 test: 0.7230784 best: 0.6821777 (22) total: 1.49s remaining: 6m 52s 36: learn: 0.4615038 test: 0.7235578 best: 0.6821777 (22) total: 1.53s remaining: 6m 52s 37: learn: 0.4609860 test: 0.7225358 best: 0.6821777 (22) total: 1.57s remaining: 6m 51s 38: learn: 0.4590647 test: 0.7220221 best: 0.6821777 (22) total: 1.61s remaining: 6m 52s 39: learn: 0.4584479 test: 0.7252117 best: 0.6821777 (22) total: 1.66s remaining: 6m 52s 40: learn: 0.4566977 test: 0.7285681 best: 0.6821777 (22) total: 1.71s remaining: 6m 54s 41: learn: 0.4550106 test: 0.7298182 best: 0.6821777 (22) total: 1.76s remaining: 6m 58s 42: learn: 0.4537528 test: 0.7286350 best: 0.6821777 (22) total: 1.82s 
remaining: 7m 2s 43: learn: 0.4530702 test: 0.7217189 best: 0.6821777 (22) total: 1.91s remaining: 7m 13s 44: learn: 0.4515620 test: 0.7219945 best: 0.6821777 (22) total: 1.99s remaining: 7m 20s 45: learn: 0.4501892 test: 0.7196554 best: 0.6821777 (22) total: 2.07s remaining: 7m 27s 46: learn: 0.4476642 test: 0.7128948 best: 0.6821777 (22) total: 2.12s remaining: 7m 28s 47: learn: 0.4466513 test: 0.7867258 best: 0.6821777 (22) total: 2.17s remaining: 7m 29s 48: learn: 0.4461165 test: 0.7878465 best: 0.6821777 (22) total: 2.22s remaining: 7m 30s 49: learn: 0.4450580 test: 0.7877115 best: 0.6821777 (22) total: 2.27s remaining: 7m 32s bestTest = 0.6821776997 bestIteration = 22 Shrink model to first 23 iterations. Saving /mnt/d/python_directory_2/models_temp/models/CatBoost/model.pkl Saving /mnt/d/python_directory_2/models_temp/utils/attr/CatBoost/y_pred_proba_val.pkl 0.6525 = Validation score (roc_auc) 2.64s = Training runtime 0.01s = Validation runtime 7341255.5 = Inference throughput (rows/s | 75000 batch size) Saving /mnt/d/python_directory_2/models_temp/models/trainer.pkl Loading: /mnt/d/python_directory_2/models_temp/models/LightGBMXT/model.pkl Performing feature pruning with model: FeatureSelector_LightGBMXT, total time limit: 300s, stop threshold: 10, prune ratio: 0.05, prune threshold: noise. Number of training samples 500000 is greater than 50000. Using 50000 samples as training data. 
Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/utils/decorators.py", line 31, in _call return f(*gargs, **gkwargs) ^^^^^^^^^^^^^^^^^^^^ File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/tabular/predictor/predictor.py", line 1167, in fit self._fit(ag_fit_kwargs=ag_fit_kwargs, ag_post_fit_kwargs=ag_post_fit_kwargs) File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/tabular/predictor/predictor.py", line 1173, in _fit self._learner.fit(**ag_fit_kwargs) File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/tabular/learner/abstract_learner.py", line 159, in fit return self._fit(X=X, X_val=X_val, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/tabular/learner/default_learner.py", line 122, in _fit trainer.fit( File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/tabular/trainer/auto_trainer.py", line 125, in fit self._train_multi_and_ensemble( File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 2589, in _train_multi_and_ensemble model_names_fit = self.train_multi_levels( ^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 452, in train_multi_levels base_model_names, aux_models = self.stack_new_level( ^^^^^^^^^^^^^^^^^^^^^ File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 600, in stack_new_level core_models = self.stack_new_level_core( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 730, in stack_new_level_core return 
self._train_multi( ^^^^^^^^^^^^^^^^^^ File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 2539, in _train_multi model_names_trained = self._train_multi_initial( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 2422, in _train_multi_initial candidate_features = self._proxy_model_feature_prune( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 2669, in _proxy_model_feature_prune candidate_features = selector.select_features(**feature_prune_kwargs, **model_fit_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/utils/feature_selection.py", line 225, in select_features X, y, X_val, y_val, X_fi, y_fi, prune_threshold, noise_columns, feature_metadata = self.setup( ^^^^^^^^^^^ File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/utils/feature_selection.py", line 549, in setup X_train, _, y_train, _ = generate_train_test_split(X=X, y=y, problem_type=self.problem_type, random_state=random_state, test_size=drop_ratio) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/utils/utils.py", line 513, in generate_train_test_split random.seed(random_state) File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/random.py", line 160, in seed raise TypeError('The only supported seed types are: None,\n' TypeError: The only supported seed types are: None, int, float, str, bytes, and bytearray. ```
open
2024-08-28T19:05:37Z
2024-11-25T22:47:13Z
https://github.com/autogluon/autogluon/issues/4441
[ "bug", "module: tabular", "Needs Triage" ]
arturdaraujo
1
neuml/txtai
nlp
295
Labels pipeline outputs changed with transformers 4.20.0
Unit tests are now failing with transformers 4.20.0. The output of the text-classification pipeline changed when passing a single text element in. Previously, a list of lists was produced, now only a single list with a dict of label outputs is produced. https://github.com/huggingface/transformers/issues/17754 filed upstream to determine if this change was intentional (need to fix in txtai) or a bug that would be addressed in transformers.
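The shape change can be illustrated without loading a model; the `normalize` shim below is hypothetical, for illustration only, and not the actual fix that landed in txtai:

```python
# Older transformers returned a list of lists for a single text input;
# 4.20.0 returns a flat list with a dict of label outputs. Wrapping the
# flat form restores the old shape so downstream code sees one format.
def normalize(results):
    if results and isinstance(results[0], list):
        return results  # already the pre-4.20.0 list-of-lists shape
    return [results]    # wrap the new flat shape

pre_420 = [[{"label": "POSITIVE", "score": 0.99}]]  # transformers < 4.20.0
post_420 = [{"label": "POSITIVE", "score": 0.99}]   # transformers 4.20.0
print(normalize(pre_420) == normalize(post_420))    # both normalize alike
```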
closed
2022-06-17T13:19:11Z
2022-07-25T19:03:41Z
https://github.com/neuml/txtai/issues/295
[ "bug" ]
davidmezzetti
2
scrapy/scrapy
python
6,552
Document @inthread
I’ve seen @VMRuiz recently use [`@inthread`](https://github.com/scrapy/scrapy/blob/efb53aafdcaae058962c6189ddecb3dc62b02c31/scrapy/utils/decorators.py#L58). I had not seen that before, and it seems like something we should cover in documentation about asynchronous code. Not sure where in the docs it fits best, but I think we need to point people to that for scenarios where they need to use some expensive non-async code, like some web service client without async support.
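For the docs, a runnable conceptual stand-in could accompany the explanation. Scrapy's `@inthread` wraps the call in Twisted's `deferToThread`; the sketch below only approximates the same idea with a `ThreadPoolExecutor`, and every name other than `inthread` is made up here:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import wraps

_pool = ThreadPoolExecutor(max_workers=4)

def inthread(func):
    """Run the decorated blocking callable in a worker thread.

    Scrapy's real decorator returns a Twisted Deferred; this stand-in
    returns a concurrent.futures.Future instead.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        return _pool.submit(func, *args, **kwargs)
    return wrapper

@inthread
def blocking_call(x):
    return x * 2  # stands in for a slow client without async support

print(blocking_call(21).result())  # → 42
```

The point for the documentation would be the same as with the real decorator: expensive non-async code runs off the main (reactor) thread while the caller gets a handle to the eventual result.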
open
2024-11-19T08:57:30Z
2025-01-10T13:10:02Z
https://github.com/scrapy/scrapy/issues/6552
[ "enhancement", "docs" ]
Gallaecio
2
ultralytics/yolov5
machine-learning
13,357
How to train with new pictures?
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question Hi, I trained with the YOLOv5-6.1 code and got good results. Now I have added some new images to the dataset. I want to know how to continue training from the previous best.pt, and what this operation is called. Thanks! ### Additional _No response_
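A common approach, sketched here with placeholder paths since the question gives none, is to fine-tune by pointing `train.py --weights` at the previous checkpoint and `--data` at the dataset YAML that now includes the new images:

```python
import shlex

# Hypothetical paths: "runs/train/exp/weights/best.pt" and
# "data/custom.yaml" are examples, not values from the question.
# Running train.py again with --weights set to an earlier best.pt
# continues training (fine-tuning) from that checkpoint.
cmd = (
    "python train.py"
    " --weights runs/train/exp/weights/best.pt"
    " --data data/custom.yaml"
    " --epochs 50"
)
print(shlex.split(cmd))
```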
open
2024-10-14T14:44:04Z
2024-11-09T06:29:27Z
https://github.com/ultralytics/yolov5/issues/13357
[ "question" ]
PengPeng-JunJun
2
mouredev/Hello-Python
fastapi
205
How to resolve being unable to withdraw winnings from a rigged online gambling platform
Solutions for when a rigged platform will not let you withdraw your winnings: When you run into this situation, never get into an argument with customer service; if you insult the support staff, your account will be frozen and your points wiped. WeChat: lcly479, Telegram: @ly2088, payout first, fees afterwards! 1. Communicating with the platform. Stay calm: do not argue with customer service at first, especially when the winnings are large, because persistent wrangling may get your account banned, and support will find all sorts of excuses to stall and dodge responsibility. When the platform claims withdrawals are not arriving, the system is under maintenance, or your request is under review, it most likely does not want to let you withdraw. At that point you can calmly tell platform support that the system has a problem and that, if it is resolved well, you would invest a large additional amount, but that you hesitate to put in more money while the earlier funds cannot be withdrawn, to see whether the platform can be induced to pay you out. Note that this is only a tactic to try and is by no means guaranteed to succeed. 2. Collecting evidence. Promptly collect and organize bank card statements, withdrawal records, and similar evidence, and keep all chat logs and call recordings submitted to customer service for later use when defending your rights. This evidence may play a key role when filing complaints with the relevant agencies or seeking legal help. 3. Seeking outside help. Contacting support or regulators: if a legitimate bank or payment platform reports that the withdrawal channel is under maintenance, contact the bank or payment platform's customer service promptly. Check the bank or payment platform's official website for notices or announcements about the maintenance window and how it will be handled; contact details can usually be found on the official site, or you can reach online support inside the payment platform or banking app, or call the service hotline. If an online gambling platform refuses to pay out, you can leave messages on the platform's website or post on forums and message boards to expose the platform's wrongdoing and demand prompt correction; you can also file a complaint with the Cyberspace Administration's complaint website (if the platform has violated regulations). If you suspect fraud, you can also report it to the public security authorities, but note that online gambling itself may constitute the crime of gambling, and gambling funds are not protected by law. Consulting a lawyer: you can consult a lawyer online or seek help from legal professionals, for example asking lawyers who specialize in family law, debt, or economic disputes about the measures you might take and the risks you face in this situation.
closed
2024-11-10T12:16:32Z
2024-11-28T13:34:03Z
https://github.com/mouredev/Hello-Python/issues/205
[]
lcly479
0
serengil/deepface
deep-learning
712
How to change min_face_size?
### How can the min_face_size parameter be changed when using detector_backend with MTCNN?

```python
def build_model():
    from mtcnn import MTCNN
    face_detector = MTCNN()
    return face_detector
```

_Here you cannot change or add the min_face_size item?_
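One possible workaround is to build the detector yourself and pass the value in. The sketch below uses a `DummyMTCNN` stand-in so it runs without the `mtcnn` package; with the real library, whose constructor exposes a `min_face_size` argument (verify against your installed version), `MTCNN(min_face_size=40)` would be written the same way:

```python
# DummyMTCNN is a hypothetical stand-in for mtcnn.MTCNN, used here only
# so the sketch is self-contained and runnable.
class DummyMTCNN:
    def __init__(self, min_face_size=20):
        self.min_face_size = min_face_size

def build_model(min_face_size=40):
    # Instead of the fixed MTCNN() call above, expose the parameter here.
    return DummyMTCNN(min_face_size=min_face_size)

detector = build_model(min_face_size=40)
print(detector.min_face_size)  # → 40
```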
closed
2023-04-07T15:56:08Z
2023-04-07T20:04:11Z
https://github.com/serengil/deepface/issues/712
[ "question" ]
PrincePepper
1
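One way the question above is typically addressed, sketched under the assumption that deepface's MTCNN backend ultimately instantiates the standalone `mtcnn` package, whose constructor does accept a `min_face_size` argument. The `detector_config` helper and the patched `build_model` are hypothetical illustrations, not deepface API:

```python
def detector_config(min_face_size=20):
    # Hypothetical helper: keyword arguments forwarded to mtcnn.MTCNN.
    # 20 px is the mtcnn package's default minimum face size.
    return {"min_face_size": min_face_size}

def build_model(min_face_size=40):
    # Deferred import: the mtcnn package pulls in TensorFlow at import time.
    from mtcnn import MTCNN
    return MTCNN(**detector_config(min_face_size))
```

With a patched builder like this, the detector would skip faces smaller than `min_face_size` pixels; in stock deepface one would instead edit the backend's builder function directly.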
localstack/localstack
python
12,321
bug: Apigatewaymanagementapi for Websocket API stack trace on GetConnection
### Is there an existing issue for this? - [x] I have searched the existing issues ### Current Behavior My API Gateway is set up properly and I can use almost all features, including lambda authorisation, request integrations, and responses from the backend. I can also SendMessage to connected websocket clients via the API and CLI. But there is an issue using the GetConnection endpoint via the CLI or API. There seems to be a bug in the LocalStack code when looking for a User-Agent header that doesn't exist. See the debug stack trace below. Even though the connection is active and I can POST a message to the connection, I can never GET the connection (always a stack trace). ``` 2025-03-02 01:06:31,042 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/2.17.25 Python/3.11.9 Darwin/24.1.0 exe/x86_64 2025-03-02 01:06:31,044 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['apigatewaymanagementapi', 'get-connection', '--connection-id', '8807fd50', '--debug', '--endpoint-url', 'http://localhost:4566/_aws/execute-api/0ea7ff6b/local'] 2025-03-02 01:06:31,074 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_s3 at 0x110d01260> 2025-03-02 01:06:31,074 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_ddb at 0x110b14c20> 2025-03-02 01:06:31,074 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <bound method BasicCommand.add_command of <class 'awscli.customizations.configure.configure.ConfigureCommand'>> 2025-03-02 01:06:31,074 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function change_name at 0x110a7e700> 2025-03-02 01:06:31,074 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function change_name at 0x110a7fce0> 2025-03-02 01:06:31,074 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: 
calling handler <function alias_opsworks_cm at 0x110d03c40> 2025-03-02 01:06:31,074 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_history_commands at 0x110b634c0> 2025-03-02 01:06:31,074 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <bound method BasicCommand.add_command of <class 'awscli.customizations.devcommands.CLIDevCommand'>> 2025-03-02 01:06:31,074 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <function add_waiters at 0x110d03b00> 2025-03-02 01:06:31,074 - MainThread - botocore.hooks - DEBUG - Event building-command-table.main: calling handler <bound method AliasSubCommandInjector.on_building_command_table of <awscli.alias.AliasSubCommandInjector object at 0x110dd19d0>> 2025-03-02 01:06:31,075 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/local/aws-cli/awscli/data/cli.json 2025-03-02 01:06:31,076 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_types at 0x110c34f40> 2025-03-02 01:06:31,076 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function no_sign_request at 0x110c35260> 2025-03-02 01:06:31,076 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_verify_ssl at 0x110c351c0> 2025-03-02 01:06:31,076 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_cli_read_timeout at 0x110c353a0> 2025-03-02 01:06:31,076 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <function resolve_cli_connect_timeout at 0x110c35300> 2025-03-02 01:06:31,076 - MainThread - botocore.hooks - DEBUG - Event top-level-args-parsed: calling handler <built-in method update of dict object at 0x110dcab80> 2025-03-02 01:06:31,076 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/2.17.25 
Python/3.11.9 Darwin/24.1.0 exe/x86_64 2025-03-02 01:06:31,076 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['apigatewaymanagementapi', 'get-connection', '--connection-id', '8807fd50', '--debug', '--endpoint-url', 'http://localhost:4566/_aws/execute-api/0ea7ff6b/local'] 2025-03-02 01:06:31,076 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function add_timestamp_parser at 0x110d01b20> 2025-03-02 01:06:31,076 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function register_uri_param_handler at 0x110384860> 2025-03-02 01:06:31,076 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function add_binary_formatter at 0x110d94680> 2025-03-02 01:06:31,076 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function no_pager_handler at 0x110246a20> 2025-03-02 01:06:31,076 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function inject_assume_role_provider_cache at 0x1103ac180> 2025-03-02 01:06:31,082 - MainThread - botocore.utils - DEBUG - IMDS ENDPOINT: http://169.254.169.254/ 2025-03-02 01:06:31,091 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function attach_history_handler at 0x110b4bce0> 2025-03-02 01:06:31,091 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function inject_json_file_cache at 0x110b08ea0> 2025-03-02 01:06:31,099 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/local/aws-cli/awscli/botocore/data/apigatewaymanagementapi/2018-11-29/service-2.json 2025-03-02 01:06:31,099 - MainThread - botocore.hooks - DEBUG - Event building-command-table.apigatewaymanagementapi: calling handler <function add_waiters at 0x110d03b00> 2025-03-02 01:06:31,107 - MainThread - botocore.hooks - DEBUG - Event building-command-table.apigatewaymanagementapi: calling handler <bound method 
AliasSubCommandInjector.on_building_command_table of <awscli.alias.AliasSubCommandInjector object at 0x110dd19d0>> 2025-03-02 01:06:31,107 - MainThread - awscli.clidriver - DEBUG - OrderedDict([('connection-id', <awscli.arguments.CLIArgument object at 0x110e34810>)]) 2025-03-02 01:06:31,107 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.apigatewaymanagementapi.get-connection: calling handler <function add_streaming_output_arg at 0x110d02020> 2025-03-02 01:06:31,107 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.apigatewaymanagementapi.get-connection: calling handler <function add_cli_input_json at 0x1103acae0> 2025-03-02 01:06:31,107 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.apigatewaymanagementapi.get-connection: calling handler <function add_cli_input_yaml at 0x1103acb80> 2025-03-02 01:06:31,107 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.apigatewaymanagementapi.get-connection: calling handler <function unify_paging_params at 0x110b15260> 2025-03-02 01:06:31,117 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/local/aws-cli/awscli/botocore/data/apigatewaymanagementapi/2018-11-29/paginators-1.json 2025-03-02 01:06:31,117 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.apigatewaymanagementapi.get-connection: calling handler <function add_generate_skeleton at 0x110c07740> 2025-03-02 01:06:31,117 - MainThread - botocore.hooks - DEBUG - Event before-building-argument-table-parser.apigatewaymanagementapi.get-connection: calling handler <bound method OverrideRequiredArgsArgument.override_required_args of <awscli.customizations.cliinput.CliInputJSONArgument object at 0x110e34c50>> 2025-03-02 01:06:31,117 - MainThread - botocore.hooks - DEBUG - Event before-building-argument-table-parser.apigatewaymanagementapi.get-connection: calling handler <bound method OverrideRequiredArgsArgument.override_required_args of 
<awscli.customizations.cliinput.CliInputYAMLArgument object at 0x110e352d0>> 2025-03-02 01:06:31,117 - MainThread - botocore.hooks - DEBUG - Event before-building-argument-table-parser.apigatewaymanagementapi.get-connection: calling handler <bound method GenerateCliSkeletonArgument.override_required_args of <awscli.customizations.generatecliskeleton.GenerateCliSkeletonArgument object at 0x110e3f110>> 2025-03-02 01:06:31,117 - MainThread - botocore.hooks - DEBUG - Event building-command-table.apigatewaymanagementapi_get-connection: calling handler <function add_waiters at 0x110d03b00> 2025-03-02 01:06:31,117 - MainThread - botocore.hooks - DEBUG - Event building-command-table.apigatewaymanagementapi_get-connection: calling handler <bound method AliasSubCommandInjector.on_building_command_table of <awscli.alias.AliasSubCommandInjector object at 0x110dd19d0>> 2025-03-02 01:06:31,118 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.execute-api.get-connection.connection-id: calling handler <awscli.paramfile.URIArgumentHandler object at 0x11041ce90> 2025-03-02 01:06:31,118 - MainThread - botocore.hooks - DEBUG - Event process-cli-arg.apigatewaymanagementapi.get-connection: calling handler <awscli.argprocess.ParamShorthandParser object at 0x110296210> 2025-03-02 01:06:31,118 - MainThread - awscli.arguments - DEBUG - Unpacked value of '8807fd50' for parameter "connection_id": '8807fd50' 2025-03-02 01:06:31,118 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.execute-api.get-connection.cli-input-json: calling handler <awscli.paramfile.URIArgumentHandler object at 0x11041ce90> 2025-03-02 01:06:31,118 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.execute-api.get-connection.cli-input-yaml: calling handler <awscli.paramfile.URIArgumentHandler object at 0x11041ce90> 2025-03-02 01:06:31,118 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.execute-api.get-connection.generate-cli-skeleton: calling handler 
<awscli.paramfile.URIArgumentHandler object at 0x11041ce90> 2025-03-02 01:06:31,118 - MainThread - botocore.hooks - DEBUG - Event calling-command.apigatewaymanagementapi.get-connection: calling handler <bound method CliInputArgument.add_to_call_parameters of <awscli.customizations.cliinput.CliInputJSONArgument object at 0x110e34c50>> 2025-03-02 01:06:31,118 - MainThread - botocore.hooks - DEBUG - Event calling-command.apigatewaymanagementapi.get-connection: calling handler <bound method CliInputArgument.add_to_call_parameters of <awscli.customizations.cliinput.CliInputYAMLArgument object at 0x110e352d0>> 2025-03-02 01:06:31,118 - MainThread - botocore.hooks - DEBUG - Event calling-command.apigatewaymanagementapi.get-connection: calling handler <bound method GenerateCliSkeletonArgument.generate_skeleton of <awscli.customizations.generatecliskeleton.GenerateCliSkeletonArgument object at 0x110e3f110>> 2025-03-02 01:06:31,118 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: env 2025-03-02 01:06:31,118 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: assume-role 2025-03-02 01:06:31,118 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: assume-role-with-web-identity 2025-03-02 01:06:31,118 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: sso 2025-03-02 01:06:31,118 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: shared-credentials-file 2025-03-02 01:06:31,118 - MainThread - botocore.credentials - INFO - Found credentials in shared credentials file: ~/.aws/credentials 2025-03-02 01:06:31,119 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/local/aws-cli/awscli/botocore/data/endpoints.json 2025-03-02 01:06:31,128 - MainThread - botocore.hooks - DEBUG - Event choose-service-name: calling handler <function handle_service_name_alias at 0x10eed7f60> 2025-03-02 01:06:31,138 - MainThread - botocore.loaders - DEBUG - Loading JSON file: 
/usr/local/aws-cli/awscli/botocore/data/apigatewaymanagementapi/2018-11-29/endpoint-rule-set-1.json 2025-03-02 01:06:31,139 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/local/aws-cli/awscli/botocore/data/partitions.json 2025-03-02 01:06:31,139 - MainThread - botocore.hooks - DEBUG - Event creating-client-class.apigatewaymanagementapi: calling handler <function add_generate_presigned_url at 0x10ee00c20> 2025-03-02 01:06:31,139 - MainThread - botocore.regions - DEBUG - Creating a regex based endpoint for execute-api, eu-west-1 2025-03-02 01:06:31,144 - MainThread - botocore.endpoint - DEBUG - Setting execute-api timeout as (60, 60) 2025-03-02 01:06:31,145 - MainThread - botocore.hooks - DEBUG - Event provide-client-params.apigatewaymanagementapi.GetConnection: calling handler <function base64_decode_input_blobs at 0x110d94720> 2025-03-02 01:06:31,145 - MainThread - botocore.hooks - DEBUG - Event before-parameter-build.apigatewaymanagementapi.GetConnection: calling handler <function generate_idempotent_uuid at 0x10eefa2a0> 2025-03-02 01:06:31,145 - MainThread - botocore.regions - DEBUG - Calling endpoint provider with parameters: {'Region': 'eu-west-1', 'UseDualStack': False, 'UseFIPS': False, 'Endpoint': 'http://localhost:4566/_aws/execute-api/0ea7ff6b/local'} 2025-03-02 01:06:31,145 - MainThread - botocore.regions - DEBUG - Endpoint provider result: http://localhost:4566/_aws/execute-api/0ea7ff6b/local 2025-03-02 01:06:31,146 - MainThread - botocore.hooks - DEBUG - Event before-call.apigatewaymanagementapi.GetConnection: calling handler <function inject_api_version_header_if_needed at 0x10eefbd80> 2025-03-02 01:06:31,146 - MainThread - botocore.endpoint - DEBUG - Making request for OperationModel(name=GetConnection) with params: {'url_path': '/@connections/8807fd50', 'query_string': {}, 'method': 'GET', 'headers': {'User-Agent': 'aws-cli/2.17.25 md/awscrt#0.21.2 ua/2.0 os/macos#24.1.0 md/arch#x86_64 lang/python#3.11.9 md/pyimpl#CPython 
cfg/retry-mode#standard md/installer#exe md/prompt#off md/command#apigatewaymanagementapi.get-connection'}, 'body': b'', 'url': 'http://localhost:4566/_aws/execute-api/0ea7ff6b/local/@connections/8807fd50', 'context': {'client_region': 'eu-west-1', 'client_config': <botocore.config.Config object at 0x11840d6d0>, 'has_streaming_input': False, 'auth_type': None}} 2025-03-02 01:06:31,146 - MainThread - botocore.hooks - DEBUG - Event request-created.apigatewaymanagementapi.GetConnection: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x1183caf50>> 2025-03-02 01:06:31,147 - MainThread - botocore.hooks - DEBUG - Event choose-signer.apigatewaymanagementapi.GetConnection: calling handler <function set_operation_specific_signer at 0x10eefa160> 2025-03-02 01:06:31,147 - MainThread - botocore.auth - DEBUG - Calculating signature using v4 auth. 2025-03-02 01:06:31,147 - MainThread - botocore.auth - DEBUG - CanonicalRequest: GET /_aws/execute-api/0ea7ff6b/local/%40connections/8807fd50 host:localhost:4566 x-amz-date:20250302T010631Z host;x-amz-date e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2025-03-02 01:06:31,147 - MainThread - botocore.auth - DEBUG - StringToSign: AWS4-HMAC-SHA256 20250302T010631Z 20250302/eu-west-1/execute-api/aws4_request c7e0f8bbcf7ff24541071dc1cd177b4a807b9d6164524f500b6b5a5107d9bd37 2025-03-02 01:06:31,148 - MainThread - botocore.auth - DEBUG - Signature: a9c5238786639dbc9a5113fef1f7e0315d98b522f8eea57d7948baf66b4b0fde 2025-03-02 01:06:31,148 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=GET, url=http://localhost:4566/_aws/execute-api/0ea7ff6b/local/@connections/8807fd50, headers={'User-Agent': b'aws-cli/2.17.25 md/awscrt#0.21.2 ua/2.0 os/macos#24.1.0 md/arch#x86_64 lang/python#3.11.9 md/pyimpl#CPython cfg/retry-mode#standard md/installer#exe md/prompt#off md/command#apigatewaymanagementapi.get-connection', 
'X-Amz-Date': b'20250302T010631Z', 'Authorization': b'AWS4-HMAC-SHA256 Credential=test/20250302/eu-west-1/execute-api/aws4_request, SignedHeaders=host;x-amz-date, Signature=a9c5238786639dbc9a5113fef1f7e0315d98b522f8eea57d7948baf66b4b0fde'}> 2025-03-02 01:06:31,149 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTP connection (1): localhost:4566 2025-03-02 01:06:31,160 - MainThread - urllib3.connectionpool - DEBUG - http://localhost:4566 "GET /_aws/execute-api/0ea7ff6b/local/@connections/8807fd50 HTTP/1.1" 500 2453 2025-03-02 01:06:31,161 - MainThread - botocore.parsers - DEBUG - Response headers: {'Server': 'TwistedWeb/24.3.0', 'Date': 'Sun, 02 Mar 2025 01:06:31 GMT', 'Content-Type': 'application/json', 'X-Amzn-Errortype': 'InternalError', 'Content-Length': '2453', 'x-amzn-requestid': '5e066851-ef66-44d5-8f70-5f36109d2173', 'x-amz-request-id': '5e066851-ef66-44d5-8f70-5f36109d2173'} 2025-03-02 01:06:31,161 - MainThread - botocore.parsers - DEBUG - Response body: b'{"__type": "InternalError", "message": "exception while calling iot with unknown operation: Traceback (most recent call last):\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py\\", line 166, in handle\\n handler(self, self.context, response)\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/handlers.py\\", line 27, in __call__\\n router_response = self.router.dispatch(context.request)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/router.py\\", line 326, in dispatch\\n return self.dispatcher(request, handler, args)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/handler.py\\", line 64, in __call__\\n result = self.invoke_endpoint(request, endpoint, request_args)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/pydantic.py\\", line 82, in invoke_endpoint\\n return super().invoke_endpoint(request, endpoint, request_args)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/handler.py\\", line 73, in invoke_endpoint\\n return endpoint(request, **request_args)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/services/apigatewayv2/next_gen/execute_api/websockets/apigatewaymanagementapi/core.py.enc\\", line 22, in call_apigwmgmtapi\\n try:H=G(A,connection_id,**F)\\n ^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/services/apigatewayv2/next_gen/execute_api/websockets_legacy/management_api_routes.py.enc\\", line 19, in get_connection\\n D={\'SourceIp\':A[_B].remote_address[0],\'UserAgent\':A[_B].request_headers[\'User-Agent\']};return GetConnectionResponse(ConnectedAt=A[\'connected_at\'],Identity=D,LastActiveAt=A[\'last_active_at\'])\\n ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/websockets/datastructures.py\\", line 107, in __getitem__\\n value = self._dict[key.lower()]\\n ~~~~~~~~~~^^^^^^^^^^^^^\\nKeyError: \'user-agent\'\\n"}' 2025-03-02 01:06:31,161 - MainThread - botocore.hooks - DEBUG - Event needs-retry.apigatewaymanagementapi.GetConnection: calling handler <bound method RetryHandler.needs_retry of <botocore.retries.standard.RetryHandler object at 0x11840f410>> 2025-03-02 01:06:31,161 - MainThread - botocore.retries.standard - DEBUG - Retry needed, retrying request after delay of: 0.082056313224804 2025-03-02 01:06:31,161 - MainThread - botocore.endpoint - DEBUG - Response received to retry, sleeping for 0.082056313224804 seconds 2025-03-02 01:06:31,248 - MainThread - botocore.hooks - DEBUG - Event 
request-created.apigatewaymanagementapi.GetConnection: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x1183caf50>> 2025-03-02 01:06:31,248 - MainThread - botocore.hooks - DEBUG - Event choose-signer.apigatewaymanagementapi.GetConnection: calling handler <function set_operation_specific_signer at 0x10eefa160> 2025-03-02 01:06:31,248 - MainThread - botocore.auth - DEBUG - Calculating signature using v4 auth. 2025-03-02 01:06:31,249 - MainThread - botocore.auth - DEBUG - CanonicalRequest: GET /_aws/execute-api/0ea7ff6b/local/%40connections/8807fd50 host:localhost:4566 x-amz-date:20250302T010631Z host;x-amz-date e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2025-03-02 01:06:31,249 - MainThread - botocore.auth - DEBUG - StringToSign: AWS4-HMAC-SHA256 20250302T010631Z 20250302/eu-west-1/execute-api/aws4_request c7e0f8bbcf7ff24541071dc1cd177b4a807b9d6164524f500b6b5a5107d9bd37 2025-03-02 01:06:31,249 - MainThread - botocore.auth - DEBUG - Signature: a9c5238786639dbc9a5113fef1f7e0315d98b522f8eea57d7948baf66b4b0fde 2025-03-02 01:06:31,249 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=GET, url=http://localhost:4566/_aws/execute-api/0ea7ff6b/local/@connections/8807fd50, headers={'User-Agent': b'aws-cli/2.17.25 md/awscrt#0.21.2 ua/2.0 os/macos#24.1.0 md/arch#x86_64 lang/python#3.11.9 md/pyimpl#CPython cfg/retry-mode#standard md/installer#exe md/prompt#off md/command#apigatewaymanagementapi.get-connection', 'X-Amz-Date': b'20250302T010631Z', 'Authorization': b'AWS4-HMAC-SHA256 Credential=test/20250302/eu-west-1/execute-api/aws4_request, SignedHeaders=host;x-amz-date, Signature=a9c5238786639dbc9a5113fef1f7e0315d98b522f8eea57d7948baf66b4b0fde'}> 2025-03-02 01:06:31,251 - MainThread - urllib3.connectionpool - DEBUG - http://localhost:4566 "GET /_aws/execute-api/0ea7ff6b/local/@connections/8807fd50 HTTP/1.1" 500 2453 2025-03-02 01:06:31,252 - 
MainThread - botocore.parsers - DEBUG - Response headers: {'Server': 'TwistedWeb/24.3.0', 'Date': 'Sun, 02 Mar 2025 01:06:31 GMT', 'Content-Type': 'application/json', 'X-Amzn-Errortype': 'InternalError', 'Content-Length': '2453', 'x-amzn-requestid': 'f6da2c9c-2f79-4d95-9aa9-15481c0315bb', 'x-amz-request-id': 'f6da2c9c-2f79-4d95-9aa9-15481c0315bb'} 2025-03-02 01:06:31,252 - MainThread - botocore.parsers - DEBUG - Response body: b'{"__type": "InternalError", "message": "exception while calling iot with unknown operation: Traceback (most recent call last):\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py\\", line 166, in handle\\n handler(self, self.context, response)\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/handlers.py\\", line 27, in __call__\\n router_response = self.router.dispatch(context.request)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/router.py\\", line 326, in dispatch\\n return self.dispatcher(request, handler, args)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/handler.py\\", line 64, in __call__\\n result = self.invoke_endpoint(request, endpoint, request_args)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/pydantic.py\\", line 82, in invoke_endpoint\\n return super().invoke_endpoint(request, endpoint, request_args)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/handler.py\\", line 73, in invoke_endpoint\\n return endpoint(request, **request_args)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\\"/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/services/apigatewayv2/next_gen/execute_api/websockets/apigatewaymanagementapi/core.py.enc\\", line 22, in call_apigwmgmtapi\\n try:H=G(A,connection_id,**F)\\n ^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/services/apigatewayv2/next_gen/execute_api/websockets_legacy/management_api_routes.py.enc\\", line 19, in get_connection\\n D={\'SourceIp\':A[_B].remote_address[0],\'UserAgent\':A[_B].request_headers[\'User-Agent\']};return GetConnectionResponse(ConnectedAt=A[\'connected_at\'],Identity=D,LastActiveAt=A[\'last_active_at\'])\\n ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/websockets/datastructures.py\\", line 107, in __getitem__\\n value = self._dict[key.lower()]\\n ~~~~~~~~~~^^^^^^^^^^^^^\\nKeyError: \'user-agent\'\\n"}' 2025-03-02 01:06:31,252 - MainThread - botocore.hooks - DEBUG - Event needs-retry.apigatewaymanagementapi.GetConnection: calling handler <bound method RetryHandler.needs_retry of <botocore.retries.standard.RetryHandler object at 0x11840f410>> 2025-03-02 01:06:31,252 - MainThread - botocore.retries.standard - DEBUG - Retry needed, retrying request after delay of: 1.7712764872614646 2025-03-02 01:06:31,252 - MainThread - botocore.endpoint - DEBUG - Response received to retry, sleeping for 1.7712764872614646 seconds 2025-03-02 01:06:33,028 - MainThread - botocore.hooks - DEBUG - Event request-created.apigatewaymanagementapi.GetConnection: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x1183caf50>> 2025-03-02 01:06:33,029 - MainThread - botocore.hooks - DEBUG - Event choose-signer.apigatewaymanagementapi.GetConnection: calling handler <function set_operation_specific_signer at 0x10eefa160> 2025-03-02 01:06:33,030 - MainThread - botocore.auth - DEBUG - Calculating signature using v4 auth. 
2025-03-02 01:06:33,030 - MainThread - botocore.auth - DEBUG - CanonicalRequest: GET /_aws/execute-api/0ea7ff6b/local/%40connections/8807fd50 host:localhost:4566 x-amz-date:20250302T010633Z host;x-amz-date e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2025-03-02 01:06:33,030 - MainThread - botocore.auth - DEBUG - StringToSign: AWS4-HMAC-SHA256 20250302T010633Z 20250302/eu-west-1/execute-api/aws4_request 2e0eeae4c9f3f7e36dd448d74ef2d63713fd286a3eee49dd142efa88cb1351c0 2025-03-02 01:06:33,030 - MainThread - botocore.auth - DEBUG - Signature: 8a584434b234270fa0478f3b67a6db66be513eb07ea3e9abe40d5d5d537eef72 2025-03-02 01:06:33,030 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=GET, url=http://localhost:4566/_aws/execute-api/0ea7ff6b/local/@connections/8807fd50, headers={'User-Agent': b'aws-cli/2.17.25 md/awscrt#0.21.2 ua/2.0 os/macos#24.1.0 md/arch#x86_64 lang/python#3.11.9 md/pyimpl#CPython cfg/retry-mode#standard md/installer#exe md/prompt#off md/command#apigatewaymanagementapi.get-connection', 'X-Amz-Date': b'20250302T010633Z', 'Authorization': b'AWS4-HMAC-SHA256 Credential=test/20250302/eu-west-1/execute-api/aws4_request, SignedHeaders=host;x-amz-date, Signature=8a584434b234270fa0478f3b67a6db66be513eb07ea3e9abe40d5d5d537eef72'}> 2025-03-02 01:06:33,034 - MainThread - urllib3.connectionpool - DEBUG - http://localhost:4566 "GET /_aws/execute-api/0ea7ff6b/local/@connections/8807fd50 HTTP/1.1" 500 2453 2025-03-02 01:06:33,035 - MainThread - botocore.parsers - DEBUG - Response headers: {'Server': 'TwistedWeb/24.3.0', 'Date': 'Sun, 02 Mar 2025 01:06:33 GMT', 'Content-Type': 'application/json', 'X-Amzn-Errortype': 'InternalError', 'Content-Length': '2453', 'x-amzn-requestid': '85bfadc6-e106-4ce0-a5da-160ed72ac253', 'x-amz-request-id': '85bfadc6-e106-4ce0-a5da-160ed72ac253'} 2025-03-02 01:06:33,035 - MainThread - botocore.parsers - DEBUG - Response body: b'{"__type": "InternalError", 
"message": "exception while calling iot with unknown operation: Traceback (most recent call last):\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py\\", line 166, in handle\\n handler(self, self.context, response)\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/handlers.py\\", line 27, in __call__\\n router_response = self.router.dispatch(context.request)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/router.py\\", line 326, in dispatch\\n return self.dispatcher(request, handler, args)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/handler.py\\", line 64, in __call__\\n result = self.invoke_endpoint(request, endpoint, request_args)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/pydantic.py\\", line 82, in invoke_endpoint\\n return super().invoke_endpoint(request, endpoint, request_args)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/handler.py\\", line 73, in invoke_endpoint\\n return endpoint(request, **request_args)\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/services/apigatewayv2/next_gen/execute_api/websockets/apigatewaymanagementapi/core.py.enc\\", line 22, in call_apigwmgmtapi\\n try:H=G(A,connection_id,**F)\\n ^^^^^^^^^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/services/apigatewayv2/next_gen/execute_api/websockets_legacy/management_api_routes.py.enc\\", line 19, in get_connection\\n D={\'SourceIp\':A[_B].remote_address[0],\'UserAgent\':A[_B].request_headers[\'User-Agent\']};return 
GetConnectionResponse(ConnectedAt=A[\'connected_at\'],Identity=D,LastActiveAt=A[\'last_active_at\'])\\n ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^\\n File \\"/opt/code/localstack/.venv/lib/python3.11/site-packages/websockets/datastructures.py\\", line 107, in __getitem__\\n value = self._dict[key.lower()]\\n ~~~~~~~~~~^^^^^^^^^^^^^\\nKeyError: \'user-agent\'\\n"}' 2025-03-02 01:06:33,036 - MainThread - botocore.hooks - DEBUG - Event needs-retry.apigatewaymanagementapi.GetConnection: calling handler <bound method RetryHandler.needs_retry of <botocore.retries.standard.RetryHandler object at 0x11840f410>> 2025-03-02 01:06:33,036 - MainThread - botocore.retries.standard - DEBUG - Max attempts of 3 reached. 2025-03-02 01:06:33,036 - MainThread - botocore.retries.standard - DEBUG - Not retrying request. 2025-03-02 01:06:33,036 - MainThread - botocore.hooks - DEBUG - Event after-call.apigatewaymanagementapi.GetConnection: calling handler <bound method RetryQuotaChecker.release_retry_quota of <botocore.retries.standard.RetryQuotaChecker object at 0x11840edd0>> 2025-03-02 01:06:33,037 - MainThread - awscli.clidriver - DEBUG - Exception caught in main() Traceback (most recent call last): File "awscli/clidriver.py", line 499, in main File "awscli/clidriver.py", line 634, in __call__ File "awscli/clidriver.py", line 837, in __call__ File "awscli/clidriver.py", line 963, in invoke File "awscli/clidriver.py", line 975, in _make_client_call File "awscli/botocore/client.py", line 360, in _api_call File "awscli/botocore/client.py", line 739, in _make_api_call botocore.exceptions.ClientError: An error occurred (InternalError) when calling the GetConnection operation (reached max retries: 2): exception while calling iot with unknown operation: Traceback (most recent call last): File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/chain.py", line 166, in handle handler(self, self.context, response) File 
"/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/gateway/handlers.py", line 27, in __call__ router_response = self.router.dispatch(context.request) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/router.py", line 326, in dispatch return self.dispatcher(request, handler, args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/handler.py", line 64, in __call__ result = self.invoke_endpoint(request, endpoint, request_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/pydantic.py", line 82, in invoke_endpoint return super().invoke_endpoint(request, endpoint, request_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/code/localstack/.venv/lib/python3.11/site-packages/rolo/routing/handler.py", line 73, in invoke_endpoint return endpoint(request, **request_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/services/apigatewayv2/next_gen/execute_api/websockets/apigatewaymanagementapi/core.py.enc", line 22, in call_apigwmgmtapi try:H=G(A,connection_id,**F) ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/services/apigatewayv2/next_gen/execute_api/websockets_legacy/management_api_routes.py.enc", line 19, in get_connection D={'SourceIp':A[_B].remote_address[0],'UserAgent':A[_B].request_headers['User-Agent']};return GetConnectionResponse(ConnectedAt=A['connected_at'],Identity=D,LastActiveAt=A['last_active_at']) ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^ File "/opt/code/localstack/.venv/lib/python3.11/site-packages/websockets/datastructures.py", line 107, in __getitem__ value = self._dict[key.lower()] ~~~~~~~~~~^^^^^^^^^^^^^ KeyError: 'user-agent' ``` ### Expected Behavior Would expect the cli command to return the 
connection details ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker compose with apigateway,apigatewayv2 and apigatewaymanagementapi enabled #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) ``` aws apigatewaymanagementapi get-connection \ --connection-id 8807fd50 \ --debug --endpoint-url http://localhost:4566/_aws/execute-api/0ea7ff6b/local ``` ### Environment ```markdown - OS: MacOS Sequoia 15.1.1 (but running in Docker) - LocalStack: LocalStack version: latest pro 4.2.1.dev8 LocalStack Docker image sha: sha256:ea9ce6df7278e06988dbafc768fa410c37230fb24aeee2f44c42f2bf22f1b201 LocalStack build date: 2025-02-28 LocalStack build git hash: 9ae2f1f7d ``` ### Anything else? `aws apigatewaymanagementapi post-to-connection` works ok to same connection and api
closed
2025-03-02T01:30:14Z
2025-03-07T17:30:39Z
https://github.com/localstack/localstack/issues/12321
[ "type: bug", "status: resolved/fixed", "aws:apigatewayv2" ]
slegs
5
PaddlePaddle/PaddleHub
nlp
2,084
RuntimeError when using PaddleHub
- Version and environment info: 1) PaddleHub and PaddlePaddle versions: paddlehub==1.8.3, paddlepaddle==2.3.2 2) System environment: Windows 7 - Reproduction info (environment and steps to reproduce the error): ``` import paddlehub as hub def genImage(): module = hub.Module(name="ernie_vilg") results = module.generate_image(text_prompts=["王者之心"]) def textAna(): lac = hub.Module(name="lac") test_text = ["今天是个好天气。"] results = lac.cut( text=test_text, use_gpu=False, batch_size=1, return_tag=True) print(results) #{'word': ['今天', '是', '个', '好天气', '。'], 'tag': ['TIME', 'v', 'q', 'n', 'w']} genImage() #textAna() ``` Calling either genImage() or textAna() raises a RuntimeError: ``` [2022-10-24 09:40:04,237] [ INFO] - Installing ernie_vilg module Traceback (most recent call last): File "e:/MyProjects/py-demo/aiApp/ai02.py", line 26, in <module> genImage() File "e:/MyProjects/py-demo/aiApp/ai02.py", line 12, in genImage module = hub.Module(name="ernie_vilg") File "E:\Python37\lib\site-packages\paddlehub\module\module.py", line 102, in __new__ name=name, version=version, **kwargs) File "E:\Python37\lib\site-packages\paddlehub\module\module.py", line 171, in init_with_name module_name=name, module_version=version, extra=extra) File "E:\Python37\lib\site-packages\paddlehub\module\manager.py", line 127, in install_module self.all_modules(update=True) File "E:\Python37\lib\site-packages\paddlehub\module\manager.py", line 105, in all_modules valid, info = self.check_module_valid(sub_dir_path) File "E:\Python37\lib\site-packages\paddlehub\module\manager.py", line 72, in check_module_valid "{}.module".format(basename)) File "E:\Python37\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "C:\Users\liaoxuewei\.paddlehub\modules\ernie_vilg\module.py", line 29, in <module> author_email="paddle-dev@baidu.com") File "E:\Python37\lib\site-packages\paddlehub\module\module.py", line 80, in _wrapper raise RuntimeError RuntimeError ```
closed
2022-10-24T01:43:57Z
2022-10-24T02:23:36Z
https://github.com/PaddlePaddle/PaddleHub/issues/2084
[]
liaoxuewei
4
keras-team/keras
tensorflow
20,382
Intermittent (low-frequency) failure using the TensorFlow GPU backend
The following code ```python import keras print(keras.__version__) import tensorflow as tf print(tf.__version__) from keras.src import layers from keras.src import models import numpy as np from keras.src.metrics import iou_metrics as metrics m_obj = metrics.MeanIoU(num_classes=2, ignore_class=0) model = models.Sequential([layers.Dense(2, activation="softmax"), ]) model.compile(optimizer="rmsprop", loss="mse", metrics=[m_obj], jit_compile=False) for i in range(20000): print(i) model.fit(np.array([[1.0, 1.0]]), np.array([[1.0, 0.0]])) ``` Eventually ends up with: ``` 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 62ms/step - loss: 0.2777 - mean_io_u: 0.0000e+00 237 1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 56ms/step - loss: 0.2761 - mean_io_u: 0.0000e+00 238 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [<ipython-input-2-9e7607e81947>](https://localhost:8080/#) in <cell line: 15>() 15 for i in range(20000): 16 print(i) ---> 17 model.fit(np.array([[1.0, 1.0]]), np.array([[1.0, 0.0]])) 1 frames [/usr/local/lib/python3.10/dist-packages/keras/src/ops/operation_utils.py](https://localhost:8080/#) in compute_take_along_axis_output_shape(input_shape, indices_shape, axis) 359 360 if len(input_shape) != len(indices_shape): --> 361 raise ValueError( 362 "`x` and `indices` must have the same number of dimensions, " 363 f"but receive shape {input_shape} and {indices_shape}." ValueError: `x` and `indices` must have the same number of dimensions, but receive shape [2, 2] and [2]. ```
closed
2024-10-18T19:26:21Z
2024-10-19T21:06:33Z
https://github.com/keras-team/keras/issues/20382
[]
nicolaspi
1
joke2k/django-environ
django
369
UnicodeDecodeError: 'ascii' codec can't decode
I get an error when I run the server with WSGI. I am using Python 3.6 and Django 3.2. I tried both django-environ==0.8.1 and django-environ==0.7.0. ``` App 10334 output: from .base import * App 10334 output: File "myproject/config/settings/base.py", line 11, in <module> App 10334 output: environ.Env.read_env(BASE_DIR / '.env') App 10334 output: File ".../env/lib/python3.6/site-packages/environ/environ.py", line 801, in read_env App 10334 output: content = f.read() App 10334 output: File "/usr/lib/python3.6/encodings/ascii.py", line 26, in decode App 10334 output: return codecs.ascii_decode(input, self.errors)[0] App 10334 output: UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 379: ordinal not in range(128)
open
2022-03-02T20:57:02Z
2022-06-16T15:11:20Z
https://github.com/joke2k/django-environ/issues/369
[ "bug" ]
kkutayozgun
4
thp/urlwatch
automation
406
Some help for a noob please
If this is the wrong place, please let me know and I'll post somewhere else. I'm currently migrating my systems from Windows to Ubuntu. So far so good, with one exception. I use a Windows tool called Website Watcher that monitors website changes via a GUI and some nice UI-friendly tools. I've moved most of the URLs that I want to check over to urlwatch. However, I would like a little help if possible. 1) In WSW I could specify an item of text on a particular page, and WSW would ignore any changes BEFORE that text was found on the page. This is useful for ignoring dynamic menus etc. I could not find anything similar so far in urlwatch. 2) Similar to the above, WSW lets me specify an item of text, and then any changes AFTER that text would be excluded. Again useful for dynamic footers/dates/times/years at the end of pages etc. 3) Sometimes the underlying CSS/HTML changes on a page. Is there a way so that these changes do not get flagged as changes? I'm typically only interested in the visible readable text on the page, e.g. lots of Google-based sites use generated class names/ids. These normally change, but the readable text on the page stays the same. So far urlwatch is flagging these CSS/HTML changes. Many thanks in advance for any help/links/examples.
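To make requests 1) and 2) concrete, here is a minimal plain-Python sketch (hypothetical, not urlwatch configuration or urlwatch code) of the kind of marker-based trimming being asked for: keep only the text from a start marker onward, and drop everything after an end marker, before diffing:

```python
def between(text, start=None, end=None):
    """Trim text to the region delimited by start/end markers.

    Changes before `start` and after `end` are thereby ignored.
    Markers that are absent from the text are skipped.
    """
    if start is not None and start in text:
        # Drop everything before the start marker (dynamic menus etc.).
        text = text[text.index(start):]
    if end is not None and end in text:
        # Drop everything after the end marker (dynamic footers etc.).
        text = text[:text.index(end) + len(end)]
    return text
```

Something with this shape could be run over a page's extracted text before handing it to the diff step.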
closed
2019-05-29T14:54:44Z
2020-07-10T12:50:21Z
https://github.com/thp/urlwatch/issues/406
[]
darkufo
1
ivy-llc/ivy
numpy
28,523
Fix Frontend Failing Test: tensorflow - tensor.paddle.Tensor.mean
To-do List: https://github.com/unifyai/ivy/issues/27499
open
2024-03-09T20:50:30Z
2024-03-09T20:50:30Z
https://github.com/ivy-llc/ivy/issues/28523
[ "Sub Task" ]
ZJay07
0
mirumee/ariadne
api
986
Create GraphQL Chat app with subscriptions example
People ask us once in a while for an example of a subscriptions app. A while ago I implemented a simple GraphQL chat using encode/broadcaster for messaging. I should clean up my code and put it on GitHub for people to see. <img width="1168" alt="Zrzut ekranu 2022-12-5 o 18 21 28" src="https://user-images.githubusercontent.com/750553/205704022-7b815e5b-35b8-4e33-9c85-16dc24fa1076.png">
closed
2022-12-05T17:34:01Z
2022-12-13T16:15:56Z
https://github.com/mirumee/ariadne/issues/986
[ "meta" ]
rafalp
2
google/seq2seq
tensorflow
311
Optimizing time complexity of LCS calculation in ROUGE calculation using Suffix Trees
In seq2seq/seq2seq/metrics/rouge.py, the longest common subsequence (LCS) computation used to calculate the ROUGE-L score is implemented with dynamic programming, which has a time complexity of O(n*m), where n and m are the lengths of the two sequences being compared. Using suffix trees, the time complexity could be dropped to O(n+m), bringing it from quadratic down to linear.
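For reference, here is a minimal sketch (hypothetical, not the repo's actual code) of the O(n*m) dynamic-programming LCS length that ROUGE-L is based on, which is the step the suffix-tree approach would replace:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of a and b.

    Classic O(len(a) * len(b)) dynamic program; only one DP row is
    kept in memory at a time.
    """
    prev = [0] * (len(b) + 1)
    for x in a:
        curr = [0]
        for j, y in enumerate(b, start=1):
            if x == y:
                # Extend the LCS of the two prefixes by this match.
                curr.append(prev[j - 1] + 1)
            else:
                # Best of skipping a character from either sequence.
                curr.append(max(prev[j], curr[-1]))
        prev = curr
    return prev[-1]
```

Any O(n+m) replacement would need to produce the same values as this quadratic baseline.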
open
2018-01-19T08:25:51Z
2018-01-19T08:25:51Z
https://github.com/google/seq2seq/issues/311
[]
Anirudhsekar96
0
geex-arts/django-jet
django
251
After deploying the website through Heroku, Chrome reports a mixed-content error
After deploying the website through Heroku, Chrome reports a mixed-content error, and a 404 for a missing JS file also occurs. I don't understand why, because everything else in my static files goes through except this particular folder, "jet/js/build". Basically, all the forms in Jet stop working after deployment, but on the local server they work fine.
open
2017-09-13T21:09:52Z
2017-09-15T11:16:26Z
https://github.com/geex-arts/django-jet/issues/251
[]
fauziame
1
Sanster/IOPaint
pytorch
170
Adding "--precision full" and "--no-half" args to sd
When I try inpainting with SD 1.5 or SD 2, the masked area is replaced by black. I used SD a few months back and it had the same problem, but adding "--precision full --no-half" solved my issue. The problem is that I've searched and I couldn't find where to pass the args.
closed
2022-12-30T07:09:21Z
2023-01-04T18:50:46Z
https://github.com/Sanster/IOPaint/issues/170
[]
yattsu
1
pydata/pandas-datareader
pandas
26
Add pandas-datareader to PyPI
closed
2015-03-26T03:32:32Z
2015-03-26T03:55:38Z
https://github.com/pydata/pandas-datareader/issues/26
[]
davidastephens
1
danimtb/dasshio
dash
111
Where to put the config file?
Hi, I installed it easily in Hass.io, but I'm quite confused about where to put /data/options.json, as there is no data directory in any of the Hass.io directories visible through the Samba share :( Thanks for any clarification ;) Vincèn
closed
2022-10-18T14:09:10Z
2023-06-12T07:39:48Z
https://github.com/danimtb/dasshio/issues/111
[]
vincegre
2
FlareSolverr/FlareSolverr
api
834
[yggtorrent] Error connecting to FlareSolverr server: System.Net.Http.HttpRequestException: Connection refused (
### Have you checked our README? - [X] I have checked the README ### Have you followed our Troubleshooting? - [X] I have followed your Troubleshooting ### Is there already an issue for your problem? - [X] I have checked older issues, open and closed ### Have you checked the discussions? - [X] I have read the Discussions ### Environment ```markdown - FlareSolverr version: 2.2.1.0 - Last working FlareSolverr version:2.2.1.0 - Operating system:dsm 7 - Are you using Docker: [yes/no] yes - FlareSolverr User-Agent (see log traces or / endpoint): - Are you using a VPN: [yes/no] yes - Are you using a Proxy: [yes/no] no - Are you using Captcha Solver: [yes/no] yes - If using captcha solver, which one: harvester - URL to test this issue: ``` ### Description Connection refused Delete indexer Unable to add and connect indexer ### Logged Error Messages ```text FlareSolverrSharp.Exceptions.FlareSolverrException: Error connecting to FlareSolverr server: System.Net.Http.HttpRequestException: Connection refused (192.168.1.44:8191) ---> System.Net.Sockets.SocketException (111): Connection refused at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken) at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource.GetResult(Int16 token) at System.Net.Sockets.Socket.<ConnectAsync>g__WaitForConnectWithCancellation|277_0(AwaitableSocketAsyncEventArgs saea, ValueTask connectTask, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken) --- End of inner exception stack trace --- at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.ConnectAsync(HttpRequestMessage 
request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.CreateHttp11ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.AddHttp11ConnectionAsync(HttpRequestMessage request) at System.Threading.Tasks.TaskCompletionSourceWithCancellation`1.WaitWithCancellationAsync(CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.GetHttp11ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken) at System.Net.Http.DiagnosticsHandler.SendAsyncCore(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken) at FlareSolverrSharp.Solvers.FlareSolverr.<>c__DisplayClass10_0.<<SendFlareSolverrRequest>b__0>d.MoveNext() Exception in GetConfigurationForSetup (yggtorrent): FlareSolverrSharp.Exceptions.FlareSolverrException: Error connecting to FlareSolverr server: System.Net.Http.HttpRequestException: Connection refused (192.168.1.44:8191) ---> System.Net.Sockets.SocketException (111): Connection refused at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken) at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource.GetResult(Int16 token) at 
System.Net.Sockets.Socket.<ConnectAsync>g__WaitForConnectWithCancellation|277_0(AwaitableSocketAsyncEventArgs saea, ValueTask connectTask, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken) --- End of inner exception stack trace --- at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.ConnectAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.CreateHttp11ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.AddHttp11ConnectionAsync(HttpRequestMessage request) at System.Threading.Tasks.TaskCompletionSourceWithCancellation`1.WaitWithCancellationAsync(CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.GetHttp11ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken) at System.Net.Http.DiagnosticsHandler.SendAsyncCore(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken) at 
FlareSolverrSharp.Solvers.FlareSolverr.<>c__DisplayClass10_0.<<SendFlareSolverrRequest>b__0>d.MoveNext() at FlareSolverrSharp.Solvers.FlareSolverr.<>c__DisplayClass10_0.<<SendFlareSolverrRequest>b__0>d.MoveNext() --- End of stack trace from previous location --- at FlareSolverrSharp.Utilities.SemaphoreLocker.LockAsync[T](Func`1 worker) at FlareSolverrSharp.Solvers.FlareSolverr.SendFlareSolverrRequest(HttpContent flareSolverrRequest) at FlareSolverrSharp.Solvers.FlareSolverr.Solve(HttpRequestMessage request, String sessionId) at FlareSolverrSharp.ClearanceHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken) at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken) at Jackett.Common.Utils.Clients.HttpWebClient2.Run(WebRequest webRequest) in ./Jackett.Common/Utils/Clients/HttpWebClient2.cs:line 178 at Jackett.Common.Utils.Clients.WebClient.GetResultAsync(WebRequest request) in ./Jackett.Common/Utils/Clients/WebClient.cs:line 186 at Jackett.Common.Indexers.BaseWebIndexer.RequestWithCookiesAsync(String url, String cookieOverride, RequestType method, String referer, IEnumerable`1 data, Dictionary`2 headers, String rawbody, Nullable`1 emulateBrowser) in ./Jackett.Common/Indexers/BaseIndexer.cs:line 531 at Jackett.Common.Indexers.CardigannIndexer.GetConfigurationForSetup(Boolean automaticlogin) in ./Jackett.Common/Indexers/CardigannIndexer.cs:line 966 at Jackett.Common.Indexers.CardigannIndexer.GetConfigurationForSetup() in ./Jackett.Common/Indexers/CardigannIndexer.cs:line 943 FlareSolverrSharp.Exceptions.FlareSolverrException: Error connecting to FlareSolverr server: System.Net.Http.HttpRequestException: Connection refused (192.168.1.44:8191) ---> System.Net.Sockets.SocketException (111): Connection refused at 
System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken) at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource.GetResult(Int16 token) at System.Net.Sockets.Socket.<ConnectAsync>g__WaitForConnectWithCancellation|277_0(AwaitableSocketAsyncEventArgs saea, ValueTask connectTask, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken) --- End of inner exception stack trace --- at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.ConnectAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.CreateHttp11ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.AddHttp11ConnectionAsync(HttpRequestMessage request) at System.Threading.Tasks.TaskCompletionSourceWithCancellation`1.WaitWithCancellationAsync(CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.GetHttp11ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken) at System.Net.Http.DiagnosticsHandler.SendAsyncCore(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, 
HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken) at FlareSolverrSharp.Solvers.FlareSolverr.<>c__DisplayClass10_0.<<SendFlareSolverrRequest>b__0>d.MoveNext() ``` ### Screenshots _No response_
closed
2023-07-30T18:41:27Z
2023-07-31T10:14:53Z
https://github.com/FlareSolverr/FlareSolverr/issues/834
[ "duplicate" ]
DaGreenX
6
ScrapeGraphAI/Scrapegraph-ai
machine-learning
733
SearchGraph Invalid json output
**Describe the bug** when I use searchGraph, I meet the following error. ```bash Traceback (most recent call last): File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 83, in parse_result return parse_json_markdown(text) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/langchain_core/utils/json.py", line 144, in parse_json_markdown return _parse_json(json_str, parser=parser) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/langchain_core/utils/json.py", line 160, in _parse_json return parser(json_str) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/langchain_core/utils/json.py", line 118, in parse_partial_json return json.loads(s, strict=strict) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/json/__init__.py", line 359, in loads return cls(**kw).decode(s) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/json/decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 16 column 5 (char 504) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/miyamo/pdfscrawl.py", line 90, in <module> result = search_graph.run() File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/scrapegraphai/graphs/search_graph.py", line 120, in run self.final_state, self.execution_info = self.graph.execute(inputs) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/scrapegraphai/graphs/base_graph.py", line 259, in execute return self._execute_standard(initial_state) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/scrapegraphai/graphs/base_graph.py", line 180, in _execute_standard raise e File 
"/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/scrapegraphai/graphs/base_graph.py", line 164, in _execute_standard result = current_node.execute(state) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/scrapegraphai/nodes/graph_iterator_node.py", line 73, in execute state = asyncio.run(self._async_execute(state, batchsize)) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete return future.result() File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/scrapegraphai/nodes/graph_iterator_node.py", line 136, in _async_execute answers = await tqdm.gather( File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/tqdm/asyncio.py", line 79, in gather res = [await f for f in cls.as_completed(ifs, loop=loop, timeout=timeout, File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/tqdm/asyncio.py", line 79, in <listcomp> res = [await f for f in cls.as_completed(ifs, loop=loop, timeout=timeout, File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/asyncio/tasks.py", line 571, in _wait_for_one return f.result() # May raise f.exception(). 
File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/tqdm/asyncio.py", line 76, in wrap_awaitable return i, await f File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/scrapegraphai/nodes/graph_iterator_node.py", line 126, in _async_run return await asyncio.to_thread(graph.run) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/asyncio/threads.py", line 25, in to_thread return await loop.run_in_executor(None, func_call) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/scrapegraphai/graphs/smart_scraper_graph.py", line 115, in run self.final_state, self.execution_info = self.graph.execute(inputs) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/scrapegraphai/graphs/base_graph.py", line 259, in execute return self._execute_standard(initial_state) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/scrapegraphai/graphs/base_graph.py", line 180, in _execute_standard raise e File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/scrapegraphai/graphs/base_graph.py", line 164, in _execute_standard result = current_node.execute(state) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/scrapegraphai/nodes/generate_answer_node.py", line 111, in execute batch_results = async_runner.invoke({"question": user_prompt}) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3723, in invoke output = {key: future.result() for key, future in zip(steps, futures)} File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3723, in <dictcomp> output = {key: future.result() for key, future in zip(steps, futures)} File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/concurrent/futures/_base.py", line 451, 
in result return self.__get_result() File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3707, in _invoke_step return context.run( File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3022, in invoke input = context.run(step.invoke, input, config) File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/langchain_core/output_parsers/base.py", line 192, in invoke return self._call_with_config( File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1926, in _call_with_config context.run( File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 394, in call_func_with_variable_args return func(input, **kwargs) # type: ignore[call-arg] File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/langchain_core/output_parsers/base.py", line 193, in <lambda> lambda inner_input: self.parse_result( File "/Users/miyamo/miniconda3/envs/mofa/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 86, in parse_result raise OutputParserException(msg, llm_output=text) from e langchain_core.exceptions.OutputParserException: Invalid json output: ```json { "answer": null, "question": { "title": "Finding the first positive number in the array", "url": "/questions/74095793/finding-the-first-positive-number-in-the-array" }, "relatedQuestions": [ { "title": "Question score (upvotes - downvotes)", "url": "/questions/74095793/finding-the-first-positive-number-in-the-array" }, { "title": "Question score (upvotes - downvotes)", "url": 
"/questions/21225399/first-non-negative-element-in-an-array" }, // ... Other related questions... ] } ``` **To Reproduce** source code ```python """ Example of Search Graph """ import os from dotenv import load_dotenv from scrapegraphai.graphs import SearchGraph from scrapegraphai.utils import convert_to_csv, convert_to_json, prettify_exec_info # ************************************************ # Define the configuration for the graph # ************************************************ load_dotenv() groq_key = os.getenv("GROQ_APIKEY") graph_config = { "llm": { "model": "groq/gemma-7b-it", "api_key": "", "temperature": 0 }, "headless": False } # ************************************************ # Create the SearchGraph instance and run it # ************************************************ search_graph = SearchGraph( prompt="give me the first positive number", config=graph_config ) result = search_graph.run() print(result) ``` **Desktop :** - OS: MacOS
closed
2024-10-09T03:22:32Z
2024-10-10T08:33:55Z
https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/733
[]
Gege-Wang
3
apify/crawlee-python
automation
427
Add support for `preNavigationHooks` in Playwright
As the title says: does crawlee-python support the `preNavigationHooks` function?
closed
2024-08-13T11:42:22Z
2024-10-31T08:39:42Z
https://github.com/apify/crawlee-python/issues/427
[ "enhancement", "t-tooling", "hacktoberfest" ]
viewv
8
3b1b/manim
python
1,463
Chained Animations not working
I was using the CoordinateSystemExample class from the examples page. The last animations (rescaling and movement) were not running in parallel. Only the first animation was rendered. I'm running manim on arch linux in python 3.9.
closed
2021-04-07T21:43:58Z
2021-04-08T21:21:41Z
https://github.com/3b1b/manim/issues/1463
[ "bug" ]
mortimervonchappuis
1
modin-project/modin
pandas
6,710
Don't materialize index in `_groupby_shuffle` internal function
closed
2023-11-05T16:32:09Z
2023-11-07T09:47:05Z
https://github.com/modin-project/modin/issues/6710
[ "Performance 🚀" ]
anmyachev
0
deeppavlov/DeepPavlov
nlp
789
Answers broken
![8B8F17A2-8380-4321-953A-072248E91FA9](https://user-images.githubusercontent.com/8425396/55565971-7cb0f100-5703-11e9-9533-626ca030b8b0.png)
closed
2019-04-04T15:00:44Z
2019-09-24T19:22:44Z
https://github.com/deeppavlov/DeepPavlov/issues/789
[]
alexriabtsev
2
Morizeyao/GPT2-Chinese
nlp
195
The same code can train on a Windows laptop, but shows "Killed" on a Linux desktop
The code is exactly the same. After copying it to Google Colab and running it for a while, it stopped printing the loss, then a ^C appeared and it ended. So I copied the code to a Linux desktop to run it; I could see two lines of step/loss output, and then it just printed "Killed". What could be the cause? ![tttttttttttttttt](https://user-images.githubusercontent.com/875627/109435766-006cd880-7a14-11eb-94cd-bde316e81f41.PNG)
closed
2021-02-28T22:26:39Z
2021-06-08T05:21:36Z
https://github.com/Morizeyao/GPT2-Chinese/issues/195
[]
libralibra
1
automagica/automagica
automation
120
read_cell_formula() is not working
When I try the read_cell_formula() example below from the Automagica Documentation, Release 2 (Apr 02, 2020): >excel = Excel() > excel.write_cell_formula(1, 1, '=1+1') > excel.read_cell_formula(1, 1) '=1+1' the error message is: File "C:\automagica\utilities.py", line 17, in wrapper return func(*args, **kwargs) TypeError: read_cell_formula() missing 1 required positional argument: 'formula'
closed
2020-04-13T03:44:24Z
2020-09-07T21:37:29Z
https://github.com/automagica/automagica/issues/120
[]
taesikkim
1
mljar/mljar-supervised
scikit-learn
124
Issues during installation and with module shap
I tried to install but already got an error during the installation process: ![installation](https://user-images.githubusercontent.com/55921277/87794344-c2d18b80-c846-11ea-8efc-dc1758f3df42.jpg) I didn't see that Visual Studio is a requirement for "Mljar-Supervised". After having downloaded Visual Studio, I tried to use your example "binary_classifier.py", but it says that the "module shap is missing" even though it is there in the correct folder: ![shap_missing](https://user-images.githubusercontent.com/55921277/87794635-31aee480-c847-11ea-9bde-483aec344834.jpg) And when I have a look into "shap.py" I see another "import shap" there (I don't know if this causes the error): ![line13](https://user-images.githubusercontent.com/55921277/87794296-b2211580-c846-11ea-8e88-ebb55f7c8bb7.jpg)
closed
2020-07-17T14:07:20Z
2020-10-26T17:42:28Z
https://github.com/mljar/mljar-supervised/issues/124
[ "installation" ]
AndreasTraut
2
SciTools/cartopy
matplotlib
2,251
Mask A Contour With Coastlines
Hello, I am in the process of switching from `matplotlib.basemap` to cartopy, and the standout feature that is missing is the `maskoceans` / `maskland` feature set. I see that there is `ax.add_feature(LAND/OCEAN...)`, however this completely ruins the nice basemap I have included. The data is already there, so how do we make a mask out of it? I understand that projection transforms are somewhat complicated, but it should be doable ultimately. Example of the map I am trying to make: ![ECONOMICS_MODEL LCOE](https://github.com/SciTools/cartopy/assets/10334493/749fc44e-985e-4331-9fdc-da595d888a78) There are several requests for similar features; currently the only way to do this would be to plot twice and use PIL to mask the image from the LAND data https://stackoverflow.com/questions/72078262/how-to-mask-data-that-appears-in-the-ocean-using-cartopy-and-matplotlib
open
2023-09-27T18:49:40Z
2023-10-02T21:40:15Z
https://github.com/SciTools/cartopy/issues/2251
[]
SoundsSerious
3
AutoGPTQ/AutoGPTQ
nlp
606
[BUG] TypeError: LlamaRotaryEmbedding.forward() got an unexpected keyword argument 'seq_len'
**Describe the bug**
Encounter this when running generation_speed.py. Input model is Llama-2-7b-chat-gptq.

Complete error log:

Traceback (most recent call last):
  File "AutoGPTQ/examples/benchmark/generation_speed.py", line 326, in <module>
    main()
  File "AutoGPTQ/examples/benchmark/generation_speed.py", line 316, in main
    benchmark_generation_speed(model, tokenizer, examples, generation_config)
  File "AutoGPTQ/examples/benchmark/generation_speed.py", line 197, in benchmark_generation_speed
    outputs_ids = model.generate(
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/auto_gptq/modeling/_base.py", line 532, in generate
    return self.model.generate(**kwargs)
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/transformers/generation/utils.py", line 1527, in generate
    result = self._greedy_search(
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/transformers/generation/utils.py", line 2411, in _greedy_search
    outputs = self(
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1196, in forward
    outputs = self.model(
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1016, in forward
    layer_outputs = decoder_layer(
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 739, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/auto_gptq/nn_modules/fused_llama_attn.py", line 76, in forward
    cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "venv/auto-gptq-cuda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
TypeError: LlamaRotaryEmbedding.forward() got an unexpected keyword argument 'seq_len'

**Hardware details**
Nvidia V100

**Software version**
CUDA: 12.3, Pytorch: 2.2.1, transformers: 4.39.1, accelerate: 0.28.0
open
2024-03-25T08:59:52Z
2024-03-29T20:40:34Z
https://github.com/AutoGPTQ/AutoGPTQ/issues/606
[ "bug" ]
timefliesfang
3
ray-project/ray
python
51,277
[core][gpu-objects] Overlap compute / communication of CUDA streams
### Description

as title

### Use case

_No response_
open
2025-03-11T22:44:30Z
2025-03-11T22:44:50Z
https://github.com/ray-project/ray/issues/51277
[ "enhancement", "P2", "core", "gpu-objects" ]
kevin85421
0
CatchTheTornado/text-extract-api
api
71
[feat] research minerU + pdf-extract-api
Let's check whether it would be useful for us to add an extraction integration with https://github.com/opendatalab/MinerU/tree/master/demo By the way, I really like their pipelines architecture, which makes multistage extraction and parsing more scalable.
open
2025-01-11T09:30:11Z
2025-01-19T16:55:22Z
https://github.com/CatchTheTornado/text-extract-api/issues/71
[]
pkarw
0
SALib/SALib
numpy
506
How to handle integer parameter?
What I want to do is building energy consumption analysis; some variables are integer types (such as weather file 1, weather file 2). How can I get integer parameters when sampling?
closed
2022-04-28T08:16:48Z
2022-04-29T10:05:52Z
https://github.com/SALib/SALib/issues/506
[]
FOANFAN
3
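A common workaround for integer factors in sensitivity analysis is to sample a continuous bound `[0, n_options)` and map each draw to an integer option index afterwards. Whether SALib offers native integer support depends on the version, so the sketch below is a generic stand-in with illustrative names, not a SALib API.

```python
import math

def to_integer_level(sample_value, n_options):
    """Map a continuous sample in [0, n_options) to an integer option index."""
    # min() guards the edge case where sample_value lands exactly on n_options.
    return min(int(math.floor(sample_value)), n_options - 1)

weather_files = ["weather_file_1", "weather_file_2", "weather_file_3"]
samples = [0.12, 1.97, 2.99, 3.0]  # pretend these came from a continuous sampler
chosen = [weather_files[to_integer_level(s, len(weather_files))] for s in samples]
print(chosen)
```

The mapped levels stay uniformly distributed as long as the continuous bound spans exactly the number of options.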
CorentinJ/Real-Time-Voice-Cloning
tensorflow
490
How do I train a model with my own data? Where can I find the instruction?
How do I train a model with my own data? Where can I find the instructions on how to do it? Need help
closed
2020-08-13T12:58:53Z
2020-08-25T23:29:24Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/490
[]
justinjohn0306
4
mars-project/mars
pandas
3,039
Support get_chunk_meta in RayExecutionContext
Currently `RayExecutionContext.get_chunk_meta` is not supported, which will make any operands relied on this API failed on tiling, such as when call `DataFrame.groupby`: ``` df = md.DataFrame(mt.random.rand(300, 4, chunk_size=100), columns=list("abcd")) df["a"], df["b"] = (df["a"] * 5).astype(int), (df["b"] * 2).astype(int) df.groupby(["a", "b"]).apply(lambda pdf: pdf.sum()).execute() ``` Will got following error: ``` ================================================================================== FAILURES ================================================================================== ________________________________________________________________________________ test_shuffle ________________________________________________________________________________ ray_start_regular_shared2 = RayContext(dashboard_url='127.0.0.1:8265', python_version='3.8.2', ray_version='1.12.0', ray_commit='f18fc31c756299095...127.0.0.1:55710', 'address': '127.0.0.1:55710', 'node_id': '38787319e06bc89f95d7600524069ed4dfba256068c917c261fe697f'}) create_cluster = (<mars.deploy.oscar.local.LocalClient object at 0x7fb22aaf38b0>, {}) @require_ray @pytest.mark.asyncio async def test_shuffle(ray_start_regular_shared2, create_cluster): df = md.DataFrame(mt.random.rand(300, 4, chunk_size=100), columns=list("abcd")) # `describe` contains multiple shuffle. 
df.describe().execute() arr = np.random.RandomState(0).rand(31, 27) t1 = mt.tensor(arr, chunk_size=10).reshape(27, 31) t1.op.extra_params["_reshape_with_shuffle"] = True np.testing.assert_almost_equal(arr.reshape(27, 31), t1.to_numpy()) np.testing.assert_equal(mt.bincount(mt.arange(5, 10)).to_numpy(), np.bincount(np.arange(5, 10))) # `RayExecutionContext.get_chunk_meta` not supported, skip dataframe.groupby df["a"], df["b"] = (df["a"] * 5).astype(int), (df["b"] * 2).astype(int) > df.groupby(["a", "b"]).apply(lambda pdf: pdf.sum()).execute() mars/deploy/oscar/tests/test_ray_dag.py:147: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ mars/core/entity/tileables.py:462: in execute result = self.data.execute(session=session, **kw) mars/core/entity/executable.py:144: in execute return execute(self, session=session, **kw) mars/deploy/oscar/session.py:1855: in execute return session.execute( mars/deploy/oscar/session.py:1649: in execute execution_info: ExecutionInfo = fut.result( ../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/_base.py:439: in result return self.__get_result() ../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/_base.py:388: in __get_result raise self._exception mars/deploy/oscar/session.py:1835: in _execute await execution_info mars/deploy/oscar/session.py:105: in wait return await self._aio_task mars/deploy/oscar/session.py:953: in _run_in_background raise task_result.error.with_traceback(task_result.traceback) mars/services/task/supervisor/processor.py:364: in run async for stage_args in self._iter_stage_chunk_graph(): mars/services/task/supervisor/processor.py:158: in _iter_stage_chunk_graph chunk_graph = await self._get_next_chunk_graph(chunk_graph_iter) mars/services/task/supervisor/processor.py:149: in _get_next_chunk_graph chunk_graph = await fut 
mars/lib/aio/_threads.py:36: in to_thread return await loop.run_in_executor(None, func_call) ../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/thread.py:57: in run result = self.fn(*self.args, **self.kwargs) mars/services/task/supervisor/processor.py:144: in next_chunk_graph return next(chunk_graph_iter) mars/services/task/supervisor/preprocessor.py:194: in tile for chunk_graph in chunk_graph_builder.build(): mars/core/graph/builder/chunk.py:440: in build yield from self._build() mars/core/graph/builder/chunk.py:434: in _build graph = next(tile_iterator) mars/services/task/supervisor/preprocessor.py:74: in _iter_without_check to_update_tileables = self._iter() mars/core/graph/builder/chunk.py:317: in _iter self._tile( mars/core/graph/builder/chunk.py:211: in _tile need_process = next(tile_handler) mars/core/graph/builder/chunk.py:183: in _tile_handler tiled_tileables = yield from handler.tile(tiled_tileables) mars/core/entity/tileables.py:79: in tile tiled_result = yield from tile_handler(op) mars/dataframe/groupby/apply.py:151: in tile return [auto_merge_chunks(get_context(), ret)] mars/dataframe/utils.py:1333: in auto_merge_chunks metas = ctx.get_chunks_meta( mars/services/context.py:188: in get_chunks_meta return self._call(self._get_chunks_meta(data_keys, fields=fields, error=error)) mars/services/context.py:84: in _call return fut.result() ../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/_base.py:439: in result return self.__get_result() ../../../../../opt/anaconda3/envs/mars-py3.8-dev/lib/python3.8/concurrent/futures/_base.py:388: in __get_result raise self._exception _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <mars.services.task.execution.ray.context.RayExecutionContext object at 0x7fb22b3485e0> data_keys = ['9f92dcd8196d32f25e43e33ba1f56e02_0', 
'223590f1093c414359f466c42a698006_0', 'dc80798f45b8ed8bb358a7b39b6d8170_0'], fields = ['memory_size'], error = 'ignore' async def _get_chunks_meta( self, data_keys: List[str], fields: List[str] = None, error: str = "raise" ) -> List[Dict]: # get chunks meta get_metas = [] for data_key in data_keys: meta = self._meta_api.get_chunk_meta.delay( data_key, fields=["bands"], error=error ) get_metas.append(meta) metas = await self._meta_api.get_chunk_meta.batch(*get_metas) api_to_keys_calls = defaultdict(lambda: (list(), list())) for data_key, meta in zip(data_keys, metas): > addr = meta["bands"][0][0] E TypeError: 'NoneType' object is not subscriptable mars/services/context.py:145: TypeError ``` We need to support get_chunk_meta for ray task backend.
open
2022-05-17T03:57:35Z
2022-05-17T05:58:26Z
https://github.com/mars-project/mars/issues/3039
[]
chaokunyang
0
alteryx/featuretools
scikit-learn
2,017
release Featuretools v1.9.0
closed
2022-04-13T14:51:25Z
2022-04-27T17:09:14Z
https://github.com/alteryx/featuretools/issues/2017
[]
gsheni
0
sebp/scikit-survival
scikit-learn
489
Documentation: clarification of the estimate argument for each metric
**Problem** I spent a day trying to understand the `cumulative_dynamic_auc` function because I was consistently getting values in the 0.15-0.3 range. After experimenting, I realized that for the `estimate` argument, I was using the output of `predict_survival_function` (in a `RandomSurvivalForest` model) instead of `predict`, and switching this fixed the issue and gave me reasonable AUCs. **Proposed solution** The documentation for `cumulative_dynamic_auc` is not very descriptive on what kind of outputted predictions (e.g., probabilities, estimated times-to-event) the `estimate` argument should be, and I think this could be clearer for all metrics.
open
2024-10-29T14:21:21Z
2024-11-02T07:38:41Z
https://github.com/sebp/scikit-survival/issues/489
[ "documentation" ]
RandallJEllis
1
browser-use/browser-use
python
814
Ollama - Does not support tools (status code: 400) Issue
### Bug Description

I am new here, but I am facing this issue. When I try to use browser-use with Ollama, I get a "does not support tools" error. Interestingly, my code works if I use DeepSeek, but not when using qwen, phi, or llama.

### Reproduction Steps

1. Use Ollama with either of these models - qwen:14b, phi4, llama3.1, deepseek-coder:6.7b

### Code Sample

```python
llm = ChatOllama(model="qwen:14b", num_ctx=32000)

async def main():
    agent = Agent(
        task=task,
        llm=llm,
        controller=controller,
        system_prompt_class=MySystemPrompt)
    result = await agent.run()
    print(result)

    # Access (some) useful information
    result.urls()               # List of visited URLs
    result.screenshots()        # List of screenshot paths
    result.action_names()       # Names of executed actions
    result.extracted_content()  # Content extracted during execution
    result.errors()             # Any errors that occurred
    result.model_actions()      # All actions with their parameters

asyncio.run(main())
```

### Version

0.1.37

### LLM Model

Other (specify in description)

### Operating System

macOS 14.7.2

### Relevant Log Output

```shell
INFO  [agent] 📍 Step 1
ERROR [agent] ❌ Result failed 1/3 times: qwen:14b does not support tools (status code: 400)
INFO  [agent] 📍 Step 1
ERROR [agent] ❌ Result failed 2/3 times: qwen:14b does not support tools (status code: 400)
INFO  [agent] 📍 Step 1
ERROR [agent] ❌ Result failed 3/3 times: qwen:14b does not support tools (status code: 400)
ERROR [agent] ❌ Stopping due to 3 consecutive failures
INFO  [agent] Created GIF at agent_history.gif
```
open
2025-02-22T06:09:02Z
2025-03-05T04:39:59Z
https://github.com/browser-use/browser-use/issues/814
[ "bug" ]
ArpitSureka
6
microsoft/unilm
nlp
931
ViT-S for BeiT v2
Hi! Thank you for this great repo. In Table 4 of your paper you show ablation studies with a ViT-S (Small & 1x384x6). Is it possible to have access to those pretrained weights, or to have the command to reproduce these results? Thanks! Elias
open
2022-11-25T14:46:09Z
2022-11-28T13:03:13Z
https://github.com/microsoft/unilm/issues/931
[]
elias-ramzi
2
pallets-eco/flask-sqlalchemy
flask
585
Dynamically creating Tables
I'd like to dynamically create tables for a user when the user signs up, but could not find any resources for it. I tried using SQLAlchemy's declarative_base() function, but could not get it to work. Would really appreciate the help. Also, thanks for the awesome work!
closed
2018-01-13T16:12:26Z
2020-12-05T20:46:34Z
https://github.com/pallets-eco/flask-sqlalchemy/issues/585
[]
highoncarbs
1
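The core mechanism behind per-user tables is building a model class at runtime. A stdlib-only sketch of that mechanism follows; with Flask-SQLAlchemy you would pass `db.Model` as the base, use `db.Column` attributes, and then call `db.create_all()` — the names here are illustrative, not a working Flask-SQLAlchemy recipe.

```python
def make_user_table(username, base=object):
    # Build the class attributes a declarative model would need; with
    # Flask-SQLAlchemy the dict would hold db.Column objects instead.
    attrs = {
        "__tablename__": f"data_{username}",  # per-user table name
        "username": username,
    }
    # type(name, bases, attrs) creates a new class at runtime — the same
    # trick works with a SQLAlchemy declarative base as the parent.
    return type(f"Data_{username.title()}", (base,), attrs)

UserTable = make_user_table("alice")
print(UserTable.__name__, UserTable.__tablename__)
```

Note that one-table-per-user schemas are usually discouraged in relational design; a shared table with a `user_id` column is the conventional alternative.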
huggingface/datasets
machine-learning
6,788
A Question About the Map Function
### Describe the bug

Hello, I have a question regarding the map function in Hugging Face datasets. The situation is as follows: when I load a jsonl file using load_dataset(..., streaming=False) and then use the map function to process it, I specify that the returned example should be of type torch.Tensor. However, I noticed that after applying the map function, the datatype automatically changes to List, which leads to errors in my program. I attempted to use load_dataset(..., streaming=True), and the issue no longer occurs. I'm not entirely clear on why this happens. Could you please provide some insights into this?

### Steps to reproduce the bug

1. dataset = load_dataset(xxx, streaming=False)
2. dataset.map(function), where function returns torch.Tensor.
3. You will find the format of the data in the dataset is List.

### Expected behavior

I expected to receive the data as torch.Tensor.

### Environment info

2.18.0
closed
2024-04-06T11:45:23Z
2024-04-11T05:29:35Z
https://github.com/huggingface/datasets/issues/6788
[]
Klein-Lan
2
dask/dask
scikit-learn
11,802
assertionerror when trying to compute reversed cumulative sum
<!-- Please include a self-contained copy-pastable example that generates the issue if possible.

Please be concise with code posted. See guidelines below on how to provide a good bug report:

- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve

Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->

**Describe the issue**:

**Minimal Complete Verifiable Example**:

```python
import pandas as pd
import dask.dataframe as dd

df = dd.from_pandas(pd.DataFrame({'a': [1,2,3], 'b': [4,5,6]}), npartitions=2)
df['a'][::-1].cumsum()
```

throws

```
In [7]: df['a'][::-1].cumsum().compute()
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
Cell In[7], line 1
----> 1 df['a'][::-1].cumsum().compute()

File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask/dataframe/dask_expr/_collection.py:4134, in Series.__getitem__(self, key)
   4132 if isinstance(key, Series) or self.npartitions == 1:
   4133     return super().__getitem__(key)
-> 4134 return self.loc[key]

File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask/dataframe/dask_expr/_indexing.py:84, in LocIndexer.__getitem__(self, key)
     81 if isinstance(cindexer, np.generic):
     82     cindexer = cindexer.item()
---> 84 return self._loc(iindexer, cindexer)

File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask/dataframe/dask_expr/_indexing.py:103, in LocIndexer._loc(self, iindexer, cindexer)
    100 iindexer = self._maybe_partial_time_string(iindexer, unit=unit)
    102 if isinstance(iindexer, slice):
--> 103     return self._loc_slice(iindexer, cindexer)
    104 elif is_series_like(iindexer) and not is_bool_dtype(iindexer.dtype):
    105     return new_collection(LocList(self.obj, iindexer.values, cindexer))

File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask/dataframe/dask_expr/_indexing.py:157, in LocIndexer._loc_slice(self, iindexer, cindexer)
    155 def _loc_slice(self, iindexer, cindexer):
    156     assert isinstance(iindexer, slice)
--> 157     assert iindexer.step in (None, 1)
    158     return new_collection(LocSlice(self.obj, iindexer, cindexer))

AssertionError:
```

**Anything else we need to know?**:

**Environment**:

- Dask version: 2025.2.0
- Python version: 3.12
- Operating System: linux
- Install method (conda, pip, source): pip
closed
2025-03-03T15:08:53Z
2025-03-04T12:01:02Z
https://github.com/dask/dask/issues/11802
[ "needs triage" ]
MarcoGorelli
1
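For reference, the semantics the reporter expects from `df['a'][::-1].cumsum()` can be sketched with the standard library: reverse the values first, then take a running sum (pandas itself handles this slice directly; the dask failure is in the `.loc` slicing path, which asserts a step of 1).

```python
from itertools import accumulate

# Reference semantics for df['a'][::-1].cumsum() on a = [1, 2, 3]:
# reverse first, then running sum, keeping the reversed order.
a = [1, 2, 3]
reversed_cumsum = list(accumulate(reversed(a)))
print(reversed_cumsum)  # values pandas would return at index [2, 1, 0]
```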
vllm-project/vllm
pytorch
14,446
[Usage]: After starting the QwQ-32B model normally, it was found that the model could not output the thought tag normally
### Your current environment INFO 03-08 00:00:39 __init__.py:190] Automatically detected platform cuda. Collecting environment information... PyTorch version: 2.5.1+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: CentOS Linux release 7.9.2009 (Core) (x86_64) GCC version: (GCC) 11.2.0 Clang version: Could not collect CMake version: version 3.31.5 Libc version: glibc-2.17 Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-3.10.0-1160.92.1.el7.x86_64-x86_64-with-glibc2.17 Is CUDA available: True CUDA runtime version: 12.4.131 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A10 GPU 1: NVIDIA A10 GPU 2: NVIDIA A10 GPU 3: NVIDIA A10 GPU 4: NVIDIA A10 GPU 5: NVIDIA A10 GPU 6: NVIDIA A10 GPU 7: NVIDIA A10 Nvidia driver version: 550.127.08 cuDNN version: Probably one of the following: /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn.so.9.2.0 /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_adv.so.9.2.0 /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_cnn.so.9.2.0 /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9.2.0 /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9.2.0 /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_graph.so.9.2.0 /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_heuristic.so.9.2.0 /usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_ops.so.9.2.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 128 On-line CPU(s) list: 0-127 Thread(s) per core: 2 Core(s) per socket: 32 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 106 Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz Stepping: 6 CPU MHz: 3499.859 CPU max MHz: 3500.0000 CPU min MHz: 800.0000 
BogoMIPS: 5800.00 Virtualization: VT-x L1d cache: 48K L1i cache: 32K L2 cache: 1280K L3 cache: 49152K NUMA node0 CPU(s): 0-31,64-95 NUMA node1 CPU(s): 32-63,96-127 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single ssbd mba rsb_ctxsw ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-cusparselt-cu12==0.6.2 [pip3] nvidia-ml-py==12.570.86 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] pyzmq==26.2.1 [pip3] torch==2.5.1 [pip3] torchaudio==2.5.1 [pip3] torchvision==0.20.1 [pip3] transformers==4.49.0 [pip3] triton==3.1.0 [conda] numpy 1.26.4 pypi_0 pypi [conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi 
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi [conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi [conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi [conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi [conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi [conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi [conda] nvidia-ml-py 12.570.86 pypi_0 pypi [conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi [conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi [conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi [conda] pyzmq 26.2.1 pypi_0 pypi [conda] torch 2.5.1 pypi_0 pypi [conda] torchaudio 2.5.1 pypi_0 pypi [conda] torchvision 0.20.1 pypi_0 pypi [conda] transformers 4.49.0 pypi_0 pypi [conda] triton 3.1.0 pypi_0 pypi ROCM Version: Could not collect Neuron SDK Version: N/A vLLM Version: 0.7.2 vLLM Build Flags: CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled GPU Topology: GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID GPU0 X PIX NODE NODE SYS SYS SYS SYS 0-31,64-95 0 N/A GPU1 PIX X NODE NODE SYS SYS SYS SYS 0-31,64-95 0 N/A GPU2 NODE NODE X PIX SYS SYS SYS SYS 0-31,64-95 0 N/A GPU3 NODE NODE PIX X SYS SYS SYS SYS 0-31,64-95 0 N/A GPU4 SYS SYS SYS SYS X PIX NODE NODE 32-63,96-127 1 N/A GPU5 SYS SYS SYS SYS PIX X NODE NODE 32-63,96-127 1 N/A GPU6 SYS SYS SYS SYS NODE NODE X PIX 32-63,96-127 1 N/A GPU7 SYS SYS SYS SYS NODE NODE PIX X 32-63,96-127 1 N/A Legend: X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU) PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV# = 
Connection traversing a bonded set of # NVLinks CUDA_PATH=/usr/local/cuda-12.4 LD_LIBRARY_PATH=/root/anaconda3/envs/vllm/lib/python3.12/site-packages/cv2/../../lib64:/usr/local/cuda/lib64:/usr/local/lib64:/usr/local/cuda-12.4/lib64: NCCL_CUMEM_ENABLE=0 TORCHINDUCTOR_COMPILE_THREADS=1 CUDA_MODULE_LOADING=LAZY ### 🐛 Describe the bug The call example is as follows, the model normally replies to the content, you can see that there is thinking content, but there is no thinking label ![Image](https://github.com/user-attachments/assets/07295d2d-cecb-45b3-be01-2dfd8437b9e1) ![Image](https://github.com/user-attachments/assets/60cf6a5c-3fa2-4873-b7b8-47e9736ed016) I don't know if this is a feature of the model or a vllm problem, but I remember wrapping it with the <think> tag ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
open
2025-03-07T16:05:25Z
2025-03-10T08:23:34Z
https://github.com/vllm-project/vllm/issues/14446
[ "usage" ]
shatang123
8
tensorpack/tensorpack
tensorflow
569
AttributeError: 'PrefetchDataZMQ' object has no attribute 'socket'
When I'm using dataflow.dataloader, I get this exception. By scanning the source code in prefetch.py, I find that the **reset_state** method is **not called** in the constructor of PrefetchDataZMQ, but the 'socket' attribute is set in the _reset_once function, which is never called. I think reset_state could be called when initializing PrefetchDataZMQ; that would be more convenient.
closed
2017-12-25T03:21:47Z
2018-05-30T20:59:30Z
https://github.com/tensorpack/tensorpack/issues/569
[ "usage" ]
youkaichao
2
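The report boils down to an attribute that only exists after `reset_state()` has run. A minimal stdlib model of that pattern, and of the suggested fix of calling it from the constructor, is below; the class and method bodies are illustrative stand-ins, not tensorpack's actual code.

```python
class PrefetchLike:
    def __init__(self, auto_reset=False):
        # The suggested fix: initialize lazily-created state up front.
        if auto_reset:
            self.reset_state()

    def reset_state(self):
        self.socket = "connected"  # stands in for the real ZMQ socket setup

    def pull(self):
        return self.socket         # AttributeError if reset_state() never ran

broken = PrefetchLike()
try:
    broken.pull()
    err = None
except AttributeError as exc:
    err = str(exc)

fixed = PrefetchLike(auto_reset=True)
print(err, fixed.pull())
```

Calling `reset_state()` eagerly is convenient but can be wrong after `fork()`, which is a common reason prefetching dataflows defer socket creation; that trade-off is presumably why tensorpack leaves it to the caller.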
neuml/txtai
nlp
258
Unit tests failing with transformers==4.18.0
CLIP models through sentence_transformers are failing to load. Issue https://github.com/UKPLab/sentence-transformers/issues/1515 has been filed with sentence_transformers to determine where the issue is coming from.
closed
2022-04-09T13:05:58Z
2022-04-13T14:58:03Z
https://github.com/neuml/txtai/issues/258
[ "bug" ]
davidmezzetti
0
ageitgey/face_recognition
python
1,082
AttributeError: 'Image' object has no attribute 'read' in Google Colab
* face_recognition version:
* Python version: 3.0
* Operating System: Google Colab

### Description

I am trying to implement the library in Google Colab. Here is my code so far:

#pip install dependencies
!pip install face_recognition
!pip install os
!pip install cv2

import face_recognition
import os
import cv2

#Load the Drive helper and mount
from google.colab import drive

#This will prompt for authorization.
drive.mount('/content/drive')

#After executing the cell above, Drive
#files will be present in "/content/drive/My Drive".
!ls "/content/drive/My Drive"
!ls "/content/drive/My Drive/faces/unknown"
#'sigrid.jpeg' is in unknown

from IPython.display import Image
from IPython.display import display
Embed = Image('sigrid.jpeg')
Embed
#Here the image doesn't fully show in the notebook

import face_recognition
image = face_recognition.load_image_file(Embed)
face_locations = face_recognition.face_locations(image)

#Then I get this error
AttributeError: 'Image' object has no attribute 'read'

Can anyone please tell me how to solve this? Thanks.
closed
2020-03-10T02:08:47Z
2020-03-25T22:36:49Z
https://github.com/ageitgey/face_recognition/issues/1082
[]
aanis
2
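The error arises because `load_image_file()` expects a filename or a file-like object with a `.read()` method, while an `IPython.display.Image` widget has neither. A stdlib sketch of that contract follows — the loader here is a hypothetical stand-in, not the real face_recognition function — showing why passing the display widget fails and passing the path works.

```python
def load_image_file(file_or_path):
    # Stand-in for a loader that accepts a path or a file-like object.
    if isinstance(file_or_path, str):
        return f"pixels-from:{file_or_path}"   # pretend we decoded the file
    # Anything else must support .read(); a display widget does not.
    return f"pixels-from-bytes:{len(file_or_path.read())}"

class DisplayImage:
    """Stands in for IPython.display.Image: no .read() method."""

try:
    load_image_file(DisplayImage())
    err = None
except AttributeError as exc:
    err = str(exc)  # "... object has no attribute 'read'"

ok = load_image_file("sigrid.jpeg")  # pass the path, not the widget
print(err, ok)
```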
freqtrade/freqtrade
python
10,788
How do I get my strategy to place sell trades with trading_mode spot?
<!-- Have you searched for similar issues before posting it? Yes Did you have a VERY good look at the [documentation](https://www.freqtrade.io/en/latest/) and are sure that the question is not explained there Yes Please do not use the question template to report bugs or to request new features. --> ## Describe your environment * Operating system: Docker (Mac OS 15.0.1) * Python Version: Python 3.12.5 (`python -V`) * CCXT version: ccxt==4.3.88 (`pip freeze | grep ccxt`) * Freqtrade Version: freqtrade 2024.8 (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker) ## Your question I am building a strategy based off the sample strategy. I see within the `populate_entry_trend` method "enter_long" & "enter_short" I am using Binance and Binance allows me to "buy" and "sell" on the spot market ![image](https://github.com/user-attachments/assets/9d61a1ac-b199-49bc-a5fc-b4d9bc4a7bd0) My strategy places "buy" trades just fine, but it does not place "sell" trades. I do not want to use futures and short trades, I simply want to place sell trades. My trading mode in the config is `"trading_mode": "spot",` In the strategy Bandtastic I see `populate_buy_trend` and `populate_sell_trend` methods. Is this the way to do it?
closed
2024-10-14T02:35:29Z
2024-10-15T00:48:20Z
https://github.com/freqtrade/freqtrade/issues/10788
[ "Question" ]
WinstonN
4
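In spot mode, freqtrade expresses a "sell" as the exit of a long position: alongside `populate_entry_trend`, a strategy implements `populate_exit_trend` and sets the `exit_long` column (`populate_buy_trend`/`populate_sell_trend` in Bandtastic are the older names for the same pair). A minimal pandas sketch of just the column logic, outside any real strategy class; the RSI > 70 threshold is an arbitrary assumption for illustration:

```python
import pandas as pd

def populate_exit_trend(dataframe: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the exit ("sell") logic a spot strategy would define."""
    dataframe["exit_long"] = 0
    # Sell the spot position when RSI signals overbought conditions.
    dataframe.loc[dataframe["rsi"] > 70, "exit_long"] = 1
    return dataframe

df = populate_exit_trend(pd.DataFrame({"rsi": [35, 55, 75]}))
print(df["exit_long"].tolist())  # [0, 0, 1]
```

Inside a real strategy class the method additionally receives a `metadata` argument and reuses the indicators computed in `populate_indicators`; check the freqtrade strategy docs for the exact signature in your version.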
deepspeedai/DeepSpeed
deep-learning
6,618
[BUG] Long sequence parallelism (Ulysses) got error list index out of range
**Describe the bug**
Testing Ulysses fails with `IndexError: list index out of range`.

**To Reproduce**
Test Ulysses with [test_ulysses.py](https://github.com/microsoft/DeepSpeedExamples/blob/uly-hf/post_training/sequence_parallelism/test_ulysses.py):

```
torchrun --nproc_per_node=8 test_ulysses.py
```

**Expected behavior**
Works fine.

**ds_report output**

```
collect2: error: ld returned 1 exit status
gds .................... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
 [WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
 [WARNING] using untested triton version (2.3.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/opt/conda/lib/python3.10/site-packages/torch']
torch version .................... 2.3.0+cu121
deepspeed install path ........... ['/opt/conda/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.15.2, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.1
deepspeed wheel compiled w. ...... torch 2.3, cuda 12.1
shared memory (/dev/shm) size .... 100.00 GB
```

**Screenshots**
Got error: `list index out of range`:

```
[rank6]: Traceback (most recent call last):
[rank6]:   File "/data1/nfs15/nfs/zhanglei335/mlsys/train/long-context-train/llm-train/uly_sp_test.py", line 166, in <module>
[rank6]:     get_loss(model, data_loader, DS_CONFIG)
[rank6]:   File "/data1/nfs15/nfs/zhanglei335/mlsys/train/long-context-train/llm-train/uly_sp_test.py", line 112, in get_loss
[rank6]:     model, _, _, _ = deepspeed.initialize(model=model,
[rank6]:   File "/opt/conda/lib/python3.10/site-packages/deepspeed/__init__.py", line 193, in initialize
[rank6]:     engine = DeepSpeedEngine(args=args,
[rank6]:   File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 269, in __init__
[rank6]:     self._configure_distributed_model(model)
[rank6]:   File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1188, in _configure_distributed_model
[rank6]:     self.data_parallel_group = groups._get_data_parallel_group()
[rank6]:   File "/opt/conda/lib/python3.10/site-packages/deepspeed/utils/groups.py", line 405, in _get_data_parallel_group
[rank6]:     return mesh_device.get_group(mesh_dim="data_parallel")
[rank6]:   File "/opt/conda/lib/python3.10/site-packages/torch/distributed/device_mesh.py", line 423, in get_group
[rank6]:     _find_pg_by_ranks_and_tag(*self._dim_group_infos[mesh_dim][:2])
[rank6]: IndexError: list index out of range
[rank7]: Traceback (most recent call last):
[rank7]:   File "/data1/nfs15/nfs/zhanglei335/mlsys/train/long-context-train/llm-train/uly_sp_test.py", line 166, in <module>
[rank7]:     get_loss(model, data_loader, DS_CONFIG)
[rank7]:   File "/data1/nfs15/nfs/zhanglei335/mlsys/train/long-context-train/llm-train/uly_sp_test.py", line 112, in get_loss
[rank7]:     model, _, _, _ = deepspeed.initialize(model=model,
[rank7]:   File "/opt/conda/lib/python3.10/site-packages/deepspeed/__init__.py", line 193, in initialize
[rank7]:     engine = DeepSpeedEngine(args=args,
[rank7]:   File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 269, in __init__
[rank7]:     self._configure_distributed_model(model)
[rank7]:   File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1188, in _configure_distributed_model
[rank7]:     self.data_parallel_group = groups._get_data_parallel_group()
[rank7]:   File "/opt/conda/lib/python3.10/site-packages/deepspeed/utils/groups.py", line 405, in _get_data_parallel_group
[rank7]:     return mesh_device.get_group(mesh_dim="data_parallel")
[rank7]:   File "/opt/conda/lib/python3.10/site-packages/torch/distributed/device_mesh.py", line 423, in get_group
[rank7]:     _find_pg_by_ranks_and_tag(*self._dim_group_infos[mesh_dim][:2])
[rank7]: IndexError: list index out of range
```

**Launcher context**
Are you launching your experiment with the `deepspeed` launcher, MPI, or something else?
closed
2024-10-10T03:41:30Z
2024-10-14T02:21:57Z
https://github.com/deepspeedai/DeepSpeed/issues/6618
[ "bug", "training" ]
Lzhang-hub
2
NVlabs/neuralangelo
computer-vision
218
Lego Sequence Video link not working
The Lego sequence video link is not working: it asks me to request access, but it has been many days and access still has not been granted.
open
2025-02-12T16:56:24Z
2025-02-12T16:56:24Z
https://github.com/NVlabs/neuralangelo/issues/218
[]
Mandar800
0
gradio-app/gradio
data-visualization
10,713
The progress bar still exists when the backend raise an error
### Describe the bug

In the previous version (5.12.0), when an error was raised, the progress bar would immediately disappear and display a closable alert.

gradio 5.20.0

![Image](https://github.com/user-attachments/assets/62fb676a-ea14-42a3-9519-aa0668213a1e)

gradio 5.12.0

![Image](https://github.com/user-attachments/assets/f18d2ba3-a7b9-4d01-905d-7fbb37da031c)

### Have you searched existing issues? 🔎

- [x] I have searched and found no existing issues

### Reproduction

```python
import gradio as gr

def submit(input_value):
    raise gr.Error("test")

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    input = gr.Textbox()
    input.submit(submit, inputs=input, outputs=[input, chatbot])

demo.launch()
```

### Screenshot

_No response_

### Logs

```shell

```

### System Info

```shell
gradio version: 5.20.0
gradio_client version: 1.7.2
```

### Severity

Blocking usage of gradio
closed
2025-03-03T11:40:38Z
2025-03-04T22:17:14Z
https://github.com/gradio-app/gradio/issues/10713
[ "bug", "Regression" ]
Col0ring
1
remorses/mongoke
graphql
4
[Improvement] Better plurals
This is just an improvement I thought about while demoing on a live stream today. We had a type called `Activity`, and Mongoke pluralized it as `Activitys`, which is an incorrect plural; it should be "Activities".
closed
2020-04-10T02:16:17Z
2020-04-11T00:58:49Z
https://github.com/remorses/mongoke/issues/4
[]
khaosdoctor
3
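The general fix is a small set of English pluralization rules rather than blindly appending "s"; in particular, words ending in a consonant plus "y" take "ies". A minimal rule-based sketch of that idea (a real generator might instead rely on a library such as `inflect`; nothing here is Mongoke's actual code):

```python
def pluralize(word: str) -> str:
    """Naive English pluralizer covering the consonant+y rule from the issue."""
    vowels = "aeiou"
    if word.endswith("y") and len(word) > 1 and word[-2].lower() not in vowels:
        return word[:-1] + "ies"      # Activity -> Activities
    if word.endswith(("s", "x", "z", "ch", "sh")):
        return word + "es"            # Box -> Boxes
    return word + "s"                 # User -> Users

print(pluralize("Activity"), pluralize("Day"), pluralize("Box"))
```

Irregular nouns (Person/People, Child/Children) still need a lookup table on top of the rules, which is why dedicated libraries are usually the safer choice.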
mljar/mljar-supervised
scikit-learn
398
Hard coded `n_jobs` during computing importance
Dear @pplonski, I observe significant CPU overload during training, whether or not the n_jobs flag is set. I am not absolutely sure why this happens, but is there a chance that n_jobs is passed both to CV and to the algorithms, i.e. multiplying the effective number of workers beyond the available cores? ![error](https://user-images.githubusercontent.com/45383051/118767264-cde9b380-b87d-11eb-941e-598e60efa8cc.png)
closed
2021-05-19T06:40:27Z
2021-06-08T10:27:38Z
https://github.com/mljar/mljar-supervised/issues/398
[ "bug" ]
BeZie
5
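The suspected oversubscription above is easy to quantify: if the cross-validation loop and each fold's estimator both spawn their own workers, the counts multiply rather than add. The numbers below are illustrative assumptions, not mljar internals:

```python
n_cores = 8            # physical cores on the machine (assumed)
cv_n_jobs = 8          # parallel CV folds
estimator_n_jobs = 8   # threads spawned inside each fold's estimator
total_workers = cv_n_jobs * estimator_n_jobs
print(total_workers, total_workers / n_cores)  # 64 workers, 8.0x oversubscription
```

This is why frameworks typically set the inner estimator's `n_jobs` to 1 when the outer loop is already parallelized.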
docarray/docarray
pydantic
1,409
docs: Small fixes to README
Light copy-editing of README.md.
closed
2023-04-18T07:32:49Z
2023-04-18T12:51:17Z
https://github.com/docarray/docarray/issues/1409
[]
scott-martens
0
mirumee/ariadne-codegen
graphql
67
Plugin system
We could have a plugin system for the code generator, where a plugin would be a class instance whose methods are called with at least one argument: the Python AST produced by codegen or by the previous plugin. For example:

```python
class MyPlugin:
    def __init__(self, settings, schema_ast, query_ast):
        ...

    def generate_enum(self, enum_ast):
        # Do something with enum_ast or return it as it is
        ...
```
closed
2023-02-01T19:53:28Z
2023-03-16T07:55:37Z
https://github.com/mirumee/ariadne-codegen/issues/67
[ "roadmap" ]
rafalp
0
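The proposal implies a chain in which each plugin receives the AST returned by the previous one. A hypothetical sketch of that chaining, using plain dicts in place of real `ast` nodes (the `apply_plugins` name and the toy plugins are illustrative assumptions, not the ariadne-codegen API):

```python
class UppercaseNames:
    """Toy plugin: uppercases the enum's name."""
    def generate_enum(self, enum_ast):
        enum_ast["name"] = enum_ast["name"].upper()
        return enum_ast

class AddDocstring:
    """Toy plugin: attaches a docstring based on the (possibly rewritten) name."""
    def generate_enum(self, enum_ast):
        enum_ast["doc"] = f"Enum {enum_ast['name']}"
        return enum_ast

def apply_plugins(plugins, enum_ast):
    # Each plugin sees the output of the previous one, as the proposal describes.
    for plugin in plugins:
        enum_ast = plugin.generate_enum(enum_ast)
    return enum_ast

result = apply_plugins([UppercaseNames(), AddDocstring()], {"name": "Color"})
print(result)  # {'name': 'COLOR', 'doc': 'Enum COLOR'}
```

Ordering therefore matters: swapping the two plugins would produce `Enum Color` instead, since the docstring would be built before the rename.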
fastapi/sqlmodel
sqlalchemy
44
My mobile phones are hacked and duplicated, I need help please
closed
2021-08-27T17:31:29Z
2022-08-30T16:38:56Z
https://github.com/fastapi/sqlmodel/issues/44
[]
Joshuagriffin9
1
psf/black
python
3,711
Decouple using .gitignore from `exclude`
**Is your feature request related to a problem? Please describe.**

This is the current behaviour for handling `.gitignore`:

> https://black.readthedocs.io/en/stable/usage_and_configuration/file_collection_and_discovery.html#gitignore
>
> If --exclude is not set, Black will automatically ignore files and directories in .gitignore file(s), if present.
>
> If you want Black to continue using .gitignore while also configuring the exclusion rules, please use --extend-exclude.

However, my case does not fit either of those two scenarios:

- Our project is a build system, and one of its subpackages is named `build`.
- The directory named `build` is listed in `DEFAULT_EXCLUDES`: https://github.com/psf/black/blob/a4032dce645b83e1faccc7274864869ddfe279fc/src/black/const.py#L2
- Black does not have an option to remove items from the default list, so my only alternative to work around this is redefining the `exclude` option/setting. By doing so, black stops honouring my `.gitignore` file 😭

**Alternatives I've tried**

- `exclude` has precedence over `include`, so doing `--include '/build/|\.pyi?$'` does not work.
- The only way I could make it work was additionally passing the full paths of the Python files inside the `build` directories: `black . foo/cli/build/*.py tests/foo/cli/build/*.py`. But this is not an ideal solution, since I cannot specify this as black configuration in `pyproject.toml`. As a result, developers running `black .` will think they're formatting everything (no warning about the skipped files), and those paths will have to be hardcoded somewhere else (Makefile, tox.ini, ...).

**Describe the solution you'd like**

I want `black .` to format all the Python files in the repository, skipping all the paths listed in `.gitignore`.

I have the same problem with **ruff** and **isort**, since they also exclude `build` by default. But both have specific options to control the reading of the `.gitignore` file:

- [respect-gitignore](https://beta.ruff.rs/docs/settings/#respect-gitignore) in ruff
- [skip-gitignore](https://pycqa.github.io/isort/docs/configuration/options.html#skip-gitignore) in isort

Then I can use these settings to work around the issue:

```toml
[tool.ruff]
# Files in `foo/cli/build/` are ignored by ruff because the `exclude`
# list contains the `build` word by default. We redefine it with just
# `.git`, since the other paths are already listed in `.gitignore`.
respect-gitignore = true
exclude = [".git"]
```
open
2023-05-30T13:03:55Z
2023-05-30T13:19:14Z
https://github.com/psf/black/issues/3711
[ "T: enhancement" ]
aureliojargas
0
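Until black grows such an option, one workaround consistent with the issue is to compute the file list while honouring `.gitignore` yourself and hand the explicit paths to black; for tracked files, `git ls-files '*.py' | xargs black` achieves this, since git's listing already respects `.gitignore`. A simplified sketch of such a filter using `fnmatch`; real `.gitignore` semantics are richer (negation, anchoring, nested files), so this is illustrative only, and the example paths come from the issue:

```python
import fnmatch

def respect_gitignore(paths, ignore_patterns):
    """Keep only paths not matched by any .gitignore-style pattern (simplified)."""
    def ignored(path):
        return any(
            fnmatch.fnmatch(path, pat) or fnmatch.fnmatch(path, pat.rstrip("/") + "/*")
            for pat in ignore_patterns
        )
    return [p for p in paths if not ignored(p)]

files = ["foo/cli/build/cmd.py", "dist/foo.py", "tests/test_build.py"]
print(respect_gitignore(files, ["dist/"]))
# ['foo/cli/build/cmd.py', 'tests/test_build.py']
```

Note how `foo/cli/build/cmd.py` survives: the filter only applies the user's `.gitignore` patterns, without black's hard-coded `build` default.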
ultralytics/yolov5
pytorch
12,882
mAP@0.5 of YOLOv5n on COCO val
### Search before asking

- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.

### Question

Can you provide the mAP@0.5 of YOLOv5n on COCO val (for detection)? Thanks.

### Additional

_No response_
closed
2024-04-04T12:49:34Z
2024-10-20T19:42:59Z
https://github.com/ultralytics/yolov5/issues/12882
[ "question", "Stale" ]
YairSmadar
3