| repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (list, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
flasgger/flasgger
|
rest-api
| 78
|
Deploy all the example apps to PythonAnywhere
|
Aggregate all apps in the `/examples` folder into a single application using `DispatcherMiddleware` and then deploy it as a demo app to a PythonAnywhere instance.
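A minimal sketch of what that aggregation could look like, assuming werkzeug's `DispatcherMiddleware` is the class meant above; the example module paths are hypothetical:
```python
# Hypothetical aggregator: mount each example app under its own URL prefix.
from werkzeug.middleware.dispatcher import DispatcherMiddleware
from werkzeug.serving import run_simple

from examples.example_app import app as main_app            # hypothetical import paths
from examples.validation_example import app as validation_app

application = DispatcherMiddleware(main_app, {
    '/validation': validation_app,
})

if __name__ == '__main__':
    run_simple('0.0.0.0', 8000, application)
```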
|
closed
|
2017-03-30T21:57:33Z
|
2017-04-01T19:07:34Z
|
https://github.com/flasgger/flasgger/issues/78
|
[] |
rochacbruno
| 1
|
lgienapp/aquarel
|
matplotlib
| 9
|
Boxplot handles do not conform to selected theme colors
|
closed
|
2022-08-12T21:36:38Z
|
2022-08-12T22:31:51Z
|
https://github.com/lgienapp/aquarel/issues/9
|
[
"bug"
] |
lgienapp
| 0
|
|
rthalley/dnspython
|
asyncio
| 458
|
Consider making requests an optional dependency (since it's only needed for DOH)
|
I'm the author of [vpn-slice](https://github.com/dlenski/vpn-slice), a tool for split-tunnel setup. I recently [switched from using janky `subprocess.Popen(['dig', …])` to using `dnspython`](https://github.com/dlenski/vpn-slice/commits/dnspython), and it's great. :+1:
One of the things I notice is that `dnspython` requires `requests` as its only non-stdlib dependency, a relatively "heavyweight" one for folks who are trying to run on a low-resource system (e.g. OpenWRT).
Since `requests` is used _only_ for DNS-over-HTTPS, would you consider making `import requests` and DOH support optional, in the same way that `IDNA` and `cryptography` dependencies are?
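A minimal sketch of the optional-dependency pattern being requested, mirroring how the IDNA/cryptography extras are handled (module and function names here are illustrative, not dnspython's actual layout):
```python
try:
    import requests
    _have_requests = True
except ImportError:  # requests not installed: DOH support is simply unavailable
    requests = None
    _have_requests = False


def https_query(where, query):
    # Hypothetical guard inside the DNS-over-HTTPS code path.
    if not _have_requests:
        raise ImportError("DNS-over-HTTPS support requires the optional 'requests' package")
    ...
```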
|
closed
|
2020-05-01T00:08:35Z
|
2020-05-01T19:03:57Z
|
https://github.com/rthalley/dnspython/issues/458
|
[] |
dlenski
| 1
|
iperov/DeepFaceLive
|
machine-learning
| 126
|
Batch file missing?
|
In your instructions you say "Download Windows build exe from one of the sources".
Can you provide a link?
Thanks.
|
closed
|
2023-01-27T01:27:45Z
|
2023-01-27T04:21:38Z
|
https://github.com/iperov/DeepFaceLive/issues/126
|
[] |
ghost
| 0
|
itamarst/eliot
|
numpy
| 107
|
Cache *all* downloads when running tox tests
|
Would be trivial, except I suspect installing the main package is done via setup.py, which bypasses pip...
|
closed
|
2014-09-06T14:14:54Z
|
2019-05-09T18:23:26Z
|
https://github.com/itamarst/eliot/issues/107
|
[
"enhancement"
] |
itamarst
| 1
|
sanic-org/sanic
|
asyncio
| 2,258
|
ErrorHandler.lookup signature change
|
The `ErrorHandler.lookup` now **requires** two positional arguments:
```python
def lookup(self, exception, route_name: Optional[str]):
```
A non-conforming method will cause Blueprint-specific exception handlers to not properly attach.
---
Related to #2250
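A minimal conforming override, assuming a custom handler subclasses `sanic.handlers.ErrorHandler`:
```python
from typing import Optional

from sanic.handlers import ErrorHandler


class CustomErrorHandler(ErrorHandler):
    # Both positional arguments are accepted, so Blueprint-specific handlers attach correctly.
    def lookup(self, exception, route_name: Optional[str] = None):
        return super().lookup(exception, route_name)
```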
|
closed
|
2021-10-02T19:54:44Z
|
2021-10-02T22:02:57Z
|
https://github.com/sanic-org/sanic/issues/2258
|
[
"advisory"
] |
ahopkins
| 0
|
Lightning-AI/pytorch-lightning
|
pytorch
| 20,065
|
enable loading `universal checkpointing` checkpoint in `DeepSpeedStrategy`
|
### Description & Motivation
After I have trained a model on some number of GPUs, say 8 GPUs, for a while, it's difficult to load the checkpoint onto 16 GPUs with the optimizer and model states unchanged. DeepSpeed has developed the universal checkpointing strategy to solve this problem, but I didn't see that `pytorch-lightning` has this feature.
### Pitch
I want `pytorch-lightning` to support this feature.
### Alternatives
Try adding `universal_checkpoint` as a param of `DeepSpeedStrategy` and modifying the class, referring to `https://www.deepspeed.ai/tutorials/universal-checkpointing/`.
### Additional context
_No response_
cc @borda @awaelchli
|
open
|
2024-07-09T14:45:32Z
|
2024-07-11T13:20:49Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20065
|
[
"feature",
"help wanted",
"strategy: deepspeed"
] |
zhoubay
| 1
|
iperov/DeepFaceLab
|
machine-learning
| 481
|
Addressing out of memory issues during training phase of the process
|
This project looks fantastic, some immense work, and I am super keen to have it working on my system.
To eliminate the possibility that my system is not up to the task, my spec is as follows:
17-9900K CPU
32gb ram
Windows 10
GeForce GTX 1050ti GPU with 4gb vram.
Is it possible at all to run this project on this spec?
If so, the issues I have is at the training phase of running the batch files, (h64 for example) I get multiple of the following out of memory errors:
2019-11-07 12:23:05.880304: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 869.42M (911657984 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2019-11-07 12:23:05.885463: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 782.48M (820492288 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2019-11-07 12:23:05.891215: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 704.23M (738443008 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2019-11-07 12:23:05.895912: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 633.81M (664598784 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
I have seen your previous comments to other users with the same problem (read the manual!) :-), so I hope this is not an irritation, but I would greatly appreciate some direction on exactly what is needed to stop these errors from triggering, or perhaps a link to the step by step which details what drivers and toolkits need to be installed.
|
closed
|
2019-11-07T12:34:45Z
|
2020-10-29T09:15:21Z
|
https://github.com/iperov/DeepFaceLab/issues/481
|
[] |
skynetster
| 11
|
piskvorky/gensim
|
machine-learning
| 3,085
|
Prepare 4.0.0 release candidate 1
|
https://github.com/RaRe-Technologies/gensim/issues/2963
Gentlemen, could you please review the https://github.com/RaRe-Technologies/gensim/tree/4.0.0.rc1 tag and let me know if you spot anything out of the ordinary?
It looks like the wheels are building fine on CI, so we can make this release pretty much any time from now.
|
closed
|
2021-03-20T13:20:56Z
|
2021-03-24T12:58:39Z
|
https://github.com/piskvorky/gensim/issues/3085
|
[] |
mpenkov
| 22
|
RobertCraigie/prisma-client-py
|
asyncio
| 51
|
Improve editor experience
|
## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
One of the biggest reasons for using an ORM is the autocomplete value it provides; currently none of our query arguments are autocompleted, which is an issue as it greatly degrades the user experience.
Users can only see what arguments are required when constructing the dictionary and will have to jump to the definition to get the full list of arguments.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
This is a tracking issue as the solution for this can only be implemented outside of this repository.
List of language servers / editor extensions that should support this:
- [pylance](https://github.com/microsoft/pylance-release/issues/654)
- [jedi](https://github.com/davidhalter/jedi)
- *Please comment if there are any other projects that should be added*
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
I did consider refactoring the entire client API (#7) however I've decided against this as the current API is very unique to prisma and in my opinion makes complicated queries more readable than other ORMs such as SQLAlchemy.
|
open
|
2021-08-25T11:39:05Z
|
2022-07-05T13:55:55Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/51
|
[
"kind/improvement",
"topic: external",
"priority/high",
"level/unknown"
] |
RobertCraigie
| 4
|
pyppeteer/pyppeteer
|
automation
| 276
|
Can't listen response if has set proxy
|
I have code like this
```python
imgs = {}

async def _wait_track_phones(resp: Response):
    if "some_thing?" in resp.url:
        imgs[resp.url] = await resp.buffer()
    # else:
    #     print(f"url: {resp.url}")

def track_response(resp):
    asyncio.ensure_future(_wait_track_phones(resp))

self.page.on('response', track_response)
```
It works, but after I set a proxy, the `some_thing` request can't be captured (I know this by printing the log).
set proxy code:
```
# fixed_ip = get_kdl_fixed_ip()
# print(f'use fixed ip: {fixed_ip}')
# self.page = await self._pp.get_page(user_data_dir=self.user_data_dir, proxy=f'{fixed_ip.ip}:{fixed_ip.port}')
# await self.page.authenticate({'username': fixed_ip.account, 'password': fixed_ip.pwd})
```
I'm not sure whether this is a pyppeteer issue or whether the proxy server did something?
|
open
|
2021-06-22T08:26:32Z
|
2023-10-15T16:26:50Z
|
https://github.com/pyppeteer/pyppeteer/issues/276
|
[] |
huhuang03
| 3
|
microsoft/nni
|
machine-learning
| 5,146
|
{"error":"File not found: C:\\Users\\SiWuXie\\Desktop\\Dll\\experiment\\9yg5pvfz\\trials\\a8qqO\\trial.log"}
|
**Describe the issue**:
[2022-09-28 10:44:47] INFO (main) Start NNI manager
[2022-09-28 10:44:48] INFO (NNIDataStore) Datastore initialization done
[2022-09-28 10:44:48] INFO (RestServer) Starting REST server at port 8080, URL prefix: "/"
[2022-09-28 10:44:48] INFO (RestServer) REST server started.
[2022-09-28 10:44:48] INFO (NNIManager) Starting experiment: 9yg5pvfz
[2022-09-28 10:44:48] INFO (NNIManager) Setup training service...
[2022-09-28 10:44:48] INFO (LocalTrainingService) Construct local machine training service.
[2022-09-28 10:44:48] INFO (NNIManager) Setup tuner...
[2022-09-28 10:44:48] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: RUNNING
[2022-09-28 10:44:49] INFO (NNIManager) Add event listeners
[2022-09-28 10:44:49] INFO (LocalTrainingService) Run local machine training service.
[2022-09-28 10:44:49] INFO (NNIManager) NNIManager received command from dispatcher: ID,
[2022-09-28 10:44:49] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"batch_size": 16, "hidden_size0": 64, "hidden_size1": 16, "hidden_size2": 128, "hidden_size3": 16, "lr": 0.1}, "parameter_index": 0}
[2022-09-28 10:44:49] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 1, "parameter_source": "algorithm", "parameters": {"batch_size": 32, "hidden_size0": 16, "hidden_size1": 64, "hidden_size2": 128, "hidden_size3": 32, "lr": 0.1}, "parameter_index": 0}
[2022-09-28 10:44:54] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 0,
hyperParameters: {
value: '{"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"batch_size": 16, "hidden_size0": 64, "hidden_size1": 16, "hidden_size2": 128, "hidden_size3": 16, "lr": 0.1}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-09-28 10:44:54] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 1,
hyperParameters: {
value: '{"parameter_id": 1, "parameter_source": "algorithm", "parameters": {"batch_size": 32, "hidden_size0": 16, "hidden_size1": 64, "hidden_size2": 128, "hidden_size3": 32, "lr": 0.1}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-09-28 10:45:05] INFO (NNIManager) Trial job a8qqO status changed from WAITING to SUCCEEDED
[2022-09-28 10:45:05] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 2, "parameter_source": "algorithm", "parameters": {"batch_size": 16, "hidden_size0": 16, "hidden_size1": 128, "hidden_size2": 128, "hidden_size3": 64, "lr": 0.1}, "parameter_index": 0}
[2022-09-28 10:45:06] ERROR (NNIRestHandler) Error: File not found: C:\Users\SiWuXie\Desktop\Dll\experiment\9yg5pvfz\trials\a8qqO\trial.log
at LocalTrainingService.getTrialFile (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\training_service\local\localTrainingService.js:146:19)
at NNIManager.getTrialFile (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\core\nnimanager.js:333:37)
at C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\rest_server\restHandler.js:284:29
at Layer.handle [as handle_request] (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\node_modules\express\lib\router\layer.js:95:5)
at next (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\node_modules\express\lib\router\route.js:137:13)
at Route.dispatch (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\node_modules\express\lib\router\route.js:112:3)
at Layer.handle [as handle_request] (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\node_modules\express\lib\router\layer.js:95:5)
at C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\node_modules\express\lib\router\index.js:281:22
at param (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\node_modules\express\lib\router\index.js:360:14)
at param (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\node_modules\express\lib\router\index.js:371:14)
[2022-09-28 10:45:10] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 2,
hyperParameters: {
value: '{"parameter_id": 2, "parameter_source": "algorithm", "parameters": {"batch_size": 16, "hidden_size0": 16, "hidden_size1": 128, "hidden_size2": 128, "hidden_size3": 64, "lr": 0.1}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-09-28 10:45:16] INFO (NNIManager) Trial job v0t9o status changed from WAITING to SUCCEEDED
[2022-09-28 10:45:16] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 3, "parameter_source": "algorithm", "parameters": {"batch_size": 64, "hidden_size0": 64, "hidden_size1": 64, "hidden_size2": 128, "hidden_size3": 16, "lr": 0.001}, "parameter_index": 0}
**Environment**:
- NNI version: 2.9
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:
|
closed
|
2022-09-28T02:45:41Z
|
2022-10-08T08:30:29Z
|
https://github.com/microsoft/nni/issues/5146
|
[
"support"
] |
siwuxei
| 13
|
streamlit/streamlit
|
data-science
| 10,154
|
End date expired on web page
|
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
The web page of wind curtailment fails to work because dates after the new year are invalid.
### Reproducible Code Example
_No response_
### Steps To Reproduce
Open the Web page.
### Expected Behavior
Today should be a valid date. Why use a fixed end date?
### Current Behavior
StreamlitAPIException: The default value of [datetime.date(2025, 1, 10)] must lie between the min_value of 2021-01-01 and the max_value of 2025-01-01, inclusively.
Traceback:
File "/src/main.py", line 176, in <module>
select_date = st.date_input("Select Date", min_value=MIN_DATE, max_value=MAX_DATE, value=st.session_state.today_date)
File "<string>", line 7, in __init__
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version:
- Python version:
- Operating System:
- Browser:
Don't know
### Additional Information
_No response_
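A minimal sketch of the fix implied above, assuming the app can derive its bounds at runtime instead of hard-coding `MAX_DATE`:
```python
import datetime

import streamlit as st

MIN_DATE = datetime.date(2021, 1, 1)
today = datetime.date.today()

# The default value always lies inside [min_value, max_value], so the page cannot expire.
select_date = st.date_input("Select Date", value=today, min_value=MIN_DATE, max_value=today)
```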
|
closed
|
2025-01-10T13:47:50Z
|
2025-01-10T15:05:43Z
|
https://github.com/streamlit/streamlit/issues/10154
|
[] |
Snidosil
| 2
|
open-mmlab/mmdetection
|
pytorch
| 11,710
|
How to train a model with different task datasets at the same time?
|
I have two datasets with different tasks: dataset-A (detection task) and dataset-B (segmentation task). The two datasets have different images, i.e. each image has only a detection label or only a segmentation label. The model has a shared backbone and two heads, one for the detection task and one for the segmentation task. I want to train this model on multiple GPUs: in the first step, all GPUs get samples from dataset-A and do a forward/backward pass; in the second step, all GPUs get samples from dataset-B and do a forward/backward pass; and so on until the training process finishes. How can I achieve this? Thanks!
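A minimal, generic PyTorch sketch of the alternating scheme described above (not MMDetection's actual training loop; `backbone`, `det_head`, and `seg_head` are assumed attribute names):
```python
import itertools

def train_alternating(model, det_loader, seg_loader, optimizer, num_steps):
    det_iter = itertools.cycle(det_loader)
    seg_iter = itertools.cycle(seg_loader)
    for step in range(num_steps):
        optimizer.zero_grad()
        if step % 2 == 0:                       # even step: detection batch on every GPU
            images, targets = next(det_iter)
            loss = model.det_head(model.backbone(images), targets)
        else:                                   # odd step: segmentation batch on every GPU
            images, masks = next(seg_iter)
            loss = model.seg_head(model.backbone(images), masks)
        loss.backward()
        optimizer.step()
```
Because the branch is chosen by the global step, every rank takes the same branch in the same step, which keeps DDP gradient synchronization consistent.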
|
open
|
2024-05-14T03:31:34Z
|
2024-05-14T03:31:50Z
|
https://github.com/open-mmlab/mmdetection/issues/11710
|
[] |
kamiLight
| 0
|
littlecodersh/ItChat
|
api
| 825
|
When using itchat, is there a way to interrupt run()?
|
Before submitting, please make sure you have checked the following!
- [x] You can log in to your WeChat account in the browser, but cannot log in with `itchat`
- [x] I have read the [documentation][document] and followed its instructions
- [x] Your problem has not already been reported in [issues][issues]; otherwise please report it under the existing issue
- [x] This question is really about `itchat`, not about another project.
- [x] If your question is about stability, consider trying [itchatmp][itchatmp], which has extremely low requirements on network stability
Please run with `itchat.run(debug=True)` and paste the output below:
```
[paste the full log here]
```
Your itchat version is: `[fill in the version number here]`. (It can be obtained via `python -c "import itchat;print(itchat.__version__)"`)
Other content or a more detailed description of the problem can be added below:
> Because I want to use itchat in multiple threads, I'd like to ask whether the run() method has an exit flag; an exit method would also be fine
[document]: http://itchat.readthedocs.io/zh/latest/
[issues]: https://github.com/littlecodersh/itchat/issues
[itchatmp]: https://github.com/littlecodersh/itchatmp
|
open
|
2019-05-10T09:43:50Z
|
2019-07-24T02:13:48Z
|
https://github.com/littlecodersh/ItChat/issues/825
|
[] |
WatsonJay
| 1
|
iperov/DeepFaceLab
|
deep-learning
| 5,538
|
How to De-age the face?
|
I can use this project to perform a face swap, but I don't know how to change the age of the face.
|
open
|
2022-07-20T16:34:54Z
|
2023-07-26T03:43:55Z
|
https://github.com/iperov/DeepFaceLab/issues/5538
|
[] |
liuda-zhang
| 3
|
amisadmin/fastapi-amis-admin
|
sqlalchemy
| 129
|
cannot import name 'DictIntStrAny' from 'fastapi.encoders'
|
File "/home/echoshoot/.conda/envs/LLMAPIs/lib/python3.11/site-packages/fastapi_sqlmodel_crud/_sqlmodel.py", line 18, in <module>
from fastapi.encoders import DictIntStrAny, SetIntStr
ImportError: cannot import name 'DictIntStrAny' from 'fastapi.encoders'
|
closed
|
2023-09-27T08:19:37Z
|
2024-03-11T01:48:56Z
|
https://github.com/amisadmin/fastapi-amis-admin/issues/129
|
[] |
EchoShoot
| 1
|
twopirllc/pandas-ta
|
pandas
| 767
|
Fix for RMA - adjust=False
|
The adjust=False flag is missing from Wilder's MA (RMA), so the EMA is not calculated accordingly:
[PANDAS - EWM](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.ewm.html#pandas.Series.ewm)
$ y_0 = x_0 $
$ y_t = ( 1 - \alpha )y_{t-1} + \alpha x_t $
Submitted a PR for your consideration #766
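For reference, a minimal sketch of Wilder's RMA computed directly with pandas using the recursion above with `adjust=False` (alpha = 1/length):
```python
import pandas as pd

def rma(close: pd.Series, length: int = 14) -> pd.Series:
    # Recursive (adjust=False) exponential weighting with Wilder's smoothing factor.
    return close.ewm(alpha=1.0 / length, adjust=False).mean()
```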
|
closed
|
2024-02-16T15:07:29Z
|
2024-02-20T03:45:47Z
|
https://github.com/twopirllc/pandas-ta/issues/767
|
[
"bug"
] |
freyastreamlit
| 0
|
slackapi/python-slack-sdk
|
asyncio
| 1,017
|
Support to chat.postMessage.unfurl_links
|
In the Slack API docs, there are plenty of arguments for `chat.postMessage`, including `unfurl_links`:
https://api.slack.com/methods/chat.postMessage
However, in this SDK only a few arguments are documented:
https://slack.dev/python-slack-sdk/api-docs/slack_sdk/web/client.html#slack_sdk.web.client.WebClient.chat_postMessage
Looking over the source code, it seems to forward the arguments to the API:
```return self.api_call("chat.postMessage", json=kwargs)```
Does this actually work, or do the arguments need to be manually mapped?
If so, may I contribute with a pull request?
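A minimal sketch, assuming the extra keyword arguments are forwarded to `chat.postMessage` as the snippet above suggests:
```python
import os

from slack_sdk import WebClient

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
client.chat_postMessage(
    channel="#general",
    text="Check this out: https://example.com",
    unfurl_links=False,  # forwarded to the chat.postMessage API call
    unfurl_media=False,
)
```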
|
closed
|
2021-05-18T19:46:15Z
|
2021-05-19T01:09:29Z
|
https://github.com/slackapi/python-slack-sdk/issues/1017
|
[
"discussion",
"web-client",
"Version: 3x"
] |
fsmaia
| 2
|
d2l-ai/d2l-en
|
tensorflow
| 2,057
|
DecoderBlock code wrong?
|
Hi guys, firstly, I can't find DecoderBlock in d2l/torch.py (although it exists in the book).
Secondly, the forward function contains
`if self.training:`
However, there is no definition of `training` in `__init__` or in any other part of the code.
|
closed
|
2022-03-05T16:10:12Z
|
2022-05-16T12:55:59Z
|
https://github.com/d2l-ai/d2l-en/issues/2057
|
[] |
Y-H-Joe
| 1
|
lensacom/sparkit-learn
|
scikit-learn
| 16
|
Implement ArrayRDD.max() method
|
see http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.min.html#numpy.ndarray.max
|
closed
|
2015-06-11T12:01:17Z
|
2015-06-17T16:14:41Z
|
https://github.com/lensacom/sparkit-learn/issues/16
|
[
"enhancement"
] |
kszucs
| 0
|
ultralytics/ultralytics
|
machine-learning
| 18,737
|
How do you configure the hyperparameters with a `.yaml` file for finetuning a model?
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
In the older `yolov5` repo, you can simply load a `scratch` yaml file to use as hyperparameters for finetuning a `yolov5` model. How do you do this with the `YOLO` class in the newer `ultralytics` repo? I only see the parameters defined as part of the method but no parameter to accept a hyperparameter `.yaml` file. Also, these default hyperparameters, do they change according to the model you choose? How do we know if they have to be changed from their default values or not depending on the model used?
### Additional
_No response_
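One possible sketch (an assumption, not documented guidance): load the hyperparameters from a YAML file yourself and pass them to `train()` as keyword overrides; `hyp.custom.yaml` and its keys are hypothetical.
```python
import yaml
from ultralytics import YOLO

with open("hyp.custom.yaml") as f:      # hypothetical file with keys such as lr0, momentum, ...
    hyp = yaml.safe_load(f)

model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=100, **hyp)  # overrides applied on top of the defaults
```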
|
closed
|
2025-01-17T12:20:40Z
|
2025-01-20T18:53:24Z
|
https://github.com/ultralytics/ultralytics/issues/18737
|
[
"question",
"detect"
] |
mesllo-bc
| 2
|
axnsan12/drf-yasg
|
django
| 299
|
Support nested coreschema in CoreAPI compat layer
|
```python
import coreapi
import coreschema
from rest_framework.filters import BaseFilterBackend


class MyFilterBackend(BaseFilterBackend):
    def get_schema_fields(self, view):
        return [coreapi.Field(
            name="values",
            required=False,
            schema=coreschema.Array(items=coreschema.Integer(), unique_items=True),
            location='query',
        )]
```
Result:

|
open
|
2019-01-22T08:18:42Z
|
2025-03-07T12:16:45Z
|
https://github.com/axnsan12/drf-yasg/issues/299
|
[
"triage"
] |
khomyakov42
| 1
|
python-arq/arq
|
asyncio
| 491
|
Tasks are not being deferred/delayed correctly when raising Retry
|
Hello,
I created a test to check if my tasks are being deferred correctly before being retried. I noticed that timing does not add up when I raise a retry. I am assuming that if a task is retried four times with a 1.5 multiplier delay, I should expect the total time elapsed to be over 15 seconds. However, the numbers I am seeing are very random. Sometimes, it is over 15 seconds, and sometimes, it is less than 9 seconds.
math:
1st job try fail:
1.5 sec delay -- tot: 1.5 sec
2nd job try fail:
3.0 sec delay -- tot: 4.5 sec
3rd job try fail:
4.5 sec delay --tot: 9.0 sec
4th job try fail:
6.0 sec delay --tot: 15.0 sec
```
@pytest.mark.asyncio
@pytest.mark.arq_delay
async def test_transactional_email_time_elapsed_with_4_errors(
    arq_redis: ArqRedis, worker: Worker
) -> None:
    async def retry_deferred(ctx):
        job_try: int = ctx["job_try"]
        if job_try <= 4:
            raise Retry(defer=job_try * 1.5)
        return "success"

    start_time: float = time.time()
    await arq_redis.enqueue_job(
        "retry_deferred", _job_id="testing"
    )
    worker: Worker = worker(
        functions=[func(retry_deferred, max_tries=5, name="retry_deferred")]
    )
    await worker.main()
    elapsed_time = time.time() - start_time

    # 1.5 + 3 + 4.5 + 6 = 15 seconds
    assert (
        elapsed_time >= 15
    ), f"Expected elapsed time to be at least 15 seconds, but got {elapsed_time:.2f}"
    assert worker.jobs_complete == 1
    assert worker.jobs_failed == 0
    assert worker.jobs_retried == 4
```
From the console:
```
INFO arq.worker:worker.py:322 Starting worker for 1 functions: retry_deferred
INFO arq.worker:connections.py:296 redis_version=7.2.5 mem_usage=1.87M clients_connected=3 db_keys=2
INFO arq.worker:worker.py:544 0.01s → testing:retry_deferred()
INFO arq.worker:worker.py:565 0.00s ↻ testing:retry_deferred retrying job in 1.50s
INFO arq.worker:worker.py:544 1.51s → testing:retry_deferred() try=2
INFO arq.worker:worker.py:565 0.00s ↻ testing:retry_deferred retrying job in 3.00s
INFO arq.worker:worker.py:544 1.51s → testing:retry_deferred() try=3
INFO arq.worker:worker.py:565 0.00s ↻ testing:retry_deferred retrying job in 4.50s
INFO arq.worker:worker.py:544 6.01s → testing:retry_deferred() try=4
INFO arq.worker:worker.py:565 0.00s ↻ testing:retry_deferred retrying job in 6.00s
INFO arq.worker:worker.py:544 6.01s → testing:retry_deferred() try=5
INFO arq.worker:worker.py:588 0.00s ← testing:retry_deferred ● 'success'
=================================================================================================================================================================================================== short test summary info ====================================================================================================================================================================================================
FAILED test_invite_user.py::::test_transactional_email_time_elapsed_with_4_errors - AssertionError: Expected elapsed time to be at least 15 seconds, but got 8.40
```
arq version 0.24.0
Thank you for your time.
|
open
|
2024-12-13T19:02:48Z
|
2024-12-13T19:03:52Z
|
https://github.com/python-arq/arq/issues/491
|
[] |
williamle92
| 0
|
modin-project/modin
|
data-science
| 7,109
|
Remove the outdated aws_example.yaml file.
|
closed
|
2024-03-20T11:16:42Z
|
2024-03-20T13:56:05Z
|
https://github.com/modin-project/modin/issues/7109
|
[] |
Retribution98
| 0
|
|
seleniumbase/SeleniumBase
|
pytest
| 2,663
|
Emojis and return lines not being processed properly by self.type
|
Hello, I am using self.type to type a message like this:
Testing emojis: ⭐❤️
Testing return:
Return test
Instead of sending the exact text, each row is typed separately and no emojis are sent.
I couldn't find anyone else having this issue.
|
closed
|
2024-04-04T19:17:32Z
|
2024-04-04T22:03:38Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2663
|
[
"can't reproduce"
] |
st7xxx
| 1
|
s3rius/FastAPI-template
|
fastapi
| 205
|
Anyone has pytest working with docker-compose ?
|
I tried the following, but it does not work and prompts me with help information on command parameters for docker-compose:
```
docker-compose -f deploy/docker-compose.yml -f deploy/docker-compose.dev.yml --project-directory . run --build --rm api pytest -vv .
```
|
closed
|
2024-03-09T17:38:29Z
|
2024-03-09T22:05:38Z
|
https://github.com/s3rius/FastAPI-template/issues/205
|
[] |
rcholic
| 2
|
matterport/Mask_RCNN
|
tensorflow
| 2,834
|
Very bad predictions
|
I am getting very poor training and validation results. I am trying to train on a custom dataset using coco_weights as a starting weight. There is only one instance of the object in every image.
Is it possible my config parameters setting is not good?
IMAGES_PER_GPU = 1
TRAIN_ROIS_PER_IMAGE=300
NUM_CLASSES = 1 + 1
STEPS_PER_EPOCH =100
DETECTION_MIN_CONFIDENCE = 0.95
VALIDATION_STEPS = 50
MAX_GT_INSTANCES = 1
DETECTION_MAX_INSTANCES = 1
USE_MINI_MASK = True
I set DETECTION_MAX_INSTANCES to 1 because there is only 1 object in all images. I have tried other configurations, though, with no result.
I also set MAX_GT_INSTANCES to 1 because all training images also have only one object, even though I have also tried larger values with no result.
I have also tried varying values for TRAIN_ROIS_PER_IMAGE.
I have set USE_MINI_MASK to both true and false.
Also, I need to know what values would work best for IMAGE_MIN_DIM and RPN_ANCHOR_SCALES.
I also suspected the TensorFlow version. I was obtaining poor results using TensorFlow 2.6.0.
I then tested a basic program predicting a simple COCO image using COCO weights and got a result predicting 4+ bounding boxes when there was only 1 object. This changed after downgrading to 2.5.0, and I was able to get an accurate prediction on the basic image.
However, using TensorFlow 2.5.0, I am still getting poor training and validation results on my custom dataset.
Any advice would be appreciated.

|
open
|
2022-05-20T09:12:45Z
|
2023-07-05T12:13:28Z
|
https://github.com/matterport/Mask_RCNN/issues/2834
|
[] |
OguntolaIbrahim
| 7
|
Farama-Foundation/PettingZoo
|
api
| 1,007
|
[Question] Why is the first `reset()` of `performance_benchmark()` not counted in the runtime?
|
### Question
https://github.com/Farama-Foundation/PettingZoo/blob/969c079164e6fd84b4878f26debfeff644aa2786/pettingzoo/test/performance_benchmark.py#L7-L13
```py
env.reset() # not counted as part of the profile
start = time.time() # this after the first reset()
```
|
closed
|
2023-06-29T15:06:12Z
|
2023-07-06T19:12:55Z
|
https://github.com/Farama-Foundation/PettingZoo/issues/1007
|
[
"question"
] |
Kallinteris-Andreas
| 2
|
serengil/deepface
|
machine-learning
| 1,121
|
tf 2.16
|
## Problem
Command `pip install deepface` installs tf 2.16 if tf dependency is not installed yet. And it seems tf 2.16 causes the following error.
```
ValueError: A KerasTensor cannot be used as input to a TensorFlow function. A KerasTensor is a symbolic placeholder for a shape and dtype, used when constructing Keras Functional models or Keras Functions. You can only use it as input to a Keras layer or a Keras operation (from the namespaces `keras.layers` and `keras.operations`). You are likely doing something like:
```
This is because Keras 3 is the default Keras version for TensorFlow 2.16 onwards according to [official tutorial](https://blog.tensorflow.org/2024/03/whats-new-in-tensorflow-216.html),
## What should be done
- [ ] tf-keras dependency should be added to requirements. But this dependency requires python >= 3.9. Maybe we should throw an exception if this package is not installed.
- [ ] Switching to Keras 2 is required by setting `import os;os.environ["TF_USE_LEGACY_KERAS"]="1"` before importing tensorflow (see the sketch below).
- [ ] we cannot get the version of tf with `tf.__version__` because these checks have to be done before importing tf.
- [ ] Enforcing users to have tf 2.12 or less for FbDeepFace will not be required anymore - https://github.com/serengil/deepface/blob/master/deepface/basemodels/FbDeepFace.py#L47
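A minimal sketch of the guard described in the checklist above (the error message wording is illustrative):
```python
import importlib.util
import os

os.environ["TF_USE_LEGACY_KERAS"] = "1"  # must happen before `import tensorflow`

if importlib.util.find_spec("tf_keras") is None:
    raise ImportError(
        "deepface needs the tf-keras package to keep using Keras 2 "
        "with TensorFlow 2.16+: pip install tf-keras"
    )

import tensorflow as tf  # noqa: E402  (deliberately imported after the env var is set)
```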
|
closed
|
2024-03-18T15:03:19Z
|
2024-03-19T09:42:14Z
|
https://github.com/serengil/deepface/issues/1121
|
[
"bug",
"enhancement"
] |
serengil
| 1
|
Farama-Foundation/Gymnasium
|
api
| 1,108
|
[Bug Report] `torch_to_numpy` fails when tensors are not on cpu
|
### Describe the bug
[torch_to_numpy](https://github.com/Farama-Foundation/Gymnasium/blob/57136fbaeec340c023ae6939348ac2e7cfe3b3cb/gymnasium/wrappers/numpy_to_torch.py#L31) fails when `torch.Tensor` is on a _non-cpu_ device or has `requires_grad` set to `True`.
> [!note]
> Calling [torch_to_jax](https://github.com/Farama-Foundation/Gymnasium/blob/57136fbaeec340c023ae6939348ac2e7cfe3b3cb/gymnasium/wrappers/jax_to_torch.py#L47) does not cause these problems when `jax` is installed with GPU support - otherwise it also throws an error but informs the user that `jax` should be installed with GPU support.
### Code example
```python
import torch
from gymnasium.wrappers.numpy_to_torch import torch_to_numpy
t = torch.tensor(0.0, device="cuda")
torch_to_numpy(t) # throws error
```
### System info
`python=3.11`, `gymnasium=1.0.0a2`
### Additional context
Error traceback from the example script:
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "anaconda/envs/sddl/lib/python3.11/functools.py", line 909, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "anaconda/envs/sddl/lib/python3.11/site-packages/gymnasium/wrappers/numpy_to_torch.py", line 51, in _number_torch_to_numpy
return np.array(value)
^^^^^^^^^^^^^^^
File "anaconda/envs/sddl/lib/python3.11/site-packages/torch/_tensor.py", line 1087, in __array__
return self.numpy()
^^^^^^^^^^^^
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
```
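A minimal sketch of a device-safe conversion (an assumption about a possible fix, not the wrapper's current code): detach the tensor and move it to the CPU before handing it to numpy.
```python
import numpy as np
import torch

def tensor_to_numpy(value: torch.Tensor) -> np.ndarray:
    # Works for CUDA tensors and for tensors with requires_grad=True.
    return value.detach().cpu().numpy()

t = torch.tensor(0.0, device="cuda" if torch.cuda.is_available() else "cpu", requires_grad=True)
print(tensor_to_numpy(t))
```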
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
|
closed
|
2024-07-02T17:42:50Z
|
2024-07-03T09:19:28Z
|
https://github.com/Farama-Foundation/Gymnasium/issues/1108
|
[
"bug"
] |
mantasu
| 1
|
vitalik/django-ninja
|
django
| 1,355
|
Consider HTTP verb when routing
|
**Is your feature request related to a problem? Please describe.**
Thanks for your excellent work. Currently Ninja doesn't support different view functions with the same URL pattern and different HTTP verbs. For example, the following code won't work:
```python
@router.put("/resources/{res_id}")
def modify_resource(request, res_id, params=SomeSchema):
pass
@router.delete("/resources/{res_id}")
def delete_resource(request, res_id):
pass
```
This kind of URL pattern is usually necessary to implement the RESTful style API. So can you add support for this?
**Describe the solution you'd like**
I don't have a solution for this.
|
closed
|
2024-12-09T00:05:33Z
|
2024-12-09T01:40:21Z
|
https://github.com/vitalik/django-ninja/issues/1355
|
[] |
chang-ph
| 1
|
BeanieODM/beanie
|
pydantic
| 429
|
`save_changes()` always update document (with `use_revision = True`).
|
Is this a **bug** or a **feature**? Calling `save_changes()` with `use_revision = True` turned on always updates the document with a new revision, even if nothing has changed.
```python
# beanie/odm/utils/dump.py
def get_dict(document: "Document", to_db: bool = False):
    exclude = set()
    if document.id is None:
        exclude.add("_id")
    if not document.get_settings().use_revision:
        exclude.add("revision_id")
    return Encoder(by_alias=True, exclude=exclude, to_db=to_db).encode(
        document
    )
```
|
closed
|
2022-11-20T11:25:42Z
|
2023-07-10T02:10:43Z
|
https://github.com/BeanieODM/beanie/issues/429
|
[
"Stale"
] |
guesswh0
| 4
|
pytest-dev/pytest-xdist
|
pytest
| 982
|
Print out time taken by each file when using `loadfile`
|
We are using `--dist loadfile` to distribute tests, and it is working well. We suspect some of our files are taking a long time, but have no good way to verify. We are using `--verbose`, so we get setup/run/teardown times, but don't have a 100% sure way of counting time elapsed by each file.
It would be nice if, at the end of the run, xdist would output the name of each file, and the total time taken by the file.
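As a workaround (not an xdist feature), a minimal `conftest.py` sketch that sums report durations per file; under `--dist loadfile` each worker prints its own totals:
```python
from collections import defaultdict

_file_durations = defaultdict(float)

def pytest_runtest_logreport(report):
    # report.nodeid looks like "path/to/test_file.py::test_name"
    _file_durations[report.nodeid.split("::")[0]] += report.duration

def pytest_sessionfinish(session, exitstatus):
    for path, seconds in sorted(_file_durations.items(), key=lambda item: -item[1]):
        print(f"{seconds:8.2f}s  {path}")
```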
|
open
|
2023-12-05T21:54:16Z
|
2023-12-06T17:18:57Z
|
https://github.com/pytest-dev/pytest-xdist/issues/982
|
[
"question"
] |
jkugler
| 2
|
Asabeneh/30-Days-Of-Python
|
numpy
| 254
|
Euclidian distance
|
The link in the README file to [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance#:~:text=In%20mathematics%2C%20the%20Euclidean%20distance,being%20called%20the%20Pythagorean%20distance.) should open in another tab
|
open
|
2022-07-20T16:05:21Z
|
2022-07-20T16:06:34Z
|
https://github.com/Asabeneh/30-Days-Of-Python/issues/254
|
[] |
codewithmide
| 0
|
plotly/dash-cytoscape
|
dash
| 25
|
Support for layout options that accept JS func
|
## Description
Certain layout contains options that accept functions. For example, the grid layout:
```JavaScript
let options = {
name: 'grid',
fit: true, // whether to fit the viewport to the graph
padding: 30, // padding used on fit
boundingBox: undefined, // constrain layout bounds; { x1, y1, x2, y2 } or { x1, y1, w, h }
avoidOverlap: true, // prevents node overlap, may overflow boundingBox if not enough space
avoidOverlapPadding: 10, // extra spacing around nodes when avoidOverlap: true
nodeDimensionsIncludeLabels: false, // Excludes the label when calculating node bounding boxes for the layout algorithm
spacingFactor: undefined, // Applies a multiplicative factor (>0) to expand or compress the overall area that the nodes take up
condense: false, // uses all available space on false, uses minimal space on true
rows: undefined, // force num of rows in the grid
cols: undefined, // force num of columns in the grid
position: function( node ){}, // returns { row, col } for element
sort: undefined, // a sorting function to order the nodes; e.g. function(a, b){ return a.data('weight') - b.data('weight') }
animate: false, // whether to transition the node positions
animationDuration: 500, // duration of animation in ms if enabled
animationEasing: undefined, // easing of animation if enabled
animateFilter: function ( node, i ){ return true; }, // a function that determines whether the node should be animated. All nodes animated by default on animate enabled. Non-animated nodes are positioned immediately when the layout starts
ready: undefined, // callback on layoutready
stop: undefined, // callback on layoutstop
transform: function (node, position ){ return position; } // transform a given node position. Useful for changing flow direction in discrete layouts
};
```
Obviously, `position` is not accepted in Python, since it accepts input that is a function mapping from a node object to a `{row,col}` object.
## Possible Solution
Optionally accept dictionary input instead of a function on the Dash/Python side. On the react side, the function will simply look up the object's names (i.e. dictionary keys) and return the corresponding value.
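A hypothetical sketch of what the dictionary-based option could look like on the Python side (this illustrates the proposal, not an existing API):
```python
# `position` maps element ids to {row, col}; the React side would look up each
# node's id and return the matching value instead of calling a JS function.
layout = {
    "name": "grid",
    "rows": 2,
    "position": {
        "node-a": {"row": 0, "col": 0},
        "node-b": {"row": 0, "col": 1},
        "node-c": {"row": 1, "col": 0},
    },
}
```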
|
closed
|
2019-01-13T20:54:58Z
|
2024-02-19T14:10:50Z
|
https://github.com/plotly/dash-cytoscape/issues/25
|
[
"suggestion"
] |
xhluca
| 7
|
deepset-ai/haystack
|
machine-learning
| 8,240
|
Update tests in core integration: langfuse
|
closed
|
2024-08-15T14:26:11Z
|
2024-08-16T14:21:40Z
|
https://github.com/deepset-ai/haystack/issues/8240
|
[] |
shadeMe
| 0
|
|
plotly/dash-table
|
dash
| 315
|
Ability to clear the contents of a column
|
- The UI for this feature may overload the existing `x` button on the column or it may introduce a new icon or an additional context menu.
- Deleting the contents of a column will delete the contents across all pages (if pagination is enabled)
- If a filtering expression is applied on a column of which the contents are deleted, the filter will be cleared as well.
|
closed
|
2018-12-19T22:20:21Z
|
2019-08-08T20:28:20Z
|
https://github.com/plotly/dash-table/issues/315
|
[
"dash-type-enhancement",
"dash-meta-sponsored",
"size: 3"
] |
chriddyp
| 3
|
jina-ai/serve
|
machine-learning
| 5,856
|
Jina's Client is extremely buggy and slow
|
I've encountered a problem in my pipeline when I was trying to have some executors be executed at specific moments, like database writes and reads.. etc, but this meant that these executors can't go inside the Flow DAG pipeline, because I don't want them to be executed sequentially, but only when the stars align.. I need to be able to go back and forth in execution, etc. Basically I need a different type of network topology, something with vast freedom and connectivity, like mesh topology of microservices, at least the backend should work like this, frontend can stay the classic Flow.
There seems to be only 2 solutions for this. Either use Flow conditioning or multiple deployments (or multiple flows).
I quickly figured out flow conditioning is probably pretty useless for my use case and multiple deployments require multiple ports, therefore multiple clients as well. That wouldn't be such a problem with regular 3rd party HTTP requests.. if they actually worked. Well they didn't. After 100s of tests I realized where is the problem. Deployments need the flow to actually have exposed entrypoints, probably because they have no gateway. And something like this, won't work:
`dep = Deployment(protocol=['http'], port=[8081], uses="quick_test/config.yml", entrypoint="/test")`
Anyway, this is where I actually started to lose my mind. Not only that now I had to rethink the whole architecture and make the most logic client-side outside of flows and executors, but I also started to discover that Jina's native Client barely works.
I created a simple deployment like above with executor that only outputs simple text. Then I tested it with this code:
```
s = time.perf_counter()
c = Client(host="localhost", port=8081, protocol='http')
da = c.post('/test', DocumentArray.empty(1))
print(da.texts)
e = time.perf_counter()
print(e-s)
```
And what do I see? Correct result, but in 5 seconds, every single time. After getting furious for a moment I tried to change "localhost" to "127.0.0.1" and it did work actually lowering it to 250 ms. Probably just a bug, because Deployments are kinda a new thing if I understand correctly, anyway.. I needed to lower that latency..
---
So I did try to actually measure what's going on with this code:
```
c = Client(host='http://127.0.0.1:8081')
c.profiling()
```
.. which btw also didn't work every time; sometimes I had to switch to this instead (for some magical reason):
```
c = Client(host='127.0.0.1:8081', protocol='http')
c.profiling()
```
..and sometimes even that didn't work, so I had to change the whole constellation so the stars can align, because it mostly works with gRPC only... anyway..
---
20% of the time it gave me this error:
```
...
jina.excepts.BadClient: no such endpoint http://127.0.0.1:8081/post
```
40% of the time it gave me this error:
```
Traceback (most recent call last):
File "e:\Muru\WORK\MAIN\SYSTEM\client_latency_test.py", line 20, in <module>
c.profiling()
File "E:\Muru\venv\Lib\site-packages\jina\clients\mixin.py", line 190, in profiling
return _render_response_table(r[0], st, ed, show_table=show_table)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Muru\venv\Lib\site-packages\jina\clients\mixin.py", line 159, in _render_response_table
'Gateway-executors network', server_network, server_network / gateway_time
~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
ZeroDivisionError: division by zero
```
30% of the time it gave me this error:
```
Traceback (most recent call last):
File "E:\Muru\WORK\main\SYSTEM\client_latency_test.py", line 19, in <module>
c.profiling()
File "E:\Muru\venv\Lib\site-packages\jina\clients\mixin.py", line 190, in profiling
return _render_response_table(r[0], st, ed, show_table=show_table)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Muru\venv\Lib\site-packages\jina\clients\mixin.py", line 132, in _render_response_table
route[0].end_time.ToMilliseconds() - route[0].start_time.ToMilliseconds()
~~~~~^^^
IndexError: list index out of range
```
And finally 10% of the time it gave me the actual answer like this:
```
Roundtrip 79ms 100%
├── Client-server network 78ms 99%
└── Server 1ms 1%
├── Gateway-executors network 1683664004186ms 168366400418600%
└── executor -1683664004185ms -168366400418500%
Time measured via time.perf_counter(): 104 ms.
```
This last test was when I tried to make it faster by using gRPC. I didn't manage it to go under 60 ms. And what I don't understand is how can it be this slow? If I understand this correctly, executors are basically communicating instantly in 1 ms using gRPC, but when I send the request from client, it takes so much time, is that normal?
I was confused, so I tested my own python gRPC server/client with normal protobufs and it takes just 2 ms to send and receive message. Why is Client.post so slow then? Even regular "3rd party" HTTP post request was way faster than this native one (that 1% of the time it actually worked) - it took about 30 ms.
---
**Environment**
- jina 3.15.2
- docarray 0.21.0
- jcloud 0.2.5
- jina-hubble-sdk 0.35.0
- jina-proto 0.1.17
- protobuf 4.22.1
- proto-backend upb
- grpcio 1.53.0
- pyyaml 6.0
- python 3.11.2
- platform Windows
- platform-release 10
- platform-version 10.0.25324
- architecture AMD64
- processor AMD64 Family 23 Model 1 Stepping 1, AuthenticAMD
- uid 91762274940
- session-id 651d801a-eeb8-11ed-9b70-00155d75327c
- uptime 2023-05-10T00:25:24.972039
- ci-vendor (unset)
- internal False
* JINA_DEFAULT_HOST (unset)
* JINA_DEFAULT_TIMEOUT_CTRL (unset)
* JINA_DEPLOYMENT_NAME (unset)
* JINA_DISABLE_UVLOOP (unset)
* JINA_EARLY_STOP (unset)
* JINA_FULL_CLI (unset)
* JINA_GATEWAY_IMAGE (unset)
* JINA_GRPC_RECV_BYTES (unset)
* JINA_GRPC_SEND_BYTES (unset)
* JINA_HUB_NO_IMAGE_REBUILD (unset)
* JINA_LOG_CONFIG (unset)
* JINA_LOG_LEVEL (unset)
* JINA_LOG_NO_COLOR (unset)
* JINA_MP_START_METHOD (unset)
* JINA_OPTOUT_TELEMETRY (unset)
* JINA_RANDOM_PORT_MAX (unset)
* JINA_RANDOM_PORT_MIN (unset)
* JINA_LOCKS_ROOT (unset)
* JINA_K8S_ACCESS_MODES (unset)
* JINA_K8S_STORAGE_CLASS_NAME (unset)
* JINA_K8S_STORAGE_CAPACITY (unset)
* JINA_STREAMER_ARGS (unset)
|
closed
|
2023-05-09T23:07:35Z
|
2023-05-10T12:40:48Z
|
https://github.com/jina-ai/serve/issues/5856
|
[] |
Murugurugan
| 23
|
apache/airflow
|
machine-learning
| 47,499
|
TriggerDagRunOperator is failing for reason 'Direct database access via the ORM is not allowed in Airflow 3.0'
|
### Apache Airflow version
3.0.0b2
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
TriggerDagRunOperator is trying to connect to the DB when 'wait_for_completion' is true.
[2025-03-07, 12:54:58] - Task failed with exception logger="task" error_detail=[{"exc_type":"RuntimeError","exc_value":"Direct database access via the ORM is not allowed in Airflow 3.0","syntax_error":null,"is_cause":false,"frames":[{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":609,"name":"run"},{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":730,"name":"_execute_task"},{"filename":"/opt/airflow/airflow/models/baseoperator.py","lineno":168,"name":"wrapper"},{"filename":"/opt/airflow/providers/standard/src/airflow/providers/standard/operators/trigger_dagrun.py","lineno":207,"name":"execute"},{"filename":"/opt/airflow/airflow/utils/session.py","lineno":100,"name":"wrapper"},{"filename":"/usr/local/lib/python3.9/contextlib.py","lineno":119,"name":"__enter__"},{"filename":"/opt/airflow/airflow/utils/session.py","lineno":40,"name":"create_session"},{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/supervisor.py","lineno":207,"name":"__init__"}]}]
[2025-03-07, 12:54:58] - Top level error logger="task" error_detail=[{"exc_type":"RuntimeError","exc_value":"Direct database access via the ORM is not allowed in Airflow 3.0","syntax_error":null,"is_cause":false,"frames":[{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":817,"name":"main"},{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":786,"name":"finalize"},{"filename":"/opt/airflow/providers/standard/src/airflow/providers/standard/operators/trigger_dagrun.py","lineno":80,"name":"get_link"},{"filename":"/opt/airflow/airflow/utils/session.py","lineno":100,"name":"wrapper"},{"filename":"/usr/local/lib/python3.9/contextlib.py","lineno":119,"name":"__enter__"},{"filename":"/opt/airflow/airflow/utils/session.py","lineno":40,"name":"create_session"},{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/supervisor.py","lineno":207,"name":"__init__"}]}]
### What you think should happen instead?
A DAG utilising TriggerDagRunOperator should pass.
### How to reproduce
Run the below DAG in AF3 beta2:
Controller DAG:
```python
from airflow import DAG
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from pendulum import today

dag = DAG(
    dag_id="trigger_controller_dag",
    default_args={"owner": "airflow", "start_date": today('UTC').add(days=-2)},
    schedule=None,
    tags=["core"],
)

trigger = TriggerDagRunOperator(
    task_id="test_trigger_dagrun",
    trigger_dag_id="trigger_target_dag",
    reset_dag_run=True,
    wait_for_completion=True,
    conf={"message": "Hello World"},
    dag=dag,
)
```
Target DAG:
```python
from airflow.models import DAG
from airflow.providers.standard.operators.bash import BashOperator
from airflow.providers.standard.operators.python import PythonOperator
from pendulum import today

dag = DAG(
    dag_id="trigger_target_dag",
    default_args={"start_date": today('UTC').add(days=-2), "owner": "Airflow"},
    tags=["core"],
    schedule=None,  # This must be None so it's triggered by the controller
    is_paused_upon_creation=False,  # This must be set so other workers can pick this dag up. Maybe it's a bug, idk
)

def run_this_func(**context):
    print(
        f"Remotely received value of {context['dag_run'].conf['message']} for key=message "
    )

run_this = PythonOperator(
    task_id="run_this",
    python_callable=run_this_func,
    dag=dag,
)

# You can also access the DagRun object in templates
bash_task = BashOperator(
    task_id="bash_task",
    bash_command='echo "Here is the message: $message"',
    env={"message": '{{ dag_run.conf["message"] if dag_run else "" }}'},
    dag=dag,
)
```
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
closed
|
2025-03-07T14:37:56Z
|
2025-03-19T07:42:20Z
|
https://github.com/apache/airflow/issues/47499
|
[
"kind:bug",
"priority:high",
"area:core",
"affected_version:3.0.0beta"
] |
atul-astronomer
| 4
|
marshmallow-code/flask-marshmallow
|
sqlalchemy
| 36
|
@property for class Meta within schema?
|
Because my property is derived... is there a way for me to include this within the schema? Thanks!
It keeps giving me AttributeError: "full_name" is not a valid field for Model...
``` python
class Model(db.Model):
    first_name = String()
    last_name = String()

    @property
    def full_name(self):
        return self.first_name + ' ' + self.last_name


class ModelSchema(ma.Schema):
    class Meta:
        fields = (
            'first_name',
            'last_name',
            'full_name',
        )
```
|
closed
|
2016-04-07T23:12:34Z
|
2016-04-07T23:50:42Z
|
https://github.com/marshmallow-code/flask-marshmallow/issues/36
|
[] |
rlam3
| 1
|
huggingface/pytorch-image-models
|
pytorch
| 1,501
|
[FEATURE] usage help
|
Hello,
Excuse me, I'm a beginner in deep learning. I want to use your code, but I don't know how to call it. I was wondering where the help documentation for using this code is.
Thank you very much.
|
closed
|
2022-10-17T09:39:12Z
|
2022-10-27T22:22:29Z
|
https://github.com/huggingface/pytorch-image-models/issues/1501
|
[
"enhancement"
] |
cvliucv
| 1
|
sammchardy/python-binance
|
api
| 1,047
|
proxy for AsyncClient
|
Error using proxy in AsyncClient
I use this code to create client:
```
from binance.client import AsyncClient
import asyncio

proxies = {
    'http': 'http://username:password@myip:port',
}

async def main():
    client = AsyncClient(api_key, secret_key, {'proxies': proxies})
    print(await client.get_symbol_ticker(symbol='BTCUSDT', requests_params={'proxies': proxies}))

loop = asyncio.get_event_loop()
server = loop.run_until_complete(main())
loop.run_forever()
loop.close()
```
**Expected behavior**
I want to connect to the Binance API, but I get an error.
`TypeError: _request() got an unexpected keyword argument 'proxies'`
- Python version: 3.8
- Virtual Env: virtualenv
- OS: windows
- python-binance version: 1.0.12
My error log is:
```
Traceback (most recent call last):
File "C:/Projects/Targetbot/a.py", line 13, in <module>
server = loop.run_until_complete(main())
File "C:\Users\Amin\AppData\Local\Programs\Python\Python38\lib\asyncio\base_events.py", line 616, in run_until_complete
return future.result()
File "C:/Projects/Targetbot/a.py", line 10, in main
print(await client.get_symbol_ticker(symbol='BTCUSDT', requests_params={'proxies': proxies}))
File "C:\Projects\pythonProject\venv\lib\site-packages\binance\client.py", line 6836, in get_symbol_ticker
return await self._get('ticker/price', data=params, version=self.PRIVATE_API_VERSION)
File "C:\Projects\pythonProject\venv\lib\site-packages\binance\client.py", line 6551, in _get
return await self._request_api('get', path, signed, version, **kwargs)
File "C:\Projects\pythonProject\venv\lib\site-packages\binance\client.py", line 6514, in _request_api
return await self._request(method, uri, signed, **kwargs)
File "C:\Projects\pythonProject\venv\lib\site-packages\binance\client.py", line 6495, in _request
async with getattr(self.session, method)(uri, **kwargs) as response:
File "C:\Projects\pythonProject\venv\lib\site-packages\aiohttp\client.py", line 896, in get
self._request(hdrs.METH_GET, url, allow_redirects=allow_redirects, **kwargs)
TypeError: _request() got an unexpected keyword argument 'proxies'
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x000001A5216A9BB0>
```
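For comparison, a minimal aiohttp-only sketch (an assumption about a workaround, not a documented python-binance option): aiohttp takes a per-request `proxy` URL rather than a requests-style `proxies` dict.
```python
import asyncio

import aiohttp

async def main():
    async with aiohttp.ClientSession() as session:
        async with session.get(
            "https://api.binance.com/api/v3/ticker/price?symbol=BTCUSDT",
            proxy="http://username:password@myip:port",  # placeholder credentials from above
        ) as resp:
            print(await resp.json())

asyncio.run(main())
```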
|
open
|
2021-09-30T17:25:46Z
|
2023-07-25T11:36:55Z
|
https://github.com/sammchardy/python-binance/issues/1047
|
[] |
hyperdesignir
| 5
|
ymcui/Chinese-LLaMA-Alpaca-2
|
nlp
| 186
|
run_sft.sh errors out when using --flash_attn
|
### The following items must be checked before submitting
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题) and have searched the issues for this problem; I did not find a similar problem or solution.
- [X] Third-party plugin problems: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is also recommended to look for solutions in the corresponding project.
### Issue type
Model quantization and deployment
### Base model
Alpaca-2-13B
### Operating system
Linux
### Detailed description of the problem
Instruction fine-tuning with the --flash_attn flag fails
lr=1e-4
lora_rank=64
lora_alpha=128
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05
pretrained_model=/workspace/ypg/Chinese-LLaMA-Alpaca-2/llama2_model/chinese-alpaca-2-13b
chinese_tokenizer_path=/workspace/ypg/Chinese-LLaMA-Alpaca-2/llama2_model/chinese-alpaca-2-13b
dataset_dir=/workspace/ypg/Chinese-LLaMA-Alpaca-2/dataset/cailian_data/20230824_1000_data
per_device_train_batch_size=1
per_device_eval_batch_size=1
gradient_accumulation_steps=8
output_dir=/workspace/ypg/Chinese-LLaMA-Alpaca-2/output/20230824_1000_data/1_epoch
# peft_model=/workspace/ypg/Chinese-LLaMA-Alpaca-2/llama2_model/chinese-alpaca-2-13b
validation_file=/workspace/ypg/Chinese-LLaMA-Alpaca-2/dataset/cailian_data/20230824_1000_data/output.json
num_train_epochs=1
max_seq_length=5000
deepspeed_config_file=ds_zero2_no_offload.json
torchrun --nnodes 1 --nproc_per_node 2 run_clm_sft_with_peft.py \
--deepspeed ${deepspeed_config_file} \
--model_name_or_path ${pretrained_model} \
--tokenizer_name_or_path ${chinese_tokenizer_path} \
--dataset_dir ${dataset_dir} \
--validation_split_percentage 0.001 \
--per_device_train_batch_size ${per_device_train_batch_size} \
--per_device_eval_batch_size ${per_device_eval_batch_size} \
--do_train \
--do_eval \
--seed $RANDOM \
--fp16 \
--num_train_epochs ${num_train_epochs} \
--lr_scheduler_type cosine \
--learning_rate ${lr} \
--warmup_ratio 0.03 \
--weight_decay 0 \
--logging_strategy steps \
--logging_steps 10 \
--save_strategy steps \
--save_total_limit 3 \
--evaluation_strategy steps \
--eval_steps 1000 \
--save_steps 1000 \
--gradient_accumulation_steps ${gradient_accumulation_steps} \
--preprocessing_num_workers 8 \
--max_seq_length ${max_seq_length} \
--output_dir ${output_dir} \
--overwrite_output_dir \
--ddp_timeout 30000 \
--logging_first_step True \
--lora_rank ${lora_rank} \
--lora_alpha ${lora_alpha} \
--trainable ${lora_trainable} \
--modules_to_save ${modules_to_save} \
--lora_dropout ${lora_dropout} \
--torch_dtype float16 \
--validation_file ${validation_file} \
--gradient_checkpointing \
--ddp_find_unused_parameters False \
--flash_attn
File "/opt/conda/envs/llama_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/ypg/Chinese-LLaMA-Alpaca-2/scripts/training/flash_attn_patch.py", line 59, in forward
assert not use_cache, "use_cache is not supported"
AssertionError: use_cache is not supported
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 2097697) of binary: /opt/conda/envs/llama_env/bin/python
### Dependencies (must be provided for code-related issues)
peft 0.3.0.dev0
sentence-transformers 2.2.2
torch 2.0.1
torchvision 0.15.2
transformers 4.31.0
### Run log or screenshots
File "/workspace/ypg/Chinese-LLaMA-Alpaca-2/scripts/training/flash_attn_patch.py", line 59, in forward
assert not use_cache, "use_cache is not supported"
AssertionError: use_cache is not supported
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 2097697) of binary: /opt/conda/envs/llama_env/bin/python

|
closed
|
2023-08-24T08:36:43Z
|
2023-09-03T23:48:29Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/186
|
[
"stale"
] |
zzx528
| 2
|
Asabeneh/30-Days-Of-Python
|
pandas
| 12
|
Reference code for exercises
|
Thanks for the open source code, is there any reference code for the exercise?
|
closed
|
2019-12-20T10:59:13Z
|
2019-12-20T11:28:18Z
|
https://github.com/Asabeneh/30-Days-Of-Python/issues/12
|
[] |
Donaghys
| 1
|
keras-team/keras
|
data-science
| 20,564
|
Unable to Assign Different Weights to Classes for Functional Model
|
I'm struggling to assign weights to different classes for my functional Keras model. I've looked for solutions online and everything that I have tried has not worked. I have tried the following:
1) Assigning `class_weight` as an argument for `model.fit`. I convert my datasets to numpy iterators first, as `tf.data.Dataset` does not work with `class_weight`
```python
history = model.fit(
train_dataset.as_numpy_iterator(),
validation_data=val_dataset.as_numpy_iterator(),
epochs=epochs,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
class_weight={0: 0.1, 1: 5.0}
)
```
2) Mapping sample weights directly to my `tf.data.Dataset` object.
```python
def add_sample_weights(features, labels):
class_weights = tf.constant([0.1, 5.0])
sample_weights = tf.gather(class_weights, labels)
return features, labels, sample_weights
train_dataset = train_dataset.shuffle(buffer_size=10).batch(4).repeat().map(add_sample_weights)
val_dataset = val_dataset.batch(4).repeat().map(add_sample_weights)
# ...
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'], weighted_metrics=['accuracy'])
```
For context, my dataset contains an imbalanced split of 70 / 30 for classes 0 and 1 respectively. I've been assigning class 0 a weight of 0.1 and class 1 a weight of 5. To my understanding, this should mean that the model "prioritizes" class 1 over class 0. However, my model still classifies everything as 0. How can I resolve this, and is there anything that I'm missing?
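For reference, this is how I would derive weights from the 70 / 30 split with the standard "balanced" heuristic (plain arithmetic, counts are only illustrative); it gives milder weights than the 0.1 / 5.0 I've been using:
```python
# Hypothetical counts matching the 70 / 30 split described above.
n_class_0, n_class_1 = 700, 300
n_total = n_class_0 + n_class_1

# Standard "balanced" heuristic: total / (n_classes * count_per_class),
# so the minority class gets the larger weight.
class_weight = {
    0: n_total / (2 * n_class_0),  # ~0.71
    1: n_total / (2 * n_class_1),  # ~1.67
}
print(class_weight)
```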
Let me know if you need any additional information to help me with this issue.
|
closed
|
2024-11-29T02:25:58Z
|
2024-11-29T06:49:47Z
|
https://github.com/keras-team/keras/issues/20564
|
[] |
Soontosh
| 1
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 998
|
How to use even more CPU's
|
Hi,
I have been getting great results for inference but now wish to train a new model.
I have started the training with the following command for 800x600 images:
python3 train.py --dataroot datasets/style_constable --name constable_cyclegan --model cycle_gan --preprocess scale_width_and_crop --load_size 800 --crop_size 800 --gpu_ids -1
I have CPUs only, but quite a lot of them (120).
The inference and the training example above use about 22 CPUs on average (varying between 8 and 28), as monitored by the Linux top command.
Do you know whether it is possible to increase the parallelism even further? I'm assuming there is no parameter for this, but was wondering whether it could be forced using some batching or a larger-sized work unit?
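For reference, the generic CPU-threading knobs I'm aware of on the PyTorch side are sketched below (whether they help depends on how well the individual ops parallelize; I believe data-loading worker count is a separate option in this repo, but I'd have to check its exact flag name):
```python
import os

# These must be set before torch (and its OpenMP/MKL runtime) is first imported.
os.environ.setdefault("OMP_NUM_THREADS", "120")
os.environ.setdefault("MKL_NUM_THREADS", "120")

import torch

# Intra-op parallelism: how many threads a single op may use.
torch.set_num_threads(120)
# Inter-op parallelism: must be set before any parallel work has started.
torch.set_num_interop_threads(8)

print(torch.get_num_threads(), torch.get_num_interop_threads())
```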
Thanks again.
|
open
|
2020-04-18T06:19:58Z
|
2020-04-22T16:48:18Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/998
|
[] |
Adrian-1234
| 4
|
JaidedAI/EasyOCR
|
deep-learning
| 342
|
How can I train model?
|
Korean text recognition is not good, so I want to add more data to EasyOCR-master\easyocr\dict\ko.txt
Please let me know if there is another way
|
closed
|
2021-01-04T06:56:54Z
|
2021-07-02T08:53:00Z
|
https://github.com/JaidedAI/EasyOCR/issues/342
|
[] |
kimlia545
| 1
|
huggingface/transformers
|
tensorflow
| 35,983
|
Add support for context parallelism
|
### Feature request
Long context models like [Qwen/Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M) have support for up to 1M tokens. However, fine-tuning such models in `transformers` leads to OOM errors and special methods like Ring Attention are needed. A similar issue arises during inference, where generating on a 1M prefill gives OOM.
It would be very exciting to have support for _context parallelism_ where in each layer we split the KQV computation across GPUs.
As far as an API goes, having something like `attn_implementation="ring"` in `from_pretrained()` would likely be the simplest way to support this feature.
Links to papers and code:
* Ring Attention: https://arxiv.org/abs/2310.01889
* Reference code: https://github.com/zhuzilin/ring-flash-attention/blob/main/ring_flash_attn/adapters/hf_adapter.py
* Picotron code: https://github.com/huggingface/picotron/blob/main/picotron/context_parallel/context_parallel.py
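To make the request concrete, here is a tiny single-process sketch of the blockwise/online-softmax computation that Ring Attention builds on (each KV block stands in for the shard a different rank would hold; the real implementation exchanges the blocks ring-wise between GPUs, which is not shown here):
```python
import numpy as np

def blockwise_attention(q, kv_blocks):
    """Attention over KV shards using an online softmax, one block at a time."""
    n, d = q.shape
    acc = np.zeros((n, d))          # unnormalized output accumulator
    row_max = np.full(n, -np.inf)   # running max of scores per query row
    row_sum = np.zeros(n)           # running softmax denominator
    for k, v in kv_blocks:
        scores = q @ k.T / np.sqrt(d)
        new_max = np.maximum(row_max, scores.max(axis=1))
        scale = np.exp(row_max - new_max)   # rescale previous accumulator to the new max
        acc *= scale[:, None]
        row_sum *= scale
        p = np.exp(scores - new_max[:, None])
        acc += p @ v
        row_sum += p.sum(axis=1)
        row_max = new_max
    return acc / row_sum[:, None]

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(16, 8))
v = rng.normal(size=(16, 8))

# Split K/V into 4 "ranks" worth of blocks and compare against full attention.
blocks = list(zip(np.split(k, 4), np.split(v, 4)))
out = blockwise_attention(q, blocks)

s = q @ k.T / np.sqrt(q.shape[1])
p_full = np.exp(s - s.max(axis=1, keepdims=True))
ref = (p_full / p_full.sum(axis=1, keepdims=True)) @ v
assert np.allclose(out, ref)
```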
### Motivation
The main motivation is two-fold: to support fine-tuning large context models and to enable online RL methods like GRPO to scale better in TRL (where we generate potentially large CoTs during training)
### Your contribution
Happy to review / test the feature on TRL side
|
open
|
2025-01-31T11:00:02Z
|
2025-03-14T17:15:34Z
|
https://github.com/huggingface/transformers/issues/35983
|
[
"Feature request"
] |
lewtun
| 5
|
zwczou/weixin-python
|
flask
| 40
|
A question about auto-replying with a news (image and text) message after receiving a text message
|
The reply content turned into
```xml
<xml><ToUserName><![CDATA[ou0MI53DizZvCYWE6N3lmOkN1jKc
```
Yes, it turned into a plain-text message, and the content was truncated as well.
My code is as follows:
```python
# WeChat Official Account platform messages
@msg.text()
def text_messages(**kwargs):
"""
Listen for all text messages
"""
print('\n**************\nfrom weixin:\n', kwargs,'**************\n\n')
articles = [{
'title':'测试消息',
'description':'描述内容',
'picurl':'http://design.xxx.com/img/weixin_pmg.png',
'url':'http://xxx.com/pmg_unit/suite/1/unit/0/pick/895a'
}]
ret= msg.reply(kwargs['sender'],type="news", sender=kwargs['receiver'], articles=articles)
print(ret)
return ret
```
The printed ret is the complete XML string
```xml
<xml><ToUserName><![CDATA[ou0MI53DizZvCYWE6N3lmOkN1jKc]]></ToUserName><FromUserName><![CDATA[gh_411e765ff6e3]]></FromUserName><CreateTime>1549984263</CreateTime><MsgType><![CDATA[news]]></MsgType><ArticleCount>1</ArticleCount><Articles><item><Title><![CDATA[测试消息]]></Title><Description><![CDATA[描述内容]]></Description><PicUrl><![CDATA[http://design.zhibiyanjiusuo.com/img/weixin_pmg.png]]></PicUrl><Url><![CDATA[http://zhibiyanjiusuo.com/pmg_unit/suite/1/unit/0/pick/895a]]></Url></item></Articles></xml>
```
|
closed
|
2019-02-12T15:35:07Z
|
2019-02-17T03:45:48Z
|
https://github.com/zwczou/weixin-python/issues/40
|
[] |
pcloth
| 2
|
long2ice/fastapi-cache
|
fastapi
| 89
|
Exclude db session injection in key generation
|
The default key generation doesn't produce a repeatable key when using the Depends method of injecting a new db SQLAlchemy Session into the function. I believe this is a fairly common use case.
```
async def get_thing(
db: Session = Depends(get_db),
start_date: date | None = None,
end_date: date | None = None,
):
```
Since the get_db function returns a unique session, the automatically generated key always differs, and the cache is not utilized. Could the cache decorator be configured to ignore dependency-injection arguments? I can make it work with a custom key builder, but I'd rather not have to include that in every route if possible, nor have to make sure I call the variable exactly the same thing everywhere.
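For what it's worth, the custom key builder I've been using looks roughly like this (a sketch: I'm hand-waving the exact hook signature, and simply dropping the Depends-injected SQLAlchemy Session before hashing; I believe it is wired up per route via something like `@cache(expire=60, key_builder=stable_key_builder)`):
```python
import hashlib
from sqlalchemy.orm import Session

def stable_key_builder(func, namespace: str = "", *, request=None, response=None, args=(), kwargs=None):
    kwargs = kwargs or {}
    # Drop values that change on every request, e.g. the Depends-injected DB session.
    cacheable = {k: v for k, v in kwargs.items() if not isinstance(v, Session)}
    raw = f"{namespace}:{func.__module__}:{func.__name__}:{args}:{sorted(cacheable.items())}"
    return hashlib.md5(raw.encode()).hexdigest()
```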
|
closed
|
2022-09-28T19:49:17Z
|
2024-02-21T01:07:55Z
|
https://github.com/long2ice/fastapi-cache/issues/89
|
[] |
reubengann
| 2
|
voila-dashboards/voila
|
jupyter
| 518
|
Voila Won't Start - cannot import name ConverterMapping
|
System is a fresh Debian 10.
Voila was installed successfully with pip
```
phill@debian10:~$ pip install --user voila
Collecting voila
Downloading https://files.pythonhosted.org/packages/82/50/191abf9f09817c2725c9f6542c812aa80ea090bf5fa3ad82fa7b918e55d7/voila-0.1.18-py2.py3-none-any.whl (5.9MB)
100% |████████████████████████████████| 5.9MB 197kB/s
Collecting jupyter-server<0.2.0,>=0.1.0 (from voila)
Downloading https://files.pythonhosted.org/packages/0f/9e/4b5bddfbab2f56b561b672d04c300d6c2d4b3f2c7c99e7935892104f2efa/jupyter_server-0.1.1-py2.py3-none-any.whl (183kB)
100% |████████████████████████████████| 184kB 1.8MB/s
Requirement already satisfied: nbconvert<6,>=5.6.0 in ./.local/lib/python2.7/site-packages (from voila) (5.6.1)
Requirement already satisfied: pygments<3,>=2.4.1 in ./.local/lib/python2.7/site-packages (from voila) (2.5.2)
Collecting jupyterlab-pygments<0.2,>=0.1.0 (from voila)
Using cached https://files.pythonhosted.org/packages/d8/4d/579c4613dbc656c07fa424663818f8bddd77ecafe6956497f66cab82b130/jupyterlab_pygments-0.1.0-py2.py3-none-any.whl
Requirement already satisfied: ipykernel in ./.local/lib/python2.7/site-packages (from jupyter-server<0.2.0,>=0.1.0->voila) (4.10.1)
Requirement already satisfied: traitlets>=4.2.1 in ./.local/lib/python2.7/site-packages (from jupyter-server<0.2.0,>=0.1.0->voila) (4.3.3)
Requirement already satisfied: jupyter-core>=4.4.0 in ./.local/lib/python2.7/site-packages (from jupyter-server<0.2.0,>=0.1.0->voila) (4.6.1)
Requirement already satisfied: jupyter-client>=5.3.1 in ./.local/lib/python2.7/site-packages (from jupyter-server<0.2.0,>=0.1.0->voila) (5.3.4)
Requirement already satisfied: tornado>=4 in ./.local/lib/python2.7/site-packages (from jupyter-server<0.2.0,>=0.1.0->voila) (5.1.1)
Requirement already satisfied: Send2Trash in ./.local/lib/python2.7/site-packages (from jupyter-server<0.2.0,>=0.1.0->voila) (1.5.0)
Requirement already satisfied: jinja2 in ./.local/lib/python2.7/site-packages (from jupyter-server<0.2.0,>=0.1.0->voila) (2.10.3)
Requirement already satisfied: prometheus-client in ./.local/lib/python2.7/site-packages (from jupyter-server<0.2.0,>=0.1.0->voila) (0.7.1)
Requirement already satisfied: pyzmq>=17 in ./.local/lib/python2.7/site-packages (from jupyter-server<0.2.0,>=0.1.0->voila) (18.1.1)
Requirement already satisfied: terminado>=0.8.1 in ./.local/lib/python2.7/site-packages (from jupyter-server<0.2.0,>=0.1.0->voila) (0.8.3)
Requirement already satisfied: ipython-genutils in ./.local/lib/python2.7/site-packages (from jupyter-server<0.2.0,>=0.1.0->voila) (0.2.0)
Requirement already satisfied: nbformat in ./.local/lib/python2.7/site-packages (from jupyter-server<0.2.0,>=0.1.0->voila) (5.0.3)
Requirement already satisfied: ipaddress; python_version == "2.7" in /usr/lib/python2.7/dist-packages (from jupyter-server<0.2.0,>=0.1.0->voila) (1.0.17)
Requirement already satisfied: bleach in ./.local/lib/python2.7/site-packages (from nbconvert<6,>=5.6.0->voila) (3.1.0)
Requirement already satisfied: mistune<2,>=0.8.1 in ./.local/lib/python2.7/site-packages (from nbconvert<6,>=5.6.0->voila) (0.8.4)
Requirement already satisfied: testpath in ./.local/lib/python2.7/site-packages (from nbconvert<6,>=5.6.0->voila) (0.4.4)
Requirement already satisfied: defusedxml in ./.local/lib/python2.7/site-packages (from nbconvert<6,>=5.6.0->voila) (0.6.0)
Requirement already satisfied: pandocfilters>=1.4.1 in ./.local/lib/python2.7/site-packages (from nbconvert<6,>=5.6.0->voila) (1.4.2)
Requirement already satisfied: entrypoints>=0.2.2 in /usr/lib/python2.7/dist-packages (from nbconvert<6,>=5.6.0->voila) (0.3)
Requirement already satisfied: ipython>=4.0.0 in ./.local/lib/python2.7/site-packages (from ipykernel->jupyter-server<0.2.0,>=0.1.0->voila) (5.8.0)
Requirement already satisfied: decorator in ./.local/lib/python2.7/site-packages (from traitlets>=4.2.1->jupyter-server<0.2.0,>=0.1.0->voila) (4.4.1)
Requirement already satisfied: enum34; python_version == "2.7" in /usr/lib/python2.7/dist-packages (from traitlets>=4.2.1->jupyter-server<0.2.0,>=0.1.0->voila) (1.1.6)
Requirement already satisfied: six in /usr/lib/python2.7/dist-packages (from traitlets>=4.2.1->jupyter-server<0.2.0,>=0.1.0->voila) (1.12.0)
Requirement already satisfied: python-dateutil>=2.1 in ./.local/lib/python2.7/site-packages (from jupyter-client>=5.3.1->jupyter-server<0.2.0,>=0.1.0->voila) (2.8.1)
Requirement already satisfied: backports-abc>=0.4 in ./.local/lib/python2.7/site-packages (from tornado>=4->jupyter-server<0.2.0,>=0.1.0->voila) (0.5)
Requirement already satisfied: singledispatch in ./.local/lib/python2.7/site-packages (from tornado>=4->jupyter-server<0.2.0,>=0.1.0->voila) (3.4.0.3)
Requirement already satisfied: futures in ./.local/lib/python2.7/site-packages (from tornado>=4->jupyter-server<0.2.0,>=0.1.0->voila) (3.3.0)
Requirement already satisfied: MarkupSafe>=0.23 in ./.local/lib/python2.7/site-packages (from jinja2->jupyter-server<0.2.0,>=0.1.0->voila) (1.1.1)
Requirement already satisfied: ptyprocess; os_name != "nt" in ./.local/lib/python2.7/site-packages (from terminado>=0.8.1->jupyter-server<0.2.0,>=0.1.0->voila) (0.6.0)
Requirement already satisfied: jsonschema!=2.5.0,>=2.4 in ./.local/lib/python2.7/site-packages (from nbformat->jupyter-server<0.2.0,>=0.1.0->voila) (3.2.0)
Requirement already satisfied: webencodings in ./.local/lib/python2.7/site-packages (from bleach->nbconvert<6,>=5.6.0->voila) (0.5.1)
Requirement already satisfied: pickleshare in ./.local/lib/python2.7/site-packages (from ipython>=4.0.0->ipykernel->jupyter-server<0.2.0,>=0.1.0->voila) (0.7.5)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in ./.local/lib/python2.7/site-packages (from ipython>=4.0.0->ipykernel->jupyter-server<0.2.0,>=0.1.0->voila) (1.0.18)
Requirement already satisfied: backports.shutil-get-terminal-size; python_version == "2.7" in ./.local/lib/python2.7/site-packages (from ipython>=4.0.0->ipykernel->jupyter-server<0.2.0,>=0.1.0->voila) (1.0.0)
Requirement already satisfied: setuptools>=18.5 in /usr/lib/python2.7/dist-packages (from ipython>=4.0.0->ipykernel->jupyter-server<0.2.0,>=0.1.0->voila) (40.8.0)
Requirement already satisfied: pexpect; sys_platform != "win32" in ./.local/lib/python2.7/site-packages (from ipython>=4.0.0->ipykernel->jupyter-server<0.2.0,>=0.1.0->voila) (4.7.0)
Requirement already satisfied: pathlib2; python_version == "2.7" or python_version == "3.3" in ./.local/lib/python2.7/site-packages (from ipython>=4.0.0->ipykernel->jupyter-server<0.2.0,>=0.1.0->voila) (2.3.5)
Requirement already satisfied: simplegeneric>0.8 in ./.local/lib/python2.7/site-packages (from ipython>=4.0.0->ipykernel->jupyter-server<0.2.0,>=0.1.0->voila) (0.8.1)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in ./.local/lib/python2.7/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat->jupyter-server<0.2.0,>=0.1.0->voila) (1.4.0)
Requirement already satisfied: attrs>=17.4.0 in ./.local/lib/python2.7/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat->jupyter-server<0.2.0,>=0.1.0->voila) (19.3.0)
Requirement already satisfied: functools32; python_version < "3" in ./.local/lib/python2.7/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat->jupyter-server<0.2.0,>=0.1.0->voila) (3.2.3.post2)
Requirement already satisfied: pyrsistent>=0.14.0 in ./.local/lib/python2.7/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat->jupyter-server<0.2.0,>=0.1.0->voila) (0.15.7)
Requirement already satisfied: wcwidth in ./.local/lib/python2.7/site-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=4.0.0->ipykernel->jupyter-server<0.2.0,>=0.1.0->voila) (0.1.8)
Requirement already satisfied: scandir; python_version < "3.5" in ./.local/lib/python2.7/site-packages (from pathlib2; python_version == "2.7" or python_version == "3.3"->ipython>=4.0.0->ipykernel->jupyter-server<0.2.0,>=0.1.0->voila) (1.10.0)
Requirement already satisfied: configparser>=3.5; python_version < "3" in ./.local/lib/python2.7/site-packages (from importlib-metadata; python_version < "3.8"->jsonschema!=2.5.0,>=2.4->nbformat->jupyter-server<0.2.0,>=0.1.0->voila) (4.0.2)
Requirement already satisfied: contextlib2; python_version < "3" in ./.local/lib/python2.7/site-packages (from importlib-metadata; python_version < "3.8"->jsonschema!=2.5.0,>=2.4->nbformat->jupyter-server<0.2.0,>=0.1.0->voila) (0.6.0.post1)
Requirement already satisfied: zipp>=0.5 in ./.local/lib/python2.7/site-packages (from importlib-metadata; python_version < "3.8"->jsonschema!=2.5.0,>=2.4->nbformat->jupyter-server<0.2.0,>=0.1.0->voila) (1.0.0)
Requirement already satisfied: more-itertools in ./.local/lib/python2.7/site-packages (from zipp>=0.5->importlib-metadata; python_version < "3.8"->jsonschema!=2.5.0,>=2.4->nbformat->jupyter-server<0.2.0,>=0.1.0->voila) (5.0.0)
Installing collected packages: jupyter-server, jupyterlab-pygments, voila
Successfully installed jupyter-server-0.1.1 jupyterlab-pygments-0.1.0 voila-0.1.18
```
Starting `Voila` standalone returns the following error:
```
phill@debian10:~$ voila
Traceback (most recent call last):
File "/home/phill/.local/bin/voila", line 6, in <module>
from voila.app import main
File "/home/phill/.local/lib/python2.7/site-packages/voila/__init__.py", line 11, in <module>
from .server_extension import load_jupyter_server_extension # noqa
File "/home/phill/.local/lib/python2.7/site-packages/voila/server_extension.py", line 20, in <module>
from .handler import VoilaHandler
File "/home/phill/.local/lib/python2.7/site-packages/voila/handler.py", line 19, in <module>
from nbconvert.preprocessors import ClearOutputPreprocessor
File "/home/phill/.local/lib/python2.7/site-packages/nbconvert/__init__.py", line 4, in <module>
from .exporters import *
File "/home/phill/.local/lib/python2.7/site-packages/nbconvert/exporters/__init__.py", line 1, in <module>
from .base import (export, get_exporter,
File "/home/phill/.local/lib/python2.7/site-packages/nbconvert/exporters/base.py", line 8, in <module>
import entrypoints
File "/usr/lib/python2.7/dist-packages/entrypoints.py", line 16, in <module>
import configparser
File "/home/phill/.local/lib/python2.7/site-packages/configparser.py", line 11, in <module>
from backports.configparser import (
ImportError: cannot import name ConverterMapping
```
Have to admit I'm new to Jupyter and Python in general, although experienced with other languages (R, Julia) and have reasonable Debian Server Ops experience. Apologies if I'm missing something straightforward.
|
closed
|
2020-01-17T15:55:41Z
|
2020-01-18T21:27:35Z
|
https://github.com/voila-dashboards/voila/issues/518
|
[] |
phillc73
| 1
|
iperov/DeepFaceLab
|
deep-learning
| 5,416
|
SAEHD: Not compatible with high core count cpus
|
## Expected behavior
SAEHD trains when you run the file [6) train SAEHD.bat]
## Actual behavior
`cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\core\src\alloc.cpp:55: error: (-4:Insufficient memory) Failed to allocate 1048576 bytes in function 'cv::OutOfMemoryError'`
My RAM and VRAM are barely being utilised, with about 20GB of both free.
## Steps to reproduce
Ryzen 5950x, run SAEHD with full core count.
## Other relevant information
Specs:
Ryzen 5950x, RTX 3090, 32GB Memory
running file [6) train SAEHD.bat] from DeepFaceLab_NVIDIA_RTX3000_series 21/10/2021 (latest release)
using prebuilt windows binary
When I swap my Ryzen 9 5950X for my previous CPU, a Ryzen 5 3600X, I have no issue.
My 5950X is stable and works fine with all other programs.
If I manually restrict the CPU count through Windows msconfig, it works fine (although I don't want to have to do this).
Quick96 has no issue when using all 32 logical cores.
|
open
|
2021-10-21T13:14:06Z
|
2023-06-14T17:37:34Z
|
https://github.com/iperov/DeepFaceLab/issues/5416
|
[] |
JoshuaShawFreelance
| 5
|
onnx/onnx
|
pytorch
| 5,878
|
OSError: SavedModel file does not exist at: /content/drive/MyDrive/craft_mlt_25k.h5/{saved_model.pbtxt|saved_model.pb}
|
open
|
2024-01-29T12:17:08Z
|
2025-01-29T06:43:48Z
|
https://github.com/onnx/onnx/issues/5878
|
[
"stale"
] |
RNVALA
| 1
|
|
graphdeco-inria/gaussian-splatting
|
computer-vision
| 278
|
About render speed
|
I want to maximize the render speed, but when I try -r 8, it's even slower than -r 2.
|
open
|
2023-10-04T22:55:09Z
|
2023-10-25T23:31:38Z
|
https://github.com/graphdeco-inria/gaussian-splatting/issues/278
|
[] |
HenghuiB
| 6
|
numba/numba
|
numpy
| 9,399
|
Numba confuses float32 and float64 in numpy-array causing compilation failure
|
## Reporting a bug
<!--
Before submitting a bug report please ensure that you can check off these boxes:
-->
- [X] I have tried using the latest released version of Numba (most recent is
visible in the release notes
(https://numba.readthedocs.io/en/stable/release-notes-overview.html).
- [X] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
# Bug description
Minimal reproducing example:
```Python
import numpy as np
from numba import njit
@njit
def get_array(velocities):
return np.zeros((3,), dtype=np.float32)
@njit
def sim_loop():
velocities = np.zeros((3,), dtype=np.float32)
for _ in range(10):
velocities = 3. * get_array(velocities)
sim_loop()
```
Expected behavior: Compiles
Observed behavior: Fails to compile with `Cannot unify array(float32, 1d, C) and array(float64, 1d, C) for 'velocities.2'`
Full output:
<details>
<summary>Click to expand</summary>
Traceback (most recent call last):
File "/home/user/numba_scalar_mismatch.py", line 15, in <module>
sim_loop()
File "/home/user/miniconda3/envs/py311/lib/python3.11/site-packages/numba/core/dispatcher.py", line 468, in _compile_for_args
error_rewrite(e, 'typing')
File "/home/user/miniconda3/envs/py311/lib/python3.11/site-packages/numba/core/dispatcher.py", line 409, in error_rewrite
raise e.with_traceback(None)
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Cannot unify array(float32, 1d, C) and array(float64, 1d, C) for 'velocities.2', defined at /home/user/numba_scalar_mismatch.py (12)
File "numba_scalar_mismatch.py", line 12:
def sim_loop():
<source elided>
for _ in range(10):
^
During: typing of assignment at /home/user/numba_scalar_mismatch.py (12)
File "numba_scalar_mismatch.py", line 12:
def sim_loop():
<source elided>
for _ in range(10):
^
</details>
Tested with:
Python 3.11
numpy 1.26.3
numba 0.58.1
Somehow numba thinks the array is of type float64, but actually multiplying a float32 array with a constant results in a float32 array.
Note that the compilation succeeds when the call to `get_array` is replaced with the `np.zeros((3,), dtype=np.float32)` expression. It also succeeds when `velocities` is not passed to the function, or if the scalar multiplication (or the loop) is omitted.
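To illustrate the multiplication claim (plain NumPy, outside of numba), plus the explicit cast that happens to work around the issue for me (just a sketch of my current workaround, not a proper fix):
```python
import numpy as np

a = np.zeros((3,), dtype=np.float32)
print((3. * a).dtype)  # float32 -- the Python float scalar does not upcast the array

# Workaround sketch: keep the dtype explicit so the loop-carried variable stays float32.
# velocities = (3. * get_array(velocities)).astype(np.float32)
```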
Seems similar to https://github.com/numba/numba/issues/9025
Thanks for having a look
|
closed
|
2024-01-21T14:27:01Z
|
2024-01-30T15:34:21Z
|
https://github.com/numba/numba/issues/9399
|
[
"numpy",
"Blocked awaiting long term feature",
"bug - typing"
] |
iv461
| 5
|
writer/writer-framework
|
data-visualization
| 548
|
Multiple leftbars in mobile view
|
I have an application with two leftbars. In web view this is completely fine, but in mobile view (i.e., on my phone) these are incorrectly placed.
Using one leftbar works fine on mobile: it is moved to the top of the page. However, with two, they are stacked horizontally instead of vertically.
See attached media.
https://github.com/user-attachments/assets/fb5d1a1e-0d06-4e92-b68d-9b4eb2bb8b9d
|
open
|
2024-08-30T15:51:08Z
|
2024-08-30T20:36:47Z
|
https://github.com/writer/writer-framework/issues/548
|
[
"bug"
] |
hallvardnmbu
| 1
|
autokey/autokey
|
automation
| 508
|
Autokey does not work at start of line, nor directly after '('
|
## Classification: Bug
## Reproducibility: ~Always
## Autokey Version: 0.95.10-1
Used GUI (Gtk, Qt, or both): Not sure
Installed via: apt
Linux Distribution: Linux Mint 20
## Summary
Autokey does not work at start of line, nor directly after '('
## Steps to Reproduce (if applicable)
- Set a new autokey text
- Set an abbreviation for the same, for example '//a'.
-- Options ticked for this: "remove typed abbreviation" and "trigger immediately (don't require a trigger character). All others unticked. Make sure abbreviation is added.
- Go to a terminal window
- Press few spaces after the Bash command prompt and try the new //a key
- It will work.
- Now open a text editor like vi, go to insert mode ('i') and try and type the //a at the start of a line: it will fail (nothing happens)
- Now type a few other things and then type the character '(', and immediately add the //a: it will fail (nothing happens)
- HOWEVER, if you enter a space first, i.e. '( ' and then type //a, the autokey sequence will be executed/work
## Expected Results
- Autokey should also work at the start of a line or directly after '('
## Actual Results
- Instead, this does not happen. Nothing is pasted :(
|
open
|
2021-01-30T00:03:39Z
|
2021-02-01T12:51:15Z
|
https://github.com/autokey/autokey/issues/508
|
[
"bug",
"autokey triggers"
] |
RoelVdP
| 3
|
predict-idlab/plotly-resampler
|
data-visualization
| 19
|
Make `FigureResampler` work on existing figures
|
i.e. when we wrap a `go.Figure` _with already some high-dim traces registered_, we do not yet extract these traces and process them as new traces.
This issue addresses this problem and suggests that these pre-existing traces **should** be handled as potential to-be-downsampled traces.
e.g.
```python
import plotly.express as px
# now this just shows the non-aggregated RAW data, very slow
FigureResampler(px.line(very_large_df, x='time', y='sensor_name'))
```
Fair questions:
- [x] How slow is px.line on high-freq data? (w.r.t. creating a scatter instead of whole go.Figure )
|
closed
|
2022-01-19T16:04:32Z
|
2022-03-10T14:53:56Z
|
https://github.com/predict-idlab/plotly-resampler/issues/19
|
[
"enhancement"
] |
jonasvdd
| 1
|
ipython/ipython
|
jupyter
| 14,007
|
8.12 breaks help (pinfo) for numpy array methods
|
<!-- This is the repository for IPython command line, if you can try to make sure this question/bug/feature belong here and not on one of the Jupyter repositories.
If it's a generic Python/Jupyter question, try other forums or discourse.jupyter.org.
If you are unsure, it's ok to post here, though, there are few maintainer so you might not get a fast response.
-->
ipython 8.12 breaks help (pinfo) for numpy array methods - if there is more than one element on the array.
```python
import numpy as np
arr = np.array([1, 1])
arr.mean?
```
This is distinct from #13920 as the example there works in 8.12 and the workaround given there does not help
```python-traceback
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[12], line 1
----> 1 get_ipython().run_line_magic('pinfo', 'arr.mean')
File ~/.conda/envs/cmip_processing/lib/python3.11/site-packages/IPython/core/interactiveshell.py:2414, in InteractiveShell.run_line_magic(self, magic_name, line, _stack_depth)
2412 kwargs['local_ns'] = self.get_local_scope(stack_depth)
2413 with self.builtin_trap:
-> 2414 result = fn(*args, **kwargs)
2416 # The code below prevents the output from being displayed
2417 # when using magics with decodator @output_can_be_silenced
2418 # when the last Python token in the expression is a ';'.
2419 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):
File ~/.conda/envs/cmip_processing/lib/python3.11/site-packages/IPython/core/magics/namespace.py:58, in NamespaceMagics.pinfo(self, parameter_s, namespaces)
56 self.psearch(oname)
57 else:
---> 58 self.shell._inspect('pinfo', oname, detail_level=detail_level,
59 namespaces=namespaces)
File ~/.conda/envs/cmip_processing/lib/python3.11/site-packages/IPython/core/interactiveshell.py:1795, in InteractiveShell._inspect(self, meth, oname, namespaces, **kw)
1793 pmethod(info.obj, oname, formatter)
1794 elif meth == 'pinfo':
-> 1795 pmethod(
1796 info.obj,
1797 oname,
1798 formatter,
1799 info,
1800 enable_html_pager=self.enable_html_pager,
1801 **kw,
1802 )
1803 else:
1804 pmethod(info.obj, oname)
File ~/.conda/envs/cmip_processing/lib/python3.11/site-packages/IPython/core/oinspect.py:782, in Inspector.pinfo(self, obj, oname, formatter, info, detail_level, enable_html_pager, omit_sections)
758 """Show detailed information about an object.
759
760 Optional arguments:
(...)
779 - omit_sections: set of section keys and titles to omit
780 """
781 assert info is not None
--> 782 info_b: Bundle = self._get_info(
783 obj, oname, formatter, info, detail_level, omit_sections=omit_sections
784 )
785 if not enable_html_pager:
786 del info_b["text/html"]
File ~/.conda/envs/cmip_processing/lib/python3.11/site-packages/IPython/core/oinspect.py:738, in Inspector._get_info(self, obj, oname, formatter, info, detail_level, omit_sections)
712 def _get_info(
713 self,
714 obj: Any,
(...)
719 omit_sections=(),
720 ) -> Bundle:
721 """Retrieve an info dict and format it.
722
723 Parameters
(...)
735 Titles or keys to omit from output (can be set, tuple, etc., anything supporting `in`)
736 """
--> 738 info_dict = self.info(obj, oname=oname, info=info, detail_level=detail_level)
739 bundle = self._make_info_unformatted(
740 obj,
741 info_dict,
(...)
744 omit_sections=omit_sections,
745 )
746 return self.format_mime(bundle)
File ~/.conda/envs/cmip_processing/lib/python3.11/site-packages/IPython/core/oinspect.py:838, in Inspector.info(self, obj, oname, info, detail_level)
836 parents_docs = None
837 prelude = ""
--> 838 if info and info.parent and hasattr(info.parent, HOOK_NAME):
839 parents_docs_dict = getattr(info.parent, HOOK_NAME)
840 parents_docs = parents_docs_dict.get(att_name, None)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
|
closed
|
2023-04-06T09:04:14Z
|
2023-04-11T09:52:23Z
|
https://github.com/ipython/ipython/issues/14007
|
[] |
mathause
| 2
|
sqlalchemy/alembic
|
sqlalchemy
| 1,553
|
Sqlite drop table operation not rollbacked during a revision up/downgrade
|
**Describe the bug**
Hello,
I tested the transactional feature of Alembic migrations with a SQLite database, but during a downgrade, if an error occurs after a drop-table operation, a rollback is performed yet the table is still deleted from the database. And since the migration version in the _alembic_version_ table has not been updated because of the error, the DB ends up in a corrupted state (the stored Alembic version is invalid for the current DB schema).
**Expected behavior**
The dropped table should be still present after the rollback.
**To Reproduce**
- Create a SQLAlchemy model for a table.
- Create a default alembic workspace (single-database configuration).
- Set the sqlalchemy.url property with a DB URL to a SQLite database (sqlite:///<path>/mydb.db)
- In the env.py module:
- set _target_metadata_ with the metadata owning your model class
- add the properties arguments _transactional_ddl=True, transaction_per_migration=False_ into the _context.configure(...)_ calls
- Create a revision file: _alembic revision --autogenerate_
- Modify this revision file, to add a statement generating an error during the downgrade after the drop of the table (ex. by adding a drop table command on a non existing table)
- Upgrade the sqlite database: _alembic upgrade head_
- Downgrade the sqlite database to the base revision: _alembic downgrade base_
Here is my revision file:
```py
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = "1.0"
down_revision: Union[str, None] = None
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.create_table(
"person",
sa.Column("name", sa.String(), nullable=False),
sa.Column(
"gender",
sa.Enum("MALE", "FEMALE", name="person_gender_model_enum", create_constraint=True),
nullable=False,
),
sa.Column("id", sa.Integer(), autoincrement=True, nullable=False),
sa.Column(
"created_at",
sa.TIMESTAMP(timezone=True),
server_default=sa.text("(CURRENT_TIMESTAMP)"),
nullable=False,
),
sa.PrimaryKeyConstraint("id")
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table("person")
# ### end Alembic commands ###
op.drop_table("toto") # I added this command manually to provoke an error during the SQL transaction of the downgrade
```
Here is my env.py file main methods:
```py
...
def run_migrations_offline() -> None:
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
target_metadata=target_metadata,
transactional_ddl=True,
transaction_per_migration=False,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online() -> None:
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
connectable = engine_from_config(
config.get_section(config.config_ini_section, {}),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
with connectable.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata,
transactional_ddl=True,
transaction_per_migration=False,
)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()
```
**Error**
Here are all the DEBUG logs generated during the _alembic downgrade base_ command:
```
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [sqlalchemy.engine.Engine] BEGIN (implicit)
INFO [sqlalchemy.engine.Engine] PRAGMA main.table_info("alembic_version")
INFO [sqlalchemy.engine.Engine] [raw sql] ()
DEBUG [sqlalchemy.engine.Engine] Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
DEBUG [sqlalchemy.engine.Engine] Row (0, 'version_num', 'VARCHAR(32)', 1, None, 1)
INFO [sqlalchemy.engine.Engine] SELECT alembic_version.version_num
FROM alembic_version
INFO [sqlalchemy.engine.Engine] [generated in 0.00035s] ()
DEBUG [sqlalchemy.engine.Engine] Col ('version_num',)
DEBUG [sqlalchemy.engine.Engine] Row ('1.0',)
INFO [alembic.runtime.migration] Running downgrade 1.0 -> , Migration script for initial revision v1.0.
INFO [sqlalchemy.engine.Engine]
DROP TABLE user
INFO [sqlalchemy.engine.Engine] [no key 0.00029s] ()
INFO [sqlalchemy.engine.Engine]
DROP TABLE toto
INFO [sqlalchemy.engine.Engine] [no key 0.00048s] ()
INFO [sqlalchemy.engine.Engine] ROLLBACK
Traceback (most recent call last):
File "C:\myapp\venv\Lib\site-packages\sqlalchemy\engine\base.py", line 1967, in _exec_single_context
self.dialect.do_execute(
File "C:\myapp\venv\Lib\site-packages\sqlalchemy\engine\default.py", line 941, in do_execute
cursor.execute(statement, parameters)
sqlite3.OperationalError: no such table: toto
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\myapp\venv\Scripts\alembic.exe\__main__.py", line 7, in <module>
File "C:\myapp\venv\Lib\site-packages\alembic\config.py", line 636, in main
CommandLine(prog=prog).main(argv=argv)
File "C:\myapp\venv\Lib\site-packages\alembic\config.py", line 626, in main
self.run_cmd(cfg, options)
File "C:\myapp\venv\Lib\site-packages\alembic\config.py", line 603, in run_cmd
fn(
File "C:\myapp\venv\Lib\site-packages\alembic\command.py", line 453, in downgrade
script.run_env()
File "C:\myapp\venv\Lib\site-packages\alembic\script\base.py", line 586, in run_env
util.load_python_file(self.dir, "env.py")
File "C:\myapp\venv\Lib\site-packages\alembic\util\pyfiles.py", line 95, in load_python_file
module = load_module_py(module_id, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\myapp\venv\Lib\site-packages\alembic\util\pyfiles.py", line 113, in load_module_py
spec.loader.exec_module(module) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "C:\myapp\alembic\env.py", line 87, in <module>
run_migrations_online()
File "C:\myapp\alembic\env.py", line 81, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "C:\myapp\venv\Lib\site-packages\alembic\runtime\environment.py", line 946, in run_migrations
self.get_context().run_migrations(**kw)
File "C:\myapp\venv\Lib\site-packages\alembic\runtime\migration.py", line 628, in run_migrations
step.migration_fn(**kw)
File "C:\myapp\alembic\versions\20241023_v1.0.py", line 53, in downgrade
op.drop_table("toto")
File "<string>", line 8, in drop_table
File "<string>", line 3, in drop_table
File "C:\myapp\venv\Lib\site-packages\alembic\operations\ops.py", line 1422, in drop_table
operations.invoke(op)
File "C:\myapp\venv\Lib\site-packages\alembic\operations\base.py", line 442, in invoke
return fn(self, operation)
^^^^^^^^^^^^^^^^^^^
File "C:\myapp\venv\Lib\site-packages\alembic\operations\toimpl.py", line 88, in drop_table
operations.impl.drop_table(
File "C:\myapp\venv\Lib\site-packages\alembic\ddl\impl.py", line 392, in drop_table
self._exec(schema.DropTable(table, **kw))
File "C:\myapp\venv\Lib\site-packages\alembic\ddl\impl.py", line 210, in _exec
return conn.execute(construct, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\myapp\venv\Lib\site-packages\sqlalchemy\engine\base.py", line 1418, in execute
return meth(
^^^^^
File "C:\myapp\venv\Lib\site-packages\sqlalchemy\sql\ddl.py", line 180, in _execute_on_connection
return connection._execute_ddl(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\myapp\venv\Lib\site-packages\sqlalchemy\engine\base.py", line 1529, in _execute_ddl
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\myapp\venv\Lib\site-packages\sqlalchemy\engine\base.py", line 1846, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\myapp\venv\Lib\site-packages\sqlalchemy\engine\base.py", line 1986, in _exec_single_context
self._handle_dbapi_exception(
File "C:\myapp\venv\Lib\site-packages\sqlalchemy\engine\base.py", line 2355, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "C:\myapp\venv\Lib\site-packages\sqlalchemy\engine\base.py", line 1967, in _exec_single_context
self.dialect.do_execute(
File "C:\myapp\venv\Lib\site-packages\sqlalchemy\engine\default.py", line 941, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: toto
[SQL:
DROP TABLE toto]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
```
Although the logs indicate a ROLLBACK in an opened SQL transaction, the table is still deleted in the database file at the end of the command.
It is certainly the behavior of _sqlite3_, but did I forget something in my configuration to get a properly working rollback?
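For what it's worth, the only related recipe I have found is the pysqlite workaround from the SQLAlchemy documentation, which disables the driver's implicit transaction handling so that DDL actually runs inside the transaction. A sketch of my understanding of it (I have not verified that it fixes this particular case):
```python
from sqlalchemy import create_engine, event

connectable = create_engine("sqlite:///mydb.db")

@event.listens_for(connectable, "connect")
def _sqlite_no_implicit_begin(dbapi_connection, connection_record):
    # Stop pysqlite from emitting BEGIN/COMMIT implicitly around DDL statements.
    dbapi_connection.isolation_level = None

@event.listens_for(connectable, "begin")
def _sqlite_emit_begin(conn):
    # Emit our own BEGIN so everything, including DDL, runs in one transaction.
    conn.exec_driver_sql("BEGIN")
```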
**Versions.**
- OS: Windows 11 Enterprise
- Python: 3.12.6
- Alembic: 1.13.3
- SQLAlchemy: 2.0.35
- Database: SQLite
- DBAPI: sqlite3
**Additional context**
I've got the same problem with an error in the upgrade() method: with 2 revision files, the first creates a table in its upgrade() method and the second generates an error in its upgrade() method (like dropping a non-existent table). When you launch _alembic upgrade head_ on an empty database file, at the end the table is still in the SQLite database although I use the _transactional_ddl=True, transaction_per_migration=False_ configuration. The debug logs show a single transaction opened and closed (rolled back) covering the application of the 2 revisions, but the table is still present.
Thanks.
**Have a nice day!**
|
closed
|
2024-10-23T18:26:43Z
|
2024-10-23T21:42:44Z
|
https://github.com/sqlalchemy/alembic/issues/1553
|
[] |
vtgn
| 2
|
aio-libs/aiomysql
|
sqlalchemy
| 788
|
Discord server for aiomysql
|
Hey there,
Well, I went through the Gitter link you have on your GitHub page, and it seems not everyone has Gitter, so I thought of making a Discord server! I have made it and you can join it via https://discord.gg/f3ncE2w6FJ. I would be more than happy to hand the server over to the main developer; you can contact me on Discord at leothewolf122#1319 or reach me through the server!
Hope you can understand the need. Also, I made a logo for aiomysql if you want to use it!

**Disclaimer :** The logo is not all mine I have taken help from google
|
closed
|
2022-05-21T10:39:23Z
|
2022-05-21T19:16:46Z
|
https://github.com/aio-libs/aiomysql/issues/788
|
[] |
leothewolf
| 0
|
akfamily/akshare
|
data-science
| 5,956
|
Same problem: ak.stock_zh_a_hist raises an error
|
Recently, calling ak.stock_zh_a_hist to pull historical data has been raising an error that points at the stock symbol. The same problem appeared before and was resolved, but now it has come back; the akshare version has already been updated. The code is as follows:
```python
import akshare as ak
stock_zh_a_hist_df = ak.stock_zh_a_hist(symbol='600519', period="daily", start_date="20170301", end_date='20240528', adjust="")
print(stock_zh_a_hist_df)
```

|
closed
|
2025-03-24T02:22:05Z
|
2025-03-24T06:09:11Z
|
https://github.com/akfamily/akshare/issues/5956
|
[
"bug"
] |
ZaynG
| 1
|
stanfordnlp/stanza
|
nlp
| 810
|
stanza.server.client.PermanentlyFailedException: Error: unable to start the CoreNLP server on port 9000 (possibly something is already running there)
|
I keep getting this error, but the thing is, I can see the server is already running by visiting http://localhost:9000/
|
closed
|
2021-09-18T12:53:40Z
|
2021-10-06T09:52:20Z
|
https://github.com/stanfordnlp/stanza/issues/810
|
[
"bug"
] |
jiangweiatgithub
| 7
|
google-research/bert
|
tensorflow
| 723
|
Pretraining statistics
|
Are you able to share any statistics from pretraining such as:
- global_step/sec
- examples/sec
- masked_lm_loss curve / value at convergence
- next_sentence_loss curve / value at convergnce
Thanks!
|
open
|
2019-06-27T02:56:03Z
|
2019-06-27T02:56:03Z
|
https://github.com/google-research/bert/issues/723
|
[] |
bkj
| 0
|
deepfakes/faceswap
|
machine-learning
| 714
|
Deep Fake like this
|
I was wondering if you can do this type of video conversion where the end result isn't a face swap but rather editing something to seem authentic, with audio as well.
https://www.youtube.com/watch?v=cQ54GDm1eL0
|
closed
|
2019-04-27T17:04:14Z
|
2019-04-29T22:17:57Z
|
https://github.com/deepfakes/faceswap/issues/714
|
[] |
sam-thecoder
| 1
|
public-apis/public-apis
|
api
| 3,166
|
Add ' API Hubs ' section
|
It would be handy if an API Hubs section could be added, including large-scale API repositories/platforms such as RapidAPI.
Those seeking more options could always look up these sites for something extra that's not in this repo.
This would also allow the repo to feature just the most popular / helpful ones, making it simpler and less confusing.
|
closed
|
2022-05-23T08:12:50Z
|
2022-05-24T17:59:12Z
|
https://github.com/public-apis/public-apis/issues/3166
|
[
"enhancement"
] |
kaustav202
| 2
|
awesto/django-shop
|
django
| 883
|
Compatibility for Django 4.x+
|
Are there any plans for updating this to allow compatibility with currently supported versions of Django (4.2 and 5.x)?
If not, would you like me to fork it and update it? Or would you add me as a collaborator so some of the pull requests that are backlogged can be cleared out?
|
open
|
2024-09-17T13:10:28Z
|
2024-09-17T13:10:28Z
|
https://github.com/awesto/django-shop/issues/883
|
[] |
panatale1
| 0
|
anselal/antminer-monitor
|
dash
| 5
|
Remarks not displayed on active miners
|
I would like to inform you that the Remarks field does not show for any added miner.
|
closed
|
2017-10-08T18:02:06Z
|
2017-10-21T09:50:06Z
|
https://github.com/anselal/antminer-monitor/issues/5
|
[
":bug: bug"
] |
l2airj
| 1
|
ansible/awx
|
automation
| 14,936
|
Is there a way to get the awx user info in the launch job vars?
|
### Please confirm the following
- [ ] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [ ] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [ ] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Feature type
New Feature
### Feature Summary
When launching a job, get the AWX user info from built-in vars.
### Select the relevant components
- [ ] UI
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Steps to reproduce
When launching a job, try to get the AWX user info from built-in vars.
### Current results
No user info is found in ansible_env.
### Suggested feature result
When launching a job, the AWX user info is available from built-in vars.
### Additional information
_No response_
|
closed
|
2024-02-28T11:20:33Z
|
2024-02-28T16:29:05Z
|
https://github.com/ansible/awx/issues/14936
|
[
"type:enhancement",
"needs_triage",
"community"
] |
ngsin
| 1
|
alteryx/featuretools
|
scikit-learn
| 2,428
|
Can't serialize `NumberOfCommonWords` feature with custom word set
|
If I use a custom word set for the `NumberOfCommonWords`, and generate a feature for it, I can not serialize it.
```
from featuretools.feature_base.features_serializer import save_features
common_word_set = {"hi", "my"}
num_common_words = NumberOfCommonWords(word_set=common_word_set)
fm, fd = ft.dfs(entityset=es, target_dataframe_name="df", trans_primitives=[num_common_words])
feat = fd[-1]
save_features([feat])
```
Resolving this issue likely involves converting the set to a JSON serializable format like a list.
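A minimal illustration of why the set fails and how the list conversion fixes it (plain json here, standing in for whatever serializer the features serializer actually uses):
```python
import json

word_set = {"hi", "my"}

try:
    json.dumps({"word_set": word_set})
except TypeError as err:
    print(err)  # Object of type set is not JSON serializable

# Converting to a (sorted, for determinism) list makes the argument serializable.
print(json.dumps({"word_set": sorted(word_set)}))
```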
|
closed
|
2022-12-22T20:31:50Z
|
2023-01-03T17:48:20Z
|
https://github.com/alteryx/featuretools/issues/2428
|
[
"bug"
] |
sbadithe
| 0
|
SYSTRAN/faster-whisper
|
deep-learning
| 473
|
How to run faster_whisper on multiple GPUs?
|
Thanks for your answers. In previous issues, I saw the answer about running with CPU multithreading, but I wanted to see how to run faster-whisper on multiple GPUs. Can you give me some examples? @guillaumekln
|
closed
|
2023-09-14T01:35:00Z
|
2023-09-19T09:19:27Z
|
https://github.com/SYSTRAN/faster-whisper/issues/473
|
[] |
lazyseacow
| 1
|
python-gino/gino
|
sqlalchemy
| 13
|
What about relationship?
|
~Honestly I have no idea yet.~
Update:
GINO now has a loader mechanism to load a result matrix into objects on demand; please see the examples below. It allows relationships to be implemented in a primitive way. Next, we'll try to introduce some high-level relationship encapsulation.
|
closed
|
2017-07-22T02:05:49Z
|
2020-04-20T21:26:54Z
|
https://github.com/python-gino/gino/issues/13
|
[
"help wanted",
"question"
] |
fantix
| 32
|
MorvanZhou/tutorials
|
numpy
| 3
|
Morvan, could you record an episode about setting up distributed TensorFlow?
|
https://github.com/tensorflow/serving
The default docs are not as clear as a video, and there is no example of how to actually make it learn.
|
closed
|
2016-10-10T06:07:36Z
|
2016-11-06T10:44:25Z
|
https://github.com/MorvanZhou/tutorials/issues/3
|
[] |
bournes
| 1
|
gradio-app/gradio
|
python
| 10,543
|
Support logging in with HF OAuth in `gr.load()`
|
Currently, we support users providing their own API token by setting `gr.load(..., user_token=True)`. It would be even cleaner if, on Spaces, a user could just login with HF and their HF_TOKEN would automatically get used for HF inference as well as with inference providers.
See [internal discussion](https://github.com/huggingface-internal/moon-landing/issues/12460#issuecomment-2643846569)
|
closed
|
2025-02-07T19:39:05Z
|
2025-02-26T22:19:16Z
|
https://github.com/gradio-app/gradio/issues/10543
|
[
"enhancement"
] |
abidlabs
| 0
|
mwaskom/seaborn
|
data-science
| 3,176
|
pandas plotting backend?
|
Plotly has a top-level `.plot` function which allows a [pandas plotting backend](https://github.com/pandas-dev/pandas/blob/d95bf9a04f10590fff41e75de94c321a8743af72/pandas/plotting/_core.py#L1848-L1861) to exist:
https://github.com/plotly/plotly.py/blob/4363c51448cda178463277ff3c12becf35dbd3b8/packages/python/plotly/plotly/__init__.py
Like this, if people have `plotly` installed, they can do:
```
pd.set_option('plotting.backend', 'plotly')
```
and then `df.plot.line(x=x, y=y)` will defer to `plotly.express.line(data_frame=df, x=x, y=y)`:

It'd be nice to be able to do
```
pd.set_option('plotting.backend', 'seaborn')
```
and then have `df.plot.line(x=x, y=y)` defer to `seaborn.line(data=df, x=x, y=y)`
Would you be open to these ~150 lines of code or so to allow `seaborn` to be set as a plotting backend in pandas? Check the link above to see what it looks like in `plotly`. I'd be happy to implement this, just checking if it'd be welcome
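For reference, my understanding is that the contract pandas expects from a backend is tiny: a module exposing a `plot(data, kind=..., **kwargs)` function, discovered either by importing the module named in the option or via an entry point in the `pandas_plotting_backends` group. A rough sketch of what the seaborn side could look like (module name and kind mapping are purely illustrative):
```python
# seaborn/_pandas_backend.py -- hypothetical module name
import seaborn as sns

# Map pandas plot "kind" strings onto seaborn axes-level functions.
_KIND_MAP = {
    "line": sns.lineplot,
    "scatter": sns.scatterplot,
    "bar": sns.barplot,
    "hist": sns.histplot,
}

def plot(data, kind="line", x=None, y=None, **kwargs):
    """Entry point pandas calls for ``df.plot(...)`` when this backend is active."""
    try:
        func = _KIND_MAP[kind]
    except KeyError:
        raise NotImplementedError(f"kind={kind!r} is not supported by the seaborn backend")
    return func(data=data, x=x, y=y, **kwargs)

# Registration (in pyproject.toml), so pandas can find the backend by name:
#
# [project.entry-points."pandas_plotting_backends"]
# seaborn = "seaborn._pandas_backend"
```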
|
closed
|
2022-12-05T08:43:50Z
|
2022-12-06T21:32:08Z
|
https://github.com/mwaskom/seaborn/issues/3176
|
[] |
MarcoGorelli
| 8
|
yunjey/pytorch-tutorial
|
deep-learning
| 43
|
graph visualization in tensorboard with pytorch?
|
Hi, thanks a lot for this wonderful repo and recent work on tensorboard!
I just wonder when graph visualization in TensorBoard for PyTorch might become available.
Any idea?
Thanks again for your great work!
|
closed
|
2017-05-29T23:23:42Z
|
2017-05-30T04:25:15Z
|
https://github.com/yunjey/pytorch-tutorial/issues/43
|
[] |
EmbraceLife
| 1
|
lgienapp/aquarel
|
data-visualization
| 8
|
Trim transform should take axes argument
|
The trim function should take an optional axes argument, can be "x", "y", or "both".
|
closed
|
2022-08-12T21:26:38Z
|
2022-08-19T12:05:39Z
|
https://github.com/lgienapp/aquarel/issues/8
|
[
"enhancement"
] |
lgienapp
| 0
|
miguelgrinberg/python-socketio
|
asyncio
| 281
|
RedisManager listen hang
|
I'm using Flask-SocketIO to build a websocket server. I have read the Flask-SocketIO documentation on "Emitting from an External Process" and chose to use this feature. But after a few hours the websocket server could no longer receive messages from the external process (the Flask-SocketIO server listens for this event and does something). I have spent some time digging into it, described below:
- When emitting from an external process, I chose Redis as the message queue.
- The Redis server does receive the emit from the external process. This is the Pub/Sub support of Redis; I could subscribe to the same channel to prove it, so it is not a problem with the external process.
- The Socket.IO server uses RedisManager as the client manager, which launches a thread that subscribes to the Redis channel, [redis_manager](https://github.com/miguelgrinberg/python-socketio/blob/master/socketio/redis_manager.py#L104)
- listen() gets a connection from a Redis connection pool [here](https://github.com/miguelgrinberg/python-socketio/blob/master/socketio/redis_manager.py#L66)
In my case, my Redis server closes a connection after the client has been idle for 600 seconds (a common Redis server config). I found some connections in CLOSE_WAIT state, which is suspicious. It seems the Redis pubsub listen method does not raise a timeout exception in this case and does not realize the socket has been closed by the Redis server. I added socket_timeout=60, retry_on_timeout=True to the "from_url" call, and it works. This forces the socket to time out after 60s and raise a timeout error, and retry_on_timeout=True tells the Redis client to reconnect [here](https://github.com/andymccurdy/redis-py/blob/master/redis/client.py#L3016)
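Concretely, these are the two options I added to the from_url call (shown here on a plain redis-py client for clarity; in my setup the same kwargs end up in the call RedisManager makes):
```python
import redis

# Force a socket timeout so a silently-closed connection surfaces as an error,
# and let redis-py reconnect instead of hanging forever on a dead socket.
client = redis.StrictRedis.from_url(
    "redis://localhost:6379/0",
    socket_timeout=60,
    retry_on_timeout=True,
)
```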
env:
uwsgi+gevent
python-engineio==3.1.2
python-socketio==3.0.0
redis==2.10.6
gevent==1.4.0
uwsgi==2.0.17.1
Another question: why does the Redis manager only hang after 5~8 hours (given the 600-second idle timeout on the Redis server)? Is it something about gevent?
|
closed
|
2019-03-28T07:38:12Z
|
2019-03-28T11:36:59Z
|
https://github.com/miguelgrinberg/python-socketio/issues/281
|
[] |
wangyang02
| 4
|
inducer/pudb
|
pytest
| 270
|
Lastest release (2017.1.3) dies when trying to display a mock object
|
The latest release (2017.1.3) dies with the following stack trace (slightly edited to anonymize it):
```
File "/usr/lib/python2.7/bdb.py", line 49, in trace_dispatch
return self.dispatch_line(frame)
File ".../site-packages/pudb/debugger.py", line 160, in dispatch_line
self.user_line(frame)
File ".../site-packages/pudb/debugger.py", line 381, in user_line
self.interaction(frame)
File ".../site-packages/pudb/debugger.py", line 349, in interaction
show_exc_dialog=show_exc_dialog)
File ".../site-packages/pudb/debugger.py", line 2084, in call_with_ui
return f(*args, **kwargs)
File ".../site-packages/pudb/debugger.py", line 2322, in interaction
self.event_loop()
File ".../site-packages/pudb/debugger.py", line 2280, in event_loop
canvas = toplevel.render(self.size, focus=True)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/widget.py", line 1751, in render
canv = get_delegate(self).render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/container.py", line 1083, in render
focus and self.focus_part == 'body')
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/decoration.py", line 225, in render
canv = self._original_widget.render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/container.py", line 2085, in render
focus = focus and self.focus_position == i)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/widget.py", line 1751, in render
canv = get_delegate(self).render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/container.py", line 1526, in render
canv = w.render((maxcol, rows), focus=focus and item_focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/decoration.py", line 225, in render
canv = self._original_widget.render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/container.py", line 1526, in render
canv = w.render((maxcol, rows), focus=focus and item_focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/decoration.py", line 225, in render
canv = self._original_widget.render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/widget.py", line 1751, in render
canv = get_delegate(self).render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/listbox.py", line 457, in render
(maxcol, maxrow), focus=focus)
File ".../site-packages/urwid/listbox.py", line 339, in calculate_visible
self._set_focus_complete( (maxcol, maxrow), focus )
File ".../site-packages/urwid/listbox.py", line 704, in _set_focus_complete
(maxcol,maxrow), focus)
File ".../site-packages/urwid/listbox.py", line 674, in _set_focus_first_selectable
(maxcol, maxrow), focus=focus)
File ".../site-packages/urwid/listbox.py", line 406, in calculate_visible
n_rows = next.rows( (maxcol,) )
File ".../site-packages/urwid/widget.py", line 201, in cached_rows
return fn(self, size, focus)
File ".../site-packages/pudb/var_view.py", line 120, in rows
return len(self._get_text(size))
File ".../site-packages/pudb/var_view.py", line 108, in _get_text
alltext = var_label + ": " + value_str
TypeError: cannot concatenate 'str' and 'Mock' objects
```
From what I can tell pudb tries to stringify a Mock object. I'm using the [mock](https://pypi.python.org/pypi/mock/) library with Python 2.7. The code works with the previous version (2017.1.2). My pudb config is:
```
[pudb]
breakpoints_weight = 1
current_stack_frame = top
custom_shell =
custom_stringifier =
custom_theme =
display = auto
line_numbers = True
prompt_on_quit = True
seen_welcome = e032
shell = internal
sidebar_width = 0.5
stack_weight = 1
stringifier = type
theme = classic
variables_weight = 1
wrap_variables = True
```
|
closed
|
2017-09-02T11:43:54Z
|
2017-09-02T18:47:49Z
|
https://github.com/inducer/pudb/issues/270
|
[] |
cdman
| 1
|
PrefectHQ/prefect
|
data-science
| 17,547
|
Task caching doesn't work when using default policy if an input is a result from another task that is a pandas dataframe
|
### Bug summary
I have noticed when retrying failed flows runs that many of the tasks in the flow are rerunning despite having completely successfully and have a persisted result. Looking into it further it appears the issue is when the one or more of the inputs for one task is the result of another task and that result is a pandas dataframe. We recently migrated from Prefect 2 to Prefect 3 and this was never an issue previously so let me know if I am missing a behavior change. I am able to see the resulting dataframe result on S3. I suspect this must be some issues related to serialization.
We have the following settings:
```
PREFECT_RESULTS_PERSIST_BY_DEFAULT=true
PREFECT_DEFAULT_RESULT_STORAGE_BLOCK='s3-bucket/my-bucket-block'
```
Here is an example. We are running this on AWS ECS Fargate and using Prefect Cloud. There should be cache hits on tasks 7, 8, and 9 on an initial run. If you retry the flow you should have cache hits on all tasks. What I'm seeing is only cache hits on 7 and 9 on an initial run and hits on 1, 4, 5 , 6, 7 and 9. Missing on 2, 3, and 8.
```python
from prefect import flow, task, get_run_logger
import pandas as pd
import boto3
import requests
import io
@task(name="Getting data")
def extract(return_df=True):
    if return_df:
        url = 'https://www.federalreserve.gov/datadownload/Output.aspx?rel=H8&series=dd48166f12d986aede821fb86d9185d7&lastobs=&from=&to=&filetype=csv&label=include&layout=seriescolumn&type=package'
        r = requests.get(url)
        df = pd.read_csv(io.StringIO(r.content.decode('utf-8')))
        return df
    else:
        return 'this is a string'


@task(name="Doing something with data")
def transform(df_or_string):
    if isinstance(df_or_string, str):
        print('this is a string')
        the_string = df_or_string + ' more string'
        return the_string
    else:
        df = df_or_string
        df['new_col'] = 'Add this constant'
        return df


@task(name="Log output of transform")
def load(df):
    logger = get_run_logger()
    logger.info(df)


@flow(name="Retry Test")
def main_flow():
    data_frame = extract(return_df=True)  ### caches on retry
    df_transformed = transform(data_frame)  ### DOES NOT CACHE ON RETRY BECAUSE IT HAS AN INPUT THAT IS A DATAFRAME TYPE CACHED TASK RESULT
    _ = load(df_transformed)  ### DOES NOT CACHE ON RETRY BECAUSE IT HAS AN INPUT THAT IS A DATAFRAME TYPE CACHED TASK RESULT
    string_text = extract(return_df=False)  ### caches on retry
    longer_string = transform(string_text)  ### caches on retry
    _ = load(longer_string)  ### caches on retry
    data_frame2 = extract(return_df=True)  ### this caches on initial run and on retry
    df_transformed2 = transform(data_frame2)  ### DOES NOT CACHE ON INITIAL RUN OR RETRY BECAUSE IT HAS AN INPUT THAT IS A DATAFRAME TYPE CACHED TASK RESULT
    _ = load(df_transformed2)  ### THIS CACHES ON INITIAL RUN AND ON RETRY... not sure why this is different from the line above


if __name__ == "__main__":
    main_flow()
```
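As a possible workaround (an untested sketch, not a confirmed fix): compute an explicit cache key that hashes DataFrame inputs, so the key no longer depends on how Prefect serializes the DataFrame. This assumes the task decorator still accepts a `cache_key_fn` hook in Prefect 3; `pd.util.hash_pandas_object` provides the stable per-row hashes.
```python
import hashlib

import pandas as pd
from prefect import task


def hash_inputs(context, parameters):
    # Hypothetical cache key fn: build a stable digest even when an input is a DataFrame.
    digest = hashlib.sha256()
    for name, value in sorted(parameters.items()):
        digest.update(name.encode())
        if isinstance(value, pd.DataFrame):
            digest.update(pd.util.hash_pandas_object(value, index=True).values.tobytes())
        else:
            digest.update(repr(value).encode())
    return digest.hexdigest()


@task(name="Doing something with data", cache_key_fn=hash_inputs)
def transform(df_or_string):
    ...
```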
### Version info
```Text
Version: 3.2.7
API version: 0.8.4
Python version: 3.10.16
Git commit: d4d9001e
Built: Fri, Feb 21, 2025 7:41 PM
OS/Arch: linux/x86_64
Profile: ephemeral
Server type: ephemeral
Pydantic version: 2.10.6
Server:
Database: sqlite
SQLite version: 3.40.1
Integrations:
prefect-aws: 0.5.5
prefect-dask: 0.3.3
prefect-dbt: 0.7.0rc1
prefect-shell: 0.3.1
prefect-snowflake: 0.28.2
prefect-redis: 0.2.2
```
### Additional context
_No response_
|
open
|
2025-03-20T20:17:28Z
|
2025-03-22T03:07:45Z
|
https://github.com/PrefectHQ/prefect/issues/17547
|
[
"bug"
] |
jbnitorum
| 6
|
roboflow/supervision
|
machine-learning
| 1,594
|
Why does the traffic analysis output differ from the video in the README file?
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
I tried running the same code as in the README to generate video outputs, but the accuracy is not up to the mark.
Here are videos with different thresholds:
https://drive.google.com/drive/folders/1TFcEJcSvVSQXaMEYQTnhw2s-QNXP-LOz?usp=sharing
### Additional
_No response_
|
open
|
2024-10-13T21:43:18Z
|
2024-10-15T07:09:19Z
|
https://github.com/roboflow/supervision/issues/1594
|
[
"question"
] |
INF800
| 24
|
babysor/MockingBird
|
deep-learning
| 263
|
Seeking a solution
|
usage: demo_toolbox.py [-h] [-d DATASETS_ROOT] [-e ENC_MODELS_DIR] [-s SYN_MODELS_DIR] [-v VOC_MODELS_DIR] [--cpu] [--seed SEED] [--no_mp3_support]
demo_toolbox.py: error: argument -d/--datasets_root: expected one argument
|
closed
|
2021-12-12T00:38:54Z
|
2021-12-26T03:22:52Z
|
https://github.com/babysor/MockingBird/issues/263
|
[] |
pangranaaa
| 1
|
schemathesis/schemathesis
|
pytest
| 1,739
|
[WARNING] HypothesisDeprecationWarning: `HealthCheck.all()` is deprecated
|
**Checklist**
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
**Describe the bug**
Whether I run the `st` command line tool or use Schemathesis in my own pytest suite, I receive a deprecation warning:
```
.../venv/lib/python3.10/site-packages/hypothesis/_settings.py:467: HypothesisDeprecationWarning: `Healthcheck.all()` is deprecated; use `list(HealthCheck)` instead.
The `hypothesis codemod` command-line tool can automatically refactor your code to fix this warning.
note_deprecation(
```
The warning isn't reproduced on all endpoints; however, as shown below, it is reported at least on the POST operation on https://example.schemathesis.io/api/payload .
**To Reproduce**
Steps to reproduce the behavior:
1. Run `st run --endpoint /api/payload https://example.schemathesis.io/openapi.json`
2. Output:
```
================================= Schemathesis test session starts =================================
Schema location: https://example.schemathesis.io/openapi.json
Base URL: https://example.schemathesis.io/api
Specification version: Open API 3.0.2
Workers: 1
Collected API operations: 1
.../venv/lib/python3.10/site-packages/hypothesis/_settings.py:467: HypothesisDeprecationWarning: `Healthcheck.all()` is deprecated; use `list(HealthCheck)` instead.
The `hypothesis codemod` command-line tool can automatically refactor your code to fix this warning.
note_deprecation(
POST /api/payload . [100%]
============================================= SUMMARY ==============================================
Performed checks:
not_a_server_error 101 / 101 passed PASSED
Hint: You can visualize test results in Schemathesis.io by using `--report` in your CLI command.
======================================== 1 passed in 12.76s ========================================
```
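For context, the replacement Hypothesis suggests looks like this in plain Hypothesis settings code (a sketch of the deprecated vs. suggested spelling, not of Schemathesis internals):
```python
from hypothesis import HealthCheck, settings

# Deprecated spelling that triggers the warning:
# settings(suppress_health_check=HealthCheck.all())

# Suggested replacement:
settings(suppress_health_check=list(HealthCheck))
```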
**Expected behavior**
No warnings to be shown.
**Environment (please complete the following information):**
- OS: MacOS 12.6.3
- Python version: 3.10.10
- Schemathesis version: 3.19.1
- Hypothesis version: 6.75.3
- Spec version: Open API 3.0.2
|
closed
|
2023-05-22T17:29:00Z
|
2023-05-25T07:59:29Z
|
https://github.com/schemathesis/schemathesis/issues/1739
|
[
"Type: Bug",
"Difficulty: Beginner"
] |
kgutwin
| 0
|
mljar/mljar-supervised
|
scikit-learn
| 283
|
Dont use model with Random Feature in training
|
The model trained with the Random Feature can currently be used later in training, for ensembling, stacking, or hill climbing steps. Please don't use it for those.
|
closed
|
2021-01-10T11:14:05Z
|
2021-01-10T14:19:13Z
|
https://github.com/mljar/mljar-supervised/issues/283
|
[
"bug"
] |
pplonski
| 0
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,774
|
Do Not Track and Enhanced Security
|
Is it possible to activate the "Do Not Track" option in some way and raise security to "Enhanced" in order to lower the fingerprint profile?
Ty!
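Not an authoritative answer, but one untested approach is to set the corresponding Chrome preferences when building the options. Both preference names below are assumptions taken from generic Selenium usage, not from this project's documentation:
```python
import undetected_chromedriver as uc

options = uc.ChromeOptions()
options.add_experimental_option("prefs", {
    "enable_do_not_track": True,    # assumed pref name for "Do Not Track"
    "safebrowsing.enhanced": True,  # assumed pref name for "Enhanced" protection
})
driver = uc.Chrome(options=options)
```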
|
open
|
2024-03-01T15:59:49Z
|
2024-03-03T14:56:58Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1774
|
[] |
AeonDave
| 1
|
microsoft/Bringing-Old-Photos-Back-to-Life
|
pytorch
| 198
|
Pytorch version issue
|
What version of PyTorch do we need to install? I have tried 1.2, 1.4, 1.6, and 1.9 so far, but torch still can't be found.
|
closed
|
2021-09-16T03:03:14Z
|
2021-12-15T08:06:05Z
|
https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/198
|
[] |
shumile66
| 2
|
joeyespo/grip
|
flask
| 47
|
READMEs are 790px wide now
|
It would be great if Grip's output was the same width as GitHub's. I'm trying to compare the two.


|
closed
|
2014-02-04T17:18:24Z
|
2014-07-20T19:08:24Z
|
https://github.com/joeyespo/grip/issues/47
|
[
"bug"
] |
chadwhitacre
| 1
|
axnsan12/drf-yasg
|
django
| 72
|
Add support for Marshmallow Schemas
|
[Marshmallow](https://marshmallow.readthedocs.io/) is a schema (serialization/deserialization) library that can be used with Django REST Framework.
It can be used instead of DRF Serializers but is not supported by **drf-yasg**.
It'd be nice to have it.
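For illustration, a minimal Marshmallow schema of the kind drf-yasg would need to introspect (the field names are made up):
```python
from marshmallow import Schema, fields


class UserSchema(Schema):
    id = fields.Int(dump_only=True)
    email = fields.Email(required=True)
    name = fields.Str()
```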
|
closed
|
2018-02-27T13:51:38Z
|
2018-02-27T14:51:53Z
|
https://github.com/axnsan12/drf-yasg/issues/72
|
[] |
luxcem
| 1
|
explosion/spaCy
|
machine-learning
| 13,663
|
Hyphenated words in French
|
## How to reproduce the behaviour
`j'imagine des grands-pères` is tokenized to `j'`, `imagine`, `des`, `grands-`, `pères`
https://demos.explosion.ai/displacy?text=j%27imagine%20des%20grands-p%C3%A8res&model=fr_core_news_sm&cpu=1&cph=1
I would expect it to tokenize to `j'`, `imagine`, `des`, `grands-pères`, i.e. not split `grands-pères`.
In English `he is a top-performer` does not split `top-performer`
https://demos.explosion.ai/displacy?text=he%20is%20a%20top-performer&model=en_core_web_sm&cpu=1&cph=1
Is this intended or a bug?
If it is intended, how can I adjust the tokenization so it does not split hyphenated words?
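In case it is intended: one way to keep intra-word hyphens together is to rebuild the infix rules without the hyphen-splitting pattern, following the general tokenizer-customization recipe from the spaCy usage docs. Whether this interacts well with the French pipeline's own defaults is an assumption; a sketch:
```python
import spacy
from spacy.lang.char_classes import ALPHA, ALPHA_LOWER, ALPHA_UPPER, CONCAT_QUOTES, LIST_ELLIPSES, LIST_ICONS
from spacy.util import compile_infix_regex

nlp = spacy.load("fr_core_news_sm")

# Default-style infix rules, minus the rule that splits on hyphens between letters.
infixes = (
    LIST_ELLIPSES
    + LIST_ICONS
    + [
        r"(?<=[0-9])[+\-\*^](?=[0-9-])",
        r"(?<=[{al}{q}])\.(?=[{au}{q}])".format(al=ALPHA_LOWER, au=ALPHA_UPPER, q=CONCAT_QUOTES),
        r"(?<=[{a}]),(?=[{a}])".format(a=ALPHA),
        r"(?<=[{a}0-9])[:<>=/](?=[{a}])".format(a=ALPHA),
    ]
)
nlp.tokenizer.infix_finditer = compile_infix_regex(infixes).finditer

print([t.text for t in nlp("j'imagine des grands-pères")])
# expected: ["j'", 'imagine', 'des', 'grands-pères']
```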
|
closed
|
2024-10-15T14:39:36Z
|
2024-11-16T00:03:16Z
|
https://github.com/explosion/spaCy/issues/13663
|
[] |
lsmith77
| 3
|
akfamily/akshare
|
data-science
| 5,487
|
AKShare interface issue report | currency_boc_safe() fails to fetch data
|
Python version: 3.12.6
akshare version: 1.15.62
Operating system: Windows 11 Pro, 23H2, 22631.4602
Code:
```python
import akshare as ak
currency_boc_safe_df = ak.currency_boc_safe()
print(currency_boc_safe_df)
```
It does not return any information.
|
closed
|
2025-01-02T02:52:40Z
|
2025-01-02T15:29:44Z
|
https://github.com/akfamily/akshare/issues/5487
|
[
"bug"
] |
yong900630
| 1
|
nvbn/thefuck
|
python
| 1,400
|
Conflicting documentation for correct syntax of alias
|
The output of `thefuck --version` (something like `The Fuck 3.1 using Python
3.5.0 and Bash 4.4.12(1)-release`):
The Fuck 3.32 using Python 3.11.4 and Bash 5.2.15(1)-release
Your system (Debian 7, ArchLinux, Windows, etc.):
openSUSE Tumbleweed
How to reproduce the bug:
Read the documentation. It says to use
`eval $(thefuck --alias)`
Now run the application (type `fuck`) without the alias present, and it will mention that the alias is missing, and it can be automatically added by typing `fuck` again
Do so. The entry is created:
`eval "$(thefuck --alias)"`
Note the quotes or lack thereof.
Anything else you think is relevant:
I have also seen issues where single quotes are used, i.e.:
`eval '$(thefuck --alias)'`
What is the correct syntax?
|
open
|
2023-09-03T05:24:08Z
|
2023-09-03T05:24:08Z
|
https://github.com/nvbn/thefuck/issues/1400
|
[] |
pallaswept
| 0
|
gradio-app/gradio
|
data-science
| 9,922
|
MultimodalTextbox file_count "single" doesn't restrict the number of files as input.
|
### Describe the bug
When setting file_count in MultimodalTextbox to "single", only one file can be uploaded at a time. However, when you continue to upload another file, it is uploaded in addition to the old one instead of replacing it.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def submit(msg):
    return msg


demo = gr.Interface(fn=submit, inputs=gr.MultimodalTextbox(file_count='single'), outputs="textbox")

if __name__ == "__main__":
    demo.launch()
```
### Screenshot

### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.5.0
gradio_client version: 1.4.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.4.0
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.3.2
gradio-client==1.4.2 is not installed.
httpx: 0.27.0
huggingface-hub: 0.26.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.6
packaging: 24.1
pandas: 2.2.2
pillow: 10.4.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 5.4.1
ruff: 0.5.4
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit==0.12.0 is not installed.
typer: 0.12.3
typing-extensions: 4.12.2
urllib3: 2.2.2
uvicorn: 0.30.3
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
```
### Severity
I can work around it
|
closed
|
2024-11-09T18:50:58Z
|
2024-11-12T22:39:58Z
|
https://github.com/gradio-app/gradio/issues/9922
|
[
"bug"
] |
sthemeow
| 2
|
hack4impact/flask-base
|
sqlalchemy
| 211
|
SyntaxError: invalid syntax while running Flask in venv
|
Why could this be happening? Please help!
```
Traceback (most recent call last):
File "/opt/anaconda3/bin/flask", line 10, in <module>
sys.exit(main())
File "/opt/anaconda3/lib/python3.7/site-packages/flask/cli.py", line 967, in main
cli.main(args=sys.argv[1:], prog_name="python -m flask" if as_module else None)
File "/opt/anaconda3/lib/python3.7/site-packages/flask/cli.py", line 586, in main
return super(FlaskGroup, self).main(*args, **kwargs)
File "/opt/anaconda3/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/opt/anaconda3/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/anaconda3/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/anaconda3/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/opt/anaconda3/lib/python3.7/site-packages/click/decorators.py", line 73, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/opt/anaconda3/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/opt/anaconda3/lib/python3.7/site-packages/flask/cli.py", line 848, in run_command
app = DispatchingApp(info.load_app, use_eager_loading=eager_loading)
File "/opt/anaconda3/lib/python3.7/site-packages/flask/cli.py", line 305, in __init__
self._load_unlocked()
File "/opt/anaconda3/lib/python3.7/site-packages/flask/cli.py", line 330, in _load_unlocked
self._app = rv = self.loader()
File "/opt/anaconda3/lib/python3.7/site-packages/flask/cli.py", line 392, in load_app
app = locate_app(self, import_name, None, raise_if_not_found=False)
File "/opt/anaconda3/lib/python3.7/site-packages/flask/cli.py", line 240, in locate_app
__import__(module_name)
File "/Users/raghav/Downloads/CS_Training/PS/plumber_app/app/app.py", line 1, in <module>
import pandas as pd
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/__init__.py", line 30, in <module>
from pandas._libs import hashtable as _hashtable, lib as _lib, tslib as _tslib
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/_libs/__init__.py", line 13, in <module>
from pandas._libs.interval import Interval
File "pandas/_libs/interval.pyx", line 1, in init pandas._libs.interval
File "pandas/_libs/hashtable.pyx", line 1, in init pandas._libs.hashtable
File "pandas/_libs/missing.pyx", line 1, in init pandas._libs.missing
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/_libs/tslibs/__init__.py", line 30, in <module>
from .conversion import OutOfBoundsTimedelta, localize_pydatetime
File "pandas/_libs/tslibs/conversion.pyx", line 1, in init pandas._libs.tslibs.conversion
File "pandas/_libs/tslibs/timezones.pyx", line 7, in init pandas._libs.tslibs.timezones
File "/opt/anaconda3/lib/python3.7/site-packages/dateutil/tz.py", line 78
`self._name`,
```
|
closed
|
2020-11-09T19:50:55Z
|
2021-05-11T23:50:39Z
|
https://github.com/hack4impact/flask-base/issues/211
|
[] |
rpalri
| 6
|
onnx/onnx
|
deep-learning
| 6,822
|
ONNX Build Issue
|
Hi Team,
While building ONNX on a Linux system following the procedure at https://pypi.org/project/onnx/, I am getting the error below.
```
× Building editable for onnx (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /usr/local/bin/python3.10 /usr/local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_editable /tmp/tmp8xo8gk75
cwd: /root/onnx
Building editable for onnx (pyproject.toml) ... error
ERROR: Failed building editable for onnx
Failed to build onnx
ERROR: Failed to build installable wheels for some pyproject.toml based projects (onnx)
```
Attached build logs for the same.
[Onnx Build Logs.txt](https://github.com/user-attachments/files/19382698/Onnx.Build.Logs.txt)
|
open
|
2025-03-21T05:54:38Z
|
2025-03-21T14:06:13Z
|
https://github.com/onnx/onnx/issues/6822
|
[] |
vijayaramaraju-kalidindi
| 10
|
encode/httpx
|
asyncio
| 2,706
|
Unable to get cookies
|
I am trying to get the cookies on the response of the following request:
```
https://www.nfce.fazenda.sp.gov.br/qrcode?p=35230547508411150980653010000502991929293282|2|1|1|C34073C1C020480295BCB68D8E4A31C2CA80A1FB
```
I can get it using `requests`:
```
import requests

with requests.Session() as s:
    r = s.get('https://www.nfce.fazenda.sp.gov.br/qrcode?p=35230547508411150980653010000502991929293282|2|1|1|C34073C1C020480295BCB68D8E4A31C2CA80A1FB')
    print(r.cookies)
```
But it does not work for `httpx`
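For comparison, the httpx version I would expect to be equivalent. Note that, unlike requests, httpx does not follow redirects by default, and `response.cookies` only holds cookies set by that particular response; the accumulated jar lives on `client.cookies`:
```python
import httpx

url = ("https://www.nfce.fazenda.sp.gov.br/qrcode?p="
       "35230547508411150980653010000502991929293282|2|1|1|C34073C1C020480295BCB68D8E4A31C2CA80A1FB")

with httpx.Client(follow_redirects=True) as client:
    r = client.get(url)
    print(r.cookies)       # cookies set on this response only
    print(client.cookies)  # cookies accumulated across redirects
```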
|
closed
|
2023-05-17T19:34:58Z
|
2023-05-18T12:57:18Z
|
https://github.com/encode/httpx/issues/2706
|
[] |
jalvespinto
| 3
|
microsoft/nni
|
machine-learning
| 4,839
|
Hyper-parameter overview top trials order
|
In the WebUI, under Trial details -> Hyper-parameter, one can choose to show the top xx% trials.
By default this shows the trials with the highest scores.
Is there a way to change this to showing the trials with the lowest scores, e.g. if my metric is MSE?
So far my workaround is just to multiply my score by (-1).
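The workaround described, as a sketch on the trial side (`evaluate_model` is a hypothetical stand-in for the trial's evaluation code):
```python
import nni

mse = evaluate_model()          # hypothetical: compute the trial's MSE
nni.report_final_result(-mse)   # negate so the "top" trials in the WebUI are the lowest-MSE ones
```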
|
open
|
2022-05-04T16:53:44Z
|
2022-09-07T09:11:43Z
|
https://github.com/microsoft/nni/issues/4839
|
[] |
TimSchim
| 12
|
modin-project/modin
|
data-science
| 7,162
|
Pin pandas to 2.2.1 for a single release
|
We will remove the hard pin in a subsequent release.
|
closed
|
2024-04-09T21:46:27Z
|
2024-04-12T09:06:45Z
|
https://github.com/modin-project/modin/issues/7162
|
[] |
sfc-gh-dpetersohn
| 2
|
sloria/TextBlob
|
nlp
| 201
|
Lemmatize error: MissingCorpusError
|
I have a list of (word, pos_tag) pairs and would like to lemmatize each word using textblob's Word.lemmatize.
```
from textblob import Word

wds = [
    [('This', ''), ('is', 'v'), ('the', ''), ('first', 'a'), ('sentences', 'n')],
    [('This', ''), ('is', 'v'), ('the', ''), ('second', 'a'), ('one', '')],
]
lem_wds_2 = [[Word(wd).lemmatize(pos) for wd, pos in item] for item in wds]
```
The above code raises the following error:
```
MissingCorpusError:
Looks like you are missing some required data for this feature.
To download the necessary data, simply run
python -m textblob.download_corpora
or use the NLTK downloader to download the missing data: http://nltk.org/data.html
If this doesn't fix the problem, file an issue at https://github.com/sloria/TextBlob/issues.
```
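The usual fix is what the error message recommends; a sketch of the equivalent NLTK downloads for the WordNet lemmatizer:
```python
# Either run: python -m textblob.download_corpora
# or fetch the relevant NLTK data directly:
import nltk

nltk.download("wordnet")
nltk.download("omw-1.4")  # multilingual WordNet data; whether it is needed here is an assumption
```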
|
open
|
2018-03-27T04:56:53Z
|
2018-06-16T13:14:17Z
|
https://github.com/sloria/TextBlob/issues/201
|
[] |
RayLei
| 1
|
ultralytics/ultralytics
|
python
| 19,188
|
Selecting a better metric for the "best" model
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi!
I am using YOLO11 segmentation with the large model. For my use case, I have 3-4 large objects in each image, and which class each belongs to is very obvious, so I never have issues with precision or recall. I also don't care about bounding boxes. The only thing I care about is how accurate the segmentation masks are.
Oftentimes, the best model (from a mAP standpoint) does not produce the best segmentation masks. I normally train for a certain number of epochs, save the model at each epoch, select the weights from the epoch with the lowest validation segmentation loss, and deploy that model.
This works, but is there a way to change the evaluation metric used to pick the "best" model? Is there an even better metric for my application than validation seg loss?
Thanks!
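For what it's worth, the manual selection described above can be scripted against the run's results.csv. The column names (`val/seg_loss`, `epoch`) and the run directory are assumptions about the CSV layout rather than a documented API:
```python
import pandas as pd

results = pd.read_csv("runs/segment/train/results.csv")
results.columns = results.columns.str.strip()  # strip any whitespace padding in the headers
best = results.loc[results["val/seg_loss"].idxmin()]
print(f"Lowest val seg loss {best['val/seg_loss']:.4f} at epoch {int(best['epoch'])}")
```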
### Additional
_No response_
|
open
|
2025-02-11T17:09:35Z
|
2025-02-14T19:42:37Z
|
https://github.com/ultralytics/ultralytics/issues/19188
|
[
"question",
"segment"
] |
Tom-Forsyth
| 6