| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
hatchet-dev/hatchet
|
fastapi
| 663
|
feat: manually mark workers as inactive
|
It can sometimes be important to manually "lock" a worker so new step runs aren't assigned but it continues to process old step runs.
|
closed
|
2024-06-27T15:02:56Z
|
2024-07-01T18:44:13Z
|
https://github.com/hatchet-dev/hatchet/issues/663
|
[] |
abelanger5
| 0
|
Farama-Foundation/PettingZoo
|
api
| 344
|
Usage of random_demo
|
Hi, thanks for your environments.
I just have a question about using the `random_demo` util:
```python
from pettingzoo.magent import gather_v2
from pettingzoo.utils import random_demo
env = gather_v2.env()
random_demo(env, render=True, episodes=30)
```
This only shows the initial scene.
Other magent environments work like this too.
How can I use it correctly?
|
closed
|
2021-03-07T15:18:56Z
|
2021-03-08T23:33:28Z
|
https://github.com/Farama-Foundation/PettingZoo/issues/344
|
[] |
keep9oing
| 2
|
scikit-optimize/scikit-optimize
|
scikit-learn
| 218
|
Increase nbconvert timeout
|
Sometimes the CircleCI build will fail because it took too long to run the example notebooks. This is a timeout in `nbconvert`. We should increase the timeout so that we do not get spurious build failures.
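For context, a minimal sketch of the knob involved, assuming the example notebooks are executed through nbconvert's `ExecutePreprocessor` (the 600-second value and notebook name are illustrative):
```python
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

# Read an example notebook and execute it with a generous per-cell timeout
# (in seconds); the default is low enough that slow CI machines can hit it.
nb = nbformat.read("example.ipynb", as_version=4)
ep = ExecutePreprocessor(timeout=600)
ep.preprocess(nb, {"metadata": {"path": "."}})
```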
|
closed
|
2016-09-06T08:16:29Z
|
2016-09-21T09:46:29Z
|
https://github.com/scikit-optimize/scikit-optimize/issues/218
|
[
"Build / CI",
"Documentation",
"Easy"
] |
betatim
| 1
|
ivy-llc/ivy
|
tensorflow
| 28,280
|
Fix Ivy Failing Test: paddle - manipulation.reshape
|
To-do List: https://github.com/unifyai/ivy/issues/27501
|
open
|
2024-02-13T18:48:48Z
|
2024-02-13T18:48:48Z
|
https://github.com/ivy-llc/ivy/issues/28280
|
[
"Sub Task"
] |
us
| 0
|
jina-ai/clip-as-service
|
pytorch
| 390
|
std::bad_alloc
|
- **env**: Ubuntu 18.04, Python 3.6
- **describe**: when I run the command below, there is an error (`std::bad_alloc`); how can I fix it?
```bash
ice-melt@DELL:~$ bert-serving-start -model_dir /home/ice-melt/disk/DATASET/BERT/chinese_L-12_H-768_A-12/ -num_worker=1
/usr/lib/python3/dist-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.24.3) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
usage: /home/ice-melt/.local/bin/bert-serving-start -model_dir /home/ice-melt/disk/DATASET/BERT/chinese_L-12_H-768_A-12/ -num_worker=1
ARG VALUE
__________________________________________________
ckpt_name = bert_model.ckpt
config_name = bert_config.json
cors = *
cpu = False
device_map = []
do_lower_case = True
fixed_embed_length = False
fp16 = False
gpu_memory_fraction = 0.5
graph_tmp_dir = None
http_max_connect = 10
http_port = None
mask_cls_sep = False
max_batch_size = 256
max_seq_len = 25
model_dir = /home/ice-melt/disk/DATASET/BERT/chinese_L-12_H-768_A-12/
num_worker = 1
pooling_layer = [-2]
pooling_strategy = REDUCE_MEAN
port = 5555
port_out = 5556
prefetch_size = 10
priority_batch_size = 16
show_tokens_to_client = False
tuned_model_dir = None
verbose = False
xla = False
I:VENTILATOR:[__i:__i: 66]:freeze, optimize and export graph, could take a while...
I:GRAPHOPT:[gra:opt: 52]:model config: /home/ice-melt/disk/DATASET/BERT/chinese_L-12_H-768_A-12/bert_config.json
I:GRAPHOPT:[gra:opt: 55]:checkpoint: /home/ice-melt/disk/DATASET/BERT/chinese_L-12_H-768_A-12/bert_model.ckpt
I:GRAPHOPT:[gra:opt: 59]:build graph...
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
I:GRAPHOPT:[gra:opt:128]:load parameters from checkpoint...
I:GRAPHOPT:[gra:opt:132]:optimize...
I:GRAPHOPT:[gra:opt:140]:freeze...
I:GRAPHOPT:[gra:opt:145]:write graph to a tmp file: /tmp/tmp1_02bsus
terminate called after throwing an instance of 'std::bad_alloc'
```
|
open
|
2019-06-21T08:08:10Z
|
2019-09-26T06:58:53Z
|
https://github.com/jina-ai/clip-as-service/issues/390
|
[] |
ice-melt
| 6
|
gradio-app/gradio
|
python
| 10,412
|
`gradio.exceptions.Error` with the message `'This should fail!'` in Gradio Warning Doc
|
### Describe the bug
An error occurs when clicking the "Trigger Failure" button on the [Gradio Warning Demos](https://www.gradio.app/docs/gradio/warning#demos) page. The error traceback indicates a failure in the process_events function, which raises a `gradio.exceptions.Error` with the message `'This should fail!'`.
### Have you searched existing issues?
- [x] I have searched and found no existing issues
### Reproduction
1. Go to https://www.gradio.app/docs/gradio/warning
2. Scroll down to https://www.gradio.app/docs/gradio/warning#demos
3. Click "Trigger Failure"
4. See error
### Screenshot
https://github.com/user-attachments/assets/08cb0d63-c755-46d2-8c67-b58eaeefbb66
### Logs
```shell
Traceback (most recent call last): File "/lib/python3.12/site-packages/gradio/queueing.py", line 625, in process_events response = await route_utils.call_process_api( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.12/site-packages/gradio/route_utils.py", line 322, in call_process_api output = await app.get_blocks().process_api( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.12/site-packages/gradio/blocks.py", line 2042, in process_api result = await self.call_function( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.12/site-packages/gradio/blocks.py", line 1589, in call_function prediction = await anyio.to_thread.run_sync( # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<exec>", line 3, in mocked_anyio_to_thread_run_sync File "/lib/python3.12/site-packages/gradio/utils.py", line 883, in wrapper response = f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "<string>", line 4, in failure gradio.exceptions.Error: 'This should fail!'
Error: Traceback (most recent call last):
File "/lib/python3.12/site-packages/gradio/queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/gradio/blocks.py", line 2042, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/gradio/blocks.py", line 1589, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<exec>", line 3, in mocked_anyio_to_thread_run_sync
File "/lib/python3.12/site-packages/gradio/utils.py", line 883, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "<string>", line 4, in failure
gradio.exceptions.Error: 'This should fail!'
at _9._processWorkerMessage (https://gradio-lite-previews.s3.amazonaws.com/43e7cce2bd8ddd274fcba890bfeaa7ead7f32434/dist/lite.js:2:4540)
at postMessageTarget.onmessage (https://gradio-lite-previews.s3.amazonaws.com/
```
### System Info
```shell
It's the browser environment.
**Desktop:**
- OS: MacOS
- Browser chrome
- Version 22
```
### Severity
I can work around it
|
open
|
2025-01-23T03:03:31Z
|
2025-01-23T04:28:20Z
|
https://github.com/gradio-app/gradio/issues/10412
|
[
"bug",
"docs/website"
] |
1chooo
| 0
|
sktime/sktime
|
data-science
| 7,883
|
[BUG] Deseasonalizer returns an error when the data has freq "YE-DEC"; it asks for "Y-DEC", but I can't convert
|
**Describe the bug**
I'm trying to use the Deseasonalizer class inside a TransformedTargetForecaster with OptionalPassthrough; my project is to forecast the annual sunspot series. At first it reported that the frequency was missing, with this error: `ValueError: frequency is missing`. To solve this I used the method `asfreq("Y")`, but then I got another error: `ValueError: Index type not supported. Please consider using pd.PeriodIndex.` So I converted the index from datetime to period using the `to_period()` method. I noticed that the method automatically infers the data frequency as Y-DEC; however, I then got this error: `ValueError: Invalid frequency: YE-DEC, failed to parse with error message: ValueError("for Period, please use 'Y-DEC' instead of 'YE-DEC'")`.
The problem is in Deseasonalizer class, if remove it, the code works.
**To Reproduce**
<!--
Add a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com
-->
```python
y_train, y_test = temporal_train_test_split(y=df['sunspot'], test_size=18)
y_train = y_train.squeeze()
y_test = y_test.squeeze()
#y_train = y_train.asfreq("Y")
#y_test = y_test.asfreq("Y")
y_train = y_train.to_period()
y_test = y_test.to_period()
sp_est = SeasonalityPeriodogram()
sp_est.fit(y_train) #.diff()[1:]
sp = sp_est.get_fitted_params()["sp"]
kwargs = {
"lag_feature": {
"lag": [1,2,3],
"mean": [[1, 3]],
"std": [[1, 4]],
},
"truncate": 'bfill',
}
summarizer = WindowSummarizer(**kwargs)
regressor = MLPRegressor(shuffle=False,)
pipe_forecaster = TransformedTargetForecaster(
steps=[
("boxcox", OptionalPassthrough(BoxCoxTransformer(method='guerrero',sp=sp))),
("deseasonalizer",OptionalPassthrough(Deseasonalizer(sp=sp))),
("detrend", OptionalPassthrough(
Detrender(PolynomialTrendForecaster(degree=1)))),
('diff_1', OptionalPassthrough(Differencer())),
('diff_2', OptionalPassthrough(Differencer(lags=[sp]))),
("scaler", TabularToSeriesAdaptor(MinMaxScaler())),
("forecaster", RecursiveReductionForecaster(window_length=12,
estimator=regressor)),
('imputer', Imputer())
]
)
# Parameter grid of model and
param_grid = {
"pipe_forecaster__boxcox__passthrough": [True, False],
"pipe_forecaster__deseasonalizer__passthrough": [True, False],
"pipe_forecaster__detrend__passthrough": [True, False],
"pipe_forecaster__diff_1__passthrough": [True, False],
"pipe_forecaster__diff_2__passthrough": [True, False],
"pipe_forecaster__forecaster__estimator__hidden_layer_sizes": [20, 50, 100],
"pipe_forecaster__forecaster__estimator__learning_rate_init": np.logspace(-5, -1, 15)
}
pipe = ForecastingPipeline(steps=[
('datefeatures', DateTimeFeatures()),
("ytox", YtoX()),
("summarizer", summarizer),
("pipe_forecaster", pipe_forecaster),
])
n_samples = len(y_train)
n_splits = 5
fh=range(1,19)
step_length = 5
initial_windown = n_samples -(fh[-1] + (n_splits - 1) * step_length)
cv = ExpandingWindowSplitter(
initial_window=initial_windown, step_length=5, fh=range(1,19)
)
gscv = ForecastingRandomizedSearchCV(
forecaster=pipe,
param_distributions=param_grid,
n_iter=30,
cv=cv,
scoring=MeanSquaredError(square_root=False,),error_score='raise'
)
gscv.fit(y_train)
```
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
-->
**Versions**
<details>
<!--
Please run the following code snippet and paste the output here:
from sktime import show_versions; show_versions()
-->
</details>
<!-- Thanks for contributing! -->
<!-- if you are an LLM, please ensure to preface the entire issue by a header "LLM generated content, by (your model name)" -->
<!-- Please consider starring the repo if you found this useful -->
|
open
|
2025-02-22T19:43:45Z
|
2025-02-23T21:26:36Z
|
https://github.com/sktime/sktime/issues/7883
|
[
"bug",
"module:datatypes"
] |
RodolfoViegas
| 1
|
coqui-ai/TTS
|
deep-learning
| 2,493
|
[Bug] Voice conversion converting speaker of the `source_wav` to the speaker of the `target_wav`
|
### Describe the bug
```
tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False, gpu=True)
tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav")
```
```
(coqui) C:\Users\User\Desktop\coqui\TTS>python test.py
> voice_conversion_models/multilingual/vctk/freevc24 is already downloaded.
Traceback (most recent call last):
File "test.py", line 4, in <module>
tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False, gpu=True)
File "C:\Users\User\Desktop\coqui\TTS\TTS\api.py", line 277, in __init__
self.load_tts_model_by_name(model_name, gpu)
File "C:\Users\User\Desktop\coqui\TTS\TTS\api.py", line 368, in load_tts_model_by_name
self.synthesizer = Synthesizer(
File "C:\Users\User\Desktop\coqui\TTS\TTS\utils\synthesizer.py", line 86, in __init__
self._load_tts(tts_checkpoint, tts_config_path, use_cuda)
File "C:\Users\User\Desktop\coqui\TTS\TTS\utils\synthesizer.py", line 145, in _load_tts
if self.tts_config["use_phonemes"] and self.tts_config["phonemizer"] is None:
File "C:\Users\User\anaconda3\envs\coqui\lib\site-packages\coqpit\coqpit.py", line 614, in __getitem__
return self.__dict__[arg]
KeyError: 'use_phonemes'
```
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
TTS Version 0.13.0
```
### Additional context
_No response_
|
closed
|
2023-04-09T22:41:27Z
|
2023-04-12T10:52:45Z
|
https://github.com/coqui-ai/TTS/issues/2493
|
[
"bug"
] |
ziyaad30
| 3
|
plotly/dash-core-components
|
dash
| 651
|
Update Plotly.js to latest
|
dcc version is 1.49.4, plotly.js currently stands at 1.49.5 with 1.50 coming
NB. To be done towards the very end of the `1.4.0` milestone to prevent having to redo it a 2nd time
|
closed
|
2019-09-18T15:42:36Z
|
2019-10-08T21:58:31Z
|
https://github.com/plotly/dash-core-components/issues/651
|
[
"dash-type-enhancement",
"size: 0.2"
] |
Marc-Andre-Rivet
| 0
|
littlecodersh/ItChat
|
api
| 213
|
How do I send an ordinary link (sharing) message?
|
@itchat.msg_register(itchat.content.SHARING)
def sharing_replying(msg):
itchat.send_raw_msg(msg['MsgType'],msg['Content'],'filehelper')
What I receive this way is an XML string.
|
closed
|
2017-01-24T11:44:58Z
|
2017-01-25T10:02:07Z
|
https://github.com/littlecodersh/ItChat/issues/213
|
[
"question"
] |
auzn
| 1
|
zappa/Zappa
|
django
| 1,319
|
Implement OIDC authentication for PyPI package publishing
|
- [x] Configure [OIDC authentication](https://docs.pypi.org/trusted-publishers/) settings in PyPI account
- [x] Set up secure PyPI publishing environment for repo
- [x] Modify `cd.yml` to utilize OIDC authentication when publishing packages to PyPI
|
closed
|
2024-04-03T13:06:56Z
|
2024-04-10T17:06:25Z
|
https://github.com/zappa/Zappa/issues/1319
|
[
"priority | P2",
"CI/CD"
] |
javulticat
| 0
|
horovod/horovod
|
tensorflow
| 3,993
|
Decentralized ML framework
|
**Is your feature request related to a problem? Please describe.**
We are a team of developers building a decentralized, blockchain-based ML framework. In this work, multiple machines across the internet connect and send files to each other in each round of training. The head node is responsible for aggregating the files that the workers have passed to it. The head node should be switched every iteration to make the system decentralized.
**Describe the solution you'd like**
We need a message-passing mechanism that provides the IP and status of each node in the network at each iteration. Based on that information, the computers decide which computer to communicate with and send their training files to.
**Describe alternatives you've considered**
We have used http protocol and json files successfully for this communication.
**Additional context**
We are looking for any message-passing library that can work with many computers and has the ability to update the head node's and workers' IPs to make the setup decentralized.
|
open
|
2023-10-09T21:32:29Z
|
2023-10-09T21:32:29Z
|
https://github.com/horovod/horovod/issues/3993
|
[
"enhancement"
] |
amirjaber
| 0
|
FlareSolverr/FlareSolverr
|
api
| 1,167
|
Testing with the latest version 3.3.17, it was unable to bypass, it kept looping indefinitely for verification of robots.
|
### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version:3.3.17
- Last working FlareSolverr version:3.3.17
- Operating system:centos7/windows
- Are you using Docker: [yes]
- FlareSolverr User-Agent (see log traces or / endpoint):default
- Are you using a VPN: [no]
- Are you using a Proxy: [no]
- Are you using Captcha Solver: [no]
- If using captcha solver, which one:
- URL to test this issue: https://linklove47.com/
```
### Description
In recent days, FlareSolverr has consistently failed to successfully bypass. Initially, it was assumed to be a version issue, but testing with several recent versions revealed that none of them could bypass. Instead, it was perpetually looping in the robot validation step. Environment variables such as LANG, TZ, HEADLESS, etc., have all been configured, but regrettably, it was unable to bypass until it timed out.
### Logged Error Messages
```text
2024-04-25 10:35:34 DEBUG ReqId 2640 Try to find the Cloudflare verify checkbox...
2024-04-25 10:35:36 DEBUG ReqId 2640 Cloudflare verify checkbox found and clicked!
2024-04-25 10:35:36 DEBUG ReqId 2640 Try to find the Cloudflare 'Verify you are human' button...
2024-04-25 10:35:36 DEBUG ReqId 2640 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-25 10:35:38 DEBUG ReqId 2640 Waiting for title (attempt 10): Just a moment...
2024-04-25 10:35:38 DEBUG ReqId 2640 Waiting for title (attempt 10): DDoS-Guard
2024-04-25 10:35:38 DEBUG ReqId 2640 Waiting for selector (attempt 10): #cf-challenge-running
2024-04-25 10:35:38 DEBUG ReqId 2640 Waiting for selector (attempt 10): .ray_id
2024-04-25 10:35:38 DEBUG ReqId 2640 Waiting for selector (attempt 10): .attack-box
2024-04-25 10:35:38 DEBUG ReqId 2640 Waiting for selector (attempt 10): #cf-please-wait
2024-04-25 10:35:38 DEBUG ReqId 2640 Waiting for selector (attempt 10): #challenge-spinner
2024-04-25 10:35:39 DEBUG ReqId 2640 Timeout waiting for selector
2024-04-25 10:35:39 DEBUG ReqId 2640 Try to find the Cloudflare verify checkbox...
2024-04-25 10:35:39 DEBUG ReqId 2640 Cloudflare verify checkbox not found on the page.
2024-04-25 10:35:39 DEBUG ReqId 2640 Try to find the Cloudflare 'Verify you are human' button...
2024-04-25 10:35:39 DEBUG ReqId 2640 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-25 10:35:41 DEBUG ReqId 2640 Waiting for title (attempt 11): Just a moment...
2024-04-25 10:35:41 DEBUG ReqId 2640 Waiting for title (attempt 11): DDoS-Guard
2024-04-25 10:35:41 DEBUG ReqId 2640 Waiting for selector (attempt 11): #cf-challenge-running
2024-04-25 10:35:41 DEBUG ReqId 2640 Waiting for selector (attempt 11): .ray_id
2024-04-25 10:35:41 DEBUG ReqId 2640 Waiting for selector (attempt 11): .attack-box
2024-04-25 10:35:41 DEBUG ReqId 2640 Waiting for selector (attempt 11): #cf-please-wait
2024-04-25 10:35:41 DEBUG ReqId 2640 Waiting for selector (attempt 11): #challenge-spinner
2024-04-25 10:35:42 DEBUG ReqId 2640 Timeout waiting for selector
2024-04-25 10:35:42 DEBUG ReqId 2640 Try to find the Cloudflare verify checkbox...
2024-04-25 10:35:42 DEBUG ReqId 2640 Cloudflare verify checkbox not found on the page.
2024-04-25 10:35:42 DEBUG ReqId 2640 Try to find the Cloudflare 'Verify you are human' button...
2024-04-25 10:35:42 DEBUG ReqId 2640 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-25 10:35:44 DEBUG ReqId 2640 Waiting for title (attempt 12): Just a moment...
2024-04-25 10:35:44 DEBUG ReqId 2640 Waiting for title (attempt 12): DDoS-Guard
2024-04-25 10:35:44 DEBUG ReqId 2640 Waiting for selector (attempt 12): #cf-challenge-running
2024-04-25 10:35:44 DEBUG ReqId 2640 Waiting for selector (attempt 12): .ray_id
2024-04-25 10:35:44 DEBUG ReqId 2640 Waiting for selector (attempt 12): .attack-box
2024-04-25 10:35:44 DEBUG ReqId 2640 Waiting for selector (attempt 12): #cf-please-wait
2024-04-25 10:35:44 DEBUG ReqId 2640 Waiting for selector (attempt 12): #challenge-spinner
2024-04-25 10:35:45 DEBUG ReqId 2640 Timeout waiting for selector
2024-04-25 10:35:45 DEBUG ReqId 2640 Try to find the Cloudflare verify checkbox...
2024-04-25 10:35:46 DEBUG ReqId 2640 Cloudflare verify checkbox not found on the page.
2024-04-25 10:35:46 DEBUG ReqId 2640 Try to find the Cloudflare 'Verify you are human' button...
2024-04-25 10:35:46 DEBUG ReqId 2640 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-25 10:35:48 DEBUG ReqId 2640 Waiting for title (attempt 13): Just a moment...
2024-04-25 10:35:48 DEBUG ReqId 2640 Waiting for title (attempt 13): DDoS-Guard
2024-04-25 10:35:48 DEBUG ReqId 2640 Waiting for selector (attempt 13): #cf-challenge-running
2024-04-25 10:35:48 DEBUG ReqId 2640 Waiting for selector (attempt 13): .ray_id
2024-04-25 10:35:48 DEBUG ReqId 2640 Waiting for selector (attempt 13): .attack-box
2024-04-25 10:35:48 DEBUG ReqId 2640 Waiting for selector (attempt 13): #cf-please-wait
2024-04-25 10:35:49 DEBUG ReqId 2640 Waiting for selector (attempt 13): #challenge-spinner
2024-04-25 10:35:50 DEBUG ReqId 2640 Timeout waiting for selector
2024-04-25 10:35:50 DEBUG ReqId 2640 Try to find the Cloudflare verify checkbox...
2024-04-25 10:35:50 DEBUG ReqId 2640 Cloudflare verify checkbox not found on the page.
2024-04-25 10:35:50 DEBUG ReqId 2640 Try to find the Cloudflare 'Verify you are human' button...
2024-04-25 10:35:50 DEBUG ReqId 2640 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-25 10:35:52 DEBUG ReqId 2640 Waiting for title (attempt 14): Just a moment...
2024-04-25 10:35:52 DEBUG ReqId 2640 Waiting for title (attempt 14): DDoS-Guard
2024-04-25 10:35:52 DEBUG ReqId 2640 Waiting for selector (attempt 14): #cf-challenge-running
2024-04-25 10:35:52 DEBUG ReqId 2640 Waiting for selector (attempt 14): .ray_id
2024-04-25 10:35:52 DEBUG ReqId 2640 Waiting for selector (attempt 14): .attack-box
2024-04-25 10:35:52 DEBUG ReqId 2640 Waiting for selector (attempt 14): #cf-please-wait
2024-04-25 10:35:52 DEBUG ReqId 2640 Waiting for selector (attempt 14): #challenge-spinner
2024-04-25 10:35:53 DEBUG ReqId 2640 Timeout waiting for selector
2024-04-25 10:35:53 DEBUG ReqId 2640 Try to find the Cloudflare verify checkbox...
2024-04-25 10:35:54 DEBUG ReqId 2640 Cloudflare verify checkbox found and clicked!
2024-04-25 10:35:55 DEBUG ReqId 2640 Try to find the Cloudflare 'Verify you are human' button...
2024-04-25 10:35:55 DEBUG ReqId 2640 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-25 10:35:58 DEBUG ReqId 4980 A used instance of webdriver has been destroyed
2024-04-25 10:35:58 ERROR ReqId 4980 Error: Error solving the challenge. Timeout after 60.0 seconds.
```
### Screenshots

|
closed
|
2024-04-25T02:56:25Z
|
2024-04-25T03:25:13Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/1167
|
[
"duplicate"
] |
cpcp20
| 1
|
nl8590687/ASRT_SpeechRecognition
|
tensorflow
| 30
|
Why not train with all of the data?
|
The ST-CMDS dataset's train split has 300k samples, but only 100k are actually used. Why not use all of them?
|
closed
|
2018-07-30T08:33:13Z
|
2018-08-13T09:41:18Z
|
https://github.com/nl8590687/ASRT_SpeechRecognition/issues/30
|
[] |
luckmoon
| 3
|
PokeAPI/pokeapi
|
api
| 414
|
Meltan and Melmetal Pokémon Missing
|
The database is missing Meltan (808) and Melmetal (809), originally discovered in Pokémon GO and usable in Pokémon Let's Go.
|
closed
|
2019-02-03T15:00:58Z
|
2019-05-15T07:14:23Z
|
https://github.com/PokeAPI/pokeapi/issues/414
|
[] |
shilangyu
| 6
|
nvbn/thefuck
|
python
| 1,296
|
pnpm does not work
|
<!-- If you have any issue with The Fuck, sorry about that, but we will do what we
can to fix that. Actually, maybe we already have, so first thing to do is to
update The Fuck and see if the bug is still there. -->
<!-- If it is (sorry again), check if the problem has not already been reported and
if not, just open an issue on [GitHub](https://github.com/nvbn/thefuck) with
the following basic information: -->
The output of `thefuck --version` (something like `The Fuck 3.1 using Python
3.5.0 and Bash 4.4.12(1)-release`):
The Fuck 3.32 using Python 3.10.2 and PowerShell 5.1.22610.1
Your system (Debian 7, ArchLinux, Windows, etc.):
OS Name: Microsoft Windows 11 Pro
OS Version: 10.0.22610 N/A Build 22610
How to reproduce the bug:
Type in pnmp (a misspelling of pnpm) and watch fuck fail to correct it and respond with "No fucks given"
The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):
[Here](https://www.toptal.com/developers/hastebin/cafuvajofa.yaml)
If the bug only appears with a specific application, the output of that application and its version:
pnpm --version
> 7.0.0
Anything else you think is relevant:
https://pnpm.io/ is the tool I'm speaking of

<!-- It's only with enough information that we can do something to fix the problem. -->
|
open
|
2022-05-06T23:57:25Z
|
2022-05-16T23:40:34Z
|
https://github.com/nvbn/thefuck/issues/1296
|
[] |
XboxBedrock
| 3
|
pallets-eco/flask-wtf
|
flask
| 625
|
Support for reCAPTCHA v3
|
I see there is a closed issue (https://github.com/pallets-eco/flask-wtf/issues/363) requesting this, along with an example of how to implement it, but it should be integrated into the base library.
|
open
|
2025-02-26T18:05:39Z
|
2025-02-26T18:05:39Z
|
https://github.com/pallets-eco/flask-wtf/issues/625
|
[] |
bgreenlee
| 0
|
ultralytics/yolov5
|
machine-learning
| 12,633
|
How does YOLOv5 deal with the case where two target boxes are very close?
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Thanks for sharing the code. There is a question I encountered while reading it.
In the following figure, the red box and the blue box, two target boxes, are located very close to each other. From the `build_targets` function, grid 1 and grid 2 will both calculate box loss and obj loss. So which box is grid 1 or grid 2 responsible for? Will this ambiguity of responsibility hinder the convergence of training, since a different responsibility gives a different IoU and therefore a different loss? Thanks in advance.

### Additional
_No response_
|
closed
|
2024-01-16T07:47:00Z
|
2024-02-26T00:20:58Z
|
https://github.com/ultralytics/yolov5/issues/12633
|
[
"question",
"Stale"
] |
myalos
| 2
|
ploomber/ploomber
|
jupyter
| 669
|
validate import_tasks_from
|
When loading `import_tasks_from: file.yaml`, we are not validating its contents. If the file is empty, `yaml.safe_load` will return `None`, which leads to a cryptic error message:
https://github.com/ploomber/ploomber/blob/beb625cc977bcd34481608a91daddc5493e0983c/src/ploomber/spec/dagspec.py#L326
Fix:
* If `yaml.safe_load` returns None, replace it with an empty list
* If it returns something other than a list, raise an error saying we were expecting a list (as sketched below)
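A minimal sketch of the proposed validation (function name and error wording are illustrative, not ploomber's actual internals):
```python
import yaml

def load_import_tasks_from(path):
    """Load the list of tasks referenced by import_tasks_from, validating it."""
    with open(path) as f:
        tasks = yaml.safe_load(f)
    if tasks is None:
        # empty file: treat it as "no tasks to import"
        return []
    if not isinstance(tasks, list):
        raise TypeError(
            f"Expected {path!r} to contain a list of tasks, "
            f"got {type(tasks).__name__}"
        )
    return tasks
```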
|
closed
|
2022-03-21T01:19:24Z
|
2022-04-01T17:47:33Z
|
https://github.com/ploomber/ploomber/issues/669
|
[
"bug",
"good first issue"
] |
edublancas
| 3
|
iperov/DeepFaceLab
|
deep-learning
| 5,592
|
multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
|
## Other relevant information
- **Command lined used (if not specified in steps to reproduce)**: python3.7 main.py train --training-data-src-dir workspace/data_src/aligned --training-data-dst-dir workspace/data_dst/aligned --model-dir workspace/model --model SAEHD --no-preview
- **Operating system and version:** Linux GeForce RTX 2080 Ti
- **Python version:** 3.7,
|
open
|
2022-12-08T06:57:47Z
|
2023-06-08T23:19:21Z
|
https://github.com/iperov/DeepFaceLab/issues/5592
|
[] |
AntonioSu
| 2
|
lepture/authlib
|
flask
| 704
|
OAuth2Session does not set a default http 'User-Agent' header
|
**Describe the bug**
authlib.integrations.requests_client.OAuth2Session does not set a default http 'User-Agent' header
**Error Stacks**
N/A
**To Reproduce**
```python
from authlib.integrations.base_client import FrameworkIntegration
from authlib.integrations.flask_client import FlaskOAuth2App
oauth_app = FlaskOAuth2App(
FrameworkIntegration('keycloak'),
server_metadata_url='https://auth-server/.well-known/openid-configuration'
)
oauth_app.load_server_metadata()
```
Check server logs and see that default requests User-Agent is used
**Expected behavior**
The default User-Agent defined in authlib.const
**Environment:**
- OS: macOS 15.0.1
- Python Version: 3.11.11
- Authlib Version: 1.4.1
**Additional context**
Can be fixed by adding `session.headers['User-Agent'] = self._user_agent` to authlib.integrations.base_client.sync_app:OAuth2Mixin.load_server_metadata
```diff
--- authlib/integrations/base_client/sync_app.py Tue Feb 11 14:59:20 2025
+++ authlib/integrations/base_client/sync_app.py Tue Feb 11 14:59:28 2025
@@ -296,6 +296,7 @@
def load_server_metadata(self):
if self._server_metadata_url and '_loaded_at' not in self.server_metadata:
with self.client_cls(**self.client_kwargs) as session:
+ session.headers['User-Agent'] = self._user_agent
resp = session.request('GET', self._server_metadata_url, withhold_token=True)
resp.raise_for_status()
metadata = resp.json()
```
|
open
|
2025-02-11T23:01:04Z
|
2025-02-20T09:25:52Z
|
https://github.com/lepture/authlib/issues/704
|
[
"bug",
"client"
] |
gwelch-contegix
| 0
|
fohrloop/dash-uploader
|
dash
| 2
|
[BUG] v0.2.2 pip install error
|
0.2.2 cannot be installed via pip:
```
ERROR: Command errored out with exit status 1:
command: /home/user/PycharmProjects/columbus/venv3.8/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ea6fq8dy/dash-uploader/setup.py'"'"'; __file__='"'"'/tmp/pip-install-ea6fq8dy/dash-uploader/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-ea6fq8dy/dash-uploader/pip-egg-info
cwd: /tmp/pip-install-ea6fq8dy/dash-uploader/
Complete output (5 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-ea6fq8dy/dash-uploader/setup.py", line 8, in <module>
with open('docs/README-PyPi.md', encoding='utf-8') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'docs/README-PyPi.md'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```
it seems the README-PyPi.md is missing from the package
|
closed
|
2020-05-27T05:59:33Z
|
2020-05-27T19:31:42Z
|
https://github.com/fohrloop/dash-uploader/issues/2
|
[
"bug"
] |
MM-Lehmann
| 3
|
ultralytics/ultralytics
|
pytorch
| 19,199
|
Yolov8-OpenVino-CPP-Inference Abnormal detection
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
model: yolov8n
system: ubuntu20
question: In OpenVINO inference, the category inference works fine, but the bounding boxes are incorrect.




result:

### Additional
_No response_
|
closed
|
2025-02-12T08:19:22Z
|
2025-02-15T05:21:43Z
|
https://github.com/ultralytics/ultralytics/issues/19199
|
[
"question",
"detect",
"exports"
] |
yang5757
| 6
|
mirumee/ariadne
|
graphql
| 101
|
Nested field resolver
|
Hello!
I have a question about resolving a nested field. I can't tell whether this is a bug or whether that functionality doesn't exist yet.
I have this schema and resolver:
```python
from ariadne import ResolverMap, gql, start_simple_server, snake_case_fallback_resolvers
type_defs = """
schema {
query: RootQuery
}
type RootQuery {
public: PublicEntryPoint
}
type PublicEntryPoint {
users: [UserSchema]
}
type UserSchema {
userId: Int
}
"""
def resolve_users(obj, info):
print('RESOLVING USERS!')
return [{'user_id': 1}, {'user_id': 2}]
users_map = ResolverMap("PublicEntryPoint")
users_map.field("users", resolver=resolve_users)
start_simple_server(type_defs, [users_map, snake_case_fallback_resolvers])
```
But when I make the query to receive data
```
query {
public {
users {
userId
}
}
}
```
I get
```
{
"data": {
"public": null
}
}
```
And the resolver function is not called! Is this correct?
Thanks
|
closed
|
2019-02-02T19:40:46Z
|
2021-01-25T17:55:00Z
|
https://github.com/mirumee/ariadne/issues/101
|
[
"question"
] |
sinwires
| 4
|
agronholm/anyio
|
asyncio
| 95
|
Transparently entering and exiting anyio land
|
This follows up on an old discussion that started [here](https://github.com/agronholm/anyio/issues/37#issuecomment-481215995).
As far as I understand, anyio has been designed to provide a generic layer on top of specific backends and it works great for this purpose.
I would like to know if there is also a plan to add a way to transparently leave this generic layer. Here is an example where this can be useful: let's say I write a coroutine using asyncio:
```python
async def mycoro(*args):
# Bunch of asyncio stuff here
```
It would be nice to be able to use it with trio too! I could write a separate version, or I could fully rewrite it using anyio:
```python
async def mycoro(*args):
# Bunch of anyio stuff here
```
It works great, but then I realize it doesn't exactly behave the same as before: for instance the exceptions are now anyio exceptions. That means that any caller of `mycoro` should be anyio aware. Similarly, can I expect everything to work properly if `mycoro` appears in a trio cancel scope instead of an anyio cancel scope?
It would be great if there was a way to transparently enter and leave anyio-land, because the coroutine could then be written as:
```python
async def mycoro(*args):
with anyio_context():
# Bunch of anyio stuff here
```
Then `mycoro` could go in a library that anyone could use, regardless of the backend there are using and without having to stick to anyio.
Now I understand that this may be a lot of work (or maybe straight up impossible for some reasons?), but I'd like to know if there is a plan to support this kind of use case.
And thank you for your work :)
|
closed
|
2020-01-01T15:25:41Z
|
2020-01-14T13:18:08Z
|
https://github.com/agronholm/anyio/issues/95
|
[] |
vxgmichel
| 12
|
pyeve/eve
|
flask
| 967
|
Versioning doesn't work with U-RRA
|
The basic schema layout for this issue may look something like this.
```python
users = {
'schema': {
'username': {
'type': 'string',
},
'password': {
'type': 'string',
},
}
}
products = {
'authentication': UserAuth,
'auth_field': 'owner',
'versioning': True,
'schema': {
'name': {
'type': 'string',
'maxlength': 300,
}
}
}
invoices = {
'authentication': UserAuth,
'auth_field': 'owner',
'versioning': True,
'schema': {
'description': {
'type': 'string',
'maxlength': 300,
},
'products': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
'_id': {
'type': 'objectid'
},
'_version': {
'type': 'integer'
}
},
'data_relation': {
'resource': 'products',
'field': '_id',
'embeddable': True,
'version': True
}
}
},
}
}
DOMAIN = {
'users':
users,
'products':
products,
'invoices':
invoices
}
```
A very basic UserAuth (obviously not secure)
```python
class UserAuth(BasicAuth):
""" Authenticate based on username & password"""
def check_auth(self, username, password, allowed_roles, resource, method):
users = app.data.driver.db['users']
lookup = {'username': username}
user = users.find_one(lookup)
self.set_request_auth_value(user['_id'])
return True
```
After inserting some sample data and making a request for /invoices?embedded={"products": 1} a 404 is returned
`{'_error': {'message': "Unable to locate embedded documents for 'products'", 'code': 404}, '_status': 'ERR'}`
This works perfectly fine when removing either auth_field or versioning from products.
Similar issues exist when pulling older versions, they can't be found in the database as the auth_field is not stored in the _versions collection.
Tested against PyPi release of Eve.
|
closed
|
2017-01-19T20:34:22Z
|
2017-01-22T14:01:16Z
|
https://github.com/pyeve/eve/issues/967
|
[
"bug"
] |
klambrec
| 0
|
pytest-dev/pytest-django
|
pytest
| 883
|
Detect invalid migration state when using --reuse-db
|
When using --reuse-db it would be nice if pytest-django could detect that the database it's reusing has applied migrations that don't exist (due to changing branch or code edits) and rebuild it from scratch.
Our migrations take about 30 seconds to run. So per the doco here https://pytest-django.readthedocs.io/en/latest/database.html#example-work-flow-with-reuse-db-and-create-db I have --reuse-db in pytest.ini.
Which is great most of the time but frequently when I switch between branches with migrations or otherwise mess with migrations I will get a complete test suite fail (but slowly and with massive amounts of error output). Then I need to run with --create-db and further that needs to be done separately for both single and multi threaded test runs and I have to remember to take it back off the args for followup runs or I will eat another 30 seconds each time.
It would be very very nice if pytest-django realised that the set of migrations in the db and the migrations on the drive are different and did a rebuild. Even just comparing the file names would be a vast improvement.
The migration table also has an applied timestamp, so detecting that the mod time on the file is newer than the applied time might also be possible.
This was spun out from #422
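A rough sketch of the kind of check being proposed, using Django's migration loader (illustrative, not pytest-django's actual API):
```python
from django.db.migrations.loader import MigrationLoader

def reused_db_is_stale(connection):
    """Return True if the reused test DB has applied migrations missing on disk."""
    loader = MigrationLoader(connection)
    applied = set(loader.applied_migrations)  # (app_label, name) recorded in the DB
    on_disk = set(loader.disk_migrations)     # (app_label, name) found in the code
    # Applied migrations that no longer exist on disk suggest the DB should be rebuilt.
    return bool(applied - on_disk)
```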
|
open
|
2020-10-12T08:56:45Z
|
2022-02-28T22:33:41Z
|
https://github.com/pytest-dev/pytest-django/issues/883
|
[
"enhancement"
] |
tolomea
| 1
|
FactoryBoy/factory_boy
|
sqlalchemy
| 1,109
|
Enabling Psycopg's connection pools causes integrity errors and unique violations
|
#### Description
Switching to Psycopg's connection pools breaks fixtures in Django and raises integrity errors and unique violations.
#### To Reproduce
1. Consider https://github.com/WordPress/openverse.
2. Note that the CI + CD workflow is passing on `main`.
3. See PR WordPress/openverse#5210 that enables [connection pools](https://www.psycopg.org/psycopg3/docs/advanced/pool.html#connection-pools).
4. See that the [CI + CD workflow is failing](https://github.com/WordPress/openverse/actions/runs/12402429798/job/34624045702?pr=5210) on that PR.
The logs from that workflow run indicate
```
django.db.utils.IntegrityError: duplicate key value violates unique constraint
```
and
```
psycopg.errors.UniqueViolation: duplicate key value violates unique constraint
```
#### More info
More info such as the complete stack trace, and list of errors, is present in the workflow run logs.
|
closed
|
2024-12-19T05:39:13Z
|
2024-12-20T08:30:38Z
|
https://github.com/FactoryBoy/factory_boy/issues/1109
|
[] |
dhruvkb
| 1
|
iterative/dvc
|
data-science
| 9,902
|
queue start: `.git` directory in temp folder not being removed once done
|
# Bug Report
<!--
## Issue name
Issue names must follow the pattern `command: description` where the command is the dvc command that you are trying to run. The description should describe the consequence of the bug.
Example: `repro: doesn't detect input changes`
-->
## Description
<!--
A clear and concise description of what the bug is.
-->
I just noticed that with DVC 3.17, the `.dvc/tmp/exps/tmp*` folders are ~~empty~~ contain only the `.git/` tree once the experiment task has terminated. Previously, these would be cleaned up properly as well.
### Reproduce
<!--
Step list of how to reproduce the bug
-->
<!--
Example:
1. dvc init
2. Copy dataset.zip to the directory
3. dvc add dataset.zip
4. dvc run -d dataset.zip -o model ./train.sh
5. modify dataset.zip
6. dvc repro
-->
- `dvc exp run --queue`
- `dvc queue start -j 1`
### Expected
<!--
A clear and concise description of what you expect to happen.
-->
The ~~empty~~ temp folders (e.g., `tmpcw_48u8h` in `.dvc/tmp/exps/`) should be deleted once the experiment task finished.
### Environment information
<!--
This is required to ensure that we can reproduce the bug.
-->
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 3.17.0 (conda)
---------------------------
Platform: Python 3.10.6 on Linux-3.10.0-1127.8.2.el7.x86_64-x86_64-with-glibc2.17
Subprojects:
dvc_data = 2.15.4
dvc_objects = 1.0.1
dvc_render = 0.5.3
dvc_task = 0.3.0
scmrepo = 1.3.1
Supports:
http (aiohttp = 3.8.5, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.5, aiohttp-retry = 2.8.3),
s3 (s3fs = 2023.6.0, boto3 = 1.26.76)
Config:
Global: /home/aschuh/.config/dvc
System: /etc/xdg/dvc
Cache types: hardlink, symlink
Cache directory: xfs on /dev/sda1
Caches: local
Remotes: s3, s3
Workspace directory: xfs on /dev/sda1
Repo: dvc (subdir), git
Repo.site_cache_dir: /var/tmp/dvc/repo/8d2f5d68bb223da9776a9d6301681efd
```
**Additional Information (if any):**
<!--
Please check https://github.com/iterative/dvc/wiki/Debugging-DVC on ways to gather more information regarding the issue.
If applicable, please also provide a `--verbose` output of the command, eg: `dvc add --verbose`.
If the issue is regarding the performance, please attach the profiling information and the benchmark comparisons.
-->
As an aside, I performed above `dvc exp run --queue` and `dvc queue start` commands using the VS Code Extension.
|
closed
|
2023-09-01T10:17:01Z
|
2023-09-05T00:05:53Z
|
https://github.com/iterative/dvc/issues/9902
|
[] |
aschuh-hf
| 5
|
Yorko/mlcourse.ai
|
data-science
| 595
|
Times series
|
I'm getting the error "ZeroDivisionError: float division by zero" whenever I run this code.
I can't seem to explain why this is happening.
```
%%time
data =df.pourcentage[:-20] # leave some data for testing
# initializing model parameters alpha, beta and gamma
x = [0, 0, 0]
# Minimizing the loss function
opt = minimize(timeseriesCVscore, x0=x,
args=(data, mean_squared_log_error),
method="TNC", bounds = ((0, 1), (0, 1), (0, 1))
)
# Take optimal values...
alpha_final, beta_final, gamma_final = opt.x
print(alpha_final, beta_final, gamma_final)
# ...and train the model with them, forecasting for the next 50 hours
model = HoltWinters(data, slen = 24,
alpha = alpha_final,
beta = beta_final,
gamma = gamma_final,
n_preds = 50, scaling_factor = 3)
model.triple_exponential_smoothing()
```
|
closed
|
2019-05-27T11:10:52Z
|
2019-08-26T17:10:49Z
|
https://github.com/Yorko/mlcourse.ai/issues/595
|
[] |
Rym96
| 3
|
docarray/docarray
|
pydantic
| 1,503
|
Support subindex search in QueryBuilder
|
This is the follow-up issue to issue #1235 and PR #1428.
|
open
|
2023-05-08T09:40:13Z
|
2023-05-08T09:40:13Z
|
https://github.com/docarray/docarray/issues/1503
|
[] |
AnneYang720
| 0
|
marcomusy/vedo
|
numpy
| 582
|
save mesh covered by texture as png file
|
hello
I want to save my mesh, which is covered by a texture, as a PNG file.
Here is my code to load the texture onto the OBJ file:
```python
from vedo import *
mesh = Mesh("data/10055_Gray_Wolf_v1_L3.obj",)
mesh.texture("data/10055_Gray_Wolf_Diffuse_v1.jpg", scale=0.1)
mesh.show()
```
Any help, please. I prefer to save the PNG without a background.
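For reference, a minimal sketch of one possible approach (off-screen render plus a screenshot; exact arguments may differ between vedo versions, and a transparent background is not covered here):
```python
from vedo import Mesh, Plotter

mesh = Mesh("data/10055_Gray_Wolf_v1_L3.obj")
mesh.texture("data/10055_Gray_Wolf_Diffuse_v1.jpg", scale=0.1)

plt = Plotter(offscreen=True)   # render without opening a window
plt.show(mesh)
plt.screenshot("wolf.png")      # write the rendered view to a PNG
plt.close()
```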
|
closed
|
2022-01-20T09:44:47Z
|
2022-04-05T16:53:05Z
|
https://github.com/marcomusy/vedo/issues/582
|
[] |
ganjbakhshali
| 12
|
HIT-SCIR/ltp
|
nlp
| 560
|
SRL: how should multiple roles be annotated?
|
How should a word that has multiple roles be annotated?
# SRL format file
… _ B-ARG0 O O O
… _ I-ARG0 O O O
… _ I-ARG0 O O O
… _ I-ARG0 O O O
… Y O O O O
… Y B-ARG1 O O O
… Y I-ARG1 O O O
He
told -> [ARG0: he, ARG1: Tom, ARG2: to go fetch the coat]
Tom
go
fetch -> [ARG0: Tom, ARG1: coat]
coat
.
Tom has 2 roles. In srl.txt, should [ARG0: Tom, ARG1: coat] be annotated in the fourth column, or in the third column, separated by a symbol such as |?
|
closed
|
2022-03-10T12:46:18Z
|
2022-09-12T06:50:06Z
|
https://github.com/HIT-SCIR/ltp/issues/560
|
[] |
yhj997248885
| 2
|
aiortc/aiortc
|
asyncio
| 693
|
onIceCandidate support?
|
I hope you can add this event callback:
once a candidate address has been gathered, this function would be called.
|
closed
|
2022-04-16T02:03:46Z
|
2024-04-05T19:38:02Z
|
https://github.com/aiortc/aiortc/issues/693
|
[] |
diybl
| 4
|
zama-ai/concrete-ml
|
scikit-learn
| 1,039
|
Make DL Packages Optional
|
## Feature request
Make the packages needed for DL (e.g., Torch and Brevitas) optional, or create flavors of Concrete-ML that include only a subset of dependencies (e.g., `concrete-ml[sklearn]` which would include only the linear models).
## Motivation
This would be particularly useful to reduce the size of Docker images based on Concrete-ML: for instance, adding Concrete-ML with `uv` or `poetry` produces, by default, Docker images of >6 GB. With a bit of tweaking, we can force it to use the CPU version of Torch, which still produces a Docker image of >2 GB. Considering that in some cases only a subset of models is needed, it could be interesting.
However, I don't know the structure of the code, so it may break a lot of things creating different subpackages/install options. Feel free to close this if this is unpractical. Thanks.
|
open
|
2025-03-13T07:49:05Z
|
2025-03-13T15:58:46Z
|
https://github.com/zama-ai/concrete-ml/issues/1039
|
[] |
AlexMV12
| 1
|
mage-ai/mage-ai
|
data-science
| 4,766
|
[BUG] Dynamic blocks used as replicas seem to be executed along with their original ones.
|
### Mage version
0.9.66
### Describe the bug
I have the following pipeline

When I execute the pipeline, it seems that the bottom blocks of the right branch are executed although they shouldn't be, because the condition for that branch has failed. During execution the pipeline looks like this:

The blocks on the top left are those that, in the first image, are shown at the bottom right of the right branch.
In previous versions those replica/dynamic child blocks weren't executed at all.
As a result the pipeline runs indefinitely.
### To reproduce
_No response_
### Expected behavior
_No response_
### Screenshots
_No response_
### Operating system
_No response_
### Additional context
_No response_
|
open
|
2024-03-15T17:22:10Z
|
2024-03-19T22:06:12Z
|
https://github.com/mage-ai/mage-ai/issues/4766
|
[
"bug"
] |
georgezefko
| 0
|
pytest-dev/pytest-html
|
pytest
| 169
|
dynamically create the output folder for report.html using command line in pytest
|
Is there any way to create the output folder dynamically for every run, with the current timestamp, so that report.html ends up in that folder?
Is it possible to pass the dynamic folder via the command line so that report.html is available once all the test cases have executed?
Thanks for the help
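One common recipe is to rewrite the report path from `conftest.py`; a sketch, assuming pytest-html's `--html` option (stored as `htmlpath`) and an illustrative `reports/<timestamp>/` layout:
```python
# conftest.py
import os
from datetime import datetime

def pytest_configure(config):
    # Only rewrite the path when --html was given on the command line.
    if getattr(config.option, "htmlpath", None):
        ts = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        report_dir = os.path.join("reports", ts)
        os.makedirs(report_dir, exist_ok=True)
        config.option.htmlpath = os.path.join(report_dir, "report.html")
```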
|
closed
|
2018-06-01T10:45:30Z
|
2018-06-06T14:15:05Z
|
https://github.com/pytest-dev/pytest-html/issues/169
|
[] |
writetomaha14
| 3
|
dynaconf/dynaconf
|
fastapi
| 241
|
[RFC] Better merging standards
|
Right now there are some ways of merging existing data structures in Dynaconf:
- Using `dynaconf_merge` mark
- Using `__` double underscores for existing dictionaries and nested data
- Using MERGE_ENABLED_FOR_DYNACONF var to merge everything globally
All 3 existing ways have limitations.
- `dynaconf_merge` is limited because it works only for 1st level vars
- `__` works only for data nested under a dictionary
- MERGE_ENABLED can break stuff in Django config.
# New standards for merging.
> **NOTE** All the above will keep working, we will try to not break backwards compatibility.
Although we are going to explicitly recommend only the new standards: explicit ways of merging that meet the following goals:
- Optional Granular control of which data is being merged
- Control of global merge feature per file
- Works also via environment variables
- Merge at any level in the nested structures
- Can merge any data type like dicts and lists
# The merge mark
Historically dynaconf has been using `@marks` for some special environment variables; we will introduce a new one, `@merge`, for env vars and improve the existing `dynaconf_merge` for regular files.
## Merging files.
A `settings.yaml` file exists in your project:
```yaml
default:
name: Bruno
colors:
- red
- green
data:
links:
twitter: rochacbruno
site: brunorocha.org
```
which will add to `settings.__dict__` object data like:
```py
{
"NAME": "Bruno",
"COLORS": ["red", "green"],
"DATA": {"links": {"twitter": "rochacbruno", "site": "brunorocha.org"}}
}
```
In `settings.local.yaml` we now want to contribute to that existing data; we have 2 options.
### Merge the whole file
adding a `dynaconf_merge: true` to the root level of the file.
`settings.local.yaml`
```yaml
dynaconf_merge: true
default:
colors:
- blue
data:
links:
github: rochacbruno.github.io
```
Then `settings.__dict__`
```py
{
"NAME": "Bruno",
"COLORS": ["red", "green", "blue"],
"DATA": {"links": {"twitter": "rochacbruno", "site": "brunorocha.org", "github": "rochacbruno.github.io"}}
}
```
### Granular control of which variable is being merged
If we want to merge only specific variables, we have to enclose the data under a `dynaconf_merge` key. For example, say we want to override everything but merge the colors.
`settings.local.yaml`
```yaml
default:
name: Other Name
colors:
dynaconf_merge:
- yellow
- pink
data:
links:
site: other.com
```
if it was `toml`
```toml
[default]
name = "Other Name"
[default.colors]
dynaconf_merge = ["yellow","pink"]
[default.data.links]
site = "other.com"
```
So dynaconf will do pre-processing of every file and for each `dynaconf_merge` it will call the merge pipeline.
Result on `settings.__dict__`
```py
{
"NAME": "Other Name", # <-- overwritten
"COLORS": ["red", "green", "yellow", "pink"], # <-- merged
"DATA": {"links": {"site": "other.com"}} # <-- overwritten
}
```
## Dunder merging
Dunder merging continues to work as a shortcut when the data you want to merge is behind a nested datastructure
```yaml
default:
data__links__telegram: t.me/rochacbruno # <-- Calls set('data.links', "t.me/rochacbruno")
```
or `toml`
```toml
[default]
data__links__telegram = "t.me/rochacbruno" # <-- Calls set('data.links', "t.me/rochacbruno")
```
or `.env`
```bash
export DYNACONF_DATA__links__telegram="t.me/rochacbruno" # <-- Calls set('data.links', "t.me/rochacbruno")
```
## @merge mark
For envvars, `@merge` works as a mark, following the standard of the other envvar markers,
and it can also be used in regular files
`.yaml`
```yaml
default:
name: Jon # <-- Overwritten
colors: "@merge blue" # <-- Calls .append('blue')
colors: "@merge ['blue', 'white']" # <-- Calls .extend(['blue', 'white'])
```
`.env`
```bash
export DYNACONf_NAME=Erik # <-- Overwritten
export DYNACONF_COLORS="@merge ['blue']" # <-- Calls .extend(['blue'])
export DYNACONF_COLORS="@merge ['blue', 'white']" # <-- Calls .extend(['blue', 'white'])
export DYNACONF_COLORS="@merge blue" # <-- Calls .append('blue')
export DYNACONF_COLORS="@merge blue,white" # <-- Calls .extend('blue,white'.split(','))
export DYNACONF_DATA="@merge {foo='bar'}" # <-- Calls .update({"foo": "bar"})
export DYNACONF_DATA="@merge foo=bar" # <-- Calls ['foo'] = "bar"
```
|
closed
|
2019-09-26T19:27:41Z
|
2019-10-09T05:07:08Z
|
https://github.com/dynaconf/dynaconf/issues/241
|
[
"Not a Bug",
"RFC"
] |
rochacbruno
| 3
|
python-gitlab/python-gitlab
|
api
| 2,864
|
Add support for `retry_transient_errors` in the CLI
|
## Description of the problem, including code/CLI snippet
I'm using the `gitlab` cli against gitlab.com which is producing transient errors. The API has the ability to retry these but it looks like the CLI code misses the option because `merge_config` doesn't cover the field.
## Expected Behavior
Ideally the cli would have a command line parameter, but it should definitely respect the configuration file.
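For reference, a minimal sketch of how the option is already exposed by the library API (URL and token are placeholders); the request is for the CLI and `merge_config` to honor the same setting from the configuration file:
```python
import gitlab

# Retries transient server errors when using the Python API directly.
gl = gitlab.Gitlab(
    "https://gitlab.com",
    private_token="<token>",
    retry_transient_errors=True,
)
```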
## Actual Behavior
No ability to turn on retries with the CLI.
## Specifications
- python-gitlab version: 4.4
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): .com
|
open
|
2024-05-13T09:23:22Z
|
2024-05-13T09:23:22Z
|
https://github.com/python-gitlab/python-gitlab/issues/2864
|
[] |
djmcgreal-cc
| 0
|
voila-dashboards/voila
|
jupyter
| 524
|
voila render button on Jupyter Lab
|
<img width="126" alt="Screen Shot 2020-01-24 at 11 49 36 AM" src="https://user-images.githubusercontent.com/8352840/73086987-b07e2280-3e9f-11ea-8061-993ea0b01264.png">
This button works great for rendering the current notebook as a Voila dashboard using the classic Jupyter Notebook, but is there an equivalent for Jupyter Lab?
|
closed
|
2020-01-24T16:51:06Z
|
2020-01-24T17:17:10Z
|
https://github.com/voila-dashboards/voila/issues/524
|
[] |
cornhundred
| 7
|
cvat-ai/cvat
|
computer-vision
| 8,813
|
I have modified the source code, how do I rebuild the image?
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
I have modified the source code, how do I rebuild the image?
thanks
### Describe the solution you'd like
_No response_
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
|
closed
|
2024-12-11T07:58:00Z
|
2024-12-13T10:37:59Z
|
https://github.com/cvat-ai/cvat/issues/8813
|
[
"question"
] |
stephen-TT
| 4
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 1,221
|
Numpy error (numpy version 1.24.3)
|
I am using Python 3.11.3 and NumPy 1.24.3 and I get the following error when trying to run the toolbox:
```
Traceback (most recent call last):
File "C:\Users\David\code\Real-Time-Voice-Cloning\demo_toolbox.py", line 5, in <module>
from toolbox import Toolbox
File "C:\Users\David\code\Real-Time-Voice-Cloning\toolbox\__init__.py", line 11, in <module>
from toolbox.ui import UI
File "C:\Users\David\code\Real-Time-Voice-Cloning\toolbox\ui.py", line 37, in <module>
], dtype=np.float) / 255
^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\numpy\__init__.py", line 305, in __getattr__
raise AttributeError(__former_attrs__[attr])
AttributeError: module 'numpy' has no attribute 'float'.
`np.float` was a deprecated alias for the builtin `float`. To avoid this error in existing code, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations. Did you mean: 'cfloat'?
```
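As the error message itself suggests, the deprecated alias in `toolbox/ui.py` just needs to be replaced by the builtin; a self-contained sketch of the change (array values are illustrative):
```python
import numpy as np

# before (fails on NumPy >= 1.24): np.array([...], dtype=np.float) / 255
colormap = np.array([
    [0, 127, 70],
    [255, 0, 0],
], dtype=float) / 255   # use the builtin float (or np.float64) instead of np.float
```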
|
open
|
2023-05-31T01:55:47Z
|
2024-02-21T06:50:00Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1221
|
[] |
ehyoitsdavid
| 8
|
streamlit/streamlit
|
machine-learning
| 10,069
|
Pass file path or function directly to `st.navigation`, without using `st.Page`
|
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
When I write small test apps with multiple pages, I often find myself wanting to do something like this:
```python
st.navigation(["page1.py", "page2.py"])
# Or with functions:
st.navigation([page1_func, page2_func])
```
While this is totally plausible and I don't care about customizing the title or icon for these pages, Streamlit always forces me to wrap things in `st.Page`:
```python
st.navigation([st.Page("page1.py"), st.Page("page2.py")])
# Or with functions:
st.navigation([st.Page(page1_func), st.Page(page2_func)])
```
It would be great if we'd support the former!
### Why?
Not a huge pain point but I regularly stumble over it. Especially because we show a pretty generic exception in this case and I'm confused why it doesn't just work.
### How?
_No response_
### Additional Context
_No response_
|
closed
|
2024-12-22T23:38:13Z
|
2025-02-20T15:54:22Z
|
https://github.com/streamlit/streamlit/issues/10069
|
[
"type:enhancement",
"good first issue",
"feature:multipage-apps",
"feature:st.navigation"
] |
jrieke
| 2
|
jschneier/django-storages
|
django
| 1,115
|
DEFAULT_FILE_STORAGE seems to be ignored
|
I would like my S3 bucket to have 2 folders, one for static files and another for media. My understanding is that DEFAULT_FILE_STORAGE should be the storage that handles media files, while STATICFILES_STORAGE handles static files. However, it appears that DEFAULT_FILE_STORAGE is ignored and doesn't do anything, because, for example, if I set DEFAULT_FILE_STORAGE = "somestring", no error is thrown while running collectstatic.
Here's my code:
```python
#s3utils.py
from storages.backends.s3boto3 import S3Boto3Storage
StaticRootS3BotoStorage = lambda: S3Boto3Storage(location='static')
MediaRootS3BotoStorage = lambda: S3Boto3Storage(location='media')
#settings.py
AWS_ACCESS_KEY_ID = os.getenv('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.getenv('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = os.getenv('AWS_STORAGE_BUCKET_NAME')
AWS_S3_OBJECT_PARAMETERS = {'ACL': 'private',}
AWS_S3_CUSTOM_DOMAIN = f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com'
AWS_S3_OBJECT_PARAMETERS = {'CacheControl': 'max-age=86400'}
AWS_LOCATION = 'static'
STATIC_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/{AWS_LOCATION}/'
MEDIA_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, "file-storage/")
DEFAULT_FILE_STORAGE = 'backend.s3utils.MediaRootS3BotoStorage'
STATICFILES_STORAGE = 'backend.s3utils.StaticRootS3BotoStorage'
```
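A likely explanation (an assumption, not confirmed against this project) is that Django resolves `DEFAULT_FILE_STORAGE` lazily: `collectstatic` only touches `STATICFILES_STORAGE`, so a bad default-storage dotted path is never imported during that command. A minimal check from a Django shell, assuming the settings above:
```python
# Run inside `python manage.py shell`; this is the first point at which
# DEFAULT_FILE_STORAGE is actually imported, so a bogus dotted path errors here,
# not during collectstatic.
from django.core.files.base import ContentFile
from django.core.files.storage import default_storage

path = default_storage.save("uploads/media-test.txt", ContentFile(b"hello"))
print(path, default_storage.url(path))
```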
|
closed
|
2022-02-16T10:21:10Z
|
2023-02-28T04:09:12Z
|
https://github.com/jschneier/django-storages/issues/1115
|
[] |
goldentoaste
| 1
|
litestar-org/litestar
|
asyncio
| 3,680
|
Enhancement: Support attrs converters
|
### Description
Attrs has an attribute conversion feature where a function can be provided to be run whenever the attribute value is set.
https://www.attrs.org/en/stable/examples.html#conversion
This is not being run when the post request body is an attrs class. Looking for clarity on if this is by design or a fix if not. Also, is there another preferred pattern in litestar to do this type of payload attribute conversion?
### URL to code causing the issue
_No response_
### MCVE
requirements.txt
```
litestar[standard,attrs]==2.10.0
```
api.py
```python
from __future__ import annotations
import attrs
from litestar import (
Litestar,
post,
)
@attrs.define
class TestRequestPayload:
a: str = attrs.field(converter=str.lower)
b: str = attrs.field(converter=str.lower)
@post("/test1")
async def test1(data: TestRequestPayload) -> dict:
return attrs.asdict(data)
@post("/test2")
async def test2(data: TestRequestPayload) -> dict:
# Naked evolve basically re-instantiates the object causing the converters to be run
return attrs.asdict(attrs.evolve(data))
app = Litestar(
route_handlers=[test1, test2],
)
```
### Steps to reproduce
1. `litestar --app api:app run`
2. Make a request to the test1 route, note data response unchanged from input
```
$ curl --request POST localhost:8000/test1 --data '{"a": "Hello", "b": "World"}' ; echo
{"a":"Hello","b":"World"}
```
3. Make a request to the test2 route, note data response lowercased relative to input
```
$ curl --request POST localhost:8000/test2 --data '{"a": "Hello", "b": "World"}' ; echo
{"a":"hello","b":"world"}
```
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.10.0
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
|
closed
|
2024-08-20T22:29:11Z
|
2025-03-20T15:54:52Z
|
https://github.com/litestar-org/litestar/issues/3680
|
[
"Enhancement",
"Upstream"
] |
jeffcarrico
| 1
|
matplotlib/matplotlib
|
data-visualization
| 28,860
|
[ENH]: Add Independent xlabelpad and ylabelpad Options to rcParams
|
### Problem
Currently, we can only set a uniform `axes.labelpad`, which applies the same padding for both the x-axis and y-axis labels.
I am aware that it is possible to adjust the labelpad for the x and y labels independently using `set_xlabel(..., labelpad=x_pad)` and `set_ylabel(..., labelpad=y_pad)` in plotting code. However, would it be better to add separate options in rcParams to set the padding for x and y labels independently, similar to how `xtick.major.pad` and `ytick.major.pad` work?
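For illustration, a small sketch of the current per-call workaround next to the (hypothetical, not yet existing) rcParams keys this request asks for:
```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])

# today: padding has to be set per call
ax.set_xlabel("time (s)", labelpad=12)
ax.set_ylabel("amplitude", labelpad=4)

# proposed (hypothetical key names, analogous to xtick.major.pad / ytick.major.pad):
# plt.rcParams["axes.xlabelpad"] = 12
# plt.rcParams["axes.ylabelpad"] = 4
plt.show()
```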
### Proposed solution
_No response_
|
open
|
2024-09-21T18:47:02Z
|
2025-03-12T12:02:52Z
|
https://github.com/matplotlib/matplotlib/issues/28860
|
[
"New feature"
] |
NominHanggai
| 5
|
microsoft/unilm
|
nlp
| 927
|
[layoutlmv3] SER and RE task combined into one model
|
Hi @HYPJUDY
I have a question that do you try to combine SER and RE task into one model, that is to say, just use one model to support SER and RE task?
|
open
|
2022-11-22T09:22:10Z
|
2022-12-31T22:30:10Z
|
https://github.com/microsoft/unilm/issues/927
|
[] |
githublsk
| 1
|
datadvance/DjangoChannelsGraphqlWs
|
graphql
| 14
|
'AsgiRequest' object has no attribute 'register'
|
Hello,
After following the installation instructions, and using the example in the README with `django==2.2.1` and `django-channels-graphql-ws==0.2.0` on Python 3.7.3, when making any subscription operation:
```js
subscription {
mySubscription {event}
}
```
```json
{
"errors": [
{
"message": "'AsgiRequest' object has no attribute 'register'"
}
],
"data": null
}
```
Traceback:
```
Traceback (most recent call last):
File "/.cache/pypoetry/virtualenvs/WcnE4_vK-py3.7/lib/python3.7/site-packages/graphql/execution/executor.py", line 447, in resolve_or_error
return executor.execute(resolve_fn, source, info, **args)
File "/.cache/pypoetry/virtualenvs/WcnE4_vK-py3.7/lib/python3.7/site-packages/graphql/execution/executors/sync.py", line 16, in execute
return fn(*args, **kwargs)
File "/.cache/pypoetry/virtualenvs/WcnE4_vK-py3.7/lib/python3.7/site-packages/graphene_django/debug/middleware.py", line 56, in resolve
promise = next(root, info, **args)
File "/***.py", line 86, in resolve
result = next(root, info, **args)
File "/***.py", line 37, in resolve
return next(root, info, **args)
File "/***.py", line 8, in resolve
return result.get()
File "/.cache/pypoetry/virtualenvs/WcnE4_vK-py3.7/lib/python3.7/site-packages/promise/promise.py", line 510, in get
return self._target_settled_value(_raise=True)
File "/.cache/pypoetry/virtualenvs/WcnE4_vK-py3.7/lib/python3.7/site-packages/promise/promise.py", line 514, in _target_settled_value
return self._target()._settled_value(_raise)
File "/.cache/pypoetry/virtualenvs/WcnE4_vK-py3.7/lib/python3.7/site-packages/promise/promise.py", line 224, in _settled_value
reraise(type(raise_val), raise_val, self._traceback)
File "/.cache/pypoetry/virtualenvs/WcnE4_vK-py3.7/lib/python3.7/site-packages/six.py", line 693, in reraise
raise value
File "/.cache/pypoetry/virtualenvs/WcnE4_vK-py3.7/lib/python3.7/site-packages/promise/promise.py", line 487, in _resolve_from_executor
executor(resolve, reject)
File "/.cache/pypoetry/virtualenvs/WcnE4_vK-py3.7/lib/python3.7/site-packages/promise/promise.py", line 754, in executor
return resolve(f(*args, **kwargs))
File "/.cache/pypoetry/virtualenvs/WcnE4_vK-py3.7/lib/python3.7/site-packages/graphql/execution/middleware.py", line 76, in make_it_promise
return next(*args, **kwargs)
File "/.cache/pypoetry/virtualenvs/WcnE4_vK-py3.7/lib/python3.7/site-packages/channels_graphql_ws/graphql_ws.py", line 425, in _subscribe
register = info.context.register
AttributeError: 'AsgiRequest' object has no attribute 'register'
```
While I can see where the 'register' attribute is needed: https://github.com/datadvance/DjangoChannelsGraphqlWs/blob/master/channels_graphql_ws/graphql_ws.py#L425, I can't see what could have caused it to be deleted from the context before going through `Subscription._subscribe`.
Do you have any idea what could lead to that situation, and how to fix it?
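One possible cause (an assumption, not verified against this project) is that the subscription was executed through the regular HTTP graphene view, whose context is an `AsgiRequest`, instead of through the websocket consumer that injects `register` into the context. A minimal routing sketch in the style of the library's README, with a placeholder schema standing in for the project's real one:
```python
# routing.py: hypothetical sketch for channels 2.x; replace the placeholder schema
# with the project's schema that defines the subscriptions
import channels.routing
import channels_graphql_ws
import graphene
from django.urls import path


class Query(graphene.ObjectType):
    ok = graphene.Boolean(default_value=True)  # placeholder query


class MyGraphqlWsConsumer(channels_graphql_ws.GraphqlWsConsumer):
    # subscriptions must be served by this consumer, not the HTTP GraphQL view
    schema = graphene.Schema(query=Query)


application = channels.routing.ProtocolTypeRouter({
    "websocket": channels.routing.URLRouter([
        path("graphql/", MyGraphqlWsConsumer),
    ]),
})
```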
|
closed
|
2019-05-08T14:58:09Z
|
2020-02-12T16:18:56Z
|
https://github.com/datadvance/DjangoChannelsGraphqlWs/issues/14
|
[] |
rigelk
| 6
|
jina-ai/serve
|
deep-learning
| 6,041
|
Release Notes (3.20.3)
|
# Release Note
This release contains 1 bug fix.
## 🐞 Bug Fixes
### Skip doc attributes in __annotations__ but not in __fields__ (#6035)
When deploying an Executor inside a Flow with a `BaseDoc` model that has any attribute with a `ClassVar` value, the service would fail to initialize because the Gateway could not properly create the schemas. We have fixed this by securing access to `__fields__` when dynamically creating these pydantic models.
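As an illustration of the case this fixes (a hypothetical document model, not taken from the report), a `ClassVar` attribute shows up in `__annotations__` but not in pydantic's `__fields__`, which previously tripped up schema creation:
```python
from typing import ClassVar

from docarray import BaseDoc


class MyDoc(BaseDoc):
    text: str = ''
    # present in __annotations__ but not in pydantic's __fields__;
    # the Gateway now skips it when rebuilding the schema
    model_name: ClassVar[str] = 'my-model'
```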
## 🤟 Contributors
We would like to thank all contributors to this release:
- Narek Amirbekian (@NarekA )
|
closed
|
2023-09-06T16:01:39Z
|
2023-09-07T08:48:51Z
|
https://github.com/jina-ai/serve/issues/6041
|
[] |
JoanFM
| 0
|
gunthercox/ChatterBot
|
machine-learning
| 1,773
|
How to connect chatbat or integrate with website
|
I made a simple chatbot using Flask and ChatterBot; now, how would I integrate this chatbot with a website?
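A minimal sketch of one way to expose the bot over HTTP, assuming the Flask and ChatterBot setup described above (endpoint and bot names are illustrative):
```python
from flask import Flask, jsonify, request
from chatterbot import ChatBot

app = Flask(__name__)
bot = ChatBot("WebBot")  # illustrative name

@app.route("/chat", methods=["POST"])
def chat():
    # read the user's message from the posted JSON payload
    data = request.get_json(force=True) or {}
    user_text = data.get("message", "")
    reply = bot.get_response(user_text)
    return jsonify({"reply": str(reply)})

# The website front end can then POST to /chat (e.g. with fetch() in JavaScript)
# and render the returned reply in the page.
```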
|
closed
|
2019-07-12T07:20:10Z
|
2020-06-20T17:33:35Z
|
https://github.com/gunthercox/ChatterBot/issues/1773
|
[] |
AvhiOjha
| 1
|
ultralytics/yolov5
|
machine-learning
| 12,823
|
'Detect' object has no attribute 'grid'
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hey, I recently found out about YOLO, and I have been trying to learn YOLOv5 for the past couple of weeks or so, but there's something which I literally cannot get past.
I'm trying to train models for PyTorch using Roboflow, and this worked in the beginning, but now I just keep getting the error "'Detect' object has no attribute 'grid'".
I have force reload set to true, and the cache isn't outdated.
I've tried reinstalling YOLO; that doesn't work.
I've tried manually training it with the GitHub repo, and I can't even start the training from there, even though I follow the steps I've seen other people do perfectly.
I've also browsed the internet for a while and I have found others who have had the same issue,
but I never really understood how they actually went about fixing it (other than setting force_reload to true).
This is my model loading part:
```
import torch
model = torch.hub.load('ultralytics/yolov5', 'custom', './best.pt', force_reload=True)
```
I'm not too sure if it's even the code or an issue with the model,
because as I mentioned earlier I could load these models perfectly fine when I first started,
but after that it just hasn't worked for weeks.
I'm on Python version 3.10 and my OS is Windows 10.
### Additional
_No response_
|
closed
|
2024-03-15T17:55:11Z
|
2024-10-20T19:41:39Z
|
https://github.com/ultralytics/yolov5/issues/12823
|
[
"question",
"Stale"
] |
dementiaenjoyer
| 5
|
aiogram/aiogram
|
asyncio
| 1,105
|
InlineQueryResultCachedPhoto has no title
|
### Checklist
- [X] I am sure the error is coming from aiogram code
- [X] I have searched in the issue tracker for similar bug reports, including closed ones
### Operating system
Mac Os 10.15.7 (Catalina)
### Python version
3.7.3
### aiogram version
2.24
### Expected behavior
InlineQueryResultCachedPhoto with title and description in inline mode.
### Current behavior
InlineQueryResultCachedPhoto has no title and description, in same time InlineQueryResultPhoto has title and description.
### Steps to reproduce
- Create InlineQueryResultCachedPhoto object with id, title, description and photo_file_id
- Create InlineQueryResultPhoto obj with id, title, description, thumb URL, img URL
- Start polling
- Call bot @bot_name and you will see menu with photo, but without any text.
- In same time InlineQueryResultPhoto has title and description

### Code example
```python3
InlineQueryResultPhoto(
id=hashlib.md5('any'.encode()).hexdigest(),
photo_url='any image URL',
thumb_url='any image URL',
title='any title',
description='any description',
)
InlineQueryResultCachedPhoto(
id=hashlib.md5('any_cached'.encode()).hexdigest(),
photo_file_id='any photo file id',
title='any title cached',
description='any description cached'
)
```
### Logs
_No response_
### Additional information
_No response_
|
closed
|
2023-01-22T17:16:17Z
|
2023-04-28T13:16:13Z
|
https://github.com/aiogram/aiogram/issues/1105
|
[
"wontfix",
"upstream",
"2.x"
] |
PraryDogy
| 1
|
sinaptik-ai/pandas-ai
|
data-science
| 1,175
|
Agent Method parameter save_charts acting like open_charts, and opens saved charts automatically
|
### System Info
Python 3.11.7
PandasAI 2.0.37
Pandas 1.5.3
### 🐛 Describe the bug
``` python
def test():
os.environ["PANDASAI_API_KEY"] = os.environ.get("PANDASAI_API_KEY")
    llm = OpenAI(api_token=os.environ.get("OPENAI_API_KEY"))
df = pd.DataFrame({
"country": ["United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan",
"China"],
"gdp": [19294482071552, 2891615567872, 2411255037952, 3435817336832, 1745433788416, 1181205135360,
1607402389504, 1490967855104, 4380756541440, 14631844184064],
"happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12] })
pAI = Agent(df, config={"verbose": True, "llm": llm, "save_charts": True, "enable_cache": True})
llm_analysis_response = pAI.chat("Create a plot for GDP in relation to happiness")
return llm_analysis_response
```
The above code should create the chart and save it to the default path (and it does)
However, it opens the chart automatically, similar to what open_charts does, which causes problems when integrating pandasai in applications like streamlit for example.
|
closed
|
2024-05-23T17:44:29Z
|
2024-08-29T16:05:48Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/1175
|
[
"bug"
] |
Emad-Eldin-G
| 7
|
horovod/horovod
|
deep-learning
| 3,302
|
Is it possible for HVD to support other Collective Communication Libraries as plugins?
|
**Is your feature request related to a problem? Please describe.**
No.
**Describe the solution you'd like**
Adding a new Collective Communication Library without changing HVD's code.
**Describe alternatives you've considered**
HVD could involve a plugin framework for CCL extension.
**Additional context**
NA.
|
closed
|
2021-12-08T09:42:56Z
|
2021-12-08T10:34:32Z
|
https://github.com/horovod/horovod/issues/3302
|
[
"enhancement"
] |
hanwei131
| 0
|
home-assistant/core
|
asyncio
| 141,187
|
Tractive `device_tracker` entity may return `source_type` incompatible with `person` entity
|
### The problem
I have noticed a problem with the state of the `person` entity based on Tractive `device_tracker` entity.
If the user sets a power saving zone for the tracker in the Tractive application and this zone overlaps with a zone other than `home` in Home Assistant, the `device_tracker` entity will have a state that matches the zone name, but the `person` entity state will be `unknown`.


The reason is the `source_type: router` for the `device_tracker` entity and the logic that sets the state of the `person` entity.
https://github.com/home-assistant/core/blob/883ce6842d351197a1bd8d9cf3f8938ea5f91fa6/homeassistant/components/person/__init__.py#L525-L546
I think we should not report `source_type: router` in the Tractive `device_tracker` entity.
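A minimal sketch of the kind of change suggested here (hedged, the actual entity code may differ): have the Tractive tracker report a GPS source type so zone names other than `home` propagate to `person` entities:
```python
# hypothetical excerpt of the Tractive device_tracker entity
from homeassistant.components.device_tracker import SourceType


class TractiveTrackerEntity:
    @property
    def source_type(self) -> SourceType:
        # GPS instead of ROUTER, so the person entity accepts arbitrary zone names
        return SourceType.GPS
```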
### What version of Home Assistant Core has the issue?
core-2025.3.4
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Tractive
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/tractive/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_
|
closed
|
2025-03-23T10:06:11Z
|
2025-03-24T16:17:00Z
|
https://github.com/home-assistant/core/issues/141187
|
[
"by-code-owner",
"integration: tractive"
] |
bieniu
| 1
|
huggingface/peft
|
pytorch
| 2,181
|
How can I do to export mode format as gguf
|
### Feature request
This is a good project,I just got it today and encountered some problems.
my any code
``` python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Qwen2-0.5B")
model = AutoModelForCausalLM.from_pretrained("model")
model.save_pretrained('directory')
```
I need gguf file deploy by ollama.Whern I export model format as gguf.
I use
```shell
!python llama.cpp/convert_hf_to_gguf.py directory
```
but it error
```
INFO:hf-to-gguf:Loading model: directory
Traceback (most recent call last):
File "/Users/xu756/AIGC/llama.cpp/convert_hf_to_gguf.py", line 4436, in <module>
main()
File "/Users/xu756/AIGC/llama.cpp/convert_hf_to_gguf.py", line 4404, in main
hparams = Model.load_hparams(dir_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xu756/AIGC/llama.cpp/convert_hf_to_gguf.py", line 462, in load_hparams
with open(dir_model [/](https://file+.vscode-resource.vscode-cdn.net/) "config.json", "r", encoding="utf-8") as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'directory/config.json'
```
<img width="1328" alt="image" src="https://github.com/user-attachments/assets/4d74c66e-b092-47f2-b570-b6e35767a6ce">
### Motivation
I need gguf file deploy by ollama.
Is there any other way to deploy the PEFT model?
Thank you very much.
### Your contribution
I simply reproduced it on top
|
closed
|
2024-10-26T13:51:45Z
|
2024-10-26T13:59:18Z
|
https://github.com/huggingface/peft/issues/2181
|
[] |
xu756
| 0
|
autogluon/autogluon
|
computer-vision
| 4,442
|
Contributing to Model Monitoring and Interpretability
|
Hello there! :wave:
My name is Guilherme and Iโm a software engineering student at UNIPAMPA, a university in southern Brazil. Iโm currently developing my undergraduate thesis and Iโm very interested in working on improving AutoGluon in terms of features such as model monitoring and interpretability.
I see from the roadmap that these points are still open, so I would be happy to collaborate on this great project.
Thank you!
|
open
|
2024-08-28T23:33:36Z
|
2024-08-28T23:33:36Z
|
https://github.com/autogluon/autogluon/issues/4442
|
[
"enhancement"
] |
guijasss
| 0
|
comfyanonymous/ComfyUI
|
pytorch
| 6,745
|
what's wrong?File "asyncio\windows_events.py", line 462, in finish_socket_func
|
### Your question
D:\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-02-08 15:29:26.553
** Platform: Windows
** Python version: 3.12.8 (tags/v3.12.8:2dc476b, Dec 3 2024, 19:30:04) [MSC v.1942 64 bit (AMD64)]
** Python executable: D:\ComfyUI_windows_portable\python_embeded\python.exe
** ComfyUI Path: D:\ComfyUI_windows_portable\ComfyUI
** ComfyUI Base Folder Path: D:\ComfyUI_windows_portable\ComfyUI
** User directory: D:\ComfyUI_windows_portable\ComfyUI\user
** ComfyUI-Manager config path: D:\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\config.ini
** Log path: D:\ComfyUI_windows_portable\ComfyUI\user\comfyui.log
Prestartup times for custom nodes:
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
4.7 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
Checkpoint files will always be loaded safely.
Total VRAM 8192 MB, total RAM 7862 MB
pytorch version: 2.6.0+cu126
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1080 : cudaMallocAsync
Using pytorch attention
ComfyUI version: 0.3.14
[Prompt Server] web root: D:\ComfyUI_windows_portable\ComfyUI\web
### Loading: ComfyUI-Impact-Pack (V8.8.1)
[Impact Pack] Wildcards loading done.
### Loading: ComfyUI-Inspire-Pack (V1.13)
### Loading: ComfyUI-Manager (V3.17.7)
### ComfyUI Version: v0.3.14-6-g832e3f5c | Released on '2025-02-07'
[comfyui_controlnet_aux] | INFO -> Using ckpts path: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
DWPose: Onnxruntime with acceleration providers detected
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[rgthree-comfy] Loaded 42 epic nodes. ๐
Import times for custom nodes:
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-BRIA_AI-RMBG
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-custom-scripts
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_ultimatesdupscale
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-advanced-controlnet
0.1 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_faceanalysis
0.1 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
0.1 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-inspire-pack
0.2 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_face_parsing
0.3 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-impact-pack
0.7 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux
0.8 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
Starting server
To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager\extension-node-map.json [DONE]
[Inspire Pack] IPAdapterPlus is not installed.
FETCH DATA from: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager\extension-node-map.json [DONE]
FETCH ComfyRegistry Data: 5/32
FETCH ComfyRegistry Data: 10/32
FETCH ComfyRegistry Data: 15/32
FETCH ComfyRegistry Data: 20/32
FETCH ComfyRegistry Data: 25/32
FETCH ComfyRegistry Data: 30/32
FETCH ComfyRegistry Data [DONE]
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
[ComfyUI-Manager] All startup tasks have been completed.
got prompt
Prompt executed in 9.95 seconds
got prompt
Prompt executed in 0.33 seconds
Unhandled exception
Traceback (most recent call last):
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_protocol.py", line 563, in start
resp, reset = await task
^^^^^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_protocol.py", line 505, in _handle_request
resp, reset = await self.finish_response(request, resp, start_time)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_protocol.py", line 671, in finish_response
await prepare_meth(request)
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_fileresponse.py", line 288, in prepare
return await self._prepare_open_file(request, fobj, st, file_encoding)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_fileresponse.py", line 418, in _prepare_open_file
return await self._sendfile(request, fobj, offset, count)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_fileresponse.py", line 147, in _sendfile
await loop.sendfile(transport, fobj, offset, count)
File "asyncio\base_events.py", line 1227, in sendfile
File "asyncio\proactor_events.py", line 768, in _sendfile_native
File "asyncio\base_events.py", line 914, in sock_sendfile
File "asyncio\proactor_events.py", line 756, in _sock_sendfile_native
File "asyncio\windows_events.py", line 803, in _poll
File "asyncio\windows_events.py", line 462, in finish_socket_func
PermissionError: [WinError 22] The device does not recognize this command.
### Logs
```powershell
```
### Other
_No response_
|
open
|
2025-02-08T07:37:25Z
|
2025-02-08T07:44:16Z
|
https://github.com/comfyanonymous/ComfyUI/issues/6745
|
[
"User Support"
] |
maweiqiang1978
| 0
|
flairNLP/flair
|
pytorch
| 2,997
|
TARS Zero Shot Classifier Predictions
|
Here is the example code to use TARS Zero Shot Classifier
```
from flair.models import TARSClassifier
from flair.data import Sentence
# 1. Load our pre-trained TARS model for English
tars = TARSClassifier.load('tars-base')
# 2. Prepare a test sentence
sentence = Sentence("I am so glad you liked it!")
# 3. Define some classes that you want to predict using descriptive names
classes = ["happy", "sad"]
#4. Predict for these classes
tars.predict_zero_shot(sentence, classes)
# Print sentence with predicted labels
print(sentence)
print(sentence.labels[0].value)
print(round(sentence.labels[0].score,2))
```
Now this code is wrapped into the following function so that I can use it to get predictions for multiple sentences in a dataset.
```
def tars_zero(example):
sentence = Sentence(example)
tars.predict_zero_shot(sentence,classes)
print(sentence)
inputs = ["I am so glad you liked it!", "I hate it"]
for input in inputs:
tars_zero(input)
#output:
Sentence: "I am so glad you liked it !" โ happy (0.8667)
Sentence: "I hate it"
```
**Here the model is giving predictions only for the first instance.**
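A hedged sketch of a possible workaround, assuming `predict_zero_shot` accepts a list of sentences (as the other flair predict methods do): build all `Sentence` objects first and pass them in a single call, rather than one at a time:
```
from flair.data import Sentence
from flair.models import TARSClassifier

tars = TARSClassifier.load('tars-base')
classes = ["happy", "sad"]
inputs = ["I am so glad you liked it!", "I hate it"]

# predict on the whole batch in one call
sentences = [Sentence(text) for text in inputs]
tars.predict_zero_shot(sentences, classes)
for sentence in sentences:
    print(sentence)
```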
|
closed
|
2022-11-24T07:37:38Z
|
2023-05-21T15:36:59Z
|
https://github.com/flairNLP/flair/issues/2997
|
[
"bug",
"wontfix"
] |
ghost
| 2
|
iperov/DeepFaceLab
|
machine-learning
| 603
|
Adjusting Blur Mask Modifier Changes The Brightness
|
I've never noticed this before, but I just installed the 2/3/20 nvidia version. When I adjust the blur higher or lower, the face gets significantly brighter or darker depending which way I go. Here's a video I just shared on Google Drive illustrating this. Maybe this has happened in past versions and I never noticed? But I think I would have. Let me know if this is normal. Here's the video.... https://drive.google.com/file/d/1L77hcgAt8zcGM6jpOB53e6TRDslccivJ/view?usp=sharing
|
open
|
2020-02-04T07:01:32Z
|
2023-06-08T21:24:08Z
|
https://github.com/iperov/DeepFaceLab/issues/603
|
[] |
kilerb
| 1
|
littlecodersh/ItChat
|
api
| 768
|
Is there a WeChat report API?
|
Before submitting, please make sure you have checked the following!
- [ ] You can log in to the WeChat account in a browser, but cannot log in using `itchat`
- [ ] I have read the [documentation][document] and followed its guidance
- [ ] Your issue has not been reported in [issues][issues]; otherwise, please report it under the original issue
- [ ] This issue is indeed about `itchat`, not another project.
- [ ] If your issue is about stability, consider trying the [itchatmp][itchatmp] project, which has lower requirements on network stability
Please run with `itchat.run(debug=True)` and paste the output below:
```
[Paste the full log here]
```
Your itchat version is: `[fill in the version number here]`. (It can be obtained via `python -c "import itchat;print(itchat.__version__)"`)
Other content or a more detailed description of the issue can be added below:
> [Your content]
[document]: http://itchat.readthedocs.io/zh/latest/
[issues]: https://github.com/littlecodersh/itchat/issues
[itchatmp]: https://github.com/littlecodersh/itchatmp
|
open
|
2018-12-10T00:45:08Z
|
2018-12-10T00:45:08Z
|
https://github.com/littlecodersh/ItChat/issues/768
|
[] |
caimao9539
| 0
|
open-mmlab/mmdetection
|
pytorch
| 11,858
|
ValueError: need at least one array to concatenate
|
indices = np.concatenate(indices)
File "<__array_function__ internals>", line 200, in concatenate
ValueError: need at least one array to concatenate
**Describe the bug**
The error reported during training:
LicenseRec) D:\studysoft\workplace\ORENEXT-main>python ./tools/train.py ./configs/ORENEXT/orenext_stonemlp_sparsefc_ptsdml.py
D:\studysoft\Anaconda\envs\LicenseRec\lib\site-packages\mmcv\__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove co
mponents related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
warnings.warn(
fatal: not a git repository (or any of the parent directories): .git
2024-07-17 16:24:49,426 - mmdet - INFO - Environment info:
------------------------------------------------------------
sys.platform: win32
Python: 3.8.19 (default, Mar 20 2024, 19:55:45) [MSC v.1916 64 bit (AMD64)]
CUDA available: True
GPU 0: NVIDIA GeForce RTX 4060
CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0
NVCC: Cuda compilation tools, release 10.0, V10.0.13
MSVC: Microsoft (R) C/C++ Optimizing Compiler Version 19.40.33812 for x64
GCC: n/a
PyTorch: 1.9.0+cu111
PyTorch compiling details: PyTorch built with:
- C++ Version: 199711
- MSVC 192829337
- Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb)
- OpenMP 2019
- CPU capability usage: AVX2
- CUDA Runtime 11.1
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,
code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
- CuDNN 8.0.5
- Magma 2.5.4
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=C:/w/b/windows/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN
32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/w/b/windows/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUS
E_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON,
TorchVision: 0.10.0+cu111
OpenCV: 4.10.0
MMCV: 1.7.2
MMCV Compiler: MSVC 192930148
MMCV CUDA Compiler: 11.1
MMDetection: 2.11.0+
------------------------------------------------------------
2024-07-17 16:24:49,977 - mmdet - INFO - Distributed training: False
2024-07-17 16:24:50,564 - mmdet - INFO - Config:
model = dict(
type='PointRend',
backbone=dict(
type='ASMLP',
embed_dim=64,
depths=[2, 2, 2, 2],
num_heads=[3, 6, 12, 24],
shift_size=5,
window_size=7,
mlp_ratio=4.0,
drop_rate=0.0,
drop_path_rate=0.1,
patch_norm=True,
out_indices=(0, 1, 2, 3),
use_checkpoint=False),
neck=dict(
type='FPNSPARSEFC',
in_channels=[64, 128, 256, 512],
out_channels=64,
num_outs=5),
rpn_head=dict(
type='RPNHead',
in_channels=64,
feat_channels=64,
anchor_generator=dict(
type='AnchorGenerator',
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
roi_head=dict(
type='PointRendRoIHead',
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
out_channels=64,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type='Shared2FCBBoxHead',
in_channels=64,
fc_out_channels=64,
roi_feat_size=7,
num_classes=1,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='PtsBorderLoss', loss_weight=0.5)),
mask_roi_extractor=dict(
type='GenericRoIExtractor',
roi_layer=dict(type='SimpleRoIAlign', output_size=14),
out_channels=64,
featmap_strides=[4],
aggregation='concat'),
mask_head=dict(
type='CoarseMaskHead',
num_fcs=1,
in_channels=64,
conv_out_channels=64,
fc_out_channels=64,
num_classes=1,
loss_mask=dict(
type='CrossEntropyLoss', use_mask=True, loss_weight=0.1)),
point_head=dict(
type='MaskPointHead',
num_fcs=1,
in_channels=64,
fc_channels=64,
num_classes=1,
coarse_pred_each_layer=True,
loss_point=dict(
type='CrossEntropyLoss1',
use_mask=True,
loss_weight=1.0,
key_item_weight=0.5))),
train_cfg=dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=-1,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=2000,
max_per_img=1000,
nms=dict(type='nms', iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
mask_size=7,
pos_weight=-1,
debug=False,
num_points=196,
oversample_ratio=3,
importance_sample_ratio=0.75)),
test_cfg=dict(
rpn=dict(
nms_pre=1000,
max_per_img=1000,
nms=dict(type='nms', iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type='nms', iou_threshold=0.5),
max_per_img=100,
mask_thr_binary=0.5,
subdivision_steps=5,
subdivision_num_points=784,
scale_factor=2)))
dataset_type = 'CocoDataset'
data_root = 'data/coco/stones/'
img_norm_cfg = dict(
mean=[103.53, 116.28, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
dict(
type='Resize',
img_scale=[(320, 160), (320, 192), (320, 224), (320, 256), (320, 288),
(320, 320)],
multiscale_mode='value',
keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(320, 320),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]
data = dict(
samples_per_gpu=8,
workers_per_gpu=2,
train=dict(
type='CocoDataset',
ann_file='data/coco/stones/annotations/instances_train2017.json',
img_prefix='data/coco/stones/train2017/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
dict(
type='Resize',
img_scale=[(320, 160), (320, 192), (320, 224), (320, 256),
(320, 288), (320, 320)],
multiscale_mode='value',
keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(
type='Collect',
keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'])
]),
val=dict(
type='CocoDataset',
ann_file='data/coco/stones/annotations/instances_val2017.json',
img_prefix='data/coco/stones/val2017/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(320, 320),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]),
test=dict(
type='CocoDataset',
ann_file='data/coco/stones/annotations/instances_val2017.json',
img_prefix='data/coco/stones/val2017/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(320, 320),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]))
evaluation = dict(metric=['bbox', 'segm'])
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[8, 11])
runner = dict(type='EpochBasedRunner', max_epochs=12)
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
custom_hooks = [dict(type='NumClassCheckHook')]
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
work_dir = './work_dirs/orenext_stonemlp_sparsefc_ptsdml1.py'
gpu_ids = range(0, 1)
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
fatal: not a git repository (or any of the parent directories): .git
loading annotations into memory...
Done (t=0.33s)
creating index...
index created!
2024-07-17 16:24:51,932 - mmdet - INFO - Start running, host: xiaoran@ๅฐๅ, work_dir: D:\studysoft\workplace\ORENEXT-main\work_dirs\orenext_stonemlp_sparsefc_ptsdml1.py
2024-07-17 16:24:51,932 - mmdet - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) CheckpointHook
(NORMAL ) EvalHook
(VERY_LOW ) TextLoggerHook
--------------------
before_train_epoch:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) EvalHook
(NORMAL ) NumClassCheckHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook
--------------------
before_train_iter:
(VERY_HIGH ) StepLrUpdaterHook
(LOW ) IterTimerHook
--------------------
after_train_iter:
(ABOVE_NORMAL) OptimizerHook
(NORMAL ) CheckpointHook
(NORMAL ) EvalHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook
--------------------
after_train_epoch:
(NORMAL ) CheckpointHook
(NORMAL ) EvalHook
(VERY_LOW ) TextLoggerHook
--------------------
before_val_epoch:
(NORMAL ) NumClassCheckHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook
--------------------
before_val_iter:
(LOW ) IterTimerHook
--------------------
after_val_iter:
(LOW ) IterTimerHook
--------------------
after_val_epoch:
(VERY_LOW ) TextLoggerHook
--------------------
after_run:
(VERY_LOW ) TextLoggerHook
--------------------
2024-07-17 16:24:51,933 - mmdet - INFO - workflow: [('train', 1)], max: 12 epochs
2024-07-17 16:24:51,933 - mmdet - INFO - Checkpoints will be saved to D:\studysoft\workplace\ORENEXT-main\work_dirs\orenext_stonemlp_sparsefc_ptsdml1.py by HardDiskBackend.
D:\studysoft\Anaconda\envs\LicenseRec\lib\site-packages\mmcv\__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove co
mponents related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
warnings.warn(
D:\studysoft\Anaconda\envs\LicenseRec\lib\site-packages\mmcv\__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove co
mponents related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
warnings.warn(
Indices before concatenation: []
Traceback (most recent call last):
File "./tools/train.py", line 192, in <module>
main()
File "./tools/train.py", line 181, in main
train_detector(
File "D:\studysoft\Anaconda\envs\LicenseRec\lib\site-packages\mmdet-2.11.0-py3.8.egg\mmdet\apis\train.py", line 185, in train_detector
runner.run(data_loaders, cfg.workflow)
File "D:\studysoft\Anaconda\envs\LicenseRec\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 136, in run
epoch_runner(data_loaders[i], **kwargs)
File "D:\studysoft\Anaconda\envs\LicenseRec\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 49, in train
for i, data_batch in enumerate(self.data_loader):
File "D:\studysoft\Anaconda\envs\LicenseRec\lib\site-packages\torch\utils\data\dataloader.py", line 359, in __iter__
return self._get_iterator()
File "D:\studysoft\Anaconda\envs\LicenseRec\lib\site-packages\torch\utils\data\dataloader.py", line 305, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "D:\studysoft\Anaconda\envs\LicenseRec\lib\site-packages\torch\utils\data\dataloader.py", line 512, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "D:\studysoft\Anaconda\envs\LicenseRec\lib\site-packages\torch\utils\data\sampler.py", line 226, in __iter__
for idx in self.sampler:
File "D:\studysoft\Anaconda\envs\LicenseRec\lib\site-packages\mmdet-2.11.0-py3.8.egg\mmdet\datasets\samplers\group_sampler.py", line 37, in __iter__
indices = np.concatenate(indices)
File "<__array_function__ internals>", line 200, in concatenate
ValueError: need at least one array to concatenate
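A common cause of this error (stated as an assumption, not a confirmed diagnosis) is that `CocoDataset` only keeps annotations whose category names match its class list; with the default COCO classes, the custom category "1" is dropped, every image ends up without ground truth, and the group sampler has nothing to concatenate. A minimal config excerpt that declares the custom class (it reuses the `train_pipeline`/`test_pipeline` already defined in the config above):
```python
# hedged sketch: pass the custom category name to every CocoDataset in the config
classes = ('1',)
data = dict(
    samples_per_gpu=8,
    workers_per_gpu=2,
    train=dict(
        type='CocoDataset',
        classes=classes,
        ann_file='data/coco/stones/annotations/instances_train2017.json',
        img_prefix='data/coco/stones/train2017/',
        pipeline=train_pipeline),
    val=dict(
        type='CocoDataset',
        classes=classes,
        ann_file='data/coco/stones/annotations/instances_val2017.json',
        img_prefix='data/coco/stones/val2017/',
        pipeline=test_pipeline),
    test=dict(
        type='CocoDataset',
        classes=classes,
        ann_file='data/coco/stones/annotations/instances_val2017.json',
        img_prefix='data/coco/stones/val2017/',
        pipeline=test_pipeline))
```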
Run command: python ./tools/train.py ./configs/ORENEXT/orenext_stonemlp_sparsefc_ptsdml.py
2. Using someone else's public code; the training input passed in:
model = dict(
type='PointRend',
#pretrained='./pretrained/asmlp_nano_patch4_shift5_224.pth',
backbone=dict(
type='ASMLP',
embed_dim=64,
depths=[2, 2, 2, 2],
num_heads=[3, 6, 12, 24],
shift_size=5,
window_size=7,
mlp_ratio=4.0,
drop_rate=0.0,
drop_path_rate=0.1,
patch_norm=True,
out_indices=(0, 1, 2, 3),
use_checkpoint=False),
neck=dict(
type='FPNSPARSEFC',
in_channels=[64, 128, 256, 512],
out_channels=64,
num_outs=5),
rpn_head=dict(
type='RPNHead',
in_channels=64,
feat_channels=64,
anchor_generator=dict(
type='AnchorGenerator',
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
roi_head=dict(
type='PointRendRoIHead',
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
out_channels=64,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type='Shared2FCBBoxHead',
in_channels=64,
fc_out_channels=64,
roi_feat_size=7,
num_classes=1,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='PtsBorderLoss', loss_weight=0.5)),
mask_roi_extractor=dict(
type='GenericRoIExtractor',
roi_layer=dict(type='SimpleRoIAlign', output_size=14),
out_channels=64,
featmap_strides=[4],
aggregation='concat'),
mask_head=dict(
type='CoarseMaskHead',
num_fcs=1,
in_channels=64,
conv_out_channels=64,
fc_out_channels=64,
num_classes=1,
loss_mask=dict(
type='CrossEntropyLoss', use_mask=True, loss_weight=0.1)),
point_head=dict(
type='MaskPointHead',
num_fcs=1,
in_channels=64,
fc_channels=64,
num_classes=1,
coarse_pred_each_layer=True,
loss_point=dict(type='CrossEntropyLoss1', use_mask=True, loss_weight=1.0,key_item_weight=0.5))),
train_cfg=dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=-1,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=2000,
max_per_img=1000,
nms=dict(type='nms', iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
mask_size=7,
pos_weight=-1,
debug=False,
num_points=196,
oversample_ratio=3,
importance_sample_ratio=0.75)),
test_cfg=dict(
rpn=dict(
nms_pre=1000,
max_per_img=1000,
nms=dict(type='nms', iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type='nms', iou_threshold=0.5),
max_per_img=100,
mask_thr_binary=0.5,
subdivision_steps=5,
subdivision_num_points=784,
scale_factor=2)))
dataset_type = 'CocoDataset'
data_root = 'data/coco/stones/'
img_norm_cfg = dict(
mean=[103.53, 116.28, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
dict(
type='Resize',
img_scale=[(320, 160), (320, 192), (320, 224), (320, 256), (320, 288),
(320, 320)],
multiscale_mode='value',
keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(320, 320),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]
data = dict(
samples_per_gpu=8,
workers_per_gpu=2,
train=dict(
type='CocoDataset',
ann_file='data/coco/stones/annotations/instances_train2017.json',
img_prefix='data/coco/stones/train2017/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
dict(
type='Resize',
img_scale=[(320, 160), (320, 192), (320, 224), (320, 256),
(320, 288), (320, 320)],
multiscale_mode='value',
keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(
type='Collect',
keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'])
]),
val=dict(
type='CocoDataset',
ann_file='data/coco/stones/annotations/instances_val2017.json',
img_prefix='data/coco/stones/val2017/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(320, 320),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]),
test=dict(
type='CocoDataset',
ann_file='data/coco/stones/annotations/instances_val2017.json',
img_prefix='data/coco/stones/val2017/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(320, 320),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]))
evaluation = dict(metric=['bbox', 'segm'])
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[8, 11])
runner = dict(type='EpochBasedRunner', max_epochs=12)
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
custom_hooks = [dict(type='NumClassCheckHook')]
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
work_dir = './work_dirs/orenext_stonemlp_sparsefc_ptsdml1.py'
gpu_ids = range(0, 1)
Public dataset in COCO format:
{
"images": [
{
"height": 300,
"width": 300,
"id": 1,
"file_name": "00000.png"
},
{
"height": 300,
"width": 300,
"id": 2,
"file_name": "00001.png"
},
{
"height": 300,
"width": 300,
"id": 3,
"file_name": "00002.png"
},
{
"height": 300,
"width": 300,
"id": 4,
"file_name": "00003.png"
},
{
"height": 300,
"width": 300,
"id": 5,
"file_name": "00004.png"
}
],
"categories": [
{
"supercategory": "stones",
"id": 1,
"name": "1"
}
],
"annotations": [
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
209.0,
157.0,
90.0,
87.0
],
"area": 7830.0,
"category_id": 1,
"id": 1
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
45.0,
63.0,
91.0,
74.0
],
"area": 6734.0,
"category_id": 1,
"id": 2
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
142.0,
22.0,
97.0,
95.0
],
"area": 9215.0,
"category_id": 1,
"id": 3
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
0.0,
175.0,
88.0,
63.0
],
"area": 5544.0,
"category_id": 1,
"id": 4
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
207.0,
246.0,
50.0,
46.0
],
"area": 2300.0,
"category_id": 1,
"id": 5
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
154.0,
117.0,
104.0,
55.0
],
"area": 5720.0,
"category_id": 1,
"id": 6
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
44.0,
60.0,
94.0,
83.0
],
"area": 7802.0,
"category_id": 1,
"id": 7
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
212.0,
152.0,
87.0,
91.0
],
"area": 7917.0,
"category_id": 1,
"id": 8
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
203.0,
241.0,
59.0,
51.0
],
"area": 3009.0,
"category_id": 1,
"id": 9
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
138.0,
191.0,
72.0,
83.0
],
"area": 5976.0,
"category_id": 1,
"id": 10
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
136.0,
16.0,
108.0,
105.0
],
"area": 11340.0,
"category_id": 1,
"id": 11
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
155.0,
116.0,
102.0,
64.0
],
"area": 6528.0,
"category_id": 1,
"id": 12
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
32.0,
228.0,
96.0,
68.0
],
"area": 6528.0,
"category_id": 1,
"id": 13
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
144.0,
193.0,
66.0,
77.0
],
"area": 5082.0,
"category_id": 1,
"id": 14
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
0.0,
173.0,
89.0,
66.0
],
"area": 5874.0,
"category_id": 1,
"id": 15
},
{
"iscrowd": 0,
"image_id": 1,
"bbox": [
31.0,
227.0,
101.0,
70.0
],
"area": 7070.0,
"category_id": 1,
"id": 16
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
212.0,
6.0,
86.0,
86.0
],
"area": 7396.0,
"category_id": 1,
"id": 17
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
164.0,
219.0,
80.0,
73.0
],
"area": 5840.0,
"category_id": 1,
"id": 18
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
3.0,
26.0,
83.0,
61.0
],
"area": 5063.0,
"category_id": 1,
"id": 19
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
205.0,
94.0,
56.0,
47.0
],
"area": 2632.0,
"category_id": 1,
"id": 20
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
79.0,
183.0,
111.0,
79.0
],
"area": 8769.0,
"category_id": 1,
"id": 21
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
37.0,
244.0,
84.0,
53.0
],
"area": 4452.0,
"category_id": 1,
"id": 22
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
139.0,
44.0,
74.0,
78.0
],
"area": 5772.0,
"category_id": 1,
"id": 23
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
212.0,
5.0,
84.0,
90.0
],
"area": 7560.0,
"category_id": 1,
"id": 24
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
204.0,
91.0,
59.0,
52.0
],
"area": 3068.0,
"category_id": 1,
"id": 25
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
211.0,
156.0,
63.0,
83.0
],
"area": 5229.0,
"category_id": 1,
"id": 26
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
164.0,
217.0,
83.0,
78.0
],
"area": 6474.0,
"category_id": 1,
"id": 27
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
77.0,
180.0,
122.0,
85.0
],
"area": 10370.0,
"category_id": 1,
"id": 28
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
176.0,
149.0,
44.0,
71.0
],
"area": 3124.0,
"category_id": 1,
"id": 29
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
89.0,
112.0,
60.0,
74.0
],
"area": 4440.0,
"category_id": 1,
"id": 30
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
33.0,
74.0,
98.0,
94.0
],
"area": 9212.0,
"category_id": 1,
"id": 31
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
118.0,
121.0,
91.0,
83.0
],
"area": 7553.0,
"category_id": 1,
"id": 32
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
38.0,
78.0,
92.0,
82.0
],
"area": 7544.0,
"category_id": 1,
"id": 33
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
210.0,
159.0,
65.0,
78.0
],
"area": 5070.0,
"category_id": 1,
"id": 34
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
19.0,
183.0,
77.0,
92.0
],
"area": 7084.0,
"category_id": 1,
"id": 35
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
0.0,
26.0,
85.0,
61.0
],
"area": 5185.0,
"category_id": 1,
"id": 36
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
90.0,
114.0,
55.0,
70.0
],
"area": 3850.0,
"category_id": 1,
"id": 37
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
122.0,
122.0,
82.0,
81.0
],
"area": 6642.0,
"category_id": 1,
"id": 38
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
143.0,
42.0,
69.0,
78.0
],
"area": 5382.0,
"category_id": 1,
"id": 39
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
17.0,
185.0,
80.0,
89.0
],
"area": 7120.0,
"category_id": 1,
"id": 40
},
{
"iscrowd": 0,
"image_id": 2,
"bbox": [
32.0,
243.0,
89.0,
55.0
],
"area": 4895.0,
"category_id": 1,
"id": 41
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
165.0,
70.0,
79.0,
72.0
],
"area": 5688.0,
"category_id": 1,
"id": 42
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
37.0,
94.0,
83.0,
54.0
],
"area": 4482.0,
"category_id": 1,
"id": 43
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
226.0,
177.0,
66.0,
62.0
],
"area": 4092.0,
"category_id": 1,
"id": 44
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
164.0,
212.0,
62.0,
52.0
],
"area": 3224.0,
"category_id": 1,
"id": 45
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
80.0,
33.0,
111.0,
79.0
],
"area": 8769.0,
"category_id": 1,
"id": 46
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
79.0,
32.0,
117.0,
77.0
],
"area": 9009.0,
"category_id": 1,
"id": 47
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
110.0,
98.0,
59.0,
55.0
],
"area": 3245.0,
"category_id": 1,
"id": 48
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
159.0,
66.0,
88.0,
78.0
],
"area": 6864.0,
"category_id": 1,
"id": 49
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
208.0,
8.0,
69.0,
82.0
],
"area": 5658.0,
"category_id": 1,
"id": 50
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
214.0,
91.0,
85.0,
91.0
],
"area": 7735.0,
"category_id": 1,
"id": 51
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
224.0,
179.0,
71.0,
64.0
],
"area": 4544.0,
"category_id": 1,
"id": 52
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
144.0,
152.0,
86.0,
61.0
],
"area": 5246.0,
"category_id": 1,
"id": 53
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
74.0,
177.0,
70.0,
79.0
],
"area": 5530.0,
"category_id": 1,
"id": 54
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
178.0,
0.0,
41.0,
68.0
],
"area": 2788.0,
"category_id": 1,
"id": 55
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
32.0,
148.0,
80.0,
26.0
],
"area": 2080.0,
"category_id": 1,
"id": 56
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
18.0,
33.0,
82.0,
88.0
],
"area": 7216.0,
"category_id": 1,
"id": 57
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
210.0,
8.0,
63.0,
74.0
],
"area": 4662.0,
"category_id": 1,
"id": 58
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
16.0,
31.0,
83.0,
94.0
],
"area": 7802.0,
"category_id": 1,
"id": 59
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
78.0,
178.0,
67.0,
75.0
],
"area": 5025.0,
"category_id": 1,
"id": 60
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
147.0,
153.0,
82.0,
56.0
],
"area": 4592.0,
"category_id": 1,
"id": 61
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
215.0,
95.0,
84.0,
83.0
],
"area": 6972.0,
"category_id": 1,
"id": 62
},
{
"iscrowd": 0,
"image_id": 3,
"bbox": [
181.0,
2.0,
35.0,
63.0
],
740.0,
|
open
|
2024-07-17T08:33:31Z
|
2024-07-31T08:46:30Z
|
https://github.com/open-mmlab/mmdetection/issues/11858
|
[] |
xiaoran8899
| 1
|
axnsan12/drf-yasg
|
django
| 296
|
Only show examples not the model
|
For API Responses, how can I only show examples and not the model?
In fact, I do not want to show any of my model definitions.
Thanks.
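A hedged sketch using drf-yasg's swagger-ui settings (key names assumed from the drf-yasg settings documentation): render the example tab by default and collapse the standalone "Models" section:
```python
# settings.py: assumed setting names; verify against the drf-yasg documentation
SWAGGER_SETTINGS = {
    'DEFAULT_MODEL_RENDERING': 'example',  # show the example tab instead of the schema
    'DEFAULT_MODEL_DEPTH': -1,             # hide the "Models" section at the bottom
}
```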
|
closed
|
2019-01-18T01:44:20Z
|
2019-01-29T06:55:38Z
|
https://github.com/axnsan12/drf-yasg/issues/296
|
[] |
oneandonlyonebutyou
| 1
|
Kanaries/pygwalker
|
pandas
| 15
|
How to make a histogram
|
How do you make a histogram chart of one attribute (column)? (without pre-calculating the histogram data before putting the data into pygwalker, of course)
I fiddled with the UI for a while but couldn't find a way.
If it's not possible right now, I'd like it to be implemented.
Thanks
|
closed
|
2023-02-21T07:58:58Z
|
2024-03-19T13:57:53Z
|
https://github.com/Kanaries/pygwalker/issues/15
|
[
"good first issue",
"graphic-walker",
"P1"
] |
karamari
| 6
|
waditu/tushare
|
pandas
| 1,418
|
Financial data (net profit) is missing for more than 10% of stocks; hoping this can be fixed soon
|
1. Quite a lot of A-share financial report data is missing across the whole market; I hope it can be improved. The attachment lists the stock codes and report dates of the missing data found so far.
[miss info.xlsx](https://github.com/waditu/tushare/files/5140909/miss.info.xlsx)
2. The gap between the financial announcement date and the actual announcement date is too large:
a. Part of this may be because the original data was adjusted, but when update_flag=0, most of the data is simply missing.
|
open
|
2020-08-28T08:23:46Z
|
2021-01-02T05:49:01Z
|
https://github.com/waditu/tushare/issues/1418
|
[] |
ZhaoBarry
| 2
|
ExpDev07/coronavirus-tracker-api
|
fastapi
| 16
|
Server error
|
It seems the server is down: https://coronavirus-tracker-api.herokuapp.com/all returns a 500 internal error
|
closed
|
2020-03-07T01:24:31Z
|
2020-03-07T02:30:19Z
|
https://github.com/ExpDev07/coronavirus-tracker-api/issues/16
|
[
"bug"
] |
altezza04
| 4
|
benbusby/whoogle-search
|
flask
| 179
|
[BUG] settings set by user stored globally
|
**Describe the bug**
settings applied by users are set/stored globally/persistently.
**To Reproduce**
Steps to reproduce the behavior:
A user changes a setting and clicks apply.
Another user opens Whoogle from a different computer, browser, etc., and finds that the setting applied by the previous user has been applied globally.
**Deployment Method**
manual build with run executable.
**Version of Whoogle Search**
latest from source.
**Desktop (please complete the following information):**
any device
**Smartphone (please complete the following information):**
any device.
**Additional context**
instance is public.
I've also noticed that this issue is not just affecting my instance, but other public instances as well.
mine: https://whoogle.himiko.cloud
other: https://whoogle.sdf.org
|
closed
|
2021-01-21T19:49:51Z
|
2021-05-05T16:24:56Z
|
https://github.com/benbusby/whoogle-search/issues/179
|
[
"bug"
] |
ghost
| 3
|
docarray/docarray
|
pydantic
| 1,250
|
Implement a Document Index backed by Elasticsearch v8+
|
We have a related issue #1209 which implements `ElasticDocuementIndex` based on version `elasticsearch==7.10.1`.
There are two separate implementations because Elasticsearch v8 changed its API and supports HNSW-based vector search.
But #1181
> ES changed their Apache license to SSPL which is a business-unfriendly license after 7.10.2.
|
closed
|
2023-03-17T03:20:59Z
|
2023-04-22T09:35:20Z
|
https://github.com/docarray/docarray/issues/1250
|
[] |
AnneYang720
| 0
|
widgetti/solara
|
fastapi
| 825
|
Vue components unable to find template vue files when using frozen/pyinstaller application on Windows
|
I have been using [PyInstaller](https://pyinstaller.org/en/stable/) to create an executable .exe file for my solara application, and that has, in general, worked very well. However, recently I started using the [Menu](https://github.com/widgetti/solara/blob/8ef0826818ae3e08026c0904c2acdec77aeef195/solara/lab/components/menu.py#L8) component and that caused the following issue when I was building the application to an executable using PyInstaller:
```log
Traceback (most recent call last):
File "reacton\core.py", line 388, in _create_widget
File "ipyvue\VueTemplateWidget.py", line 144, in __init__
File "solara\server\patch.py", line 250, in wrapper
File "ipyvue\Template.py", line 47, in get_template
FileNotFoundError: [Errno 2] No such file or directory: 'solara\\lab\\components\\menu.vue'
```
On the other hand, the solara application was working completely fine if I ran it as a normal python program from the terminal.
I believe I have traced the problem to the [component_vue](https://github.com/widgetti/solara/blob/8ef0826818ae3e08026c0904c2acdec77aeef195/solara/components/component_vue.py#L64) decorator, which in turn calls a [wrapper](https://github.com/widgetti/solara/blob/8ef0826818ae3e08026c0904c2acdec77aeef195/solara/components/component_vue.py#L48) function that uses `inspect.getfile` to get the path to the file where the decorated function is defined. It looks as follows:
```python
def _widget_vue(vue_path: str, vuetify=True) -> Callable[[Callable[P, None]], Type[v.VuetifyTemplate]]:
def decorator(func: Callable[P, None]):
class VuetifyWidgetSolara(v.VuetifyTemplate):
template_file = (inspect.getfile(func), vue_path)
class VueWidgetSolara(vue.VueTemplate):
template_file = (inspect.getfile(func), vue_path)
base_class = VuetifyWidgetSolara if vuetify else VueWidgetSolara
widget_class = _widget_from_signature("VueWidgetSolaraSub", base_class, func, "vue_")
return widget_class
return decorator
```
We can see here that the call `inspect.getfile(func)` is expected to provide the *full absolute path* to the file. When not using a frozen executable on Windows (or using some other platform like Mac), this works as expected, but when using the frozen executable on Windows, `inspect.getfile(func)` will return a relative path, leading to the vue file not being found.
A simple solution (which I have tested already) is to surround the `inspect.getfile(func)` with `os.path.abspath`, as this will correctly resolve the path, no matter if the inspect module returns a relative path, or not.
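For illustration, a minimal sketch of the change I mean; `resolve_template_file` is a made-up helper name, only `inspect` and `os` are real:
```python
# Sketch only: not solara's actual code. The point is wrapping
# inspect.getfile(func) in os.path.abspath so that frozen (PyInstaller)
# builds on Windows still get an absolute path to the .vue template.
import inspect
import os


def resolve_template_file(func, vue_path: str) -> tuple:
    # inspect.getfile may return a relative path in a frozen executable;
    # os.path.abspath makes it absolute in both the frozen and normal case.
    return (os.path.abspath(inspect.getfile(func)), vue_path)
```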
|
closed
|
2024-10-21T11:38:16Z
|
2024-10-25T09:55:04Z
|
https://github.com/widgetti/solara/issues/825
|
[] |
suhren
| 3
|
sinaptik-ai/pandas-ai
|
data-visualization
| 1,134
|
Add support for processing and analyzing complex table headers
|
### 🚀 The feature
I propose adding a feature to PandasAI that allows for handling and analyzing complex table headers. Currently, PandasAI supports basic table headers, but there are cases where tables have more complex structures, such as multi-level headers or headers with merged cells. This feature would enable users to work with such tables more effectively.
### Motivation, pitch
The motivation behind this proposal is to enhance the functionality of PandasAI and make it more versatile in handling diverse datasets. Many real-world datasets contain tables with complex headers, and being able to process and analyze them accurately would greatly benefit users.
### Alternatives
_No response_
### Additional context
_No response_
|
closed
|
2024-04-25T03:53:20Z
|
2024-08-01T16:05:29Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/1134
|
[] |
BanDianMan
| 0
|
Asabeneh/30-Days-Of-Python
|
python
| 168
|
I would like to do this course? Is that ok? Please!
|
open
|
2021-08-26T16:39:39Z
|
2021-09-12T15:13:22Z
|
https://github.com/Asabeneh/30-Days-Of-Python/issues/168
|
[] |
Aaron-1974
| 1
|
|
cookiecutter/cookiecutter-django
|
django
| 5,196
|
AttributeError: This FileResponse instance has no `content` attribute. Use `streaming_content` instead.
|
## What happened?
```AttributeError: This FileResponse instance has no `content` attribute. Use `streaming_content` instead.```
I am getting this error while clicking on any media download link in the Django admin. I checked inside the Docker container and the files are stored properly. I have used cookiecutter to start the project.
When i
## What should've happened instead?
File should be downloaded.
## Additional details
<!-- To assist you best, please include commands that you've run, options you've selected and any relevant logs -->
- Host system configuration:
- Version of cookiecutter CLI (get it with `cookiecutter --version`): Cookiecutter 1.7.3 from C:\Python310\lib\site-packages (Python 3.1)
- OS name and version:
On Linux, run
```bash
lsb_release -a 2> /dev/null || cat /etc/redhat-release 2> /dev/null || cat /etc/*-release 2> /dev/null || cat /etc/issue 2> /dev/null
```
On MacOs, run
```bash
sw_vers
```
On Windows, via CMD, run
```
systeminfo | findstr /B /C:"OS Name" /C:"OS Version"
```
```bash
# OS Name: Microsoft Windows 10 Home Single Language
# OS Version: 10.0.19045 N/A Build 19045
```
- Python version, run `python3 -V`: Python 3.12.3
- Docker version (if using Docker), run `docker --version`: Docker version 26.1.4, build 5650f9b
- docker compose version (if using Docker), run `docker compose --version`: Docker Compose version v2.27.1-desktop.1
- ...
- Options selected and/or [replay file](https://cookiecutter.readthedocs.io/en/latest/advanced/replay.html):
On Linux and macOS: `cat ${HOME}/.cookiecutter_replay/cookiecutter-django.json`
(Please, take care to remove sensitive information)
```json
```
<summary>
Logs:
<details>
<pre>
$ cookiecutter https://github.com/cookiecutter/cookiecutter-django
project_name [Project Name]: ...
</pre>
</details>
</summary>
|
closed
|
2024-07-08T10:10:15Z
|
2024-08-07T06:06:09Z
|
https://github.com/cookiecutter/cookiecutter-django/issues/5196
|
[
"bug"
] |
ShubhamKarala
| 4
|
MorvanZhou/tutorials
|
numpy
| 63
|
How to resolve: name 'prediction' is not defined
|
Running the code in https://github.com/MorvanZhou/tutorials/tree/master/tensorflowTUT/tf18_CNN2
reports: name 'prediction' is not defined
|
closed
|
2018-03-19T09:32:50Z
|
2018-03-19T10:02:15Z
|
https://github.com/MorvanZhou/tutorials/issues/63
|
[] |
ChangedenCZD
| 5
|
httpie/cli
|
python
| 825
|
Please add option to reformat XML body
|
```
$ TERM=xterm-256color http http://localhost:8081/api/grid-service/GridInquiryService/GridInquiry @SOAP.realtime.xml
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 921
Content-Type: text/xml;charset=ISO-8859-1
Date: Sun, 29 Dec 2019 10:07:58 GMT
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><ns4:realtimeResponse xmlns:ns4="http://rdc.com/soap/GridInquiry" xmlns:ns3="http://rdc.com/soap" xmlns:ns2="http://service.rdc.com"><return><ns2:Status><ns2:StatusCode>0</ns2:StatusCode><ns2:Severity>INFO</ns2:Severity></ns2:Status><ns2:GridInquiryStatusRec><ns2:GridInquiryId>3380</ns2:GridInquiryId><ns2:GridInquiryStatus><ns2:GridInquiryStatusCode>COMPLETE</ns2:GridInquiryStatusCode></ns2:GridInquiryStatus></ns2:GridInquiryStatusRec><ns2:GridInquiryResultsRec><ns2:GridInquiryId>3380</ns2:GridInquiryId><ns2:GridInquiryStatus><ns2:GridInquiryStatusCode>COMPLETE</ns2:GridInquiryStatusCode></ns2:GridInquiryStatus><ns2:GridInquiryReviewStatus><ns2:GridInquiryReviewStatusCode>NOMATCH</ns2:GridInquiryReviewStatusCode></ns2:GridInquiryReviewStatus></ns2:GridInquiryResultsRec></return></ns4:realtimeResponse></soap:Body></soap:Envelope>
```

So the output is even colorized, but it lacks indentation structure. It would be very cool to see such a `--reformat` option (e.g. like in [highlight](https://gitlab.com/saalen/highlight)).
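Until such an option exists, a workaround sketch (not an httpie feature; the script name is made up, and it assumes httpie's `--body` flag to print only the response body):
```python
# pretty_xml.py -- workaround sketch, not part of httpie.
# Usage: http --body http://localhost:8081/... @SOAP.realtime.xml | python pretty_xml.py
import sys
import xml.dom.minidom

print(xml.dom.minidom.parseString(sys.stdin.read()).toprettyxml(indent="  "))
```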
|
closed
|
2019-12-29T10:11:34Z
|
2021-08-31T20:49:53Z
|
https://github.com/httpie/cli/issues/825
|
[] |
Hubbitus
| 4
|
scikit-optimize/scikit-optimize
|
scikit-learn
| 561
|
Use global acq_optimizers for opt in gp_minimize
|
This is more likely a feature request๏ผif not available now๏ผ.
As widely known that there're multiple minimas when **EI** , **LCB**, **PI** infill plan are used.
From what I can get in this code, if kwargs _lbfgs_ is set, _n_points_ are used as the start point of a local lbfgs optimize, this is cool but is there any way to just assign a global optimizer for this opt? The sampling plan is essential to the construction of the surrogate model, a global optimizer may provide better performance.
There're some python based existing global optimize tools(e.g Inspyred, DEAP, GAFT), or it is cool if a callable user defined optimizer could be passed in like the _GaussProcessRegressor_ in sklearn
|
open
|
2017-11-21T15:39:01Z
|
2018-01-10T22:37:08Z
|
https://github.com/scikit-optimize/scikit-optimize/issues/561
|
[
"Enhancement",
"API",
"New feature",
"Major"
] |
TsingQAQ
| 4
|
axnsan12/drf-yasg
|
rest-api
| 768
|
Change responses typing
|
# Bug Report
## Description
change typing
from:
```
:type responses: dict[int or str, (drf_yasg.openapi.Schema or drf_yasg.openapi.SchemaRef or
drf_yasg.openapi.Response or str or rest_framework.serializers.Serializer)]
```
to
```
:type responses: dict[int or str, (drf_yasg.openapi.Schema or drf_yasg.openapi.SchemaRef or
drf_yasg.openapi.Response or str or rest_framework.serializers.Serializer or type[rest_framework.serializers.Serializer])]
```
to include also classes of rest_framework.serializers.Serializer
## Is this a regression?
<!-- Did this behavior use to work in the previous version? -->
<!-- edit: --> Yes, the previous version in which this bug was not present was: ...
## Minimal Reproduction
```code
```
## Stack trace / Error message
```code
```
<!-- If the issue is accompanied by an exception or an error, please share it below: -->
## Your Environment
```code
```
|
open
|
2022-01-25T11:44:06Z
|
2025-03-07T12:11:18Z
|
https://github.com/axnsan12/drf-yasg/issues/768
|
[
"triage"
] |
Nazareka
| 1
|
roboflow/supervision
|
pytorch
| 1,619
|
Bug found in ConfusionMatrix.from_detections
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
Issue found in the code when producing a confusion matrix for object detection. It seems the FN count was being added to the matrix incorrectly. Here is the code that was problematic for me; when removing the else condition, I was getting the correct TP value. It seems that the num_classes column ends up at the same position as detection_classes[matched_detection_idx[j]].
```python
for i, true_class_value in enumerate(true_classes):
j = matched_true_idx == i
print('sum(j)', sum(j))
if matches.shape[0] > 0 and sum(j) == 1:
result_matrix[
true_class_value, detection_classes[matched_detection_idx[j]]
] += 1 # TP
else:
result_matrix[true_class_value, num_classes] += 1 # FN
```
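For illustration, a self-contained toy version of the loop with the change I describe (the unconditional FN increment removed); the input arrays below are made up, in the real code they come from the IoU matching step:
```python
import numpy as np

# Toy inputs for illustration only.
num_classes = 2
true_classes = np.array([0, 1])
detection_classes = np.array([0, 1])
matches = np.array([[0, 0], [1, 1]])  # (true_idx, detection_idx) pairs
matched_true_idx = matches[:, 0]
matched_detection_idx = matches[:, 1]
result_matrix = np.zeros((num_classes + 1, num_classes + 1))

for i, true_class_value in enumerate(true_classes):
    j = matched_true_idx == i
    if matches.shape[0] > 0 and sum(j) == 1:
        result_matrix[
            true_class_value, detection_classes[matched_detection_idx[j]]
        ] += 1  # TP
    # else-branch (FN increment) removed, as reported to give the correct TP count
```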
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
open
|
2024-10-24T15:02:23Z
|
2025-01-28T22:37:40Z
|
https://github.com/roboflow/supervision/issues/1619
|
[
"bug"
] |
chiggins2024
| 3
|
dadadel/pyment
|
numpy
| 76
|
Multi-line args descriptions are not indented correctly
|
Here's an example that illustrates the issue.
```
def foo(baz=1):
"""
Args:
baz: long-description breaking into multiple
lines (Default value = 1)
Returns:
"""
pass
```
Formatting under google style results in:
```
def foo(baz=1):
"""
Args:
baz: long-description breaking into multiple
lines (Default value = 1)
Returns:
"""
pass
```
where the second line of the arg description is indented incorrectly.
|
open
|
2019-01-12T11:53:51Z
|
2021-06-12T18:17:34Z
|
https://github.com/dadadel/pyment/issues/76
|
[] |
jethrokuan
| 1
|
ranaroussi/yfinance
|
pandas
| 2,104
|
Getting empty DataFrame from data download
|
### Describe bug
I'm getting an empty dataframe for my downloads. I've looked for others who've had this problem, but I must be searching incorrectly or I'm the only one. I upgraded to the latest version of yfinance (and verified), plus cleared my yfinance cache.
### Simple code that reproduces your problem
import yfinance as yf
spy = yf.Ticker('SPY')
spy.history(period='1mo')
# same result with MSFT
### Debug log
DEBUG Entering history()
DEBUG Entering _fetch_ticker_tz()
DEBUG Entering get()
DEBUG Entering _make_request()
DEBUG url=https://query2.finance.yahoo.com/v8/finance/chart/SPY
DEBUG params=frozendict.frozendict({'range': '1d', 'interval': '1d'})
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
### Bad data proof
Failed to get ticker 'SPY' reason: HTTPSConnectionPool(host='fc.yahoo.com', port=443): Max retries exceeded with url: / (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x00000209144B4650>: Failed to resolve 'fc.yahoo.com' ([Errno 11001] getaddrinfo failed)"))
$SPY: possibly delisted; no price data found (period=1mo)
Empty DataFrame
Columns: [Open, High, Low, Close, Adj Close, Volume]
Index: []
### `yfinance` version
0.2.48
### Python version
3.12
### Operating system
Windows 11
|
closed
|
2024-10-29T20:09:06Z
|
2024-10-29T20:41:37Z
|
https://github.com/ranaroussi/yfinance/issues/2104
|
[] |
jpkern0
| 1
|
koxudaxi/datamodel-code-generator
|
fastapi
| 2,087
|
Type annotation falls back to Any if property schema is partially overridden
|
**Describe the bug**
Importing a schema and partially overriding it (e.g., setting a different default value) leads to lost type information
**To Reproduce**
schemaA
```json
{
"type": "object",
"title": "SchemaA",
"properties": {
"test": {
"title": "test",
"type": "string",
"default": "default_value1"
}
}
}
```
schemaB.json
```json
{
"allOf": [
{
"$ref": "SchemaA.json"
}
],
"type": "object",
"title": "SchemaB",
"properties": {
"test": {
"default": "default_value2"
}
}
}
```
Used commandline:
```
$ datamodel-codegen --input schemaB.json --output model.py
```
Result model.py: the type of `test` in SchemaB is reset to `Any`
```py
class SchemaA(BaseModel):
test: Optional[str] = Field('default_value1', title='test')
class SchemaB(SchemaA):
test: Optional[Any] = 'default_value2'
```
**Expected behavior**
type of `test` in SchemaB is still `str`
```py
class SchemaA(BaseModel):
test: Optional[str] = Field('default_value1', title='test')
class SchemaB(SchemaA):
test: Optional[str] = 'default_value2'
```
**Version:**
- OS: Windows 10
- Python version: 3.11 | 3.9
- datamodel-code-generator version: 0.25.6 | 0.26.0
|
open
|
2024-09-08T14:37:07Z
|
2024-09-08T14:37:07Z
|
https://github.com/koxudaxi/datamodel-code-generator/issues/2087
|
[] |
simontaurus
| 0
|
jonaswinkler/paperless-ng
|
django
| 102
|
Filter improvements
|
- When selecting certain filters, the filter editor should preselect sensible values.
- Is in inbox = yes
- Has any tag = yes
- The user should be able to filter by these additional filters
- Does not have tag X
|
closed
|
2020-12-07T00:21:28Z
|
2020-12-07T16:49:03Z
|
https://github.com/jonaswinkler/paperless-ng/issues/102
|
[] |
jonaswinkler
| 0
|
ray-project/ray
|
data-science
| 50,890
|
[Data] Ordering of blocks after map and map_batches
|
### What happened + What you expected to happen
I have a dataset in files; rows are sorted within files, and files are sorted by name in the same folder. After applying `map` or `map_batches`, the order of blocks is different than before applying `map` or `map_batches`.
Expected behavior: `map` and `map_batches` don't change the ordering of rows.
### Versions / Dependencies
ray==2.42.1
### Reproduction script
```
import time
import ray
ray.init()
dataset = ray.data.from_items(
[
{"time_to_sleep": 3},
{"time_to_sleep": 2},
{"time_to_sleep": 1},
],
override_num_blocks=3
)
print(dataset.take_all())
# output: [{'time_to_sleep': 3}, {'time_to_sleep': 2}, {'time_to_sleep': 1}]
def map_simple(x):
time.sleep(x['time_to_sleep'])
return x
print(dataset.map(map_simple).take_all())
# output: [{'time_to_sleep': 1}, {'time_to_sleep': 2}, {'time_to_sleep': 3}]
def my_map_batches(x):
time.sleep(x['time_to_sleep'][0])
yield {'result': [x['time_to_sleep'][0]]}
mapped = dataset.map_batches(my_map_batches)
print(mapped.take_all())
# output: [{'result': 1}, {'result': 2}, {'result': 3}]
```
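A possible workaround (my assumption, not verified for this case): Ray Data has an execution option to preserve ordering, at the cost of some pipelining:
```python
# Workaround sketch: ask Ray Data to keep block/row order deterministic.
import ray

ctx = ray.data.DataContext.get_current()
ctx.execution_options.preserve_order = True
```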
### Issue Severity
High: It blocks me from completing my task.
|
closed
|
2025-02-25T09:34:49Z
|
2025-03-05T19:01:37Z
|
https://github.com/ray-project/ray/issues/50890
|
[
"bug"
] |
jakac
| 9
|
dfki-ric/pytransform3d
|
matplotlib
| 189
|
Bug in `axis_angle_from_matrix`
|
```
/home/dfki.uni-bremen.de/afabisch/anaconda3/envs/robot/lib/python3.8/site-packages/pytransform3d/rotations/_conversions.py:1597: RuntimeWarning: invalid value encountered in sqrt
a[:3] = np.sqrt(0.5 * (np.diag(R) + 1.0)) * np.sign(axis_unnormalized)
[ 0.81725001 0.19145 -0.00549102 nan nan nan]
```
workaround:
```python
a[:3] = np.sqrt(0.5 * np.abs(np.diag(R) + 1.0)) * np.sign(axis_unnormalized)
```
Pickled test data in cloud, load with
```python
import pickle
R = pickle.load(open("data.pickle", "rb"))
pr.compact_axis_angle_from_matrix(R)
```
Fixed in #188
|
closed
|
2022-06-01T15:58:27Z
|
2022-06-03T10:50:46Z
|
https://github.com/dfki-ric/pytransform3d/issues/189
|
[] |
AlexanderFabisch
| 1
|
polakowo/vectorbt
|
data-visualization
| 388
|
Issue with Candlestick app
|
Dash is running on http://127.0.0.1:8050/
* Serving Flask app "__main__" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
An exception has occurred, use %tb to see the full traceback.
SystemExit: 1
C:\Users\A967\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3445: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D.
warn("To exit: use 'exit', 'quit', or Ctrl-D.", stacklevel=1)
Please advise what the issue could be; I am not using Docker.
|
closed
|
2022-02-10T09:01:52Z
|
2022-02-10T09:16:26Z
|
https://github.com/polakowo/vectorbt/issues/388
|
[] |
medknecth
| 0
|
Integuru-AI/Integuru
|
automation
| 31
|
Support for web sockets
|
It seems like WebSockets are ignored in the HAR?
## Steps to reproduce
Run `create_har.py` and navigate to https://echo.websocket.org/.ws
Analysis looks like the following
```bash
% poetry run integuru --prompt "send messages on a websocket" --model gpt-4o --generate-code
------------------------Successfully analyzed!!!-------------------------------
โโโ [master_curl] [node_id: 83ea3e62-803c-48ca-aee4-2c2879c39bec]
[dynamic_parts: []]
[extracted_parts: ['None']]
[curl -X GET -H ':authority: echo.websocket.org' -H ':method: GET' -H ':path: /.ws' -H ':scheme: https' -H 'priority: u=0, i' -H 'upgrade-insecure-requests: 1' 'https://echo.websocket.org/.ws']
--------------Generating code------------
โโโ [master_curl] [node_id: 83ea3e62-803c-48ca-aee4-2c2879c39bec] [dynamic_parts: []] [extracted_parts: ['None']] [input_variables: None] [curl -X GET -H ':authority: echo.websocket.org' -H ':method: GET' -H ':path: /.ws' -H ':scheme: https' -H 'priority: u=0, i' -H 'upgrade-insecure-requests: 1' 'https://echo.websocket.org/.ws']
Aggregated function calls have been saved to 'generated_code.py'
--------------Generated integration code in generated_code.py!!------------
```
### Incorrectly generated code
```python
import requests
def get_echo_websocket_response(cookie_string):
url = 'https://echo.websocket.org/.ws'
headers = {
'priority': 'u=0, i',
'upgrade-insecure-requests': '1',
'Cookie': cookie_string
}
response = requests.get(url, headers=headers)
# Since there are no variables to parse, return an empty dictionary
return {}
cookie_string = 'key1=value1;key2=value2'
# Convert cookie_string to a dict to retrieve values
cookies_dict = dict(item.split('=') for item in cookie_string.split(';'))
# Retrieve values from cookies_dict if needed
value1 = cookies_dict.get('key1')
value2 = cookies_dict.get('key2')
# Call the function
result = get_echo_websocket_response(cookie_string)
print(result)
```
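For reference, a rough sketch (assuming the third-party `websockets` package is acceptable) of what working code for this flow could look like, since the endpoint speaks WebSocket rather than plain HTTP:
```python
# Sketch only; not Integuru output.
import asyncio

import websockets


async def send_echo_message(message: str) -> str:
    async with websockets.connect("wss://echo.websocket.org/.ws") as ws:
        await ws.send(message)
        # Note: the echo server may send a greeting frame before echoing.
        return await ws.recv()


if __name__ == "__main__":
    print(asyncio.run(send_echo_message("hello")))
```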
|
open
|
2025-01-08T20:18:43Z
|
2025-01-08T20:18:43Z
|
https://github.com/Integuru-AI/Integuru/issues/31
|
[] |
sughodke
| 0
|
mwaskom/seaborn
|
data-science
| 2,762
|
kdeplot raising LinAlgError("singular matrix") instead of warning
|
Example code:
```
import pandas as pd
import seaborn as sns
df = pd.DataFrame({'a': [1929245168.06679]*18})
sns.kdeplot(data=df, x='a')
```
seaborn version 0.11.2
python version 3.8.12
Error: `numpy.linalg.LinAlgError: singular matrix`
Expected: `UserWarning: Dataset has 0 variance; skipping density estimate. Pass 'warn_singular=False' to disable this warning.`
I tried other types of singular matrices, and the singular warning implemented in 0.11.2 works as expected. (for example: `pd.DataFrame({'a': [0]*10})` or `pd.DataFrame({'a': [0]})`)
The problem seems to arise for particular floats. I tried other big floats and changed the value slightly, resulting in different outcomes.
Another interesting sequence:
`pd.DataFrame({'a': [1929245168.06679]*18})` -> error
`pd.DataFrame({'a': [1929245160.06679]*18})` -> error
`pd.DataFrame({'a': [1929245100.06679]*18})` -> no error and no warning (singular_warn is True)
|
closed
|
2022-03-16T09:53:06Z
|
2022-08-02T11:36:13Z
|
https://github.com/mwaskom/seaborn/issues/2762
|
[
"rough-edge",
"mod:distributions"
] |
Proch92
| 3
|
pytorch/vision
|
machine-learning
| 8,166
|
RuntimeError: cuda video backend is not available.
|
### 🐛 Describe the bug
When trying to set the videoreader backend to cuda (`torchvision.set_video_backend('cuda')`) I get the error below:
`
RuntimeError: cuda video backend is not available.
`
I followed the instructions to use the videoreader on CUDA, i.e. I installed pytorch nightly and built torchvision from source. My Dockerfile is given below:
```
FROM nvidia/cuda:11.8.0-devel-ubuntu22.04
RUN apt-get update
RUN apt-get install -y python3-pip
# RUN pip3 install --upgrade pip3
RUN apt-get update
RUN yes | apt install nvidia-cuda-toolkit
RUN pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
RUN git clone https://github.com/pytorch/vision.git
WORKDIR "/vision"
RUN python3 setup.py develop
RUN pip3 install ffmpeg-python
RUN pip3 install av --upgrade
```
As far as I can see the environment has been installed with the expected versions etc. Is this a bug or am I doing something wrong?
### Versions
PyTorch version: 2.2.0.dev20231213+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.0-25-amd64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
Nvidia driver version: 535.146.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton==2.1.0+bcad9dabe1
[pip3] torch==2.2.0.dev20231213+cu118
[pip3] torchaudio==2.2.0.dev20231213+cu118
[pip3] torchvision==0.18.0.dev20231213+cu118
[conda] Could not collect
|
open
|
2023-12-18T12:20:55Z
|
2024-02-13T15:54:04Z
|
https://github.com/pytorch/vision/issues/8166
|
[] |
Caspeerrr
| 2
|
man-group/arctic
|
pandas
| 828
|
Fix build broken due to group name change
|
The group name was changed from manahl -> man-group and that seems to have broken the travis build. Probably needs a tweak in the settings somewhere
|
closed
|
2019-11-19T12:28:49Z
|
2019-11-19T13:31:26Z
|
https://github.com/man-group/arctic/issues/828
|
[] |
shashank88
| 2
|
iterative/dvc
|
machine-learning
| 10,398
|
dvc exp run --run-all: only runs two experiments and then hangs (similar to #8165)
|
# Bug Report
<!--
## Issue name
-->
## Description
When running dvc exp run --run-all, only two experiments get run fully, and then the next experiment never starts. I have tried twice, and each time two consecutive experiments ran fully.
<!--
-->
### Reproduce
Add multiple experiments to the queue with dvc exp run --queue
dvc exp run --run-all
<!--
Step list of how to reproduce the bug
-->
<!--
Example:
1. dvc init
2. Copy dataset.zip to the directory
3. dvc add dataset.zip
4. dvc run -d dataset.zip -o model ./train.sh
5. modify dataset.zip
6. dvc repro
-->
### Expected
All experiments should run sequentially.
<!--
A clear and concise description of what you expect to happen.
-->
### Environment information
<!--
This is required to ensure that we can reproduce the bug.
-->
**Output of `dvc doctor`:**
DVC version: 3.50.0 (pip)
-------------------------
Platform: Python 3.10.12 on Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Subprojects:
dvc_data = 3.15.1
dvc_objects = 5.1.0
dvc_render = 1.0.2
dvc_task = 0.4.0
scmrepo = 3.3.1
Supports:
http (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
https (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
s3 (s3fs = 2024.3.1, boto3 = 1.28.64)
Config:
Global: /root/.config/dvc
System: /etc/xdg/dvc
Cache types: symlink
Cache directory: ext4 on /dev/sde
Caches: local
Remotes: s3
Workspace directory: ext4 on /dev/sde
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/508ec894ab4fcd5646b1ead942fa8b35
**Additional Information (if any):**
cat .dvc/tmp/exps/celery/dvc-exp-worker-1.out
/usr/local/lib/python3.10/dist-packages/celery/platforms.py:829: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
[2024-04-18 15:51:26,850: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost'
-------------- dvc-exp-4ac376-1@localhost v5.4.0 (opalescent)
--- ***** -----
-- ******* ---- Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 2024-04-18 15:51:26
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: dvc-exp-local:0x7f2defb7c700
- ** ---------- .> transport: filesystem://localhost//
- ** ---------- .> results: file:///workspaces/platform/.dvc/tmp/exps/celery/result
- *** --- * --- .> concurrency: 1 (thread)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. dvc.repo.experiments.queue.tasks.cleanup_exp
. dvc.repo.experiments.queue.tasks.collect_exp
. dvc.repo.experiments.queue.tasks.run_exp
. dvc.repo.experiments.queue.tasks.setup_exp
. dvc_task.proc.tasks.run
[2024-04-18 15:51:26,860: WARNING/MainProcess] /usr/local/lib/python3.10/dist-packages/celery/worker/consumer/consumer.py:508: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
warnings.warn(
[2024-04-18 15:51:26,860: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost'
[2024-04-18 15:51:26,860: INFO/MainProcess] Connected to filesystem://localhost//
[2024-04-18 15:51:26,861: INFO/MainProcess] dvc-exp-4ac376-1@localhost ready.
[2024-04-18 15:51:26,862: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[26ac9773-ffd9-4b1e-abbe-dcf5fbbf5741] received
[2024-04-18 15:53:14,714: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[26ac9773-ffd9-4b1e-abbe-dcf5fbbf5741] succeeded in 107.85451585499686s: None
[2024-04-18 15:53:16,067: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[ad2d173c-7348-4013-a417-9d946d3d19f0] received
[2024-04-18 15:55:13,671: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[ad2d173c-7348-4013-a417-9d946d3d19f0] succeeded in 117.60698523299652s: None
[2024-04-18 15:55:15,405: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[df43ce08-360d-44c2-ac3f-b0270fa693e4] received
[2024-04-18 15:56:50,571: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[df43ce08-360d-44c2-ac3f-b0270fa693e4] succeeded in 95.1688057010033s: None
[2024-04-18 15:56:50,715: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[22e9611d-9bb7-4aeb-8227-1cd8c2e4fcf9] received
[2024-04-18 15:58:48,605: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[22e9611d-9bb7-4aeb-8227-1cd8c2e4fcf9] succeeded in 117.89407574100187s: None
[2024-04-18 15:59:02,593: INFO/MainProcess] monitor: shutting down due to empty queue.
[2024-04-18 15:59:02,966: WARNING/MainProcess] Got shutdown from remote
[2024-04-18 15:59:02,968: INFO/MainProcess] cleaning up FSApp broker.
[2024-04-18 15:59:02,981: INFO/MainProcess] done
/usr/local/lib/python3.10/dist-packages/celery/platforms.py:829: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
[2024-04-20 14:01:21,608: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost'
-------------- dvc-exp-4ac376-1@localhost v5.4.0 (opalescent)
--- ***** -----
-- ******* ---- Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 2024-04-20 14:01:21
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: dvc-exp-local:0x7ff4e3aac6d0
- ** ---------- .> transport: filesystem://localhost//
- ** ---------- .> results: file:///workspaces/platform/.dvc/tmp/exps/celery/result
- *** --- * --- .> concurrency: 1 (thread)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. dvc.repo.experiments.queue.tasks.cleanup_exp
. dvc.repo.experiments.queue.tasks.collect_exp
. dvc.repo.experiments.queue.tasks.run_exp
. dvc.repo.experiments.queue.tasks.setup_exp
. dvc_task.proc.tasks.run
[2024-04-20 14:01:21,621: WARNING/MainProcess] /usr/local/lib/python3.10/dist-packages/celery/worker/consumer/consumer.py:508: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
warnings.warn(
[2024-04-20 14:01:21,621: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost'
[2024-04-20 14:01:21,621: INFO/MainProcess] Connected to filesystem://localhost//
[2024-04-20 14:01:21,623: INFO/MainProcess] dvc-exp-4ac376-1@localhost ready.
[2024-04-20 14:01:21,624: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[0285ad2f-cc55-487b-bb72-a22cddbb560d] received
[2024-04-22 01:08:08,508: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[0285ad2f-cc55-487b-bb72-a22cddbb560d] succeeded in 3015.664237446s: None
[2024-04-22 01:08:10,100: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[1741062f-5998-4ed3-8a2b-f0726876eeee] received
[2024-04-22 01:24:01,119: CRITICAL/MainProcess] Unrecoverable error: JSONDecodeError('Expecting value: line 1 column 1 (char 0)')
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/celery/worker/worker.py", line 202, in start
self.blueprint.start(self)
File "/usr/local/lib/python3.10/dist-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/usr/local/lib/python3.10/dist-packages/celery/bootsteps.py", line 365, in start
return self.obj.start()
File "/usr/local/lib/python3.10/dist-packages/celery/worker/consumer/consumer.py", line 340, in start
blueprint.start(self)
File "/usr/local/lib/python3.10/dist-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/usr/local/lib/python3.10/dist-packages/celery/worker/consumer/consumer.py", line 746, in start
c.loop(*c.loop_args())
File "/usr/local/lib/python3.10/dist-packages/celery/worker/loops.py", line 130, in synloop
connection.drain_events(timeout=2.0)
File "/usr/local/lib/python3.10/dist-packages/kombu/connection.py", line 341, in drain_events
return self.transport.drain_events(self.connection, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/kombu/transport/virtual/base.py", line 997, in drain_events
get(self._deliver, timeout=timeout)
File "/usr/local/lib/python3.10/dist-packages/kombu/utils/scheduling.py", line 55, in get
return self.fun(resource, callback, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/kombu/transport/virtual/base.py", line 1035, in _drain_channel
return channel.drain_events(callback=callback, timeout=timeout)
File "/usr/local/lib/python3.10/dist-packages/kombu/transport/virtual/base.py", line 754, in drain_events
return self._poll(self.cycle, callback, timeout=timeout)
File "/usr/local/lib/python3.10/dist-packages/kombu/transport/virtual/base.py", line 414, in _poll
return cycle.get(callback)
File "/usr/local/lib/python3.10/dist-packages/kombu/utils/scheduling.py", line 55, in get
return self.fun(resource, callback, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/kombu/transport/virtual/base.py", line 417, in _get_and_deliver
message = self._get(queue)
File "/usr/local/lib/python3.10/dist-packages/kombu/transport/filesystem.py", line 261, in _get
return loads(bytes_to_str(payload))
File "/usr/local/lib/python3.10/dist-packages/kombu/utils/json.py", line 93, in loads
return _loads(s, object_hook=object_hook)
File "/usr/lib/python3.10/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
[2024-04-22 01:53:58,451: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[1741062f-5998-4ed3-8a2b-f0726876eeee] succeeded in 2748.4771515409993s: None
[2024-04-22 01:53:58,452: INFO/MainProcess] cleaning up FSApp broker.
[2024-04-22 01:53:58,566: INFO/MainProcess] done
/usr/local/lib/python3.10/dist-packages/celery/platforms.py:829: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
[2024-04-22 15:19:31,011: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost'
-------------- dvc-exp-4ac376-1@localhost v5.4.0 (opalescent)
--- ***** -----
-- ******* ---- Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 2024-04-22 15:19:31
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: dvc-exp-local:0x7f8037ae4700
- ** ---------- .> transport: filesystem://localhost//
- ** ---------- .> results: file:///workspaces/platform/.dvc/tmp/exps/celery/result
- *** --- * --- .> concurrency: 1 (thread)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. dvc.repo.experiments.queue.tasks.cleanup_exp
. dvc.repo.experiments.queue.tasks.collect_exp
. dvc.repo.experiments.queue.tasks.run_exp
. dvc.repo.experiments.queue.tasks.setup_exp
. dvc_task.proc.tasks.run
[2024-04-22 15:19:31,019: WARNING/MainProcess] /usr/local/lib/python3.10/dist-packages/celery/worker/consumer/consumer.py:508: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
warnings.warn(
[2024-04-22 15:19:31,019: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost'
[2024-04-22 15:19:31,020: INFO/MainProcess] Connected to filesystem://localhost//
[2024-04-22 15:19:31,021: INFO/MainProcess] dvc-exp-4ac376-1@localhost ready.
[2024-04-22 15:19:31,021: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[3e30a522-de58-49dc-8e93-f029aa9a015d] received
[2024-04-22 15:52:03,066: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost'
[2024-04-22 16:10:49,673: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[3e30a522-de58-49dc-8e93-f029aa9a015d] succeeded in 3078.8052399700027s: None
[2024-04-22 16:10:50,675: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[99127667-18e3-4d3e-bcf4-1dd02e78e426] received
[2024-04-22 16:48:49,787: CRITICAL/MainProcess] Unrecoverable error: JSONDecodeError('Expecting value: line 1 column 1 (char 0)')
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/celery/worker/worker.py", line 202, in start
self.blueprint.start(self)
File "/usr/local/lib/python3.10/dist-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/usr/local/lib/python3.10/dist-packages/celery/bootsteps.py", line 365, in start
return self.obj.start()
File "/usr/local/lib/python3.10/dist-packages/celery/worker/consumer/consumer.py", line 340, in start
blueprint.start(self)
File "/usr/local/lib/python3.10/dist-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/usr/local/lib/python3.10/dist-packages/celery/worker/consumer/consumer.py", line 746, in start
c.loop(*c.loop_args())
File "/usr/local/lib/python3.10/dist-packages/celery/worker/loops.py", line 130, in synloop
connection.drain_events(timeout=2.0)
File "/usr/local/lib/python3.10/dist-packages/kombu/connection.py", line 341, in drain_events
return self.transport.drain_events(self.connection, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/kombu/transport/virtual/base.py", line 997, in drain_events
get(self._deliver, timeout=timeout)
File "/usr/local/lib/python3.10/dist-packages/kombu/utils/scheduling.py", line 55, in get
return self.fun(resource, callback, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/kombu/transport/virtual/base.py", line 1035, in _drain_channel
return channel.drain_events(callback=callback, timeout=timeout)
File "/usr/local/lib/python3.10/dist-packages/kombu/transport/virtual/base.py", line 754, in drain_events
return self._poll(self.cycle, callback, timeout=timeout)
File "/usr/local/lib/python3.10/dist-packages/kombu/transport/virtual/base.py", line 414, in _poll
return cycle.get(callback)
File "/usr/local/lib/python3.10/dist-packages/kombu/utils/scheduling.py", line 55, in get
return self.fun(resource, callback, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/kombu/transport/virtual/base.py", line 417, in _get_and_deliver
message = self._get(queue)
File "/usr/local/lib/python3.10/dist-packages/kombu/transport/filesystem.py", line 261, in _get
return loads(bytes_to_str(payload))
File "/usr/local/lib/python3.10/dist-packages/kombu/utils/json.py", line 93, in loads
return _loads(s, object_hook=object_hook)
File "/usr/lib/python3.10/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
[2024-04-22 17:06:12,589: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[99127667-18e3-4d3e-bcf4-1dd02e78e426] succeeded in 3322.133445313004s: None
[2024-04-22 17:06:12,590: INFO/MainProcess] cleaning up FSApp broker.
[2024-04-22 17:06:12,762: INFO/MainProcess] done
<!--
Please check https://github.com/iterative/dvc/wiki/Debugging-DVC on ways to gather more information regarding the issue.
If applicable, please also provide a `--verbose` output of the command, eg: `dvc add --verbose`.
If the issue is regarding the performance, please attach the profiling information and the benchmark comparisons.
-->
|
closed
|
2024-04-23T02:31:52Z
|
2024-04-29T12:50:45Z
|
https://github.com/iterative/dvc/issues/10398
|
[] |
mghanava
| 3
|
piccolo-orm/piccolo
|
fastapi
| 67
|
Is there an example of how to integrate piccolo into an existing fastapi application?
|
Is there an example of how to integrate piccolo into an existing fastapi application?
Something along the lines of the examples in the fastapi docs [here](https://fastapi.tiangolo.com/tutorial/sql-databases/) and [here](https://fastapi.tiangolo.com/advanced/async-sql-databases/#connect-and-disconnect).
I'm interested in trying the ORM out as it reminds me a lot of how EntityFramework operates in the .NET world, but right now it seems like a lot of magic, black-boxed, and to require using one of the generators to put all the pieces into place. That's just my perception. Anyway, I would love to see a clean and understandable example of how to put this into an existing FastAPI app and use it similarly to the ORMs already mentioned in the FastAPI docs.
Thanks much - wg
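For concreteness, here is the rough shape I have in mind (an unverified sketch based on my assumptions about piccolo's `engine_finder`/`Table` APIs; happy to be corrected):
```python
# Unverified sketch of wiring piccolo into an existing FastAPI app.
from fastapi import FastAPI
from piccolo.columns import Varchar
from piccolo.engine import engine_finder
from piccolo.table import Table


class Band(Table):
    name = Varchar(length=100)


app = FastAPI()


@app.on_event("startup")
async def open_database_connection_pool():
    engine = engine_finder()
    await engine.start_connection_pool()


@app.on_event("shutdown")
async def close_database_connection_pool():
    engine = engine_finder()
    await engine.close_connection_pool()


@app.get("/bands")
async def list_bands():
    # piccolo queries are awaitable and return plain dicts.
    return await Band.select()
```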
|
open
|
2021-02-09T17:01:00Z
|
2021-07-07T09:57:59Z
|
https://github.com/piccolo-orm/piccolo/issues/67
|
[
"help wanted",
"good first issue",
"documentation"
] |
ohmeow
| 10
|
benlubas/molten-nvim
|
jupyter
| 76
|
[Bug] I broke progress bars in virtual text
|
## Description
Wrapping text in virtual output broke progress bar handling
#70
|
closed
|
2023-12-10T17:41:55Z
|
2023-12-10T19:13:25Z
|
https://github.com/benlubas/molten-nvim/issues/76
|
[
"bug"
] |
benlubas
| 0
|
nerfstudio-project/nerfstudio
|
computer-vision
| 3,003
|
GSplat Error - subprocess.CalledProcessError: Command '['where', 'cl']' returned non-zero exit status 1.
|
Nerfstudio GSplat Error
Persistently getting this issue when attempting to train Splatfacto. Running latest Nerfstudio & Splatfacto.
-----
ns-train splatfacto --data data/nerfstudio/Egypt
-----
No Nerfstudio checkpoint to load, so training from scratch.
Disabled comet/tensorboard/wandb event writers
C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py:383: UserWarning: Error checking compiler version for cl: [WinError 2] The system
cannot find the file specified
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
( โ ) gsplat: Setting up CUDA (This may take a few minutes the first time)INFO: Could not find files for the given pattern(s).
Exception in thread Thread-6:
Traceback (most recent call last):
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\gsplat\cuda\_backend.py", line 56, in <module>
from gsplat import csrc as _C
ImportError: cannot import name 'csrc' from 'gsplat' (C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\gsplat\__init__.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\viewer\render_state_machine.py", line 222, in run
outputs = self._render_img(action.camera_state)
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\viewer\render_state_machine.py", line 168, in _render_img
outputs = self.viewer.get_model().get_outputs_for_camera(camera, obb_box=obb)
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\models\splatfacto.py", line 914, in get_outputs_for_camera
outs = self.get_outputs(camera.to(self.device))
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\models\splatfacto.py", line 739, in get_outputs
self.xys, depths, self.radii, conics, comp, num_tiles_hit, cov3d = project_gaussians( # type: ignore
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\gsplat\project_gaussians.py", line 61, in project_gaussians
return _ProjectGaussians.apply(
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\torch\autograd\function.py", line 539, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\gsplat\project_gaussians.py", line 112, in forward
) = _C.project_gaussians_forward(
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\gsplat\cuda\__init__.py", line 7, in call_cuda
from ._backend import _C
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\gsplat\cuda\_backend.py", line 88, in <module>
_C = load(
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 1308, in load
return _jit_compile(
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 1710, in _jit_compile
_write_ninja_file_and_build_library(
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 1810, in _write_ninja_file_and_build_library
_write_ninja_file_to_build_library(
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 2242, in _write_ninja_file_to_build_library
_write_ninja_file(
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 2382, in _write_ninja_file
cl_paths = subprocess.check_output(['where',
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\subprocess.py", line 415, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['where', 'cl']' returned non-zero exit status 1.
|
open
|
2024-03-15T12:07:43Z
|
2024-04-21T02:36:57Z
|
https://github.com/nerfstudio-project/nerfstudio/issues/3003
|
[] |
JamesAscroft
| 6
|
pytorch/pytorch
|
machine-learning
| 149,224
|
Unexpected results in using torch.save API
|
### 🐛 Describe the bug
In test_save_with_different_pickle_protocol, the test iterates over all protocols (0 through pickle.HIGHEST_PROTOCOL) and expects that saving and then loading the tensor works correctly with each protocol. However, for protocol 0, torch.load fails with an AssertionError (inside torch.load's persistent_load), which suggests that our source code (the new zipfile-based serialization) does not correctly support protocol 0. This is unexpected behavior based on the API.
```python
import unittest
import torch
import io
import os
import pickle
from pathlib import Path
class TestTorchSave(unittest.TestCase):
def setUp(self):
# Create a tensor to use in tests
self.tensor = torch.tensor([0, 1, 2, 3, 4])
self.filename = 'test_tensor.pt'
def tearDown(self):
# Clean up any files created during tests
if os.path.exists(self.filename):
os.remove(self.filename)
def test_save_with_different_pickle_protocol(self):
# Test saving with a different pickle protocol
for protocol in range(pickle.HIGHEST_PROTOCOL + 1):
with self.subTest(pickle_protocol=protocol):
buffer = io.BytesIO()
torch.save(self.tensor, buffer, pickle_protocol=protocol)
buffer.seek(0)
loaded_tensor = torch.load(buffer)
self.assertTrue(torch.equal(self.tensor, loaded_tensor))
if __name__ == '__main__':
unittest.main()
```
```
======================================================================
FAIL: test_save_with_different_pickle_protocol (__main__.TestTorchSave) (pickle_protocol=0)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/user/projects/api_guided_testgen/out/bug_detect_gpt4o/exec/basic_rag_apidoc/torch/torch.save.py", line 40, in test_save_with_different_pickle_protocol
loaded_tensor = torch.load(buffer)
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/serialization.py", line 1025, in load
return _load(opened_zipfile,
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/serialization.py", line 1446, in _load
result = unpickler.load()
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/serialization.py", line 1400, in persistent_load
assert isinstance(saved_id, tuple)
AssertionError
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.5.0-1ubuntu1~22.04) 9.5.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) E-2224G CPU @ 3.50GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 10
CPU max MHz: 4700.0000
CPU min MHz: 800.0000
BogoMIPS: 6999.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchvision==0.20.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py39h5eee18b_2
[conda] mkl_fft 1.3.11 py39h5eee18b_0
[conda] mkl_random 1.2.8 py39h1128e8f_0
[conda] numpy 2.0.1 py39h5f9d8c6_1
[conda] numpy-base 2.0.1 py39hb5e798b_1
[conda] pytorch 2.5.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 2.5.0 py39_cpu pytorch
[conda] torchvision 0.20.0 py39_cpu pytorch
cc @mruberry @mikaylagawarecki
|
open
|
2025-03-14T20:59:25Z
|
2025-03-17T15:20:20Z
|
https://github.com/pytorch/pytorch/issues/149224
|
[
"module: serialization",
"triaged"
] |
sjh0849
| 0
|
sktime/sktime
|
data-science
| 7,973
|
TypeError while using RecursiveReductionForecaster class
|
I tried this code with hierarchical data structure, as follows:
```
test_size = 31
fh = np.arange(1, test_size + 1)
df_train, df_test = temporal_train_test_split(df, test_size=test_size)
model = Prophet(
seasonality_mode="multiplicative",
add_country_holidays={"country_name": "USA"},
daily_seasonality=True,
n_changepoints=10,
)
forecaster = RecursiveReductionForecaster(model)
pipe = Aggregator(flatten_single_levels=True) * forecaster * Reconciler(method="bu")
pipe = pipe.fit(df_train, fh=fh)
df_pred = pipe.predict(fh)
```
but I received this error:
```
TypeError: Invalid `fh`. The type of the passed `fh` values is not supported. Please use one of ('int', 'range', '1D np.ndarray of type int', '1D np.ndarray of type timedelta or dateoffset', 'list of type int', 'pd.RangeIndex', 'pd.PeriodIndex', 'pd.DatetimeIndex', 'pd.TimedeltaIndex'), but found type <class 'pandas.core.frame.DataFrame'>, values = lag_1__lag_0__target ... lag_1__lag_9__target
timestamp ...
2025-02-09 55.0 ... 59.0
[1 rows x 10 columns]
File <command-570286585310414>, line 14
11 pipe = Aggregator(flatten_single_levels=True) * forecaster * Reconciler(method="bu")
13 pipe = pipe.fit(df_train, fh=fh)
---> 14 df_pred = pipe.predict(fh)
16 df_pred_interval = pipe.predict_interval(fh)
17 df_pred_interval
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/forecasting/base/_base.py:457, in BaseForecaster.predict(self, fh, X)
455 # we call the ordinary _predict if no looping/vectorization needed
456 if not self._is_vectorized:
--> 457 y_pred = self._predict(fh=fh, X=X_inner)
458 else:
459 # otherwise we call the vectorized version of predict
460 y_pred = self._vectorize("predict", X=X_inner, fh=fh)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/forecasting/compose/_pipeline.py:1053, in TransformedTargetForecaster._predict(self, fh, X)
1038 def _predict(self, fh=None, X=None):
1039 """Forecast time series at future horizon.
1040
1041 Parameters
(...)
1051 Point predictions
1052 """
-> 1053 y_pred = self.forecaster_.predict(fh=fh, X=X)
1054 # inverse transform y_pred
1055 y_pred = self._get_inverse_transform(self.transformers_pre_, y_pred, X)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/forecasting/base/_base.py:460, in BaseForecaster.predict(self, fh, X)
457 y_pred = self._predict(fh=fh, X=X_inner)
458 else:
459 # otherwise we call the vectorized version of predict
--> 460 y_pred = self._vectorize("predict", X=X_inner, fh=fh)
462 # convert to output mtype, identical with last y mtype seen
463 y_out = convert_to(
464 y_pred,
465 self._y_metadata["mtype"],
466 store=self._converter_store_y,
467 store_behaviour="freeze",
468 )
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/forecasting/base/_base.py:2058, in BaseForecaster._vectorize(self, methodname, **kwargs)
2055 if methodname == "update_predict_single":
2056 self._yvec = y
-> 2058 y_preds = self._yvec.vectorize_est(
2059 self.forecasters_,
2060 method=methodname,
2061 return_type="list",
2062 backend=self.get_config()["backend:parallel"],
2063 backend_params=self.get_config()["backend:parallel:params"],
2064 **kwargs,
2065 )
2067 # if we vectorize over columns,
2068 # we need to replace top column level with variable names - part 1
2069 m = len(self.forecasters_.columns)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/datatypes/_vectorize.py:630, in VectorizedDF.vectorize_est(self, estimator, method, args, args_rowvec, return_type, rowname_default, colname_default, varname_of_self, backend, backend_params, **kwargs)
616 vec_zip = zip(
617 self.items(),
618 explode(args, iterate_as=iterate_as, iterate_cols=iterate_cols),
619 explode(args_rowvec, iterate_as=iterate_as, iterate_cols=False),
620 estimators,
621 )
623 meta = {
624 "method": method,
625 "varname_of_self": varname_of_self,
626 "rowname_default": rowname_default,
627 "colname_default": colname_default,
628 }
--> 630 ret = parallelize(
631 fun=self._vectorize_est_single,
632 iter=vec_zip,
633 meta=meta,
634 backend=backend,
635 backend_params=backend_params,
636 )
638 if return_type == "pd.DataFrame":
639 df_long = pd.DataFrame(ret)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/utils/parallel.py:72, in parallelize(fun, iter, meta, backend, backend_params)
69 backend_name = backend_dict[backend]
70 para_fun = para_dict[backend_name]
---> 72 ret = para_fun(
73 fun=fun, iter=iter, meta=meta, backend=backend, backend_params=backend_params
74 )
75 return ret
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/utils/parallel.py:92, in _parallelize_none(fun, iter, meta, backend, backend_params)
90 def _parallelize_none(fun, iter, meta, backend, backend_params):
91 """Execute loop via simple sequential list comprehension."""
---> 92 ret = [fun(x, meta=meta) for x in iter]
93 return ret
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/utils/parallel.py:92, in <listcomp>(.0)
90 def _parallelize_none(fun, iter, meta, backend, backend_params):
91 """Execute loop via simple sequential list comprehension."""
---> 92 ret = [fun(x, meta=meta) for x in iter]
93 return ret
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/datatypes/_vectorize.py:679, in VectorizedDF._vectorize_est_single(self, vec_tuple, meta)
676 args_i[varname_of_self] = group
678 est_i_method = getattr(est_i, method)
--> 679 est_i_result = est_i_method(**args_i)
681 if group_name is None:
682 group_name = rowname_default
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/forecasting/base/_base.py:457, in BaseForecaster.predict(self, fh, X)
455 # we call the ordinary _predict if no looping/vectorization needed
456 if not self._is_vectorized:
--> 457 y_pred = self._predict(fh=fh, X=X_inner)
458 else:
459 # otherwise we call the vectorized version of predict
460 y_pred = self._vectorize("predict", X=X_inner, fh=fh)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/forecasting/compose/_reduce.py:2482, in RecursiveReductionForecaster._predict(self, X, fh)
2480 y_pred = self._predict_in_sample(X_pool, fh_ins)
2481 elif len(fh_ins) == 0:
-> 2482 y_pred = self._predict_out_of_sample(X_pool, fh_oos)
2483 else:
2484 y_pred_ins = self._predict_in_sample(X_pool, fh_ins)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/forecasting/compose/_reduce.py:2547, in RecursiveReductionForecaster._predict_out_of_sample(self, X_pool, fh)
2544 y_pred_i.iloc[0] = estimator
2545 # otherwise proceed as per direct reduction algorithm
2546 else:
-> 2547 y_pred_i = estimator.predict(Xtt_predrow)
2548 # 2D numpy array with col index = (var) and 1 row
2549 y_pred_list.append(y_pred_i)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/forecasting/base/_base.py:453, in BaseForecaster.predict(self, fh, X)
450 X_inner = self._check_X(X=X)
452 # check fh and coerce to ForecastingHorizon, if not already passed in fit
--> 453 fh = self._check_fh(fh)
455 # we call the ordinary _predict if no looping/vectorization needed
456 if not self._is_vectorized:
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/forecasting/base/_base.py:1957, in BaseForecaster._check_fh(self, fh, pred_int)
1946 # in case C. fh is optional in fit:
1947 # this is fine, nothing to check/raise
1948
(...)
1954 # fcstr) since cutoff/frequency can be different for each compared to the
1955 # entire panel but the same relative fh
1956 if getattr(self, "_is_vectorized", False):
-> 1957 fh = check_fh(fh=fh)
1958 else:
1959 fh = check_fh(fh=fh, freq=self._cutoff)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/utils/validation/forecasting.py:291, in check_fh(fh, enforce_relative, freq)
288 from sktime.forecasting.base import ForecastingHorizon
290 if not isinstance(fh, ForecastingHorizon):
--> 291 fh = ForecastingHorizon(fh, is_relative=None, freq=freq)
292 else:
293 fh.freq = freq
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/forecasting/base/_fh.py:289, in ForecastingHorizon.__init__(self, values, is_relative, freq)
285 def __init__(self, values=None, is_relative=None, freq=None):
286 # coercing inputs
287
288 # values to pd.Index self._values
--> 289 values = _check_values(values)
290 self._values = values
292 # infer freq from values, if available
293 # if not, infer from freq argument, if available
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/sktime/forecasting/base/_fh.py:135, in _check_values(values)
125 # otherwise, raise type error
126 else:
127 valid_types = (
128 "int",
129 "range",
(...)
133 *[f"pd.{index_type.__name__}" for index_type in VALID_INDEX_TYPES],
134 )
--> 135 raise TypeError(
136 f"Invalid `fh`. The type of the passed `fh` values is not supported. "
137 f"Please use one of {valid_types}, but found type {type(values)}, "
138 f"values = {values}"
139 )
141 # check values does not contain duplicates
142 if len(values) != values.nunique():
```
I don't understand why it did not accept the `fh`, even though it is passed in a supported format (a 1D `np.ndarray` of int from `np.arange`). Maybe the composition is wrong?
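For reference, the same `fh` values coerce fine on their own. Here is a minimal check (a sketch using sktime's public `ForecastingHorizon` class, outside the pipeline):

```python
import numpy as np
from sktime.forecasting.base import ForecastingHorizon

fh_values = np.arange(1, 32)  # same as np.arange(1, test_size + 1) with test_size = 31
# Coercion succeeds here, so the outer fh is a supported type; the TypeError above
# is raised on an internal predict call inside RecursiveReductionForecaster, where
# a lag-feature DataFrame ends up being passed where fh is expected.
print(ForecastingHorizon(fh_values, is_relative=True))
```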
My pip list:
```
Package Version
---------------------------------- -------------
absl-py 1.0.0
accelerate 0.31.0
adagio 0.2.6
aequitas 1.0.0
aif360 0.6.1
aiohttp 3.8.5
aiohttp-cors 0.7.0
aiosignal 1.2.0
alembic 1.15.1
altair 5.5.0
antlr4-python3-runtime 4.9.3
anyio 3.5.0
appdirs 1.4.4
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
astor 0.8.1
asttokens 2.0.5
astunparse 1.6.3
async-timeout 4.0.2
attrs 22.1.0
audioread 3.0.1
azure-core 1.30.2
azure-cosmos 4.3.1
azure-identity 1.17.1
azure-servicebus 7.14.1
azure-storage-blob 12.19.1
azure-storage-file-datalake 12.14.0
backcall 0.2.0
bcrypt 3.2.0
beautifulsoup4 4.12.2
black 23.3.0
bleach 4.1.0
blinker 1.4
blis 0.7.11
boto3 1.34.39
botocore 1.34.39
Brotli 1.0.9
cachetools 5.4.0
catalogue 2.0.10
category-encoders 2.6.3
certifi 2023.7.22
cffi 1.15.1
chardet 4.0.0
charset-normalizer 2.0.4
chex 0.1.89
circuitbreaker 1.4.0
clarabel 0.10.0
click 8.0.4
cloudpathlib 0.16.0
cloudpickle 2.2.1
cmdstanpy 1.2.2
colorful 0.5.6
colorlog 6.9.0
comm 0.1.2
confection 0.1.4
configparser 5.2.0
contourpy 1.0.5
coreforecast 0.0.15
cramjam 2.9.1
cryptography 41.0.3
cycler 0.11.0
cymem 2.0.8
Cython 0.29.32
dacite 1.8.1
databricks 0.2
databricks-automl-runtime 0.2.21
databricks-feature-engineering 0.6.0
databricks-sdk 0.46.0
dataclasses-json 0.6.7
datasets 2.19.1
dbl-tempo 0.1.26
dbus-python 1.2.18
debugpy 1.6.7
decorator 5.1.1
deepspeed 0.14.4
defusedxml 0.7.1
Deprecated 1.2.14
dill 0.3.6
diskcache 5.6.3
distlib 0.3.8
distro 1.7.0
distro-info 1.1+ubuntu0.2
dm-tree 0.1.8
docker 7.1.0
entrypoints 0.4
etils 1.12.2
evaluate 0.4.2
executing 0.8.3
facets-overview 1.1.1
fairgbm 0.9.14
fairlearn 0.12.0
Farama-Notifications 0.0.4
fastjsonschema 2.20.0
fastparquet 2024.11.0
fasttext 0.9.2
filelock 3.13.4
Flask 2.2.5
flatbuffers 24.3.25
fonttools 4.25.0
frozenlist 1.3.3
fs 2.4.16
fsspec 2023.5.0
fugue 0.9.1
future 0.18.3
gast 0.4.0
gitdb 4.0.11
GitPython 3.1.27
google-api-core 2.18.0
google-auth 2.21.0
google-auth-oauthlib 1.0.0
google-cloud-core 2.4.1
google-cloud-storage 2.10.0
google-crc32c 1.5.0
google-pasta 0.2.0
google-resumable-media 2.7.1
googleapis-common-protos 1.63.0
graphene 3.4.3
graphql-core 3.2.6
graphql-relay 3.2.0
graphviz 0.20.3
greenlet 2.0.1
grpcio 1.60.0
grpcio-status 1.60.0
gunicorn 20.1.0
gviz-api 1.10.0
gymnasium 0.28.1
h11 0.14.0
h5py 3.10.0
hierarchicalforecast 1.1.0
hjson 3.1.0
holidays 0.45
horovod 0.28.1+db1
htmlmin 0.1.12
httpcore 1.0.5
httplib2 0.20.2
httpx 0.27.0
huggingface-hub 0.27.1
hydra-core 1.3.2
hyperparameter-tuning 0.3.2
idna 3.4
ImageHash 4.3.1
imageio 2.31.1
imbalanced-learn 0.11.0
importlib-metadata 6.0.0
importlib_resources 6.4.0
iniconfig 2.0.0
intel-cmplr-lib-rt 2025.0.5
ipyflow-core 0.0.198
ipykernel 6.25.1
ipython 8.15.0
ipython-genutils 0.2.0
ipywidgets 7.7.2
isodate 0.6.1
itsdangerous 2.0.1
jax 0.5.2
jax-jumpy 1.0.0
jaxlib 0.5.1
jedi 0.18.1
jeepney 0.7.1
Jinja2 3.1.2
jmespath 0.10.0
joblib 1.2.0
joblibspark 0.5.1
jsonpatch 1.33
jsonpointer 3.0.0
jsonschema 4.17.3
jupyter_client 7.4.9
jupyter_core 5.3.0
jupyter-server 1.23.4
jupyterlab-pygments 0.1.2
keras 3.2.1
keyring 23.5.0
kiwisolver 1.4.4
langchain 0.1.20
langchain-community 0.0.38
langchain-core 0.1.52
langchain-text-splitters 0.0.2
langcodes 3.4.0
langsmith 0.1.63
language_data 1.2.0
launchpadlib 1.10.16
lazr.restfulclient 0.14.4
lazr.uri 1.0.6
lazy_loader 0.2
libclang 15.0.6.1
librosa 0.10.1
lightgbm 4.6.0
linkify-it-py 2.0.0
llvmlite 0.44.0
lxml 4.9.2
lz4 4.3.2
Mako 1.2.0
marisa-trie 1.1.1
Markdown 3.4.1
markdown-it-py 2.2.0
MarkupSafe 2.1.1
marshmallow 3.21.2
matplotlib 3.10.1
matplotlib-inline 0.1.6
mdit-py-plugins 0.3.0
mdurl 0.1.0
memray 1.13.3
millify 0.1.1
mistune 0.8.4
ml_dtypes 0.5.1
mlflow 2.20.4
mlflow-skinny 2.20.4
more-itertools 8.10.0
mosaicml-streaming 0.7.4
mpmath 1.3.0
msal 1.29.0
msal-extensions 1.2.0
msgpack 1.0.8
multidict 6.0.2
multimethod 1.12
multipledispatch 1.0.0
multiprocess 0.70.14
murmurhash 1.0.10
mypy-extensions 0.4.3
namex 0.0.8
narwhals 1.30.0
nbclassic 0.5.5
nbclient 0.5.13
nbconvert 6.5.4
nbformat 5.7.0
nest-asyncio 1.5.6
networkx 3.1
ninja 1.11.1.1
nltk 3.8.1
nmds 0.2.1
nmeng 0.0.6
notebook 6.5.4
notebook_shim 0.2.2
numba 0.61.0
numpy 1.26.4
numpyro 0.17.0
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-cusparselt-cu12 0.6.2
nvidia-ml-py 12.555.43
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
oauthlib 3.2.0
oci 2.126.4
omegaconf 2.3.0
openai 1.35.3
opencensus 0.11.4
opencensus-context 0.1.3
opentelemetry-api 1.25.0
opentelemetry-sdk 1.25.0
opentelemetry-semantic-conventions 0.46b0
opt-einsum 3.3.0
optax 0.2.4
optree 0.12.1
optuna 4.2.1
orjson 3.10.6
packaging 23.2
pandas 2.2.3
pandocfilters 1.5.0
paramiko 3.4.0
parso 0.8.3
pathspec 0.10.3
patsy 0.5.3
petastorm 0.12.1
pexpect 4.8.0
phik 0.12.4
pickleshare 0.7.5
Pillow 9.4.0
pip 23.2.1
platformdirs 3.10.0
plotly 5.9.0
pluggy 1.5.0
pmdarima 2.0.4
pooch 1.8.1
portalocker 2.10.1
preshed 3.0.9
prometheus-client 0.14.1
prompt-toolkit 3.0.36
prophet 1.1.5
prophetverse 0.5.2
proto-plus 1.24.0
protobuf 4.24.1
psutil 5.9.0
psycopg2 2.9.3
ptyprocess 0.7.0
pure-eval 0.2.2
py-cpuinfo 8.0.0
py-spy 0.3.14
pyarrow 14.0.1
pyarrow-hotfix 0.6
pyasn1 0.4.8
pyasn1-modules 0.2.8
pybind11 2.13.1
pyccolo 0.0.52
pycparser 2.21
pydantic 1.10.21
Pygments 2.15.1
PyGObject 3.42.1
PyJWT 2.3.0
pymssql 2.2.7
PyNaCl 1.5.0
pyodbc 4.0.39
pyOpenSSL 23.2.0
pyparsing 3.0.9
pyrsistent 0.18.0
pytesseract 0.3.10
pytest 8.3.5
python-apt 2.4.0+ubuntu4
python-dateutil 2.8.2
python-editor 1.0.4
python-lsp-jsonrpc 1.1.1
python-snappy 0.6.1
pytz 2022.7
PyWavelets 1.4.1
PyYAML 6.0.2
pyzmq 23.2.0
quadprog 0.1.13
ray 2.37.0
regex 2022.7.9
requests 2.32.3
requests-oauthlib 1.3.1
rich 13.9.4
rsa 4.9
s3transfer 0.10.2
safetensors 0.4.2
schema 0.7.7
scikit-base 0.11.0
scikit-image 0.20.0
scikit-learn 1.3.0
scipy 1.11.1
seaborn 0.13.2
SecretStorage 3.3.1
Send2Trash 1.8.0
sentence-transformers 2.7.0
sentencepiece 0.1.99
setuptools 68.0.0
shap 0.44.0
simplejson 3.17.6
six 1.16.0
skforecast 0.15.0
skpro 2.9.0
sktime 0.36.0
slicer 0.0.7
smart-open 5.2.1
smmap 5.0.0
sniffio 1.2.0
soundfile 0.12.1
soupsieve 2.4
soxr 0.3.7
spacy 3.7.2
spacy-legacy 3.0.12
spacy-loggers 1.0.5
spark-tensorflow-distributor 1.0.0
SQLAlchemy 1.4.39
sqlparse 0.4.2
srsly 2.4.8
ssh-import-id 5.11
stack-data 0.2.0
stanio 0.5.1
statsforecast 2.0.1
statsmodels 0.14.0
sympy 1.13.1
tangled-up-in-unicode 0.2.0
tenacity 8.2.2
tensorboard 2.16.2
tensorboard-data-server 0.7.2
tensorboard_plugin_profile 2.15.1
tensorboardX 2.6.2.2
tensorflow 2.16.1
tensorflow-estimator 2.15.0
tensorflow-io-gcs-filesystem 0.37.1
termcolor 2.4.0
terminado 0.17.1
textual 0.63.3
tf_keras 2.16.0
thinc 8.2.3
threadpoolctl 3.5.0
tifffile 2021.7.2
tiktoken 0.5.2
tinycss2 1.2.1
tokenize-rt 4.2.1
tokenizers 0.19.0
toolz 1.0.0
torch 2.6.0
torcheval 0.0.7
torchvision 0.18.1+cpu
tornado 6.3.2
tqdm 4.65.0
traitlets 5.7.1
transformers 4.41.2
triad 0.9.8
triton 3.2.0
typeguard 2.13.3
typer 0.9.4
typing_extensions 4.12.2
typing-inspect 0.9.0
tzdata 2025.1
uc-micro-py 1.0.1
ujson 5.4.0
unattended-upgrades 0.1
urllib3 1.26.16
utilsforecast 0.2.12
validators 0.34.0
virtualenv 20.24.2
visions 0.7.5
wadllib 1.3.6
wasabi 1.1.2
wcwidth 0.2.5
weasel 0.3.4
webencodings 0.5.1
websocket-client 0.58.0
Werkzeug 2.2.3
wheel 0.38.4
wordcloud 1.9.3
wrapt 1.14.1
xgboost 2.0.3
xxhash 3.4.1
yarl 1.8.1
ydata-profiling 4.5.1
zipp 3.11.0
zstd 1.5.5.1
```
|
closed
|
2025-03-13T14:27:16Z
|
2025-03-14T02:44:33Z
|
https://github.com/sktime/sktime/issues/7973
|
[] |
AsmaBaccouche
| 0
|
errbotio/errbot
|
automation
| 1,146
|
Support Ryver Backend
|
### I am...
* [X] Suggesting a new feature
### I am running...
* Errbot version:
* OS version:
* Python version:
* Using a virtual environment: yes/no
### Issue description
It would be great if there were support for the chat application [Ryver](https://ryver.com/) in errbot. There is a beta Hubot plugin, https://github.com/RyverApp/hubot-ryver, and here is some [API documentation](http://ryverdocs.readthedocs.io/en/latest/). Their API is [OData](http://www.odata.org/) based.
|
closed
|
2017-12-22T06:36:58Z
|
2019-01-05T17:18:06Z
|
https://github.com/errbotio/errbot/issues/1146
|
[
"type: feature",
"backend: Common"
] |
Gelob
| 3
|
google/seq2seq
|
tensorflow
| 261
|
`templatemethod` does not work as expected
|
```python
import tensorflow as tf
from seq2seq.graph_utils import templatemethod
@templatemethod("trial")
def trial(x):
w = tf.get_variable('w', [])
return tf.reduce_sum(x) * w
y = tf.placeholder(tf.float32, [None])
z = tf.placeholder(tf.float32, [None])
a_y = trial(y)
a_z = trial(z)
s = tf.InteractiveSession()
tf.global_variables_initializer().run()
print(tf.global_variables())
print(a_y.eval(feed_dict={y: [1.1, 1.9]}))
print(a_z.eval(feed_dict={z: [1.9, 1.1]}))
```
The above code produces the following output
```shell
2017-06-20 17:17:45.724766: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up
CPU computations.
2017-06-20 17:17:45.724804: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CP
U computations.
2017-06-20 17:17:45.724812: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up C
PU computations.
2017-06-20 17:17:45.724819: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CP
U computations.
[<tf.Variable 'trial/w:0' shape=() dtype=float32_ref>, <tf.Variable 'trial_1/w:0' shape=() dtype=float32_ref>]
-2.49474
4.86785
```
Clearly, it creates two different variables!
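For comparison, here is a minimal sketch (assuming TF 1.x's `tf.make_template`) of the behaviour one would expect: variables are shared only across calls to the *same* template instance, so building the template once and calling it twice yields a single variable. If the decorator instead builds a fresh template inside every call, that would explain the separate `trial/w` and `trial_1/w` variables above.

```python
import tensorflow as tf

def trial(x):
    w = tf.get_variable('w', [])
    return tf.reduce_sum(x) * w

# One template instance, called twice: both calls reuse 'trial/w'.
trial_tpl = tf.make_template("trial", trial)

y = tf.placeholder(tf.float32, [None])
z = tf.placeholder(tf.float32, [None])
a_y = trial_tpl(y)
a_z = trial_tpl(z)

print(tf.global_variables())  # expected: a single 'trial/w:0' variable
```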
|
open
|
2017-06-20T21:30:32Z
|
2017-08-08T16:42:22Z
|
https://github.com/google/seq2seq/issues/261
|
[] |
amanjitsk
| 5
|
sammchardy/python-binance
|
api
| 592
|
ERROR: Could not find a version that satisfies the requirement Twisted (from python-binance)
|
Hello!
I am not able to install python-binance via pip for python3.8!
```
python3.8 -m pip install python-binance --upgrade
ERROR: Could not find a version that satisfies the requirement Twisted (from python-binance) (from versions: none)
ERROR: No matching distribution found for Twisted (from python-binance)
```
There are a couple of PRs that have not been merged...
Can we help you Sam?
Best regards,
Oliver
|
closed
|
2020-10-01T10:22:41Z
|
2022-03-14T14:55:28Z
|
https://github.com/sammchardy/python-binance/issues/592
|
[] |
oliver-zehentleitner
| 1
|
HumanSignal/labelImg
|
deep-learning
| 141
|
My install tip for OSX/Ubuntu
|
- OS : Sierra 10.12.6
- PyQt version: 4
- python 2.7
As an OSX user, I found it very tough to install this labelImg tool, especially installing PyQt4 on Python 2.7. In my case, I used ``conda`` rather than a local env. What I did is below.
- create conda env using python 2.7
```
conda create -n py2 python=2.7
source activate py2
```
- install necessary libs using ``conda install``
```
conda install pyqt=4
conda install libxml2
conda install lxml
```
- do as described in the README.
```
make qt4py2
python labelImg.py
```
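Before launching, a quick sanity check can confirm the bindings are importable (a sketch; run it inside the activated ``py2`` env):

```python
# Both imports must succeed for the qt4py2 build of labelImg to start.
from PyQt4 import QtGui   # PyQt4 GUI bindings used by the qt4py2 target
from lxml import etree    # lxml, used for reading/writing Pascal VOC XML
print("PyQt4 and lxml look fine - labelImg should launch.")
```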
good luck guys.
**Addendum for Ubuntu:** this also works on Ubuntu 16.04 with Python 2.7.
|
closed
|
2017-08-09T00:00:01Z
|
2018-04-16T13:40:40Z
|
https://github.com/HumanSignal/labelImg/issues/141
|
[] |
Labyrins
| 13
|
fbdesignpro/sweetviz
|
pandas
| 86
|
ZeroDivisionError: float division by zero
|
Hi,
I am a beginner in Python and I have just discovered Sweetviz (this package is wonderful!).
I am trying to compare 2 conditions in my dataset, but I get this message:
ZeroDivisionError: float division by zero
Before using this package, I transformed the non-numerical labels to numerical labels and cleaned up the NaNs.
Can you help me?
<img width="1096" alt="Capture d'écran 2021-04-06 à 20 14 52" src="https://user-images.githubusercontent.com/75795028/113758953-caa6c800-9714-11eb-90b6-4a654d3d7102.png">
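For context, the comparison was set up roughly like this (a minimal sketch assuming sweetviz's `compare` API; the DataFrames and column names below are placeholders, not the real dataset):

```python
import pandas as pd
import sweetviz as sv

# Placeholder frames standing in for the two conditions being compared.
df_condition_a = pd.DataFrame({"feature": [1.0, 2.0, 3.0], "label": [0, 1, 0]})
df_condition_b = pd.DataFrame({"feature": [4.0, 5.0, 6.0], "label": [1, 0, 1]})

# compare() takes the two datasets (optionally with display names) and an
# optional target feature; the ZeroDivisionError is raised while the report
# is generated, not by this call signature itself.
report = sv.compare([df_condition_a, "Condition A"],
                    [df_condition_b, "Condition B"],
                    target_feat="label")
report.show_html("compare_report.html")
```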
|
closed
|
2021-04-06T18:18:38Z
|
2021-05-27T17:38:15Z
|
https://github.com/fbdesignpro/sweetviz/issues/86
|
[
"bug"
] |
SJDO01
| 5
|
huggingface/diffusers
|
deep-learning
| 10,470
|
Flux - torchao inference not working
|
### Describe the bug
1. Flux with torchao int8wo not working
2. enable_sequential_cpu_offload not working

### Reproduction
Example taken from the (merged) PR:
https://github.com/huggingface/diffusers/pull/10009
```
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, TorchAoConfig
model_id = "black-forest-labs/Flux.1-Dev"
dtype = torch.bfloat16
quantization_config = TorchAoConfig("int8wo")
transformer = FluxTransformer2DModel.from_pretrained(
model_id,
subfolder="transformer",
quantization_config=quantization_config,
torch_dtype=dtype,
)
pipe = FluxPipeline.from_pretrained(
model_id,
transformer=transformer,
torch_dtype=dtype,
)
# pipe.to("cuda")
# pipe.enable_sequential_cpu_offload()
pipe.vae.enable_tiling()
prompt = "A cat holding a sign that says hello world"
image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("output.png")
```
### Logs
Stuck at this (without cpu offload):
```shell
(venv) C:\ai1\diffuser_t2i>python FLUX_torchao.py
Fetching 3 files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:00<?, ?it/s]
Loading pipeline components...: 29%|โโโโโโโโโ | 2/7 [00:00<00:00, 5.36it/s]You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading checkpoint shards: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 6.86it/s]
Loading pipeline components...: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 7/7 [00:02<00:00, 2.38it/s]
```
With cpu offload:
```
(venv) C:\ai1\diffuser_t2i>python FLUX_torchao.py
Fetching 3 files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:00<?, ?it/s]
Loading checkpoint shards: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 6.98it/s]
Loading pipeline components...: 29%|โโโโโโโโโ | 2/7 [00:00<00:01, 2.62it/s]You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading pipeline components...: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 7/7 [00:01<00:00, 4.31it/s]
Traceback (most recent call last):
File "C:\ai1\diffuser_t2i\FLUX_torchao.py", line 21, in <module>
pipe.enable_sequential_cpu_offload()
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1179, in enable_sequential_cpu_offload
cpu_offload(model, device, offload_buffers=offload_buffers)
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\accelerate\big_modeling.py", line 205, in cpu_offload
attach_align_device_hook(
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\accelerate\hooks.py", line 518, in attach_align_device_hook
attach_align_device_hook(
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\accelerate\hooks.py", line 518, in attach_align_device_hook
attach_align_device_hook(
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\accelerate\hooks.py", line 518, in attach_align_device_hook
attach_align_device_hook(
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\accelerate\hooks.py", line 509, in attach_align_device_hook
add_hook_to_module(module, hook, append=True)
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\accelerate\hooks.py", line 161, in add_hook_to_module
module = hook.init_hook(module)
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\accelerate\hooks.py", line 308, in init_hook
set_module_tensor_to_device(module, name, "meta")
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\accelerate\utils\modeling.py", line 355, in set_module_tensor_to_device
new_value.layout_tensor,
AttributeError: 'AffineQuantizedTensor' object has no attribute 'layout_tensor'
```
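A possible workaround to try (a sketch, not a confirmed fix): `enable_model_cpu_offload` moves whole sub-models between CPU and GPU instead of attaching per-module accelerate hooks, so it may sidestep the hook path that fails on the quantized tensors.

```python
# Instead of pipe.enable_sequential_cpu_offload():
pipe.enable_model_cpu_offload()  # offload whole components rather than individual modules
image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
```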
### System Info
Windows 11
```
(venv) C:\ai1\diffuser_t2i>python --version
Python 3.10.11
(venv) C:\ai1\diffuser_t2i>echo %CUDA_PATH%
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6
```
```
(venv) C:\ai1\diffuser_t2i>pip list
Package Version
------------------ ------------
accelerate 1.2.0.dev0
aiofiles 23.2.1
annotated-types 0.7.0
anyio 4.7.0
bitsandbytes 0.45.0
certifi 2024.12.14
charset-normalizer 3.4.1
click 8.1.8
colorama 0.4.6
diffusers 0.33.0.dev0
einops 0.8.0
exceptiongroup 1.2.2
fastapi 0.115.6
ffmpy 0.5.0
filelock 3.16.1
fsspec 2024.12.0
gguf 0.13.0
gradio 5.9.1
gradio_client 1.5.2
h11 0.14.0
httpcore 1.0.7
httpx 0.28.1
huggingface-hub 0.25.2
idna 3.10
imageio 2.36.1
imageio-ffmpeg 0.5.1
importlib_metadata 8.5.0
Jinja2 3.1.5
markdown-it-py 3.0.0
MarkupSafe 2.1.5
mdurl 0.1.2
mpmath 1.3.0
networkx 3.4.2
ninja 1.11.1.3
numpy 2.2.1
opencv-python 4.10.0.84
orjson 3.10.13
packaging 24.2
pandas 2.2.3
pillow 11.1.0
pip 23.0.1
protobuf 5.29.2
psutil 6.1.1
pydantic 2.10.4
pydantic_core 2.27.2
pydub 0.25.1
Pygments 2.18.0
python-dateutil 2.9.0.post0
python-multipart 0.0.20
pytz 2024.2
PyYAML 6.0.2
regex 2024.11.6
requests 2.32.3
rich 13.9.4
ruff 0.8.6
safehttpx 0.1.6
safetensors 0.5.0
semantic-version 2.10.0
sentencepiece 0.2.0
setuptools 65.5.0
shellingham 1.5.4
six 1.17.0
sniffio 1.3.1
starlette 0.41.3
sympy 1.13.1
tokenizers 0.21.0
tomlkit 0.13.2
torch 2.5.1+cu124
torchao 0.7.0
torchvision 0.20.1+cu124
tqdm 4.67.1
transformers 4.47.1
typer 0.15.1
typing_extensions 4.12.2
tzdata 2024.2
urllib3 2.3.0
uvicorn 0.34.0
websockets 14.1
wheel 0.45.1
zipp 3.21.0
```
### Who can help?
_No response_
|
closed
|
2025-01-06T10:46:06Z
|
2025-01-18T15:45:23Z
|
https://github.com/huggingface/diffusers/issues/10470
|
[
"bug"
] |
nitinmukesh
| 11
|