| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
dask/dask
|
numpy
| 11,226
|
Negative lookahead suddenly incorrectly parsed
|
In Dask 2024.2.1 we suddenly have an issue with a regex containing a negative lookahead: it is now rejected as invalid.
```python
import dask.dataframe as dd
regex = 'negativelookahead(?!/check)'
ddf = dd.from_dict(
{
"test": ["negativelookahead", "negativelookahead/check/negativelookahead", ],
},
npartitions=1)
ddf["test"].str.contains(regex).head()
```
This results in the following error:
```python
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
Cell In[2], line 8
2 regex = 'negativelookahead(?!/check)'
3 ddf = dd.from_dict(
4 {
5 "test": ["negativelookahead", "negativelookahead/check/negativelookahead", ],
6 },
7 npartitions=1)
----> 8 ddf["test"].str.contains(regex).head()
File /opt/conda/lib/python3.10/site-packages/dask_expr/_collection.py:702, in FrameBase.head(self, n, npartitions, compute)
700 out = new_collection(expr.Head(self, n=n, npartitions=npartitions))
701 if compute:
--> 702 out = out.compute()
703 return out
File /opt/conda/lib/python3.10/site-packages/dask_expr/_collection.py:476, in FrameBase.compute(self, fuse, **kwargs)
474 out = out.repartition(npartitions=1)
475 out = out.optimize(fuse=fuse)
--> 476 return DaskMethodsMixin.compute(out, **kwargs)
File /opt/conda/lib/python3.10/site-packages/dask/base.py:375, in DaskMethodsMixin.compute(self, **kwargs)
351 def compute(self, **kwargs):
352 """Compute this dask collection
353
354 This turns a lazy Dask collection into its in-memory equivalent.
(...)
373 dask.compute
374 """
--> 375 (result,) = compute(self, traverse=False, **kwargs)
376 return result
File /opt/conda/lib/python3.10/site-packages/dask/base.py:661, in compute(traverse, optimize_graph, scheduler, get, *args, **kwargs)
658 postcomputes.append(x.__dask_postcompute__())
660 with shorten_traceback():
--> 661 results = schedule(dsk, keys, **kwargs)
663 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
File /opt/conda/lib/python3.10/site-packages/dask_expr/_expr.py:3727, in Fused._execute_task(graph, name, *deps)
3725 for i, dep in enumerate(deps):
3726 graph["_" + str(i)] = dep
-> 3727 return dask.core.get(graph, name)
File /opt/conda/lib/python3.10/site-packages/dask_expr/_accessor.py:102, in FunctionMap.operation(obj, accessor, attr, args, kwargs)
100 @staticmethod
101 def operation(obj, accessor, attr, args, kwargs):
--> 102 out = getattr(getattr(obj, accessor, obj), attr)(*args, **kwargs)
103 return maybe_wrap_pandas(obj, out)
File /opt/conda/lib/python3.10/site-packages/pyarrow/compute.py:263, in _make_generic_wrapper.<locals>.wrapper(memory_pool, options, *args, **kwargs)
261 if args and isinstance(args[0], Expression):
262 return Expression._call(func_name, list(args), options)
--> 263 return func.call(args, options, memory_pool)
File /opt/conda/lib/python3.10/site-packages/pyarrow/_compute.pyx:385, in pyarrow._compute.Function.call()
File /opt/conda/lib/python3.10/site-packages/pyarrow/error.pxi:154, in pyarrow.lib.pyarrow_internal_check_status()
File /opt/conda/lib/python3.10/site-packages/pyarrow/error.pxi:91, in pyarrow.lib.check_status()
ArrowInvalid: Invalid regular expression: invalid perl operator: (?!
```
**Environment**:
- Dask version: 2024.2.1
- Python version: 3.10
- Operating System: Linux
- Install method (conda, pip, source): pip
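For context beyond the report: the `ArrowInvalid` comes from PyArrow's regex engine (Google RE2), which does not support lookaround constructs, while Python's own `re` engine does. A minimal sketch of the expected behavior under Python's engine:

```python
import re

regex = r'negativelookahead(?!/check)'
values = ["negativelookahead", "negativelookahead/check/negativelookahead"]

# Python's re engine accepts the negative lookahead; both strings match
# (the second matches on its final occurrence, which is not followed
# by "/check").
matches = [bool(re.search(regex, v)) for v in values]
print(matches)  # [True, True]
```

This is what `str.contains` returns on plain pandas object-dtype strings; the `ArrowInvalid` above is RE2 rejecting the `(?!` operator.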
|
closed
|
2024-07-15T07:23:02Z
|
2024-07-17T12:59:24Z
|
https://github.com/dask/dask/issues/11226
|
[
"needs triage"
] |
manschoe
| 3
|
iperov/DeepFaceLab
|
deep-learning
| 5,230
|
DFL training on RTX 3090 produces error "illegal instruction, core dumped" Linux but also some Windows installations
|
## Expected behavior
Training SAEHD or XSeg on DFL with RTX 3090, tensorflow 2.4.0
## Actual behavior
Python exits with "illegal instruction, core dumped" on the last line of the DFL script, which says "train".
This happens despite TensorFlow 2.4.0 correctly recognising the RTX 3090, and despite CUDA 11.0 or 11.1 and compatible NVIDIA drivers (455.28) all working correctly.
## Steps to reproduce
Install DFL on Windows or Linux as per Nagadit repository but use python 3.8 instead, and cudnn 8.0.5 and cudatoolkit 11.0 from Conda or 11.1 from Nvidia direct. Tensorflow 2.4.0
The same error also occurs on my friend's Windows 10 installation of DFL for RTX 3090.
**Solution:**
This will only apply to some people out there with older CPUs, but here is what I eventually found:
This is a Tensorflow 2.4.0 problem. Even if RTX 3090 works with TF 2.4.0, older CPUs do not in Linux and on some Windows builds it seems. TF requires AVX or AVX2 support. TF 2.3 supports AVX and AVX2. The tensorflow guys forgot to include AVX support in 2.4.0, despite it being compatible! Newer CPUs with AVX2 support will be ok.
I therefore compiled my own tensorflow for my machine, and this produced TF 2.5.0 which had AVX support. I can now fully train DFL using RTX 3090!
I don't have the Windows guide to compiling TF, but for linux TF you can use: https://www.tensorflow.org/install/source
I compiled with cudnn 8.0.5 dev files (filename libcudnn8-dev_8.0.5.39-1+cuda11.1) and cudatoolkit 11.1 installed.
This produced tensorflow 2.5.0 and this works great with RTX3090 and current DFL build.
Don't think this problem is common (less so on Windows machines) but hopefully of some use to someone out there
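Not from the original post, but a quick way to check whether a Linux CPU advertises AVX/AVX2 before picking a TF build (hypothetical helper; it reads `/proc/cpuinfo`, so it returns None on non-Linux systems):

```python
from pathlib import Path

def cpu_flags():
    """Return the set of CPU flags from /proc/cpuinfo (Linux), else None."""
    info = Path("/proc/cpuinfo")
    if not info.exists():
        return None
    for line in info.read_text().splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return None

flags = cpu_flags()
if flags is not None:
    print("AVX:", "avx" in flags, "AVX2:", "avx2" in flags)
```

If "avx2" is absent, a prebuilt TF 2.4.0 wheel will crash with exactly this kind of illegal-instruction error.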
|
closed
|
2021-01-02T23:51:20Z
|
2021-01-28T00:19:00Z
|
https://github.com/iperov/DeepFaceLab/issues/5230
|
[] |
Joe-121
| 2
|
deepset-ai/haystack
|
nlp
| 8,177
|
🧪 Tools: support for tools in 4 Chat Generators
|
```[tasklist]
### Tasks
- [ ] https://github.com/deepset-ai/haystack/issues/8178
- [ ] https://github.com/deepset-ai/haystack/issues/8190
- [ ] https://github.com/deepset-ai/haystack/issues/8261
- [ ] https://github.com/deepset-ai/haystack-experimental/pull/120
```
|
closed
|
2024-08-08T15:14:47Z
|
2024-10-30T11:25:42Z
|
https://github.com/deepset-ai/haystack/issues/8177
|
[
"P1"
] |
anakin87
| 1
|
betodealmeida/shillelagh
|
sqlalchemy
| 36
|
Implement different modes for GSheets DML
|
See https://github.com/betodealmeida/shillelagh/pull/35
|
closed
|
2021-06-27T02:40:01Z
|
2021-06-30T21:50:48Z
|
https://github.com/betodealmeida/shillelagh/issues/36
|
[] |
betodealmeida
| 1
|
rthalley/dnspython
|
asyncio
| 1,174
|
custom verify path for dns.query.quic() and dns.query.https() (h3) only works for files, not dirs
|
**Describe the bug**
Providing a custom verify path for `dns.query.quic()` and `dns.query.https()` (h3 only) lookups only works when the path is a file because this call to `aioquic.quic.configuration.QuicConfiguration.load_verify_locations()`:
https://github.com/rthalley/dnspython/blob/19a5f048ec2fdd60ca6e5cd8b68d5b70ad8e0556/dns/quic/_common.py#L248
uses positional args and always hits the first arg which is `cafile`:
https://github.com/aiortc/aioquic/blob/9bc1e43d13be3f06339841aca7c8560825053371/src/aioquic/quic/configuration.py#L153
I think the right fix would be to use `pathlib` to determine whether the arg is a dir or a file and pass the `cafile` or `capath` keyword arg to `aioquic.quic.configuration.QuicConfiguration.load_verify_locations()` as appropriate. In my own code I also support passing it as `cadata` (the third option) if `pathlib` says it is neither a file nor a dir.
I would be happy to provide a PR if you agree the above would be the right fix. Let me know :)
**To Reproduce**
In this example `/etc/ssl/certs/` is a normal linux ca dir with `c_rehash` style symlinks and also one big file with all the certs at `/etc/ssl/certs/ca-certificates.crt`. The first example shows the issue, the second shows it working with a file, the third shows the dir working with `dns.query.tls()` just for reference
```
(venv) user@privat-dev:~/devel/dns_exporter/src$ python
Python 3.10.13 (main, Nov 15 2023, 13:09:29) [GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dns.message, dns.name, dns.query
>>> dns.query.quic(q, "91.239.100.100", port=853, server_hostname="anycast.censurfridns.dk", verify="/etc/ssl/certs/")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/devel/dnspython/dns/query.py", line 1424, in quic
with the_connection.make_stream(timeout) as stream: # pyright: ignore
File "/home/user/devel/dnspython/dns/quic/_sync.py", line 236, in make_stream
raise UnexpectedEOF
dns.quic._common.UnexpectedEOF
>>> dns.query.quic(q, "91.239.100.100", port=853, server_hostname="anycast.censurfridns.dk", verify="/etc/ssl/certs/ca-certificates.crt")
<DNS message, ID 0>
>>> dns.query.tls(q, "91.239.100.100", server_hostname="anycast.censurfridns.dk", verify="/etc/ssl/certs/")
<DNS message, ID 0>
>>>
```
**Context (please complete the following information):**
- dnspython 2.7.0
- Python 3.10.13
- OS: qubes (debian)
|
closed
|
2025-01-12T21:24:51Z
|
2025-01-29T19:37:10Z
|
https://github.com/rthalley/dnspython/issues/1174
|
[
"Bug",
"Fixed"
] |
tykling
| 2
|
deepinsight/insightface
|
pytorch
| 1,960
|
blank
|
closed
|
2022-04-04T08:29:03Z
|
2022-04-04T13:56:50Z
|
https://github.com/deepinsight/insightface/issues/1960
|
[] |
huynhtruc0309
| 0
|
|
pytorch/pytorch
|
machine-learning
| 149,389
|
[Docs] `torch.Library`'s `kind` is inconsistent with the code
|
### 🐛 Describe the bug
The doc says that `kind` defaults to `IMPL` but it actually does not.
<img width="821" alt="Image" src="https://github.com/user-attachments/assets/2eb7b65a-d642-4a13-b111-edc43080b3a0" />
Calling `torch.library.Library("fsdp")` will get this:
```
TypeError: Library.__init__() missing 1 required positional argument: 'kind'
```
### Versions
main
cc @anjali411 @chauhang @penguinwu @zou3519 @bdhirsh
|
closed
|
2025-03-18T08:40:02Z
|
2025-03-21T04:42:13Z
|
https://github.com/pytorch/pytorch/issues/149389
|
[
"triaged",
"actionable",
"module: library"
] |
shink
| 0
|
Guovin/iptv-api
|
api
| 660
|
For the past few days, the workflow has hung at the sorting stage; every run freezes at about 90% (around 400 addresses)
|
### Don't skip these steps
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
- [X] I have checked through the search that there are no similar issues that already exist
- [X] I will not submit any issues that are not related to this project
### Occurrence environment
- [X] Workflow
- [ ] GUI
- [ ] Docker
- [ ] Command line
### Question description

### Related log
```shell
2024-12-11T22:49:36.9375379Z Sorting: 88%|████████▊ | 398/452 [08:28<00:22, 2.43it/s]
2024-12-11T22:49:39.4137289Z Sorting: 88%|████████▊ | 399/452 [08:30<00:35, 1.49it/s]
2024-12-11T22:49:40.4403014Z Sorting: 88%|████████▊ | 400/452 [08:32<00:51, 1.01it/s]
2024-12-11T22:49:42.2786633Z Sorting: 89%|████████▊ | 401/452 [08:33<00:51, 1.00s/it]
2024-12-11T22:49:43.4948484Z Sorting: 89%|████████▉ | 402/452 [08:35<00:59, 1.19s/it]
2024-12-11T22:49:43.6563080Z Sorting: 89%|████████▉ | 404/452 [08:37<00:46, 1.04it/s]
2024-12-11T22:49:45.9195749Z Sorting: 90%|████████▉ | 406/452 [08:37<00:29, 1.55it/s]
2024-12-11T22:49:47.9230660Z Sorting: 90%|█████████ | 409/452 [08:39<00:29, 1.44it/s]
2024-12-11T22:49:48.1289660Z Sorting: 91%|█████████ | 410/452 [08:41<00:38, 1.09it/s]
2024-12-11T22:49:50.1347287Z Sorting: 91%|█████████ | 411/452 [08:41<00:31, 1.29it/s]
2024-12-11T22:49:50.5647559Z Sorting: 92%|█████████▏| 414/452 [08:43<00:27, 1.37it/s]
2024-12-11T22:49:52.5679480Z Sorting: 92%|█████████▏| 415/452 [08:44<00:24, 1.48it/s]
2024-12-11T22:49:53.8792987Z Sorting: 92%|█████████▏| 416/452 [08:46<00:34, 1.06it/s]
2024-12-11T22:49:54.1267363Z Sorting: 92%|█████████▏| 417/452 [08:47<00:35, 1.03s/it]
2024-12-11T22:49:55.1093107Z Sorting: 92%|█████████▏| 418/452 [08:47<00:28, 1.19it/s]
2024-12-11T22:49:57.1147241Z Sorting: 93%|█████████▎| 419/452 [08:48<00:28, 1.14it/s]
2024-12-11T22:49:59.6656754Z Sorting: 94%|█████████▍| 424/452 [08:50<00:15, 1.76it/s]
2024-12-11T22:50:01.9091305Z Sorting: 94%|█████████▍| 425/452 [08:53<00:23, 1.14it/s]
2024-12-11T22:50:03.3305628Z Sorting: 94%|█████████▍| 427/452 [08:55<00:23, 1.05it/s]
2024-12-11T22:50:06.2629138Z Sorting: 95%|█████████▍| 428/452 [08:56<00:24, 1.04s/it]
2024-12-12T00:40:02.5542511Z ##[error]The operation was canceled.
2024-12-12T00:40:02.5624821Z Post job cleanup.
2024-12-12T00:40:02.6329510Z [command]/usr/bin/git version
2024-12-12T00:40:02.6365704Z git version 2.47.1
2024-12-12T00:40:02.6412663Z Temporarily overriding HOME='/home/runner/work/_temp/869a1f78-211b-41bc-acae-71e77d7ab470' before making global git config changes
2024-12-12T00:40:02.6413589Z Adding repository directory to the temporary git global config as a safe directory
2024-12-12T00:40:02.6416682Z [command]/usr/bin/git config --global --add safe.directory /home/runner/work/zg/zg
2024-12-12T00:40:02.6447380Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
2024-12-12T00:40:02.6475288Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
2024-12-12T00:40:02.6693801Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
2024-12-12T00:40:02.6713414Z http.https://github.com/.extraheader
2024-12-12T00:40:02.6724364Z [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader
2024-12-12T00:40:02.6752185Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
2024-12-12T00:40:02.7073806Z Cleaning up orphan processes
2024-12-12T00:40:02.7270384Z Terminate orphan process: pid (2152) (python)
```
|
closed
|
2024-12-12T08:07:40Z
|
2024-12-12T08:31:06Z
|
https://github.com/Guovin/iptv-api/issues/660
|
[
"duplicate",
"question"
] |
zg4321
| 3
|
iterative/dvc
|
data-science
| 10,306
|
pull: "Fetching" step takes forever
|
# pull: "Fetching" takes forever
## Description
Since the update to version 3.45, ``dvc pull`` started spending a massive amount of time on "Fetching".
I can't tell precisely what the reason is, but at minimum the md5 of a large file is recomputed across different ``dvc pull`` executions, even though the documentation states the computation is done only once.
### Reproduce
1. dvc pull
### Expected
The "Fetching" step should finish quickly, which is what I see on another device running DVC 3.38.1.
### Environment information
Problematic environment:
- OS: macOS Sonoma 14.3
- DVC: 3.45.0 (brew)
- Remote storage: S3 bucket
Properly working environment:
- OS: Ubuntu 22.04.3 LTS
- DVC: 3.38.1 (pip)
- Remote storage: S3 bucket (the same of before)
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 3.45.0 (brew)
--------------------------
Platform: Python 3.12.2 on macOS-14.3-arm64-arm-64bit
Subprojects:
dvc_data = 3.13.0
dvc_objects = 5.0.0
dvc_render = 1.0.1
dvc_task = 0.3.0
scmrepo = 3.1.0
Supports:
azure (adlfs = 2024.2.0, knack = 0.11.0, azure-identity = 1.15.0),
gdrive (pydrive2 = 1.19.0),
gs (gcsfs = 2024.2.0),
http (aiohttp = 3.9.3, aiohttp-retry = 2.8.3),
https (aiohttp = 3.9.3, aiohttp-retry = 2.8.3),
oss (ossfs = 2023.12.0),
s3 (s3fs = 2024.2.0, boto3 = 1.34.34),
ssh (sshfs = 2023.10.0),
webdav (webdav4 = 0.9.8),
webdavs (webdav4 = 0.9.8),
webhdfs (fsspec = 2024.2.0)
Config:
Global: /Users/zhf231298/Library/Application Support/dvc
System: /opt/homebrew/share/dvc
```
|
closed
|
2024-02-16T16:01:25Z
|
2024-04-26T15:36:46Z
|
https://github.com/iterative/dvc/issues/10306
|
[
"bug",
"performance",
"regression"
] |
zhf231298
| 5
|
ray-project/ray
|
data-science
| 51,483
|
CI test windows://python/ray/tests:test_ray_init_2 is consistently_failing
|
CI test **windows://python/ray/tests:test_ray_init_2** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aad4-a541-45a9-b1ef-d27f9a1da383
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4168-a0da-6cbdc8cbd2df
DataCaseName-windows://python/ray/tests:test_ray_init_2-END
Managed by OSS Test Policy
|
closed
|
2025-03-18T23:07:30Z
|
2025-03-19T21:54:11Z
|
https://github.com/ray-project/ray/issues/51483
|
[
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] |
can-anyscale
| 2
|
pytorch/pytorch
|
numpy
| 149,037
|
(Will PR if ok) Support generator returning values
|
### 🐛 Describe the bug
Hi thanks for the library! It would be great if generators returning values could be supported. I will make a PR if this feature looks OK.
For example:
```python
import torch
def exhaust_generator(g):
ans = []
while True:
try:
ans.append(next(g))
except StopIteration as e:
ans.append(e.value)
break
return ans
def outer():
x = torch.tensor([1000])
output_from_inner = yield from inner(x)
yield output_from_inner
yield x + 10
yield x + 20
return x + 30 # DOES NOT WORK
def inner(x):
yield x + 1
yield x + 2
return x + 3 # DOES NOT WORK
print(exhaust_generator(outer()))
print(torch.compile(lambda: exhaust_generator(outer()))())
```
It prints the following (note the two `None`s):
```
[tensor([1001]), tensor([1002]), tensor([1003]), tensor([1010]), tensor([1020]), tensor([1030])]
[tensor([1001]), tensor([1002]), None, tensor([1010]), tensor([1020]), None]
```
In other words, the `return` in generator functions are silently removed.
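In plain CPython the return value travels on `StopIteration.value` and is what `yield from` evaluates to — a minimal sketch of the semantics that `torch.compile` appears to drop (names are illustrative):

```python
def demo_inner():
    yield 1
    yield 2
    return 3  # delivered to the caller via StopIteration.value

def demo_outer():
    result = yield from demo_inner()  # evaluates to 3 when inner returns
    yield result

print(list(demo_outer()))  # [1, 2, 3]
```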
### Error logs
(see above)
### Versions
Latest master
c c @guilhermeleobas who made the generator support :)
(below is not done by me but done by GitHub auto template; it seems the bot wants to change my cc above... so try "c c")
cc @chauhang @penguinwu
|
open
|
2025-03-12T12:17:33Z
|
2025-03-13T06:30:28Z
|
https://github.com/pytorch/pytorch/issues/149037
|
[
"triaged",
"oncall: pt2"
] |
fzyzcjy
| 6
|
JaidedAI/EasyOCR
|
machine-learning
| 543
|
Missing chars from latin model
|
Hi! There are missing characters in the latin model: I cannot see the `ő` and `Ő` characters, which are used in Hungarian. Can you add them and update your latin model?
Off-topic: the Hungarian language file is incorrect, so I will provide a language update in a pull request later.
|
closed
|
2021-09-21T07:21:22Z
|
2022-05-31T12:03:41Z
|
https://github.com/JaidedAI/EasyOCR/issues/543
|
[] |
timurlenk07
| 3
|
nvbn/thefuck
|
python
| 1,392
|
Last history command contained "\" and thefuck gets a fatal error
|
<!-- If you have any issue with The Fuck, sorry about that, but we will do what we
can to fix that. Actually, maybe we already have, so first thing to do is to
update The Fuck and see if the bug is still there. -->
<!-- If it is (sorry again), check if the problem has not already been reported and
if not, just open an issue on [GitHub](https://github.com/nvbn/thefuck) with
the following basic information: -->
The output of `thefuck --version` (something like `The Fuck 3.1 using Python
3.5.0 and Bash 4.4.12(1)-release`):
The Fuck 3.32 using Python 3.11.3 and ZSH 5.9
Your system (Debian 7, ArchLinux, Windows, etc.):
ArchLinux
How to reproduce the bug:
the bug occurs with any command that ends with a backslash
```
any command in Linux with \
```
for example
```
git commut -a -m "commit message" \ <-- backslash in last position
```
then run thefuck:
fuck
```
Traceback (most recent call last):
File "/usr/bin/thefuck", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/lib/python3.11/site-packages/thefuck/entrypoints/main.py", line 31, in main
fix_command(known_args)
File "/usr/lib/python3.11/site-packages/thefuck/entrypoints/fix_command.py", line 37, in fix_command
command = types.Command.from_raw_script(raw_command)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/thefuck/types.py", line 82, in from_raw_script
output = get_output(script, expanded)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/thefuck/output_readers/__init__.py", line 20, in get_output
return rerun.get_output(script, expanded)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/thefuck/output_readers/rerun.py", line 66, in get_output
if _wait_output(result, is_slow):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/thefuck/output_readers/rerun.py", line 36, in _wait_output
proc.wait(settings.wait_slow_command if is_slow
File "/usr/lib/python3.11/site-packages/psutil/__init__.py", line 1270, in wait
self._exitcode = self._proc.wait(timeout)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/psutil/_pslinux.py", line 1653, in wrapper
return fun(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/psutil/_pslinux.py", line 1859, in wait
return _psposix.wait_pid(self.pid, timeout, self._name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/psutil/_psposix.py", line 137, in wait_pid
interval = sleep(interval)
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/psutil/_psposix.py", line 115, in sleep
_sleep(interval)
KeyboardInterrupt
```
|
closed
|
2023-07-27T13:00:47Z
|
2023-10-02T13:42:07Z
|
https://github.com/nvbn/thefuck/issues/1392
|
[] |
badcast
| 2
|
allenai/allennlp
|
nlp
| 5,123
|
Make sure that metrics in allennlp-models work in the distributed setting
|
closed
|
2021-04-14T18:35:28Z
|
2021-04-19T07:04:24Z
|
https://github.com/allenai/allennlp/issues/5123
|
[] |
AkshitaB
| 1
|
|
OthersideAI/self-operating-computer
|
automation
| 174
|
[FEATURE] Learning Process
|
If there were a learning process before the actual task, the agent would work more accurately rather than navigating to unnecessary places or clicking on the wrong options.
AppAgent, which is built for smartphones, has human intervention with a learning feature: the user demonstrates how the task is done, and the agent then acts upon that learning.
This would make repetitive tasks faster and improve the flow, reducing time and errors while completing tasks.
Thank You :)
|
open
|
2024-03-02T19:15:38Z
|
2024-03-06T07:57:56Z
|
https://github.com/OthersideAI/self-operating-computer/issues/174
|
[
"enhancement"
] |
MirzaAreebBaig
| 1
|
drivendataorg/cookiecutter-data-science
|
data-science
| 278
|
adding Citation files (CFF) to cookiecutter-data-science template
|
With the upcoming release of v2, I think this would be a nice addition.
With the addition of this CFF file Github enables academics and researchers to let people know how to correctly cite their work, especially in academic publications/materials. Originally proposed by the [research software engineering community](https://www.software.ac.uk/blog/2017-12-12-standard-format-citation-files), [CITATION.cff](https://citation-file-format.github.io/) files are plain text files with human- and machine-readable citation information. When we detect a CITATION.cff file in a repository, we use this information to create convenient [APA](https://apastyle.apa.org/) or [BibTeX](https://en.wikipedia.org/wiki/BibTeX) style citation links that can be referenced by others.
This can be done easily with just a few lines, as seen in my [branch here](https://github.com/kjgarza/cookiecutter-data-science/tree/citation-cff) (and its corresponding https://github.com/drivendata/cookiecutter-data-science/pull/274)
|
closed
|
2022-08-21T07:04:12Z
|
2024-06-01T22:48:53Z
|
https://github.com/drivendataorg/cookiecutter-data-science/issues/278
|
[] |
kjgarza
| 0
|
iperov/DeepFaceLab
|
machine-learning
| 5,228
|
"data_src faceset extract" Failing
|
Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : GeForce GTX 960M
[0] Which GPU indexes to choose? :
0
[wf] Face type ( f/wf/head ?:help ) :
wf
[0] Max number of faces from image ( ?:help ) :
0
[512] Image size ( 256-2048 ?:help ) :
512
[90] Jpeg quality ( 1-100 ?:help ) :
90
[n] Write debug images to aligned_debug? ( y/n ) :
n
Extracting faces...
Caching GPU kernels...
Error while subprocess initialization: Traceback (most recent call last):
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 62, in _subprocess_run
self.on_initialize(client_dict)
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\DeepFaceLab\mainscripts\Extractor.py", line 68, in on_initialize
nn.initialize (device_config)
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\DeepFaceLab\core\leras\nn.py", line 113, in initialize
nn.tf_sess = tf.Session(config=nn.tf_sess_config)
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1596, in __init__
super(Session, self).__init__(target, graph, config=config)
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 711, in __init__
self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: initialization error
Traceback (most recent call last):
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\DeepFaceLab\main.py", line 324, in <module>
arguments.func(arguments)
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\DeepFaceLab\main.py", line 45, in process_extract
force_gpu_idxs = [ int(x) for x in arguments.force_gpu_idxs.split(',') ] if arguments.force_gpu_idxs is not None else None,
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\DeepFaceLab\mainscripts\Extractor.py", line 853, in main
device_config=device_config).run()
File "E:\DeepFaceLab_NVIDIA - Copy\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 210, in run
raise Exception ( "Unable to start subprocesses." )
Exception: Unable to start subprocesses.
Press any key to continue . . .
|
open
|
2021-01-02T11:18:19Z
|
2023-06-08T21:53:08Z
|
https://github.com/iperov/DeepFaceLab/issues/5228
|
[] |
adam-eme
| 2
|
ultralytics/yolov5
|
pytorch
| 12,850
|
Inaccurate bounding boxes when detecting large images
|
### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I'm trying to detect a circle with a diameter of 40 in an image with a resolution of 3500*13000 (~100M), and I get a bounding box that is far from the circle. What can I do to make the center of the detection boxes close to the center of the circle?

### Additional
The training set has 2000 images, and the bounding boxes exactly frame the circular hole when making the training set.
_No response_
|
closed
|
2024-03-25T12:48:34Z
|
2024-10-20T19:42:15Z
|
https://github.com/ultralytics/yolov5/issues/12850
|
[
"question",
"Stale"
] |
kyoryuuu
| 5
|
jupyter/nbgrader
|
jupyter
| 1,305
|
Error with late submission plugin class
|
<!--
Thanks for helping to improve nbgrader!
If you are submitting a bug report or looking for support, please use the below
template so we can efficiently solve the problem.
If you are requesting a new feature, feel free to remove irrelevant pieces of
the issue template.
-->
### Operating system
Ubuntu 16.04.6 LTS
### `nbgrader --version`
0.6.1
### `jupyterhub --version` (if used with JupyterHub)
1.1.0
### `jupyter notebook --version`
6.0.3
### Expected behavior
Autograding with custom late submission penalty class applies the specified penalty
### Actual behavior
`nbgrader autograde <assignment name>` fails with error message
"TypeError: late_submission_penalty() takes 3 positional arguments but 4 were given"
### Steps to reproduce the behavior
Steps are copied from the [example in the docs](https://nbgrader.readthedocs.io/en/stable/plugins/late-plugin.html):
- placed file `late.py` into course directory:
```python
from nbgrader.plugins import BasePlugin
class SubMarks(BasePlugin):
def late_submission_penalty(student_id, score, total_seconds_late):
"""Penalty of 1 mark per hour late"""
hours_late = total_seconds_late / 3600
return round(hours_late, 0)
```
- Added `c.AssignLatePenalties.plugin_class = 'late.SubMarks'` to `nbgrader_config.py` in the course directory.
- Autograded a submitted assignment named "PHY332-test1":
`nbgrader autograde PHY332-test1`
- This command failed with the error message
```
[AutogradeApp | INFO] Copying /home/axel/PHY332/submitted/student1/PHY332-test1/timestamp.txt -> /home/axel/PHY332/autograded/student1/PHY332-test1/timestamp.txt
[AutogradeApp | INFO] Creating/updating student with ID 'student1': {}
[AutogradeApp | INFO] SubmittedAssignment<PHY332-test1 for student1> submitted at 2020-01-22 21:25:35.209420
[AutogradeApp | WARNING] SubmittedAssignment<PHY332-test1 for student1> is 177935.20942 seconds late
[AutogradeApp | INFO] Overwriting files with master versions from the source directory
[AutogradeApp | INFO] Sanitizing /home/axel/PHY332/submitted/student1/PHY332-test1/cube_escape.ipynb
[AutogradeApp | INFO] Converting notebook /home/axel/PHY332/submitted/student1/PHY332-test1/cube_escape.ipynb
[AutogradeApp | INFO] Writing 7939 bytes to /home/axel/PHY332/autograded/student1/PHY332-test1/cube_escape.ipynb
[AutogradeApp | INFO] Autograding /home/axel/PHY332/autograded/student1/PHY332-test1/cube_escape.ipynb
[AutogradeApp | INFO] Converting notebook /home/axel/PHY332/autograded/student1/PHY332-test1/cube_escape.ipynb
[AutogradeApp | INFO] Executing notebook with kernel: python3
[AutogradeApp | WARNING] SubmittedAssignment<PHY332-test1 for student1> is 177935.20942 seconds late
[AutogradeApp | ERROR] There was an error processing assignment: /home/axel/PHY332/submitted/student1/PHY332-test1
[AutogradeApp | ERROR] Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/nbgrader/converters/base.py", line 336, in convert_notebooks
self.convert_single_notebook(notebook_filename)
File "/usr/local/lib/python3.5/dist-packages/nbgrader/converters/autograde.py", line 195, in convert_single_notebook
super(Autograde, self).convert_single_notebook(notebook_filename)
File "/usr/local/lib/python3.5/dist-packages/nbgrader/converters/base.py", line 292, in convert_single_notebook
output, resources = self.exporter.from_filename(notebook_filename, resources=resources)
File "/usr/local/lib/python3.5/dist-packages/nbconvert/exporters/exporter.py", line 179, in from_filename
return self.from_file(f, resources=resources, **kw)
File "/usr/local/lib/python3.5/dist-packages/nbconvert/exporters/exporter.py", line 197, in from_file
return self.from_notebook_node(nbformat.read(file_stream, as_version=4), resources=resources, **kw)
File "/usr/local/lib/python3.5/dist-packages/nbconvert/exporters/notebook.py", line 32, in from_notebook_node
nb_copy, resources = super(NotebookExporter, self).from_notebook_node(nb, resources, **kw)
File "/usr/local/lib/python3.5/dist-packages/nbconvert/exporters/exporter.py", line 139, in from_notebook_node
nb_copy, resources = self._preprocess(nb_copy, resources)
File "/usr/local/lib/python3.5/dist-packages/nbconvert/exporters/exporter.py", line 316, in _preprocess
nbc, resc = preprocessor(nbc, resc)
File "/usr/local/lib/python3.5/dist-packages/nbconvert/preprocessors/base.py", line 47, in __call__
return self.preprocess(nb, resources)
File "/usr/local/lib/python3.5/dist-packages/nbgrader/preprocessors/latesubmissions.py", line 66, in preprocess
self.student_id, notebook.score, assignment.total_seconds_late)
TypeError: late_submission_penalty() takes 3 positional arguments but 4 were given
[AutogradeApp | WARNING] Removing failed assignment: /home/axel/PHY332/autograded/student1/PHY332-test1
[AutogradeApp | ERROR] There was an error processing assignment 'PHY332-test1' for student 'student1'
[AutogradeApp | ERROR] Please see the the above traceback for details on the specific errors on the above failures.
```
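The `TypeError` above is a signature mismatch: the traceback shows the preprocessor calling the plugin hook with three arguments besides `self`, while the plugin's method accepts only two. A minimal sketch of the mismatch and the fix, with illustrative class names and a hypothetical penalty policy (not nbgrader's actual code):

```python
# Sketch of the signature mismatch behind the traceback above. The hook is
# called with (student_id, score, total_seconds_late), so a plugin defining
# fewer parameters raises "takes 3 positional arguments but 4 were given".
# Class names and the penalty policy here are illustrative assumptions.

class BrokenPlugin:
    # Missing the third parameter -> TypeError when called with three args.
    def late_submission_penalty(self, student_id, score):
        return 0

class FixedPlugin:
    # Accepting all three arguments the preprocessor passes fixes the call.
    def late_submission_penalty(self, student_id, score, total_seconds_late):
        # Illustrative policy: deduct one point per full day late, capped at score.
        days_late = int(total_seconds_late // 86400)
        return min(score, days_late)

try:
    BrokenPlugin().late_submission_penalty("student1", 10.0, 3600)
except TypeError as e:
    print("reproduced:", e)

print(FixedPlugin().late_submission_penalty("student1", 10.0, 2 * 86400))
```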
|
open
|
2020-01-22T22:01:16Z
|
2020-01-22T22:01:16Z
|
https://github.com/jupyter/nbgrader/issues/1305
|
[] |
amellinger
| 0
|
onnx/onnxmltools
|
scikit-learn
| 311
|
keras2onnx doesn't support Python 2 and causes pip installation to fail.
|
Env: python 2.7.15
Steps to reproduce:
```
$ pip install onnxmltools
...
Collecting keras2onnx (from onnxmltools)
Could not find a version that satisfies the requirement keras2onnx (from onnxmltools) (from versions: )
No matching distribution found for keras2onnx (from onnxmltools)
```
|
closed
|
2019-06-09T01:06:35Z
|
2019-09-25T17:32:33Z
|
https://github.com/onnx/onnxmltools/issues/311
|
[] |
turtleizzy
| 3
|
streamlit/streamlit
|
data-visualization
| 10,350
|
st.logo randomly disappears after a while
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
The image that I render with st.logo (I am using the nightly version) randomly disappears after a few minutes.
### Reproducible Code Example
```Python
```
### Steps To Reproduce
_No response_
### Expected Behavior
_No response_
### Current Behavior
_No response_
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version:
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_
|
closed
|
2025-02-05T14:32:40Z
|
2025-02-13T23:59:20Z
|
https://github.com/streamlit/streamlit/issues/10350
|
[
"type:bug",
"status:confirmed",
"priority:P2",
"feature:st.fragment",
"feature:st.logo"
] |
Martijn3161
| 4
|
proplot-dev/proplot
|
data-visualization
| 22
|
proplot having issues with `xarray` objects
|
Currently, plotting values from an `xarray.DataArray` makes `proplot` throw an error. Note that this was not previously an issue.
The following works (note that `A.values` has to be called, but `A.time.values` does not, so this is only an issue with the actual data being plotted, not the coordinates):
```python
import numpy as np
import xarray as xr
import proplot as plot
A = np.random.rand(120,)
A = xr.DataArray(A, dims='time')
A['time'] = np.arange('1990-01', '2000-01', dtype='datetime64[M]')
f, ax = plot.subplots(width='12cm', aspect=4)
ax.plot(A.time, A.values)
```
This does not work:
```python
import numpy as np
import xarray as xr
import proplot as plot
A = np.random.rand(120,)
A = xr.DataArray(A, dims='time')
A['time'] = np.arange('1990-01', '2000-01', dtype='datetime64[M]')
f, ax = plot.subplots(width='12cm', aspect=4)
ax.plot(A.time, A)
```
```python-traceback
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-37-ae88108929a8> in <module>
5 A['time'] = np.arange('1990-01', '2000-01', dtype='datetime64[M]')
6 f, ax = plot.subplots(width='12cm', aspect=4)
----> 7 ax.plot(A.time, A)
~/miniconda3/envs/python3/lib/python3.7/site-packages/proplot/subplots.py in iterator(*args, **kwargs)
129 ret = []
130 for func in attrs:
--> 131 ret.append(func(*args, **kwargs))
132 if len(ret)==1:
133 return ret[0]
~/miniconda3/envs/python3/lib/python3.7/site-packages/proplot/wrappers.py in wrapper(*args, **kwargs)
2555 @functools.wraps(func)
2556 def wrapper(*args, **kwargs):
-> 2557 return driver(self, func, *args, **kwargs)
2558 return wrapper
2559 return decorator
~/miniconda3/envs/python3/lib/python3.7/site-packages/proplot/wrappers.py in _parse_1d(self, func, *args, **kwargs)
312 if kw:
313 self.format(**kw)
--> 314 return func(x, *yss, *args, **kwargs)
315
316 def _parse_2d(self, func, *args, order='C', **kwargs):
~/miniconda3/envs/python3/lib/python3.7/site-packages/proplot/wrappers.py in wrapper(*args, **kwargs)
2555 @functools.wraps(func)
2556 def wrapper(*args, **kwargs):
-> 2557 return driver(self, func, *args, **kwargs)
2558 return wrapper
2559 return decorator
~/miniconda3/envs/python3/lib/python3.7/site-packages/proplot/wrappers.py in plot_wrapper(self, func, cmap, values, *args, **kwargs)
455 raise ValueError(f'Expected 1-3 plot args, got {len(args)}.')
456 if cmap is None:
--> 457 lines = func(*args, **kwargs)
458 else:
459 lines = self.cmapline(*args, cmap=cmap, values=values, **kwargs)
~/miniconda3/envs/python3/lib/python3.7/site-packages/proplot/wrappers.py in wrapper(*args, **kwargs)
2555 @functools.wraps(func)
2556 def wrapper(*args, **kwargs):
-> 2557 return driver(self, func, *args, **kwargs)
2558 return wrapper
2559 return decorator
~/miniconda3/envs/python3/lib/python3.7/site-packages/proplot/wrappers.py in cycle_wrapper(self, func, cycle, cycle_kw, markers, linestyles, label, labels, values, legend, legend_kw, colorbar, colorbar_kw, *args, **kwargs)
1517 pass
1518 elif isinstance(y, DataArray):
-> 1519 label = y.coords[y.dims[1]].values[i]
1520 label_cl = _auto_label(y.coords[y.dims[1]]) # coordinate label
1521 elif isinstance(y, DataFrame):
IndexError: tuple index out of range
```
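The `IndexError` comes from `cycle_wrapper` indexing `y.dims[1]`, which does not exist for a 1-D `DataArray`. A minimal sketch of the failure mode and a possible guard; this mirrors the traceback above but is not proplot's actual code:

```python
# Guarding the per-column label lookup on ndim avoids the IndexError for
# 1-D DataArrays. The label_for helper is an illustrative stand-in for the
# logic in proplot's cycle_wrapper, not the real implementation.
import numpy as np
import xarray as xr

y = xr.DataArray(np.random.rand(120), dims="time")

def label_for(y, i=0):
    if isinstance(y, xr.DataArray) and y.ndim > 1:
        return y.coords[y.dims[1]].values[i]  # per-column label for 2-D data
    return None  # 1-D data has no second dimension to label

print(label_for(y))  # None instead of IndexError
```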
|
closed
|
2019-06-27T18:48:55Z
|
2019-09-14T21:22:55Z
|
https://github.com/proplot-dev/proplot/issues/22
|
[
"bug"
] |
bradyrx
| 4
|
matplotlib/matplotlib
|
matplotlib
| 29,090
|
[MNT]: More consistent color parameters for bar()
|
### Summary
From #29072. `bar()` supports
- `color` : color or list of color
- `edgecolor` : color or list of color
- `facecolor`: color
i.e.
- `facecolor` cannot take a sequence
- there are no plural aliases (e.g. `edgecolors`)
- likely (t.b.c.) the aliases also do not support sequences, similar to #28884
### Proposed fix
Make `facecolor` accept sequences and check that the parameter precedence among `color`, `edgecolor` and `facecolor` is reasonable and comparable with `scatter`, which can take an explicit color via `c` (equivalent to `color` here).
For now, I'd refrain from introducing plural aliases. `bar()` is originally and primarily a style-all-bars-identically function. Per-bar styling was added later, and I don't think there's a strong need to support this additional use case with an added plural alias.
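As a quick illustration of the behavior described above, `color` and `edgecolor` already accept one entry per bar (behavior as of this report; it may change once the proposed fix lands):

```python
# Per-bar fill and edge colors via color/edgecolor lists already work for
# bar(); facecolor is the odd one out per the report above. Values are
# arbitrary example data.
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
bars = ax.bar([0, 1, 2], [3, 1, 2],
              color=["C0", "C1", "C2"],   # per-bar fill: accepted
              edgecolor=["k", "r", "b"])  # per-bar edges: accepted
print([b.get_facecolor() for b in bars])
plt.close(fig)
```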
|
closed
|
2024-11-05T22:53:22Z
|
2024-11-30T19:54:18Z
|
https://github.com/matplotlib/matplotlib/issues/29090
|
[
"Maintenance"
] |
timhoffm
| 1
|
tqdm/tqdm
|
jupyter
| 1,192
|
cannot install from source package tqdm-4.61.0.tar.gz
|
Because of my offline environment, I installed tqdm from the source package on PyPI.
But after running `pip install tqdm-4.61.1.tar.gz`, I got "Successfully built UNKNOWN" instead of tqdm. How can I fix this?
Thanks
|
closed
|
2021-06-23T03:04:35Z
|
2021-07-29T10:54:29Z
|
https://github.com/tqdm/tqdm/issues/1192
|
[
"invalid ⛔",
"need-feedback 📢",
"p3-framework ⚒"
] |
CnBDM-Su
| 2
|
scrapy/scrapy
|
python
| 6,561
|
Improve the contribution documentation
|
It would be nice to have something like [this](https://github.com/scrapy/scrapy/issues/1615#issuecomment-2497663596) in a section of the contribution docs that we can link easily to such questions.
|
closed
|
2024-11-25T10:52:01Z
|
2024-12-12T10:38:31Z
|
https://github.com/scrapy/scrapy/issues/6561
|
[
"enhancement",
"docs"
] |
Gallaecio
| 2
|
babysor/MockingBird
|
deep-learning
| 31
|
Strange mel spectrogram and noise when using the pretrained model
|

voicepart1.mp3 is a 10-second recording containing 7 sentences.

voicepart2.wav is a similar 5-second clip.
The synthesized result is always about 2 seconds of background noise, regardless of the input length.

|
closed
|
2021-08-22T19:21:49Z
|
2021-08-23T03:40:33Z
|
https://github.com/babysor/MockingBird/issues/31
|
[] |
wfjsw
| 6
|
wger-project/wger
|
django
| 1,180
|
Server Error (500) on API /workout/:id/log_data
|
Hi,
I am testing the app and found an issue while investigating a bug with the mobile app (see wger-project/flutter#291) .
The endpoint in question always answers with 500 Internal Server Error.
After investigation it seems related to
```
wger/manager/api/views.py:106
```
In method `log_data` the object `Exercise` doesn't have a `workoutlog_set` but `ExerciseBase` does! So it seems like an easy fix. Is there anything I'm missing?
I tried to implement the fix and everything seems to be working, can anyone review my fix?
Thanks for the great app!
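The fix described above can be sketched with plain classes standing in for the Django models (no Django required to run this; the `exercise_base` attribute name and the query shape are assumptions for illustration, not wger's actual models):

```python
# The reverse relation lives on ExerciseBase, not Exercise, so log_data must
# go through exercise.exercise_base.workoutlog_set instead of the nonexistent
# exercise.workoutlog_set. All classes below are illustrative stand-ins.

class WorkoutLogSet:
    def __init__(self, logs):
        self._logs = logs
    def filter(self, **kw):
        return [l for l in self._logs if all(l.get(k) == v for k, v in kw.items())]

class ExerciseBase:
    def __init__(self, logs):
        self.workoutlog_set = WorkoutLogSet(logs)

class Exercise:
    # No workoutlog_set here -- accessing it raised the error behind the 500.
    def __init__(self, base):
        self.exercise_base = base

base = ExerciseBase([{"user": 1, "reps": 10}])
ex = Exercise(base)
print(ex.exercise_base.workoutlog_set.filter(user=1))
```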
|
closed
|
2022-11-14T02:13:42Z
|
2022-11-29T16:26:40Z
|
https://github.com/wger-project/wger/issues/1180
|
[] |
manto89
| 2
|
LAION-AI/Open-Assistant
|
python
| 3,007
|
Next Iteration Meeting (Friday, May 5, 2023 7:00pm UTC)
|
Topics for the next meeting
|
open
|
2023-05-01T20:17:41Z
|
2023-05-07T16:54:03Z
|
https://github.com/LAION-AI/Open-Assistant/issues/3007
|
[
"meeting"
] |
AbdBarho
| 14
|
plotly/dash
|
dash
| 3,023
|
add tooling to show Dash memory usage
|
It would be useful to have a way for Dash to report how much memory it is using where. The report could be textual (CSV / JSON) or graphical (an introspective chart?).
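As a rough illustration of what such a report could contain, the standard library's `tracemalloc` already groups live allocations by source line, which could feed a textual (CSV/JSON) report like the one suggested. A generic sketch, not actual Dash tooling:

```python
# tracemalloc snapshots group current allocations by allocation site; each
# statistic carries a traceback, block count, and size. The list comprehension
# below is a stand-in for app state worth reporting on.
import tracemalloc

tracemalloc.start()
data = [list(range(1000)) for _ in range(100)]  # stand-in for app state
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)  # e.g. "<file>:<line>: size=..., count=..., average=..."
tracemalloc.stop()
```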
|
open
|
2024-10-02T16:52:20Z
|
2024-10-02T16:52:20Z
|
https://github.com/plotly/dash/issues/3023
|
[
"feature",
"P3"
] |
gvwilson
| 0
|
huggingface/transformers
|
nlp
| 36,363
|
Using Ktransformers for inference with the full DEEPSEEK-R1 model and the 4-bit quantized version, what inference speed (tokens/s) can be achieved, and what compute configuration does each require?
|
Using Ktransformers for inference with the full DEEPSEEK-R1 model and the 4-bit quantized version, what inference speed (tokens/s) can be achieved, and what compute configuration does each require?
My current local deployment can run the 4-bit and Q2_K quantized versions, but the inference speed is under 0.1 tokens/s. The configuration used is as follows:
GPU: Tesla A10 24G x 2
CPU: Intel(R) Xeon(R) Platinum 8352V CPU @ 2.10GHz x 100 (`--cpu_infer 100`; supports AVX-512, does not support AMX)
MemTotal: 256G
Disk: 12T (15000rpm, GPT-partitioned HDD)
|
open
|
2025-02-24T03:30:06Z
|
2025-02-24T03:33:54Z
|
https://github.com/huggingface/transformers/issues/36363
|
[] |
William-Cai123
| 0
|
Lightning-AI/pytorch-lightning
|
pytorch
| 20,024
|
Multiple subclassing levels required to use LightningDataModule in LightningCLI
|
### Bug description
I get the following error message
```
error: Parser key "data":
Import path data.snemi.SNEMIDataModule does not correspond to a subclass of LightningDataModule
```
with yaml config
```yaml
data:
class_path: data.snemi.SNEMIDataModule
```
when defining SNEMIDataModule as follows
```python
class SNEMIDataModule(LightningDataModule):
...
```
this problem is solved by creating a dummy subclass:
```python
class DummyDataModule(LightningDataModule):
pass
class SNEMIDataModule(DummyDataModule):
...
```
Is this intended behavior?
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_
cc @carmocca @mauvilsa
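A hedged guess at the mechanism, not confirmed from this report: if `LightningDataModule` ends up imported under two different module paths (e.g. `pytorch_lightning` vs `lightning.pytorch`, or a package reachable both as `data` and `src.data`), Python creates distinct class objects and the `issubclass` check fails even though the code looks like a subclass. A minimal demonstration of that general pitfall:

```python
# Two modules each defining a class named "Base": the class objects are
# distinct, so a subclass of one is not a subclass of the other. This is
# a generic demonstration, not Lightning or jsonargparse code.
import types

mod_a = types.ModuleType("base_a")
exec("class Base: pass", mod_a.__dict__)
mod_b = types.ModuleType("base_b")
exec("class Base: pass", mod_b.__dict__)

class Child(mod_a.Base):
    pass

print(issubclass(Child, mod_a.Base))  # True
print(issubclass(Child, mod_b.Base))  # False: same source, different class object
```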
|
open
|
2024-06-27T21:57:17Z
|
2024-06-28T12:29:17Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20024
|
[
"bug",
"lightningcli"
] |
jasonkena
| 2
|
Kanaries/pygwalker
|
pandas
| 20
|
[Feat] Detect white-dark theme and use appropriate theme
|
Currently, using `pyg.walk(df)` in a Jupyter Notebook with a dark theme renders a white widget, where most of the text is so low-contrast that it is effectively invisible.
|
closed
|
2023-02-21T20:27:56Z
|
2023-03-19T17:34:22Z
|
https://github.com/Kanaries/pygwalker/issues/20
|
[
"enhancement",
"graphic-walker"
] |
hyiltiz
| 4
|
deezer/spleeter
|
tensorflow
| 665
|
2.3.0 install uses cpu only
|
- [ ] I didn't find a similar issue already open.
- [ ] I read the documentation (README AND Wiki)
- [ ] I have installed FFMpeg
- [ ] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others)
## Description
<!-- Give us a clear and concise description of the bug you are reporting. -->
## Step to reproduce
<!-- Indicates clearly steps to reproduce the behavior: -->
1. Installed using `...`
2. Run as `...`
3. Got `...` error
## Output
```bash
Share what your terminal says when you run the script (as well as what you would expect).
```
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Windows |
| Installation type | Conda / pip |
| Hardware spec | GPU / CPU / etc ... |
## Additional context
did not use gpu
https://oss.canxingtv.com/upload/res.singschool.com/637679332437315016-min.png
|
open
|
2021-09-22T10:52:41Z
|
2022-02-21T03:29:20Z
|
https://github.com/deezer/spleeter/issues/665
|
[
"bug",
"invalid"
] |
yangxing5200
| 2
|
onnx/onnx
|
pytorch
| 5,809
|
Edit Input/Output Onnx file
|
# Ask a Question
### Question
Hi,
My goal is to change the input/output names of an ONNX file. I wrote this code:
```python
import onnx

onnx_model_path = "ostrack-256.onnx"
original_model = onnx.load(onnx_model_path)

for input in original_model.graph.input:
    if input.name == "x":
        input.name = "search"
    elif input.name == "z":
        input.name = "template"

for output in original_model.graph.output:
    if output.name == "score_map":
        output.name = "output1"
    elif output.name == "size_map":
        output.name = "output2"
    elif output.name == "offset_map":
        output.name = "output3"

modified_model_path = "modified_model.onnx"
onnx.save(original_model, modified_model_path)
print(f"Modified model saved to {modified_model_path}")
```
When I check the new ONNX file, the names appear to be changed, but the input and output nodes are no longer connected to the network. I don't understand what I'm missing; I would be happy for any help.
attached Images:
Before Change name:


After changing names:

|
closed
|
2023-12-18T13:32:02Z
|
2023-12-18T13:50:20Z
|
https://github.com/onnx/onnx/issues/5809
|
[
"question"
] |
arielkantorovich
| 0
|
Evil0ctal/Douyin_TikTok_Download_API
|
api
| 127
|
The Douyin link has stopped working
|
https://www.iesdouyin.com/web/api/v2/aweme/iteminfo/?item_ids=7175083035304398120
This endpoint is no longer accessible.
|
closed
|
2022-12-22T12:38:22Z
|
2023-08-02T03:06:43Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/127
|
[
"BUG"
] |
5wcx
| 22
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 1,011
|
vocals separation stopped due to memory error
|
Last Error Received:
Process: Ensemble Mode
If this error persists, please contact the developers with the error details.
Raw Error Details:
MemoryError: "Unable to allocate 4.37 GiB for an array with shape (2, 769, 763136) and data type float32"
Traceback Error: "
File "UVR.py", line 6638, in process_start
File "separate.py", line 1055, in seperate
File "separate.py", line 1183, in inference_vr
File "separate.py", line 1159, in _execute
File "<__array_function__ internals>", line 180, in concatenate
"
Error Time Stamp [2023-12-07 16:15:44]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: True
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_use_opencl: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
device_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems
|
open
|
2023-12-07T10:56:30Z
|
2023-12-08T14:57:50Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/1011
|
[] |
StephenBrahmi
| 1
|
robotframework/robotframework
|
automation
| 5,240
|
[Setup] and [Teardown] in test steps overrides Test Setup and Test Teardown from Settings
|
Hi, given below example
```
*** Settings ***
Test Setup Log To Console test setup in settings
Test Teardown Log To Console test teardown in settings
*** Test Cases ***
Test
[Setup] Log To Console test setup in test steps
Comment just testing
[Teardown] Log To Console test teardown in test steps
```
the console output would be
```
====================================================
Test test setup in test steps
..test teardown in test steps
Test | PASS |
----------------------------------------------------------------------------------------
Tests.Helpers.Test | PASS |
1 test, 1 passed, 0 failed
====================================================
```
meaning that the Test Setup and Test Teardown keywords declared in Settings were overridden by the [Setup] and [Teardown] from the test itself.
I've found that behavior described in documentation
https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#test-setup-and-teardown
My proposal is to extend that behavior to make it more controllable.
adding new command line argument for robot execution
```
--testsetupandteardown preservetests|preservesettings|merge Changes how setup and teardown are parsed.
preservetests (default): [Setup] or [Teardown] from test will override Test Setup or Test Teardown from ***Settings***
preservesettings: Test Setup or Test Teardown from ***Settings*** will override [Setup] or [Teardown] from test
merge: if both Test Setup or Test Teardown and [Setup] or [Teardown] are declared they will be executed one after another. [Setup] or [Teardown] will be executed first.
```
Rationale: there are cases where, for convenience, one would declare a common Test Teardown in Settings for several test cases in the suite, but one test case needs special [Teardown] logic that does something extra. The current implementation overrides Test Teardown, so only the [Teardown] logic is executed for that test, which might not be the desired outcome.
|
closed
|
2024-10-17T09:21:05Z
|
2024-11-01T15:41:52Z
|
https://github.com/robotframework/robotframework/issues/5240
|
[] |
MarcinGmurczyk
| 2
|
feature-engine/feature_engine
|
scikit-learn
| 6
|
DecisionTreeDiscretiser what page to read from CiML-v3-book.pdf
|
Could you clarify which page of http://www.mtome.com/Publications/CiML/CiML-v3-book.pdf is relevant for reading about
https://feature-engine.readthedocs.io/en/latest/discretisers/DecisionTreeDiscretiser.html?highlight=DecisionTreeDiscretiser
As you wrote:
> The method is inspired by the following article from the winners of the KDD 2009 competition:
> http://www.mtome.com/Publications/CiML/CiML-v3-book.pdf
but the book is 130 pages long.
|
closed
|
2019-08-06T17:02:59Z
|
2019-09-04T08:03:12Z
|
https://github.com/feature-engine/feature_engine/issues/6
|
[
"question"
] |
Sandy4321
| 1
|
Miserlou/Zappa
|
django
| 1,631
|
multiple api resource with lambda trigger
|
<!--- Provide a general summary of the issue in the Title above -->
## Context
I am new to zappa world.
Can Zappa create multiple API Gateway resources and their methods (GET, POST, PUT) that trigger a Lambda, configured via JSON settings?
Let me know if the above statement makes sense.
Thank you
## Expected Behavior
<!--- Tell us what should happen -->
## Actual Behavior
<!--- Tell us what happens instead -->
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1.
2.
3.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used:
* Operating System and Python version:
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.py`:
|
open
|
2018-10-03T09:32:22Z
|
2018-10-03T18:50:15Z
|
https://github.com/Miserlou/Zappa/issues/1631
|
[] |
prashantbaditya
| 2
|
quokkaproject/quokka
|
flask
| 568
|
themes: find a way to download single pelican-themes
|
re-host pelican themes individually for easier download?
https://github.com/rochacbruno/quokka_ng/issues/66
|
closed
|
2018-02-07T01:36:06Z
|
2018-02-07T01:39:06Z
|
https://github.com/quokkaproject/quokka/issues/568
|
[
"1.0.0",
"hacktoberfest"
] |
rochacbruno
| 0
|
nolar/kopf
|
asyncio
| 309
|
Unprocessable Entity
|
> <a href="https://github.com/brutus333"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/6450276?v=4"></a> An issue by [brutus333](https://github.com/brutus333) at _2020-02-10 14:15:34+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/issues/309
>
## Long story short
I've tried to qualify 0.25 based on existing tests built with KopfRunner context. These tests worked well with all versions from 0.21 to 0.24. However, using 0.25 will raise a framework error.
## Description
One of the simplest tests creates a custom object, lets the operator create a pod based on the custom object definition, and deletes the custom object (which, via owner-cascaded deletion, deletes the pod too).
```python
import kopf
from kopf.testing import KopfRunner
import os
import unittest
import time
import subprocess
KOPF_RUNNER_COMMAND = ['run', 'src/libvirt.py', '--namespace', 'default', '--standalone']
class MyTestCase(unittest.TestCase):
def test_custom_object_creation_and_deletion(self):
with KopfRunner(KOPF_RUNNER_COMMAND, timeout=30) as runner:
# do something while the operator is running.
subprocess.run("kubectl apply -f tests/libvirtds1.yaml", shell=True, check=True)
time.sleep(5) # give it some time to react and to sleep and to retry
subprocess.run("kubectl delete -f tests/libvirtds1.yaml", shell=True, check=True)
time.sleep(30) # give it some time to react
self.assertEqual(runner.exit_code,0)
self.assertIs(runner.exception,None)
self.assertIn('falling back to kubeconfig configuration', runner.stdout)
self.assertIn('Starting to create pod on node', runner.stdout)
self.assertIn('Running delete handler for pod', runner.stdout)
self.assertIn('was deleted by k8s cascaded deletion of owner', runner.stdout)
```
```bash
pytest -x
```
```
==================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.5, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /src, inifile: pytest.ini
plugins: asyncio-0.10.0
collected 6 items
tests/libvirt_test.py::MyTestCase::test_custom_object_creation_and_deletion
--------------------------------------------------------------------------------------- live log call ----------------------------------------------------------------------------------------
INFO kopf.objects:libvirt.py:38 Starting libvirt operator
WARNING kopf.objects:libvirt.py:44 Can't use in cluster configuration, falling back to kubeconfig configuration
WARNING kopf.reactor.running:running.py:281 OS signals are ignored: running not in the main thread.
INFO kopf.reactor.activities:activities.py:59 Initial authentication has been initiated.
INFO kopf.activities.authentication:handling.py:571 Handler 'login_via_pykube' succeeded.
INFO kopf.activities.authentication:handling.py:571 Handler 'login_via_client' succeeded.
INFO kopf.reactor.activities:activities.py:68 Initial authentication has finished.
INFO kopf.objects:libvirt.py:405 Looking after a daemonset with adoption labels: {'adopt-by': 'libvirt-ds'}
INFO kopf.objects:libvirt.py:358 Node kind-worker does not have a pod. Creating one now.
INFO kopf.objects:libvirt.py:174 Starting to create pod on node kind-worker
INFO kopf.objects:handling.py:571 Handler 'create_libvirtds/kind-worker' succeeded.
INFO kopf.objects:handling.py:571 Handler 'create_libvirtds' succeeded.
INFO kopf.objects:handling.py:329 All handlers succeeded for creation.
INFO kopf.objects:libvirt.py:468 Update handler called with: (('add', ('spec', 'template', 'spec', 'nodeSelector'), None, {'libvirt': 'yes'}), ('remove', ('spec', 'template', 'spec', 'af
finity'), {'nodeAffinity': {'requiredDuringSchedulingIgnoredDuringExecution': {'nodeSelectorTerms': [{'matchFields': [{'key': 'metadata.name', 'operator': 'In', 'values': ['kind-worker']}]}]
}}}, None), ('change', ('spec', 'template', 'spec', 'tolerations'), [{'operator': 'Exists'}, {'operator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/disk-pressure'}, {'oper
ator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/memory-pressure'}, {'operator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/unschedulable'}, {'operator':
'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/network-unavailable'}], [{'operator': 'Exists'}]), ('change', ('spec', 'template', 'spec', 'containers'), [{'image': 'nginx:1.8.
1', 'imagePullPolicy': 'IfNotPresent', 'name': 'nginx', 'ports': [{'containerPort': 80, 'protocol': 'TCP'}], 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMe
ssagePolicy': 'File'}, {'command': ['/bin/sleep', '36000'], 'image': 'busybox', 'imagePullPolicy': 'IfNotPresent', 'name': 'busybox', 'resources': {'limits': {'memory': '1.74Gi'}, 'requests'
: {'memory': '1.16Gi'}}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}], [{'image': 'nginx:1.8.1', 'imagePullPolicy': 'IfNotPresent', 'name': 'nginx',
'ports': [{'containerPort': 80, 'protocol': 'TCP'}], 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}, {'command': ['/bin/sleep', '3600
0'], 'image': 'busybox', 'imagePullPolicy': 'IfNotPresent', 'name': 'busybox', 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}]))
INFO kopf.objects:libvirt.py:251 Looking at pod my-libvirt-ds-rl25c on node kind-worker
INFO kopf.objects:libvirt.py:262 Found matching pod my-libvirt-ds-rl25c on node kind-worker
INFO kopf.objects:libvirt.py:263 Found pod with metadata: {'annotations': None, 'cluster_name': None, 'creation_timestamp': datetime.datetime(2020, 2, 10, 13, 2, 30, tzinfo=tzlocal()), '
deletion_grace_period_seconds': None, 'deletion_timestamp': None, 'finalizers': ['kopf.zalando.org/KopfFinalizerMarker'], 'generate_name': 'my-libvirt-ds-', 'generation': None, 'initializers
': None, 'labels': {'app': 'qemu', 'comp': 'libvirt', 'owner-object-type': 'libvirt-ds'}, 'managed_fields': None, 'name': 'my-libvirt-ds-rl25c', 'namespace': 'default', 'owner_references': [
{'api_version': 'oiaas.org/v1', 'block_owner_deletion': True, 'controller': True, 'kind': 'LibvirtDaemonSet', 'name': 'my-libvirt-ds', 'uid': 'a4b1ea71-5c3b-4d71-bb0e-b5b35c4599e9'}], 'resou
rce_version': '537734', 'self_link': '/api/v1/namespaces/default/pods/my-libvirt-ds-rl25c', 'uid': 'b1e466b8-3b18-407d-b589-ed2f40179c46'}
INFO kopf.objects:libvirt.py:365 Received pod spec update with diff: (('add', ('spec', 'template', 'spec', 'nodeSelector'), None, {'libvirt': 'yes'}), ('remove', ('spec', 'template', 'sp
ec', 'affinity'), {'nodeAffinity': {'requiredDuringSchedulingIgnoredDuringExecution': {'nodeSelectorTerms': [{'matchFields': [{'key': 'metadata.name', 'operator': 'In', 'values': ['kind-work
er']}]}]}}}, None), ('change', ('spec', 'template', 'spec', 'tolerations'), [{'operator': 'Exists'}, {'operator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/disk-pressure'}
, {'operator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/memory-pressure'}, {'operator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/unschedulable'}, {'op
erator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/network-unavailable'}], [{'operator': 'Exists'}]), ('change', ('spec', 'template', 'spec', 'containers'), [{'image': 'ng
inx:1.8.1', 'imagePullPolicy': 'IfNotPresent', 'name': 'nginx', 'ports': [{'containerPort': 80, 'protocol': 'TCP'}], 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'termi
nationMessagePolicy': 'File'}, {'command': ['/bin/sleep', '36000'], 'image': 'busybox', 'imagePullPolicy': 'IfNotPresent', 'name': 'busybox', 'resources': {'limits': {'memory': '1.74Gi'}, 'r
equests': {'memory': '1.16Gi'}}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}], [{'image': 'nginx:1.8.1', 'imagePullPolicy': 'IfNotPresent', 'name':
'nginx', 'ports': [{'containerPort': 80, 'protocol': 'TCP'}], 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}, {'command': ['/bin/sleep
', '36000'], 'image': 'busybox', 'imagePullPolicy': 'IfNotPresent', 'name': 'busybox', 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}]
))
INFO kopf.objects:libvirt.py:366 Starting to update pod my-libvirt-ds-rl25c on node kind-worker
INFO kopf.objects:libvirt.py:374 Received patch: {'spec': {'containers': [{'image': 'nginx:1.8.1', 'imagePullPolicy': 'IfNotPresent', 'name': 'nginx', 'ports': [{'containerPort': 80, 'pr
otocol': 'TCP'}], 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}, {'command': ['/bin/sleep', '36000'], 'image': 'busybox', 'imagePullP
olicy': 'IfNotPresent', 'name': 'busybox', 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}]}}
INFO kopf.objects:handling.py:571 Handler 'update_libvirtds/kind-worker' succeeded.
INFO kopf.objects:handling.py:571 Handler 'update_libvirtds' succeeded.
INFO kopf.objects:handling.py:329 All handlers succeeded for update.
INFO kopf.objects:libvirt.py:477 Custom object my-libvirt-ds is scheduled for deletion
INFO kopf.objects:handling.py:571 Handler 'delete_libvirtds' succeeded.
INFO kopf.objects:handling.py:329 All handlers succeeded for deletion.
INFO kopf.objects:libvirt.py:484 Running delete handler for pod my-libvirt-ds-rl25c
INFO kopf.objects:libvirt.py:495 Pod my-libvirt-ds-rl25c was deleted by k8s cascaded deletion of owner
INFO kopf.objects:handling.py:571 Handler 'delete_pod' succeeded.
INFO kopf.objects:handling.py:329 All handlers succeeded for deletion.
ERROR kopf.reactor.queueing:queueing.py:182 functools.partial(<function resource_handler at 0x7f6ba6180ef0>, lifecycle=<function asap at 0x7f6ba6175ef0>, registry=<kopf.toolkits.legacy_re
gistries.SmartGlobalRegistry object at 0x7f6ba40ca710>, memories=<kopf.structs.containers.ResourceMemories object at 0x7f6b9fe15a50>, resource=Resource(group='', version='v1', plural='pods')
, event_queue=<Queue at 0x7f6ba40ca590 maxsize=0 _getters[1] tasks=10>) failed with an exception. Ignoring the event.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/kopf/reactor/queueing.py", line 179, in worker
await handler(event=event, replenished=replenished)
File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 223, in resource_handler
await patching.patch_obj(resource=resource, patch=patch, body=body)
File "/usr/local/lib/python3.7/site-packages/kopf/clients/auth.py", line 46, in wrapper
return await fn(*args, **kwargs, context=context)
File "/usr/local/lib/python3.7/site-packages/kopf/clients/patching.py", line 54, in patch_obj
raise_for_status=True,
File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 588, in _request
resp.raise_for_status()
File "/usr/local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 946, in raise_for_status
headers=self.headers)
aiohttp.client_exceptions.ClientResponseError: 422, message='Unprocessable Entity', url=URL('https://127.0.0.1:53032/api/v1/namespaces/default/pods/my-libvirt-ds-rl25c')
INFO kopf.reactor.running:running.py:457 Stop-flag is set to True. Operator is stopping.
PASSED
```
## Environment
<!-- The following commands can help:
`kopf --version` or `pip show kopf`
`kubectl version`
`python --version`
-->
* Kopf version: 0.25
* Kubernetes version: 1.15.3
* Python version: 3.7.5
* OS/platform: Linux docker-desktop 4.9.184-linuxkit #1 SMP Tue Jul 2 22:58:16 UTC 2019 x86_64 GNU/Linux
```
aiohttp==3.6.2
aiojobs==0.2.2
async-timeout==3.0.1
attrs==19.3.0
cachetools==4.0.0
certifi==2019.11.28
chardet==3.0.4
Click==7.0
google-auth==1.11.0
idna==2.8
importlib-metadata==1.5.0
iso8601==0.1.12
Jinja2==2.11.1
jsonpatch==1.25
jsonpointer==2.0
kopf==0.25
kubernetes==10.0.0
MarkupSafe==1.1.1
more-itertools==8.2.0
multidict==4.7.4
oauthlib==3.1.0
packaging==20.1
pip==19.3.1
pluggy==0.13.1
py==1.8.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pykube-ng==20.1.0
pyparsing==2.4.6
pytest==5.3.5
pytest-asyncio==0.10.0
python-dateutil==2.8.1
PyYAML==5.3
requests==2.22.0
requests-oauthlib==1.3.0
rsa==4.0
setuptools==41.4.0
six==1.14.0
typing-extensions==3.7.4.1
urllib3==1.25.8
wcwidth==0.1.8
websocket-client==0.57.0
wheel==0.33.6
yarl==1.4.2
zipp==2.2.0
```
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-02-13 17:23:57+00:00_
>
Hello. Thanks for this interesting use-case.
I'm quite surprised that it worked with 0.24 and before — it should also fail. There were no changes that could avoid this error.
There is code for a similar case already — when 404 is returned from patching (see [code](https://github.com/nolar/kopf/blob/0.25/kopf/clients/patching.py#L57-L58)). However, in your case, it is not 404, but 422. We could catch "422 Unprocessable Entity" the same way — if that is indeed the right interpretation from the Kubernetes API's point of view.
It would also be useful to see the full response body from this PATCH request — but this definitely should not be put on the logs.
I will take some time to dive deep into the docs to understand why it is 422. Maybe I can reproduce it locally with the same use-case.
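The fix sketched in the comment above — tolerating 422 the same way 404 is already tolerated in `patch_obj` — might look roughly like this. This is a stand-alone illustration, not kopf's actual code: `PatchError` and `do_patch` are hypothetical stand-ins for aiohttp's `ClientResponseError` and the real `session.patch(...)` + `raise_for_status()` call.

```python
import asyncio


class PatchError(Exception):
    """Stand-in for aiohttp.ClientResponseError (carries the HTTP status)."""
    def __init__(self, status: int):
        super().__init__(f"HTTP {status}")
        self.status = status


async def patch_obj(do_patch, url: str, patch: dict) -> None:
    # 404: the object is already gone; 422: the object cannot be patched,
    # e.g. a finalizer added to a pod that is already terminating.
    # Both cases mean "nothing useful to patch", so they are swallowed.
    try:
        await do_patch(url, patch)
    except PatchError as e:
        if e.status not in (404, 422):
            raise
```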
---
> <a href="https://github.com/xavierbaude"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/3736161?v=4"></a> Commented by [xavierbaude](https://github.com/xavierbaude) at _2020-05-16 19:56:49+00:00_
>
Hi, I also ran into this issue with 0.25 but not with release 0.24. When deleting an Ingress object, I get an error: aiohttp.client_exceptions.ClientResponseError: 422, message='Unprocessable Entity' from the k8s API. It looks like kopf tries to patch or read an object that I've just deleted.
---
> <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-05-18 20:39:12+00:00_
>
Hi [nolar](https://github.com/nolar), I'm seeing the same thing with an on.delete handler for pods. Do you need help reproducing it? It seems to happen every time the pod is deleted.
Here is a sample:
```
[2020-05-18 22:02:31,184] nhd.Node [INFO ] Removing pod ('mypod-0', 'p09') from node pp-gcomp001.nae07.v3g-pp-compute.viasat.io
[2020-05-18 22:02:31,196] kopf.reactor.queuein [ERROR ] functools.partial(<function process_resource_event at 0x7f41cba1d430>, lifecycle=<function asap at 0x7f41cbb74820>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7f41cba328b0>, memories=<kopf.structs.containers.ResourceMemories object at 0x7f41ceff7fa0>, resource=Resource(group='', version='v1', plural='pods'), event_queue=<Queue at 0x7f41cf004250 maxsize=0 _getters[1] tasks=12>) failed with an exception. Ignoring the event.
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/kopf/reactor/queueing.py", line 179, in worker
await processor(event=event, replenished=replenished)
File "/usr/local/lib/python3.8/site-packages/kopf/reactor/processing.py", line 114, in process_resource_event
await patching.patch_obj(resource=resource, patch=patch, body=body)
File "/usr/local/lib/python3.8/site-packages/kopf/clients/auth.py", line 45, in wrapper
return await fn(*args, **kwargs, context=context)
File "/usr/local/lib/python3.8/site-packages/kopf/clients/patching.py", line 55, in patch_obj
await context.session.patch(
File "/usr/local/lib/python3.8/site-packages/aiohttp/client.py", line 588, in _request
resp.raise_for_status()
File "/usr/local/lib/python3.8/site-packages/aiohttp/client_reqrep.py", line 941, in raise_for_status
raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 422, message='Unprocessable Entity', url=URL('https://10.220.0.17:6443/api/v1/namespaces/p09/pods/mypod-0')
[2020-05-18 22:02:32,495] kopf.reactor.queuein [ERROR ] functools.partial(<function process_resource_event at 0x7f41cba1d430>, lifecycle=<function asap at 0x7f41cbb74820>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7f41cba328b0>, memories=<kopf.structs.containers.ResourceMemories object at 0x7f41ceff7fa0>, resource=Resource(group='', version='v1', plural='pods'), event_queue=<Queue at 0x7f41cf004250 maxsize=0 _getters[1] tasks=12>) failed with an exception. Ignoring the event.
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/kopf/reactor/queueing.py", line 179, in worker
await processor(event=event, replenished=replenished)
File "/usr/local/lib/python3.8/site-packages/kopf/reactor/processing.py", line 114, in process_resource_event
await patching.patch_obj(resource=resource, patch=patch, body=body)
File "/usr/local/lib/python3.8/site-packages/kopf/clients/auth.py", line 45, in wrapper
return await fn(*args, **kwargs, context=context)
File "/usr/local/lib/python3.8/site-packages/kopf/clients/patching.py", line 55, in patch_obj
await context.session.patch(
File "/usr/local/lib/python3.8/site-packages/aiohttp/client.py", line 588, in _request
resp.raise_for_status()
File "/usr/local/lib/python3.8/site-packages/aiohttp/client_reqrep.py", line 941, in raise_for_status
raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 422, message='Unprocessable Entity', url=URL('https://10.220.0.17:6443/api/v1/namespaces/p09/pods/mypod-0')
```
---
> <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-05-26 16:44:08+00:00_
>
[nolar](https://github.com/nolar) / [brutus333](https://github.com/brutus333) I think I see what's happening. Like the OP, I have a controller for a CRD that creates pods. I think what should happen is kopf would add the finalizer to those pods after they're created, but I don't see that happening. Instead, when the pod is deleted kopf tries to add the finalizer:
```
{'metadata': {'finalizers': ['kopf.zalando.org/KopfFinalizerMarker']}}
```
This returns a 422 error code because the pod is already in the terminating state, and this can be reproduced using kubectl as well. I couldn't find the general rule of when kopf is supposed to add the finalizers, but I would think it's before the deletion.
---
> <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-05-26 19:32:18+00:00_
>
A bit more information: it looks like the pod did indeed have the finalizer to begin with. The on.delete handler is called when the pod is deleted, and the finalizer is removed (correctly). But for some reason a second event is fired that triggers re-adding the pod's finalizer, which fails since it's no longer there. Here's an example:
```
[2020-05-26 19:18:54,707] kopf.objects [DEBUG ] [p09/chim-0] Invoking handler 'TriadPodDelete'. <--- on.delete handler
[2020-05-26 19:18:54,708] nhd.TriadController [INFO ] Saw deleted Triad pod p09.chim-0
[2020-05-26 19:18:54,708] nhd.TriadController [INFO ] TriadSet this pod belonged to was deleted. Not restarting pod
[2020-05-26 19:18:54,710] kopf.objects [INFO ] [p09/chim-0] Handler 'TriadPodDelete' succeeded.
[2020-05-26 19:18:54,711] kopf.objects [INFO ] [p09/chim-0] All handlers succeeded for deletion.
[2020-05-26 19:18:54,713] kopf.objects [DEBUG ] [p09/chim-0] Removing the finalizer, thus allowing the actual deletion.
[2020-05-26 19:18:54,713] kopf.objects [DEBUG ] [p09/chim-0] Patching with: {'metadata': {'finalizers': []}}
[2020-05-26 19:18:54,842] kopf.objects [DEBUG ] [p09/chim-0] Adding the finalizer, thus preventing the actual deletion.
[2020-05-26 19:18:54,843] kopf.objects [DEBUG ] [p09/chim-0] Patching with: {'metadata': {'finalizers': ['kopf.zalando.org/KopfFinalizerMarker']}}
```
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-05-26 19:59:29+00:00_
>
[cliffburdick](https://github.com/cliffburdick) Thank you for this investigation!
Can you please verify this with 0.27rc6 in an isolated environment (because it is a release candidate yet)?
---
The logic for finalizer addition/removal is located in `kopf/reactor/processing.py` ([link1](https://github.com/nolar/kopf/blob/0.27rc6/kopf/reactor/processing.py#L127-L153) & [link2](https://github.com/nolar/kopf/blob/0.27rc6/kopf/reactor/processing.py#L185-L191)). Previously, e.g. in 0.25, it was in `kopf/reactor/handling.py` ([link3](https://github.com/nolar/kopf/blob/0.25/kopf/reactor/handling.py#L295-L306) & [link4](https://github.com/nolar/kopf/blob/0.25/kopf/reactor/handling.py#L342-L343)).
The finalizer decisioning logic was _significantly_ reworked in 0.27 RCs (due to a special type of handlers added: daemons & timers), but it is hard to say which cases were or were not solved as a side-effect compared to 0.25.
However, thanks to your investigation, I can make a hypothesis that in 0.25, the finalizer was added because only 2 criteria were used: a finalizer is needed (there are deletion handlers) AND the finalizer is absent on the object — as seen in link 3.
It could only work normally if the object is removed instantly after the finalizer is removed, and there are no additional cycles, e.g. with other controllers with their own finalizers.
In 0.27, an additional 3rd criterion was added (as seen in the link 1): `deletion_is_ongoing` — and if the deletion is indeed ongoing, the finalizer is NOT added even if it seems needed according to the previous two criteria.
So, with some above-zero probability, the issue is solved. But this needs to be verified.
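The decision logic described in this comment can be condensed into a small predicate. This is only a sketch of the criteria as described in the thread, not kopf's actual implementation, and the argument names are illustrative:

```python
def should_add_finalizer(requires_finalizer: bool,
                         has_finalizer: bool,
                         deletion_is_ongoing: bool) -> bool:
    # 0.25 used only the first two criteria, so a deletion event could
    # re-add the finalizer to a terminating pod and trigger a 422 from
    # the API server. 0.27 adds the third criterion: never add a
    # finalizer to an object whose deletion is already in progress.
    return requires_finalizer and not has_finalizer and not deletion_is_ongoing
```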
---
> <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-05-26 20:01:15+00:00_
>
[nolar](https://github.com/nolar) sure! I'll try it out and report back. For what it's worth, when this happened, `requires_finalizer` was True and `has_finalizer` was False.
---
> <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-05-26 20:10:35+00:00_
>
[nolar](https://github.com/nolar) I can confirm that 0.27rc6 indeed fixes the problem!
I did notice a lot more aiohttp traffic to the k8s server while the pod was active compared to 0.25, but I am no longer seeing the 422 error code. I think this one can likely be closed.
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-05-26 20:30:38+00:00_
>
[cliffburdick](https://github.com/cliffburdick) Regarding the traffic: Can you please create a separate issue with some excerpts and data? Is it in bytes or in rps?
The byte-measured traffic can increase due to double-storage of Kopf's own status: annotations PLUS status — for smooth transitioning. Previously, it was only in status, but Kubernetes's "structural schemas" broke that since K8s 1.16+. This aspect can be [configured](https://kopf.readthedocs.io/en/latest/configuration/#handling-progress).
The rps-measured traffic should not be higher than before. In theory. This is worth checking out.
Anyway, I never tested Kopf for performance yet. Maybe, the time comes to start collecting some data & issues for this.
---
> <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-05-26 20:34:59+00:00_
>
[nolar](https://github.com/nolar) sure. It was rps -- the bytes didn't increase much. I had some debug print statements in the aio library from trying to debug this issue, and saw those increase. I'll try to write up more detail.
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-05-26 20:35:13+00:00_
>
[cliffburdick](https://github.com/cliffburdick) PS: 0.27rc6 is going to be 0.27 in a few days — I have finally finished testing it in action. But test it carefully before upgrading anyway — 0.27 is a huge change, and therefore it is risky (despite all backward compatibility and stability attempted) — and 6 (!) release candidates kind of suggest that it wasn't an easy release.
---
> <a href="https://github.com/cliffburdick"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/30670611?v=4"></a> Commented by [cliffburdick](https://github.com/cliffburdick) at _2020-05-26 20:36:19+00:00_
>
> [cliffburdick](https://github.com/cliffburdick) PS: 0.27rc6 is going to be 0.27 in a few days — I have finally finished testing it in action. But test it carefully before upgrading anyway — 0.27 is a huge change, and therefore it is risky (despite all backward compatibility and stability attempted) — and 6 (!) release candidates kind of suggest that it wasn't an easy release.
Great! Luckily I'm still in the testing phase and it's not officially released anyways, so it shouldn't break anything on my end.
---
> <a href="https://github.com/akojima"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/3386570?v=4"></a> Commented by [akojima](https://github.com/akojima) at _2020-06-24 23:20:32+00:00_
>
I still see this issue in 0.27, but only when the operator uses a custom finalizer of its own. Whenever the delete handler removes a finalizer, the 422 exception is thrown after the handler returns.
It's mostly harmless because the delete handler finishes fine and since handlers are supposed to be idempotent anyway, nothing bad happens from the retried delete handler (plus, the extra finalizer is already gone, otherwise I suppose the retries would keep on forever). Still, it would be nice if the exception didn't happen, since that prevents clean test runs and can be confusing when troubleshooting.
Let me know if you'd like me to provide a minimal test case.
|
open
|
2020-08-18T20:03:21Z
|
2020-08-23T20:55:24Z
|
https://github.com/nolar/kopf/issues/309
|
[
"bug",
"archive"
] |
kopf-archiver[bot]
| 0
|
TencentARC/GFPGAN
|
deep-learning
| 529
|
For Free
|
Please make this site free. My father doesn't have much money.
|
open
|
2024-03-20T04:30:08Z
|
2024-06-16T21:28:42Z
|
https://github.com/TencentARC/GFPGAN/issues/529
|
[] |
md-roni-f
| 3
|
assafelovic/gpt-researcher
|
automation
| 245
|
smart_token_limit Exceeds Max Tokens
|
### Description
I've been experimenting with different output token limits for research purposes. However, I encountered an error when setting the `smart_token_limit` to 8000 in `gpt_researcher/config/config.py`.
### Error Encountered
The following error was thrown: `Error code: 400 - {'error': {'message': 'max_tokens is too large: 8000. This model supports at most 4096 completion tokens, whereas you provided 8000.`
### Possible Cause
I suspect this issue arises because `config.json` is configured with `smart_llm_model=gpt-4`. Interestingly, in `gpt_researcher/config/config.py`, the model is set to `smart_llm_model=gpt-4-1106-preview`.
### Question
Is the discrepancy between the models in `config.json` and `config.py` intentional? Also, I'm running this setup in a Docker environment. Any insights or suggestions for resolving this issue would be greatly appreciated.
### Environment
- Docker
Thank you!
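One way to guard against this kind of mismatch is to clamp the configured `smart_token_limit` to the model's completion-token ceiling before calling the API. This is only a sketch: the limits table below is illustrative and would need to be verified against the actual models in use.

```python
# Illustrative completion-token ceilings; verify against current model docs.
MAX_COMPLETION_TOKENS = {
    "gpt-4": 4096,
    "gpt-4-1106-preview": 4096,
}

def clamp_max_tokens(model: str, requested: int, default_cap: int = 4096) -> int:
    # Never request more completion tokens than the model supports,
    # otherwise the API returns a 400 like the one above.
    cap = MAX_COMPLETION_TOKENS.get(model, default_cap)
    return min(requested, cap)
```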
|
closed
|
2023-11-14T11:10:53Z
|
2023-11-30T14:17:18Z
|
https://github.com/assafelovic/gpt-researcher/issues/245
|
[] |
outpost-caprice
| 2
|
koxudaxi/datamodel-code-generator
|
fastapi
| 1,762
|
Two variations of syntaxes for defining dictionaries/free-form objects give different results
|
**Describe the bug**
According to [this OpenAPI guide](https://swagger.io/docs/specification/data-models/dictionaries/), there are two ways to define free-form objects (a.k.a., a dictionary with values of any type).
They are equivalent and we expect the code generator to produce the same results `Optional[Dict[str, Any]] = None`.
Unfortunately only one of the syntaxes works as expected.
**To Reproduce**
**Example schema with syntax 1 (`additionalProperties: true`)**
```yaml
components:
schemas:
CustomObject:
type: object
properties:
config:
$ref: '#/components/schemas/Config'
Config:
type: object
additionalProperties: true
```
**Output of syntax 1**
```python
class Config(BaseModel):
pass
model_config = ConfigDict(
extra="allow",
)
class CustomObject(BaseModel):
config: Optional[Config] = None
```
**Example schema with syntax 2 (`additionalProperties: {}`)**
```yaml
components:
schemas:
CustomObject:
type: object
properties:
config:
$ref: '#/components/schemas/Config'
Config:
type: object
additionalProperties: {}
```
**Output of syntax 2**
```python
class CustomObject(BaseModel):
config: Optional[Dict[str, Any]] = None
```
Used commandline:
```
$ datamodel-codegen --input test.yaml --input-file-type openapi --output test.py --snake-case-field --target-python-version 3.9 --use-schema-description --field-constraints --use-annotated --collapse-root-models --use-one-literal-as-default --enum-field-as-literal one --output-model-type pydantic_v2.BaseModel
```
**Expected behavior**
We expect both syntaxes will result in **Output of syntax 2**.
**Version:**
- OS: macOS Ventura 13.6.1
- Python version: 3.9.18
- datamodel-code-generator version: 0.22.1, 0.25.1
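A normalization step that treats the two syntaxes identically could look roughly like this — a sketch of the expected behavior described above, not the generator's internals:

```python
def is_free_form_object(schema: dict) -> bool:
    # Per the OpenAPI dictionaries guide, `additionalProperties: true`
    # and `additionalProperties: {}` both mean "values of any type".
    # An absent key also defaults to true for plain objects.
    ap = schema.get("additionalProperties", True)
    return ap is True or ap == {}
```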
|
open
|
2023-12-06T19:20:29Z
|
2023-12-22T15:09:36Z
|
https://github.com/koxudaxi/datamodel-code-generator/issues/1762
|
[
"bug"
] |
shuangwu5
| 1
|
bregman-arie/devops-exercises
|
python
| 10,230
|
Docker : is not available
|
Docker is used in creating the image of the project, but it is not available.
|
open
|
2023-10-02T11:08:46Z
|
2023-10-02T11:08:46Z
|
https://github.com/bregman-arie/devops-exercises/issues/10230
|
[] |
Madhurchandran
| 0
|
gradio-app/gradio
|
data-visualization
| 10,813
|
ERROR: Exception in ASGI application after downgrading pydantic to 2.10.6
|
### Describe the bug
There were reports of the same error in https://github.com/gradio-app/gradio/issues/10662, and the suggestion is to downgrade pydantic, but even after I downgraded pydantic, I am still seeing the same error.
I am running my code on Kaggle, and here is the error:
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.10/dist-packages/gradio/route_utils.py", line 789, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 714, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 734, in app
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 73, in app
response = await f(request)
File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 214, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "/usr/local/lib/python3.10/dist-packages/starlette/concurrency.py", line 37, in run_in_threadpool
return await anyio.to_thread.run_sync(func)
File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 584, in main
gradio_api_info = api_info(request)
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 615, in api_info
api_info = utils.safe_deepcopy(app.get_blocks().get_api_info())
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 3019, in get_api_info
python_type = client_utils.json_schema_to_python_type(info)
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 931, in json_schema_to_python_type
type_ = _json_schema_to_python_type(schema, schema.get("$defs"))
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 985, in _json_schema_to_python_type
des = [
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 986, in <listcomp>
f"{n}: {_json_schema_to_python_type(v, defs)}{get_desc(v)}"
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 993, in _json_schema_to_python_type
f"str, {_json_schema_to_python_type(schema['additionalProperties'], defs)}"
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 939, in _json_schema_to_python_type
type_ = get_type(schema)
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 898, in get_type
if "const" in schema:
TypeError: argument of type 'bool' is not iterable
```
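The crash happens because JSON Schema allows a schema to be the bare boolean `true`/`false`, while the traversal in the traceback assumes it is always a dict, so `if "const" in schema` fails. A defensive version of the failing check could look like this — a sketch only, not the actual `gradio_client` fix:

```python
from typing import Union

def get_type(schema: Union[bool, dict]) -> str:
    # A JSON Schema may be a bare boolean: `true` means "any value is
    # valid", `false` means "no value is valid". Membership tests like
    # `"const" in schema` raise TypeError on these.
    if isinstance(schema, bool):
        return "Any" if schema else "None"
    if "const" in schema:
        return f"Literal[{schema['const']!r}]"
    return schema.get("type", "Any")
```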
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```
!pip install -Uqq fastai
!pip uninstall gradio -y
!pip uninstall pydantic -y
!pip cache purge
!pip install pydantic==2.10.6
!pip install gradio
import gradio as gr
from fastai.learner import load_learner
from fastai.vision.core import PILImage  # needed for PILImage.create below

learn = load_learner('export.pkl')
labels = learn.dls.vocab

def predict(img):
    img = PILImage.create(img)
    pred, pred_idx, probs = learn.predict(img)
    return {labels[i]: float(probs[i].item()) for i in range(len(labels))}

gr.Interface(
    fn=predict,
    inputs=gr.Image(),
    outputs=gr.Label()
).launch(share=True)
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.21.0
gradio_client version: 1.7.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 22.1.0
anyio: 3.7.1
audioop-lts is not installed.
fastapi: 0.115.11
ffmpy: 0.5.0
gradio-client==1.7.2 is not installed.
groovy: 0.1.2
httpx: 0.28.1
huggingface-hub: 0.29.0
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.12
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.11.0
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.46.1
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.28.1
huggingface-hub: 0.29.0
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.1
```
### Severity
Blocking usage of gradio
|
open
|
2025-03-15T15:27:56Z
|
2025-03-17T18:26:54Z
|
https://github.com/gradio-app/gradio/issues/10813
|
[
"bug"
] |
yumengzhao92
| 1
|
jupyterhub/zero-to-jupyterhub-k8s
|
jupyter
| 3,442
|
Not possible to add a ServiceAccount to the Prepuller
|
### Bug description
Even though `prepuller.hook.serviceaccount` is properly configured, these changes aren't applied in the pods
### How to reproduce
1. Configure `prepuller.hook.serviceaccount` with a service account
2. Apply the changes
3. Check that the pod `image-puller` uses the default service account, even though that was not the service account we defined
#### Expected behaviour
The Service Account should be properly set
#### Actual behaviour
The Service Account used is the default and we have no way to change it
|
closed
|
2024-06-23T11:06:34Z
|
2024-10-15T09:20:46Z
|
https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3442
|
[
"bug"
] |
samyuh
| 2
|
thtrieu/darkflow
|
tensorflow
| 586
|
Failed to use tiny yolo
|
Hi, I tried to use tiny-yolo.cfg and tiny-yolo.weights. When I ran the command
`python3 flow --model cfg/tiny-yolo.cfg --load bin/tiny-yolo.weights`
I got the following error:
> Parsing ./cfg/tiny-yolo.cfg
Parsing cfg/tiny-yolo.cfg
Loading bin/tiny-yolo.weights ...
Traceback (most recent call last):
File "flow", line 6, in <module>
cliHandler(sys.argv)
File "/home/asus/darkflow/darkflow/cli.py", line 26, in cliHandler
tfnet = TFNet(FLAGS)
File "/home/asus/darkflow/darkflow/net/build.py", line 58, in __init__
darknet = Darknet(FLAGS)
File "/home/asus/darkflow/darkflow/dark/darknet.py", line 27, in __init__
self.load_weights()
File "/home/asus/darkflow/darkflow/dark/darknet.py", line 82, in load_weights
wgts_loader = loader.create_loader(*args)
File "/home/asus/darkflow/darkflow/utils/loader.py", line 105, in create_loader
return load_type(path, cfg)
File "/home/asus/darkflow/darkflow/utils/loader.py", line 19, in __init__
self.load(*args)
File "/home/asus/darkflow/darkflow/utils/loader.py", line 70, in load
val = walker.walk(new.wsize[par])
File "/home/asus/darkflow/darkflow/utils/loader.py", line 127, in walk
'Over-read {}'.format(self.path)
AssertionError: Over-read bin/tiny-yolo.weights
But when I used yolo.cfg and yolo.weights, no such error occurred. Can anyone solve this problem?
|
open
|
2018-02-19T01:37:45Z
|
2019-04-17T13:39:23Z
|
https://github.com/thtrieu/darkflow/issues/586
|
[] |
alfamousts
| 4
|
dgtlmoon/changedetection.io
|
web-scraping
| 2,336
|
Text taken from wrong step of browser steps
|
**Describe the bug**
I'm having the same problem as issue #1911, except the text is being taken from step 3 of 4. I can see the saved snapshot is correct, but the saved text is not. When looking at the steps on disk that were grabbed, I can see the text matches step3.html, and the screenshot matches step4.html.
Step 3 takes a few seconds to load. Step 4 is "wait for seconds" to ensure the page is fully loaded. I also tried setting "Wait seconds before extracting text" under the request tab. Neither fixes the issue.
**Version**
v0.45.20
**To Reproduce**
Steps to reproduce the behavior:
1. Create browser steps with 4 steps, where step 3 takes a few seconds to load.
2. Run it and see that the text is from step 3 and the snapshot is from step 4.
**Expected behavior**
The text matches the saved snapshot.
**Screenshots**
I know you want me to share the URL and steps, but unfortunately this one is going to a place that would reveal medical information so I don't want to do that.
**Desktop (please complete the following information):**
- OS: [e.g. iOS] MacOS
- Browser [e.g. chrome, safari] Firefox
- Version [e.g. 22] 124.0.2
|
closed
|
2024-04-26T00:29:59Z
|
2024-04-29T10:19:18Z
|
https://github.com/dgtlmoon/changedetection.io/issues/2336
|
[
"triage",
"browser-steps"
] |
fhriley
| 2
|
nteract/papermill
|
jupyter
| 253
|
Hiding Ingested Parameters when executing with `--report-mode`
|
I want to be able to hide the ingested parameters at least when running in report mode. Since a new cell is created by papermill when feeding in params, there is no way to add metadata for that cell in the notebook. When you want to execute a notebook in order to generate some sort of report where no code is visible I think that in most cases the ingested parameters should be hidden. #135
I'm not super familiar with the codebase but would it be as simple as adding:
```
newcell.metadata['jupyter']['source_hidden'] = True
```
Around here -> https://github.com/nteract/papermill/blob/master/papermill/execute.py#L112-L114 ?
----
Furthermore, it would be nice to have a neat way of handling secrets. According to my understanding it's currently not a very good idea to ingest secrets as parameters since, they are made available in the output notebook.
|
closed
|
2018-11-13T20:14:04Z
|
2018-11-14T16:55:23Z
|
https://github.com/nteract/papermill/issues/253
|
[] |
LeonardAukea
| 2
|
nalepae/pandarallel
|
pandas
| 264
|
Memory usage increases across multiple `parallel_apply`
|
## General
- **Operating System**: Linux
- **Python version**: 3.10.8
- **Pandas version**: 1.5.3
- **Pandarallel version**: 1.6.5
## Acknowledgement
- [x] My issue is **NOT** present when using `pandas` alone (without `pandarallel`)
- [x] If I am on **Windows**, I read the [Troubleshooting page](https://nalepae.github.io/pandarallel/troubleshooting/)
before writing a new bug report
## Bug description
If I run consecutive data-processing tasks, each applying `parallel_apply` to a huge DataFrame, their memory footprints somehow accumulate.
### Observed behavior
My code logic looks like below.
```
pandarallel.initialize(progress_bar=True, nb_workers=120)
for file_path in file_paths:
df = pd.read_csv(file_path)
df = pd.DataFrame.from_dict(
df.sample(frac=1.0).parallel_apply(SOME_FUNCTION, axis=1).to_dict(),
orient="columns",
)
```
All tasks should have similar memory footprints. However, the image below shows that memory drops after the first task finishes but soon climbs back up once the second task loads.
<img width="1758" alt="image" src="https://github.com/nalepae/pandarallel/assets/70972517/bb8b78a1-0ee7-42e8-b740-0a194fa182fb">
### Expected behavior
Given that two tasks have similar MEM footprints, I would assume the MEM pattern to be repeated but not accumulated.
## Minimal but working code sample to ease bug fix for `pandarallel` team
As the pseudocode I attached above.
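One general pattern worth trying here (an assumption, not a confirmed pandarallel fix): drop each DataFrame explicitly and force a garbage-collection pass before the next file is loaded, so the parent process releases references before the next task allocates.

```python
import gc

def process_files(file_paths, load, transform):
    """Process files one at a time, releasing each DataFrame eagerly."""
    results = []
    for file_path in file_paths:
        df = load(file_path)           # e.g. pd.read_csv(file_path)
        results.append(transform(df))  # e.g. the parallel_apply pipeline
        del df                         # drop the reference eagerly
        gc.collect()                   # reclaim cycles before the next load
    return results

# Toy stand-ins for the real loader/transform, just to show the flow:
sizes = process_files(["a.csv", "b.csv"],
                      load=lambda p: list(range(1000)),
                      transform=len)
print(sizes)  # [1000, 1000]
```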
|
closed
|
2024-03-04T19:56:13Z
|
2024-07-23T15:06:52Z
|
https://github.com/nalepae/pandarallel/issues/264
|
[] |
hogan-roblox
| 3
|
aminalaee/sqladmin
|
fastapi
| 826
|
Add Inline models like Django, Flask-Admin
|

|
closed
|
2024-10-07T07:26:15Z
|
2024-10-14T15:33:15Z
|
https://github.com/aminalaee/sqladmin/issues/826
|
[] |
logicli0n
| 1
|
scrapy/scrapy
|
python
| 5,755
|
Warning: Passing a 'spider' argument to ExecutionEngine
|
Could someone tell me what this warning means and how I should resolve it?
When running the spider:
2022-12-10 21:09:02 [py.warnings] WARNING: C:\Users\wsy\AppData\Roaming\Python\Python310\site-packages\scrapy_redis\spiders
.py:197: ScrapyDeprecationWarning: Passing a 'spider' argument to ExecutionEngine.crawl is deprecated
self.crawler.engine.crawl(req, spider=self)
|
closed
|
2022-12-11T09:06:22Z
|
2022-12-12T10:51:08Z
|
https://github.com/scrapy/scrapy/issues/5755
|
[] |
maintain99
| 2
|
flairNLP/flair
|
pytorch
| 3,450
|
[Bug]: transformers 4.40.0 assumes infinite sequence length on many models and breaks
|
### Describe the bug
This is due to a regression on the transformers side, see: https://github.com/huggingface/transformers/issues/30643 for details.
Flair uses the `tokenizer.model_max_length` in the TransformerEmbeddings to truncate (if `allow_long_sentences=False`) or split (if `allow_long_sentences=True`) long sentences.
### To Reproduce
```python
from flair.data import Sentence
from flair.embeddings import TransformerWordEmbeddings
emb = TransformerWordEmbeddings("distilbert-base-cased", allow_long_sentences=True)
emb.embed(Sentence("Hallo World "*1024))
```
### Expected behavior
The code should run through without any issue.
### Logs and Stack traces
```stacktrace
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Bened\anaconda3\envs\py312\Lib\site-packages\flair\embeddings\base.py", line 50, in embed
self._add_embeddings_internal(data_points)
File "C:\Users\Bened\anaconda3\envs\py312\Lib\site-packages\flair\embeddings\transformer.py", line 705, in _add_embeddings_internal
embeddings = self._forward_tensors(tensors)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Bened\anaconda3\envs\py312\Lib\site-packages\flair\embeddings\transformer.py", line 1424, in _forward_tensors
return self.forward(**tensors)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Bened\anaconda3\envs\py312\Lib\site-packages\flair\embeddings\transformer.py", line 1324, in forward
hidden_states = self.model(input_ids, **model_kwargs)[-1]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Bened\anaconda3\envs\py312\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Bened\anaconda3\envs\py312\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Bened\anaconda3\envs\py312\Lib\site-packages\transformers\models\distilbert\modeling_distilbert.py", line 806, in forward
embeddings = self.embeddings(input_ids, inputs_embeds) # (bs, seq_length, dim)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Bened\anaconda3\envs\py312\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Bened\anaconda3\envs\py312\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Bened\anaconda3\envs\py312\Lib\site-packages\transformers\models\distilbert\modeling_distilbert.py", line 144, in forward
embeddings = input_embeds + position_embeddings # (bs, max_seq_length, dim)
~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
RuntimeError: The size of tensor a (3074) must match the size of tensor b (512) at non-singleton dimension 1
```
### Screenshots
_No response_
### Additional Context
This bug is on the side of https://github.com/huggingface/transformers/issues/30643, therefore this issue is only for visibility.
If you run into this problem, you can hotfix it in 2 ways:
* pin `transformers<4.40.0`
* provide the `model_max_length` parameter yourself, e.g. `emb = TransformerWordEmbeddings("distilbert-base-cased", allow_long_sentences=True, model_max_length=512)`
### Environment
#### Versions:
##### Flair
0.13.1
##### Pytorch
2.3.0+cpu
##### Transformers
4.40.0
#### GPU
False
|
closed
|
2024-05-03T17:35:23Z
|
2024-12-31T13:38:55Z
|
https://github.com/flairNLP/flair/issues/3450
|
[
"bug"
] |
helpmefindaname
| 3
|
encode/httpx
|
asyncio
| 2,276
|
GET method doesn't support body payload
|
It would be nice to be able to send a body with a GET request. I understand that this may not be considered a good practice, but this is necessary for the API that I have to work with.
RFC 2616, section 4.3 clearly states:
> A message-body **MUST NOT be included** in a request **if the specification of the request method** (section 5.1.1) **does not allow sending an entity-body** in requests.
> https://tools.ietf.org/html/rfc2616#section-4.3
_However_, in the entirety of [section 9.3](https://tools.ietf.org/html/rfc2616#section-9.3), the section defining the GET verb, **nothing prevents a GET request from having a body**.
But before you draw any conclusions - the functionality defined for the GET verb also **does not include any logic involving the message body**.
In other words, if we are to follow the specification:
1. **It is possible** to send a message body with a GET request per specification.
2. However, the server responding to the GET request **must ignore the body** to follow the standard.
Essentially, there is no point, per standard, to send a body with a GET request, even though it is not explicitly disallowed.
[Roy T. Fielding](http://roy.gbiv.com/) backs this interpretation up:
> ...**any HTTP request message is allowed to contain a message body**, and thus must parse messages with that in mind. **Server semantics for GET**, however, are restricted such that **a body**, if any, **has no semantic meaning to the request**. The requirements on parsing are separate from the requirements on method semantics.
> So, yes, **you can send a body with GET, and no, it is never useful to do so**.
> https://groups.yahoo.com/neo/groups/rest-discuss/conversations/messages/9962
More about this: https://github.com/swagger-api/swagger-ui/issues/2136
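For what it's worth, the Python standard library already lets you construct a GET request carrying a body, as long as the method is set explicitly (a sketch unrelated to httpx internals; `urllib` would otherwise switch to POST whenever `data` is present):

```python
# Sketch: build a GET request with a body using the stdlib. This only
# constructs the request object; whether the server honors the body is a
# separate matter, per the RFC discussion above.
import urllib.request

req = urllib.request.Request(
    "http://example.invalid/search",
    data=b'{"query": "books"}',
    method="GET",
    headers={"Content-Type": "application/json"},
)
print(req.method, req.data)  # GET b'{"query": "books"}'
```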
|
closed
|
2022-06-23T05:30:47Z
|
2022-06-23T07:41:33Z
|
https://github.com/encode/httpx/issues/2276
|
[] |
ZhymabekRoman
| 2
|
sherlock-project/sherlock
|
python
| 2,369
|
False positive for: HackenProof
|
### Additional info
Searching `goslnt` reliably produces a false positive for HackenProof, and unreliably produces false positives for ArtStation (redirected to a 404) and AskFM.
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
open
|
2024-11-17T23:14:48Z
|
2024-11-26T01:56:07Z
|
https://github.com/sherlock-project/sherlock/issues/2369
|
[
"false positive"
] |
sudo-nano
| 3
|
google-research/bert
|
nlp
| 559
|
problem multiclass text classification
|
Hi,
I am trying to classify text into 34 mutually exclusive classes using BERT. After preparing train, dev, and test TSV files, I try to execute the command for training and testing:
```shell
!python bert/run_classifier.py \
--task_name=cola \
--do_train=true \
--do_eval=true \
--data_dir=./Bert_Input_Folder \
--vocab_file=./uncased_L-24_H-1024_A-16/vocab.txt \
--bert_config_file=./uncased_L-24_H-1024_A-16/bert_config.json \
--init_checkpoint=./uncased_L-24_H-1024_A-16/bert_model.ckpt \
--max_seq_length=512 \
--train_batch_size=32 \
--learning_rate=2e-5 \
--num_train_epochs=3.0 \
--output_dir=./Bert_Output_Folder
```
I get the following error
`WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:Estimator's model_fn (<function model_fn_builder.<locals>.model_fn at 0x7f4b945a01e0>) includes params argument, but params are not passed to Estimator.
INFO:tensorflow:Using config: {'_model_dir': './Bert_Output_Folder', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 1000, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': None, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f4b94f366a0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_tpu_config': TPUConfig(iterations_per_loop=1000, num_shards=8, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None), '_cluster': None}
INFO:tensorflow:_TPUContext: eval_on_tpu True
WARNING:tensorflow:eval_on_tpu ignored because use_tpu is False.
INFO:tensorflow:Writing example 0 of 23834
Traceback (most recent call last):
File "bert/run_classifier.py", line 981, in <module>
tf.app.run()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "bert/run_classifier.py", line 870, in main
train_examples, label_list, FLAGS.max_seq_length, tokenizer, train_file)
File "bert/run_classifier.py", line 490, in file_based_convert_examples_to_features
max_seq_length, tokenizer)
File "bert/run_classifier.py", line 459, in convert_single_example
label_id = label_map[example.label]
KeyError: '33'`
In the run_classifier.py file, I have modified the get_labels() function, originally written for a binary classification task, to return all 34 classes:
```python
def get_labels(self):
    """See base class."""
    return ["0", "1", "2", ..., "33"]
```
Any idea what is wrong or if I am missing additional steps?
Thanks!
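For context, `convert_single_example` builds a `label_map` from `get_labels()` and indexes it with each example's label string, so the `KeyError: '33'` means `'33'` never made it into that map, e.g. if the literal `...` placeholder was left in the returned list. A sketch that generates the list programmatically avoids that mistake (the helper below mirrors the snippet above but is not the original code):

```python
# Sketch: generate all 34 label strings instead of typing them out, then
# build the label -> index map the way run_classifier.py does.

def get_labels(num_classes=34):
    """All class labels as strings, '0' through '33'."""
    return [str(i) for i in range(num_classes)]

label_map = {label: i for i, label in enumerate(get_labels())}
print(len(label_map), label_map["33"])  # 34 33
```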
|
open
|
2019-04-07T02:13:04Z
|
2020-09-13T14:47:59Z
|
https://github.com/google-research/bert/issues/559
|
[] |
86mm86
| 14
|
flavors/django-graphql-jwt
|
graphql
| 22
|
graphql_jwt.relay.ObtainJSONWebToken returns token when wrong credentials are submitted and Authorization header is set
|
I ran into a case when I had two users, `A` and `B`, and was sending a valid token of `A` when trying to obtain a new token for `B`. The mutation doesn't return any error, but instead returns a new token for `A`.
I dug a little into the code and found out it was because of the use of `authenticate` here: https://github.com/flavors/django-graphql-jwt/blob/master/graphql_jwt/decorators.py#L69 , as the middleware will authenticate the user using the token instead of validating the submitted credentials.
So I ended up in a situation where, no matter what I was sending in the mutation input args, I was getting a valid token for another user.
I believe that mutation should validate the credentials instead of using all middlewares to authenticate the user.
A possible fix that pops in my mind right now would be calling `authenticate` only with username and password.
What do you think about this?
Thank you.
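The proposed fix can be sketched library-agnostically (the names below are hypothetical, not django-graphql-jwt's API): validate the submitted username/password directly instead of running the full authentication middleware chain, so a valid token for user A in the headers can never satisfy a login attempt for user B.

```python
import hashlib
import hmac
import os

# Hypothetical in-memory user store, for illustration only.
def _hash(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

_salt = os.urandom(16)
USERS = {"user_b": _hash("s3cret", _salt)}

def authenticate_credentials(username, password):
    """Check ONLY username/password; ignore any token in the request headers."""
    stored = USERS.get(username)
    if stored is None:
        return None
    return username if hmac.compare_digest(stored, _hash(password, _salt)) else None

# A token for user A in the Authorization header is never consulted:
print(authenticate_credentials("user_b", "s3cret"))  # user_b
print(authenticate_credentials("user_b", "wrong"))   # None
```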
|
closed
|
2018-06-21T13:01:10Z
|
2018-06-29T20:18:32Z
|
https://github.com/flavors/django-graphql-jwt/issues/22
|
[
"bug"
] |
vladcalin
| 2
|
hankcs/HanLP
|
nlp
| 1,214
|
Person-name recognition error with the perceptron model
|
<!--
The "Notes" and "Version" sections are required; issues without them will not be answered. For a quicker reply, please fill in the template carefully. Thank you.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and found no answer in them:
  - [Main README](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer either.
* I understand that the open-source community is a voluntary community formed out of shared interest, which bears no responsibilities or obligations. I will be polite and thank everyone who helps me.
* [x] I type an x in these brackets to confirm the items above.
## Version
<!-- For a release, give the jar file name without its extension; for the GitHub repository, state whether you use the master or portable branch -->
The current latest version is:
The version I am using is: pyhanlp 0.1.45
<!-- The items above are required; feel free to elaborate below -->
## My question
<!-- Please describe the problem in detail; the more detail, the more likely it is to be solved -->
Segmenting the sentence "这时我女儿凤霞推门进来,又摇摇晃晃地把门关上。凤霞尖声细气地对我说:"
produces the baffling result that "。凤霞" is recognized as a person name. If the full stop is changed to a comma, the segmentation becomes normal again.
I have already added "凤霞" to the dictionary; the result is the same.
## Reproducing the problem
<!-- What did you do to trigger the problem? E.g. modified the code? Modified a dictionary or model? -->
### Steps
1. First…
2. Then…
3. Next…
### Triggering code
```
txt = "这时我女儿凤霞推门进来,又摇摇晃晃地把门关上,凤霞尖声细气地对我说:"
data_path = "/home/dream/miniconda3/envs/py37/lib/python3.7/site-packages/pyhanlp/static/data/model/perceptron/large/cws.bin"
PerceptronLexicalAnalyzer = JClass('com.hankcs.hanlp.model.perceptron.PerceptronLexicalAnalyzer')
analyzer = PerceptronLexicalAnalyzer(data_path,
HanLP.Config.PerceptronPOSModelPath,
HanLP.Config.PerceptronNERModelPath)
print(analyzer.seg(txt))
```
### Expected output
<!-- What correct output do you expect? -->
```
[这时/r, 我/r, 女儿/n, 凤霞/nr, 推门/v, 进来/v, ,/w, 又/d, 摇摇晃晃/v, 地/u, 把/p, 门关/n, 上/f, 。/w, 凤霞/nr, 尖声/nz, 细气/a, 地/u, 对/p, 我/r, 说/v, :/w]
```
### Actual output
<!-- What did HanLP actually output? What happened, and where is it wrong? -->
```
[这时/r, 我/r, 女儿/n, 凤霞/nr, 推门/v, 进来/v, ,/w, 又/d, 摇摇晃晃/v, 地/u, 把/p, 门关/n, 上/f, 。 凤霞/nr, 尖声/nz, 细气/a, 地/u, 对/p, 我/r, 说/v, :/w]
```
## Other information
<!-- Any potentially useful information: screenshots, logs, config files, related issues, etc. -->
|
closed
|
2019-06-28T03:36:45Z
|
2020-03-20T10:09:38Z
|
https://github.com/hankcs/HanLP/issues/1214
|
[
"question"
] |
lingjiameng
| 2
|
kizniche/Mycodo
|
automation
| 490
|
Feature: Dedicated AC/Heating Function
|
This thread is for the development of a dedicated AC/Heating Function that incorporates the benefits of PID control with the features required for operating an efficient AC/Heating system. Additionally, features that would enable low temperatures with a wall/compact AC system can be integrated ([coolbot](https://www.storeitcold.com/) clone).
Ref: #484 #346
|
closed
|
2018-06-05T15:40:23Z
|
2020-07-23T18:47:12Z
|
https://github.com/kizniche/Mycodo/issues/490
|
[
"enhancement"
] |
kizniche
| 23
|
huggingface/datasets
|
tensorflow
| 6,854
|
Wrong example of usage when config name is missing for community script-datasets
|
As reported by @Wauplin, when loading a community dataset with a script, there is a bug in the usage example shown in the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example:
```python
>>> ds = load_dataset("google/fleurs")
ValueError: Config name is missing.
Please pick one among the available configs: ['af_za', 'am_et', 'ar_eg', 'as_in', 'ast_es', 'az_az', 'be_by', 'bg_bg', 'bn_in', 'bs_ba', 'ca_es', 'ceb_ph', 'ckb_iq', 'cmn_hans_cn', 'cs_cz', 'cy_gb', 'da_dk', 'de_de', 'el_gr', 'en_us', 'es_419', 'et_ee', 'fa_ir', 'ff_sn', 'fi_fi', 'fil_ph', 'fr_fr', 'ga_ie', 'gl_es', 'gu_in', 'ha_ng', 'he_il', 'hi_in', 'hr_hr', 'hu_hu', 'hy_am', 'id_id', 'ig_ng', 'is_is', 'it_it', 'ja_jp', 'jv_id', 'ka_ge', 'kam_ke', 'kea_cv', 'kk_kz', 'km_kh', 'kn_in', 'ko_kr', 'ky_kg', 'lb_lu', 'lg_ug', 'ln_cd', 'lo_la', 'lt_lt', 'luo_ke', 'lv_lv', 'mi_nz', 'mk_mk', 'ml_in', 'mn_mn', 'mr_in', 'ms_my', 'mt_mt', 'my_mm', 'nb_no', 'ne_np', 'nl_nl', 'nso_za', 'ny_mw', 'oc_fr', 'om_et', 'or_in', 'pa_in', 'pl_pl', 'ps_af', 'pt_br', 'ro_ro', 'ru_ru', 'sd_in', 'sk_sk', 'sl_si', 'sn_zw', 'so_so', 'sr_rs', 'sv_se', 'sw_ke', 'ta_in', 'te_in', 'tg_tj', 'th_th', 'tr_tr', 'uk_ua', 'umb_ao', 'ur_pk', 'uz_uz', 'vi_vn', 'wo_sn', 'xh_za', 'yo_ng', 'yue_hant_hk', 'zu_za', 'all']
Example of usage:
`load_dataset('fleurs', 'af_za')`
```
Note the example of usage in the error message suggests loading "fleurs" instead of "google/fleurs".
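The fix is presumably just to format the full namespaced repo id into the usage hint rather than the bare dataset name. A sketch of the intended message (illustrative, not the actual `datasets` code):

```python
def usage_hint(repo_id: str, configs: list) -> str:
    """Build the usage example with the full namespaced repo id.

    Using only repo_id.split("/")[-1] is what produces the wrong
    "fleurs" hint; keeping repo_id unchanged preserves the namespace.
    """
    return f"`load_dataset('{repo_id}', '{configs[0]}')`"

print(usage_hint("google/fleurs", ["af_za", "am_et"]))
# `load_dataset('google/fleurs', 'af_za')`
```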
|
closed
|
2024-05-02T06:59:39Z
|
2024-05-03T15:51:59Z
|
https://github.com/huggingface/datasets/issues/6854
|
[
"bug"
] |
albertvillanova
| 0
|
ploomber/ploomber
|
jupyter
| 859
|
Shell script task with multiple products
|
I get the following error
```
Error: Failed to initialize task 'clean'
'Getitem' object has no attribute 'name'
```
when running the pipeline
### pipeline.yaml
```yaml
tasks:
- source: get_data.py
product:
nb: get_data.ipynb
data: data.csv
- source: clean.sh
product: output.log
```
### get_data.py
```python
import numpy as np
import pandas as pd
# + tags=['parameters']
upstream = None
product = None
# -
data = pd.DataFrame(
{"A": np.random.randint(0, 10, 10), "B": np.random.randint(10, 20, 10)}
)
data.to_csv(product["data"])
```
### clean.sh
```sh
echo "Starting" >> {{product}}
cat {{upstream['get_data']['data']}}
```
I tried to follow these [docs](https://docs.ploomber.io/en/latest/user-guide/shell.html#shell-tasks), but there wasn't an example using a shell script downstream of a Python script. FWIW, I found a workaround by hard-coding the file paths into my shell script.
|
closed
|
2022-06-16T15:42:26Z
|
2022-06-17T15:05:02Z
|
https://github.com/ploomber/ploomber/issues/859
|
[] |
reesehopkins
| 1
|
robotframework/robotframework
|
automation
| 4,497
|
Libdoc: Support setting dark or light mode explicitly
|
The HTML documentation generated by libdoc can be opened in an IDE, where many people use a dark theme. The contrast between the Robot Framework code on a dark background and the library documentation with a white background is unpleasant.
This problem can be solved if libdoc has a stylesheet parameter to specify the CSS file that should be used to style the documentation. And maybe libdoc should include a light and a dark stylesheet.
|
closed
|
2022-10-06T11:01:38Z
|
2022-10-11T17:37:11Z
|
https://github.com/robotframework/robotframework/issues/4497
|
[
"enhancement",
"priority: medium",
"rc 2"
] |
mardukbp
| 16
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,457
|
Pre-trained Model Architecture
|
Hi @junyanz , I am using your pre-trained model to compare with my model.
I downloaded the models from [link](http://efrosgans.eecs.berkeley.edu/cyclegan/pretrained_models/), but I am confused by the structure of the models. How should I load the models?
It seems the .pth file contains parameters only and I could not find any information about the structure.
Best/
|
open
|
2022-07-13T13:39:36Z
|
2022-07-13T13:40:59Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1457
|
[] |
WuhaoStatistic
| 0
|
tensorflow/tensor2tensor
|
deep-learning
| 1,785
|
Common Voice Clean dataset giving error when using t2t-datagen
|
### Description
I've been trying to generate the Common Voice dataset to improve the ASR checkpoint trained on LibriSpeech, but the command downloads the file properly and then fails to find cv_corpus_v1. It probably doesn't extract the .tar properly.
### Environment information
```
OS: Google Colab
$ pip freeze | grep tensor
mesh-tensorflow==0.1.9
tensor2tensor==1.11.0
tensorboard==1.14.0
tensorboardcolab==0.0.22
tensorflow==1.14.0
tensorflow-datasets==2.0.0
tensorflow-estimator==1.14.0
tensorflow-gan==2.0.0
tensorflow-hub==0.7.0
tensorflow-metadata==0.21.1
tensorflow-privacy==0.2.2
tensorflow-probability==0.7.0
$ python -V
3.6.9
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
use
!t2t-datagen \
--problem=common_voice_clean \
--data_dir=final_dir \
--tmp_dir=tmp_dir
You should see the download happen smoothly until it finishes and get a FileNotFoundError
```
```
# Error logs:
INFO:tensorflow:Successfully downloaded cv_corpus_v1.tar.gz, 12852160484 bytes.
I0204 09:30:56.961190 140237164398464 generator_utils.py:246] Successfully downloaded cv_corpus_v1.tar.gz, 12852160484 bytes.
Traceback (most recent call last):
File "/usr/local/bin/t2t-datagen", line 28, in <module>
tf.app.run()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "/usr/local/bin/t2t-datagen", line 23, in main
t2t_datagen.main(argv)
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/bin/t2t_datagen.py", line 198, in main
generate_data_for_registered_problem(problem)
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/bin/t2t_datagen.py", line 260, in generate_data_for_registered_problem
problem.generate_data(data_dir, tmp_dir, task_id)
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/data_generators/common_voice.py", line 166, in generate_data
self.generator(data_dir, tmp_dir, self.TEST_DATASETS), test_paths)
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/data_generators/generator_utils.py", line 165, in generate_files
for case in generator:
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/data_generators/common_voice.py", line 136, in generator
data_tuples = _collect_data(raw_data_dir)
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/data_generators/common_voice.py", line 53, in _collect_data
filename for filename in os.listdir(directory)
FileNotFoundError: [Errno 2] No such file or directory: 'tmp_dir/cv_corpus_v1'
```
|
closed
|
2020-02-04T09:44:59Z
|
2020-02-04T14:03:17Z
|
https://github.com/tensorflow/tensor2tensor/issues/1785
|
[] |
RegaliaXYZ
| 0
|
ageitgey/face_recognition
|
machine-learning
| 1,251
|
Getting irregular output when running compare faces with lists
|
* face_recognition version: 1.3.0
* Python version: 3.9.0
* Operating System: Windows
I am trying to compare a sample face image with a list of encodings stored in my files.
When I ran the `compare_faces` function on the sample image encoding and the list of encodings (encodings for only 2 images), I got the following:
```
[array([ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True]), array([ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True])]
```
## What I Did
```
#empty dictionary for encodings in folders
dictionary = {name:[] for name in os.listdir(ROOT)}
#appending encodings to above initialized dictionary
for folder in os.listdir(ROOT):
for image in os.listdir(os.path.join(ROOT,folder)):
img = face_recognition.load_image_file(f"{ROOT}/{folder}/{image}")
dictionary[folder].append(face_recognition.face_encodings(img))
#dictionary looks like: {name:[list of encodings]}
#load single sample image for comparison
my_image = face_recognition.load_image_file("vedank.jpg")
face_encodings = face_recognition.face_encodings(my_image,face_recognition.face_locations(my_image))
#user input to access the above dictionary
user = input("enter your username")
#print results
print(face_recognition.compare_faces(np.array(face_encodings[0]),np.array(dictionary[user])))
```
While playing around with the code, I realized that appending the encodings to lists causes this problem. Any way I can still use the above method and get proper results?
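The nesting happens because `face_recognition.face_encodings()` returns a *list* of encodings (one per detected face), so appending its result nests the list one level too deep; also note the signature is `compare_faces(known_face_encodings, face_encoding_to_check)`, with the known-encodings list first. A hedged sketch of the flattening step (the helper name is mine):

```python
# Sketch: collapse [[enc], [enc], ...] into [enc, enc, ...], skipping images
# where no face was detected (empty sub-lists).

def flatten_encodings(nested):
    return [encs[0] for encs in nested if encs]

# Usage against the snippet above (requires face_recognition installed):
#   dictionary[folder].append(face_recognition.face_encodings(img))  # nested
#   known = flatten_encodings(dictionary[user])
#   results = face_recognition.compare_faces(known, face_encodings[0])
# results is then one bool per stored image, e.g. [True, False].

print(flatten_encodings([["enc_a"], ["enc_b"], []]))  # ['enc_a', 'enc_b']
```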
|
closed
|
2020-12-08T12:46:02Z
|
2020-12-09T11:19:25Z
|
https://github.com/ageitgey/face_recognition/issues/1251
|
[] |
VedankPande
| 0
|
tfranzel/drf-spectacular
|
rest-api
| 749
|
Question: Using `TypedDict` as response
|
Hi! I saw in another issue that we can now use a `TypedDict` class in the response instead of a serializer. Is it possible to provide an example or a documentation link elaborating on this behavior?
Thanks!
|
closed
|
2022-05-30T20:52:53Z
|
2022-06-18T13:37:21Z
|
https://github.com/tfranzel/drf-spectacular/issues/749
|
[] |
kmehran1106
| 1
|
apache/airflow
|
machine-learning
| 47,501
|
AIP-38 | Add API Endpoint to serve connection types and extra form meta data
|
### Body
To be able to implement #47496 and #47497 the connection types and extra form elements meta data needs to be served by an additional API endpoint.
Note: The extra form parameters should be served in the same structure and format as the DAG params, so that the FlexibleForm form elements can be re-used in the UI.
The assumption is that the needed connection types are serialized in a DB table. (No dependency on the providers manager should be added to the API server.)
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
|
closed
|
2025-03-07T14:54:17Z
|
2025-03-12T22:28:20Z
|
https://github.com/apache/airflow/issues/47501
|
[
"kind:feature",
"area:API",
"kind:meta"
] |
jscheffl
| 0
|
encode/uvicorn
|
asyncio
| 1,230
|
Bug: calling `WebSocketProtocol.asgi_receive` returns close frame even if there are data messages before close frame in read queue
|
### Checklist
- [x] The bug is reproducible against the latest release and/or `master`.
- [x] There are no similar issues or pull requests to fix it yet.
### Describe the bug
Once a client sends a close frame, calling `WebSocketProtocol.asgi_receive` returns `{"type": "websocket.disconnect", "code": exc.code}`, even if there are unread messages in the server's read queue that the client sent **before** the close frame.
### To reproduce
The easiest way for me to repro is to use an ASGI application framework written on top of `uvicorn`, e.g. FastAPI.
- Install FastAPI and create a module `main.py`
- Run the websocket server: `uvicorn main:app --reload --host 0.0.0.0 --port 8001`
- Open the browser and create a websocket connection to this test endpoint
`main.py`
```py
import asyncio
from fastapi import FastAPI
from starlette.websockets import WebSocket
app = FastAPI()
@app.websocket("/echo")
async def echo_ws(ws: WebSocket):
await ws.accept()
await asyncio.sleep(1)
data = await ws.receive_bytes()
print(data)
```
Paste the following into the browser console:
```js
var socket = new WebSocket('ws://localhost:8001/atom_ws/echo')
socket.onopen = () => {
console.log('socket opened')
socket.send('first')
socket.send('second')
socket.close()
}
```
### Expected behavior
The call to `print(data)` should print `first`, and not raise a `starlette.websockets.WebSocketDisconnect` exception.
### Actual behavior
`starlette.websockets.WebSocketDisconnect` is raised on the first read, even though messages were successfully sent to the server before the close frame, and these messages are in the connection's read queue.
### Debugging material
Logs when running the server code from above:
<img width="839" alt="Screen Shot 2021-11-01 at 9 24 46 PM" src="https://user-images.githubusercontent.com/1524088/139782096-84024344-6044-453e-9b8f-dff193aef53a.png">
If you instead run a simple websocket server written with code directly from the `websockets` library, [as suggested in their docs](https://websockets.readthedocs.io/en/3.0/intro.html#basic-example), you don't have this problem:
```py
import asyncio
import websockets
async def echo(ws, path):
print(f"accepted connection to path {path}")
await asyncio.sleep(1)
# By this time client has already closed connection
# In uvicorn, `await ws.ensure_open()` is called before recv; this is the bug
data = await ws.recv()
print(data) # Prints first
data = await ws.recv()
print(data) # Prints second
# The next `recv` call raises `websockets.exceptions.ConnectionClosedError`, because it reads the close frame
data = await ws.recv()
start_server = websockets.serve(echo, 'localhost', 8001)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
```
We can see that the `first` and `second` messages are received, even though the client sent the close frame before the server tried to read any messages:
<img width="933" alt="Screen Shot 2021-11-01 at 9 30 37 PM" src="https://user-images.githubusercontent.com/1524088/139782584-0e436870-0c25-4d6b-b48e-11fdd36bec35.png">
### Environment
- MacOS 10.14.6 / Python 3.8.5 / uvicorn 0.15.0 (this bug is still present in 0.16.0, and in the latest commit in `master`)
- Can also repro on Ubuntu 20.04
- The exact command you're running uvicorn with, all flags you passed included: `uvicorn main:app --reload --host 0.0.0.0 --port 8001`
### Additional context
The bug is caused by this line of code: https://github.com/encode/uvicorn/blob/48edc940522a3d0d7529922a23ac019eeb53f629/uvicorn/protocols/websockets/websockets_impl.py#L286-L286
```py
await self.ensure_open()
data = await self.recv()
```
This change was made in this commit: https://github.com/encode/uvicorn/commit/9a3040c9cd56844631b28631acd5862b5a4eafdd
`uvicorn` depends on `websockets` under the hood, but it shouldn't be calling `ensure_open` before calling `recv`, because `ensure_open` raises an exception if a close frame has been sent by the client **even if there are earlier unread messages in the read queue**.
I'm not sure what the intent of that line of code is, but the server shouldn't raise a "connection closed" exception on read until the actual close frame is read. Otherwise, neither the client nor the server has any way of knowing that data that was successfully sent from the client to the server was ignored by the server.
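The desired contract can be demonstrated with a plain `asyncio.Queue` (a sketch of the semantics, not uvicorn's implementation): a close marker should only take effect once everything queued before it has been read.

```python
import asyncio

CLOSE = object()  # sentinel standing in for the close frame

async def receive(queue):
    """Return queued data; report 'disconnect' only once the close marker
    itself is dequeued, never while data still precedes it."""
    item = await queue.get()
    if item is CLOSE:
        return {"type": "websocket.disconnect"}
    return {"type": "websocket.receive", "data": item}

async def demo():
    q = asyncio.Queue()
    # Client sent two messages, then closed:
    for item in (b"first", b"second", CLOSE):
        q.put_nowait(item)
    return [await receive(q) for _ in range(3)]

events = asyncio.run(demo())
print([e.get("data", "disconnect") for e in events])
# [b'first', b'second', 'disconnect']
```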
|
closed
|
2021-11-02T03:33:52Z
|
2021-11-25T09:09:08Z
|
https://github.com/encode/uvicorn/issues/1230
|
[] |
kylebebak
| 1
|
blacklanternsecurity/bbot
|
automation
| 1,452
|
Optimize Neo4j
|
@t94j0 I did some testing with Neo4j, and you're right that it's slow to insert events. In big scans especially, when the events are really flooding in, the Neo4j queue can get backed up.
To fix this, we'll need to figure out how to batch the cypher statements.
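A hedged sketch of one way to do that (class, method, and query names are illustrative, not BBOT's or the neo4j driver's actual API): buffer incoming events and flush them as a single parameterized `UNWIND` statement instead of one round trip per event.

```python
# Illustrative batching sketch: run_cypher stands in for whatever executes a
# Cypher statement (e.g. a wrapper around a driver session).
class CypherBatcher:
    def __init__(self, run_cypher, flush_size=500):
        self.run_cypher = run_cypher
        self.flush_size = flush_size
        self.pending = []

    def add(self, event):
        self.pending.append(event)
        if len(self.pending) >= self.flush_size:
            self.flush()

    def flush(self):
        if not self.pending:
            return
        # UNWIND turns N MERGEs into a single statement / single round trip
        self.run_cypher(
            "UNWIND $events AS e MERGE (n:Event {id: e.id}) SET n += e",
            {"events": self.pending},
        )
        self.pending = []
```

With `flush_size=500`, a scan emitting 10,000 events issues 20 statements instead of 10,000; a final `flush()` on teardown drains the remainder.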
|
closed
|
2024-06-12T13:31:35Z
|
2024-08-01T19:47:41Z
|
https://github.com/blacklanternsecurity/bbot/issues/1452
|
[
"enhancement"
] |
TheTechromancer
| 2
|
aeon-toolkit/aeon
|
scikit-learn
| 2,304
|
[test-pycatch22-allnighter] is STALE
|
@web-flow,
test-pycatch22-allnighter has had no activity for 254 days.
This branch will be automatically deleted in 0 days.
|
closed
|
2024-11-04T01:28:25Z
|
2024-11-11T01:28:39Z
|
https://github.com/aeon-toolkit/aeon/issues/2304
|
[
"stale branch"
] |
aeon-actions-bot[bot]
| 0
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 740
|
Slow training on Tesla P40
|

|
closed
|
2021-04-22T12:00:24Z
|
2021-05-30T07:35:25Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/740
|
[] |
wy192
| 2
|
ansible/ansible
|
python
| 84,726
|
_set_composite_vars doesn't support disable_lookups handling.
|
### Summary
The `_set_composite_vars` method of the inventory plugin's Constructable class doesn't take disable_lookups as a parameter. Therefore, when this method calls the `_compose` method of the same class, it always does so with disable_lookups=True.
```python
## https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/__init__.py#L334
def _compose(self, template, variables, disable_lookups=True):
""" helper method for plugins to compose variables for Ansible based on jinja2 expression and inventory vars"""
t = self.templar
```
```python
## https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/__init__.py#L356
def _set_composite_vars(self, compose, variables, host, strict=False):
""" loops over compose entries to create vars for hosts """
if compose and isinstance(compose, dict):
for varname in compose:
try:
composite = self._compose(compose[varname], variables)
.....
```
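A minimal sketch of the proposed pass-through (an illustrative stand-in class, not Ansible's actual code): accept `disable_lookups` on `_set_composite_vars` and forward it to `_compose`.

```python
# Illustrative stand-in for the Constructable class, demonstrating the
# proposed change only; _compose here just records which flag it received.
class Constructable:
    def _compose(self, template, variables, disable_lookups=True):
        # stand-in for Templar evaluation
        return (template, disable_lookups)

    def _set_composite_vars(self, compose, variables, host,
                            strict=False, disable_lookups=True):
        results = {}
        if compose and isinstance(compose, dict):
            for varname in compose:
                results[varname] = self._compose(
                    compose[varname], variables,
                    disable_lookups=disable_lookups,  # the proposed pass-through
                )
        return results
```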
### Issue Type
Feature Idea
### Component Name
plugin inventory
### Additional Information
- AWS Inventory plugin for ansible use this function:
https://github.com/ansible-collections/amazon.aws/blob/main/plugins/inventory/aws_ec2.py#L788
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct
|
open
|
2025-02-18T09:17:19Z
|
2025-02-18T15:01:05Z
|
https://github.com/ansible/ansible/issues/84726
|
[
"feature",
"data_tagging"
] |
jpaniorte
| 5
|
replicate/cog
|
tensorflow
| 1,406
|
Cog compatible image/container spec / allow base image in cog.yaml
|
Hello!
I know that cog is aimed at research projects and researchers who aren't super familiar with Docker. However, I am investigating deploying models on Replicate for the company I work for (i.e. private models), and we already have a fully containerized workflow that works with GPUs. It would be great if I could specify a parent image in the build section of `cog.yaml`.
Perhaps this already works, and it is only the API specification of having a `Predictor` and associated server entrypoint in a given Docker image that makes it "replicate" compatible, although I somehow doubt this is the case.
My ideal workflow would be:
- I have a prebuilt docker image
- I write a predictor, and create a new docker image with the predictor as the entrypoint
- I can push this image to replicate and run it
Perhaps this is hard! Let me know if you have any suggestions for an ideal approach here. Thanks!
|
open
|
2023-11-30T01:31:27Z
|
2023-12-16T22:39:24Z
|
https://github.com/replicate/cog/issues/1406
|
[] |
DeNeutoy
| 5
|
marshmallow-code/flask-marshmallow
|
rest-api
| 38
|
Docs change required: for security reasons jsonify can't serialize a top-level array
|
The docs for serializing SQLAlchemy results with multiple rows suggest the code below:
``` python
users_schema = UserSchema(many=True)
@app.route('/api/users/')
def users():
all_users = User.all()
result = users_schema.dump(all_users)
return jsonify(result.data)
# OR
# return user_schema.jsonify(all_users)
```
This code will actually raise an error because list objects can't be jsonified ([see this Flask issue](https://github.com/pallets/flask/issues/673)).
Instead the example should be:
``` python
users_schema = UserSchema(many=True)
@app.route('/api/users/')
def users():
all_users = User.all()
result = users_schema.dump(all_users)
return jsonify({'data':result.data})
# OR
# return user_schema.jsonify({'data':result.data})
```
|
closed
|
2016-04-23T04:04:03Z
|
2016-04-23T14:29:30Z
|
https://github.com/marshmallow-code/flask-marshmallow/issues/38
|
[] |
karan1276
| 1
|
PokeAPI/pokeapi
|
api
| 1,138
|
Ability changes not recorded
|
Certain abilities don't have their future-generation effects recorded, e.g.:
- Prankster: Dark types are now immune to Prankster speed-up moves.
- Scrappy: Is now immune to Intimidate.
|
open
|
2024-10-07T17:54:27Z
|
2024-10-08T02:51:03Z
|
https://github.com/PokeAPI/pokeapi/issues/1138
|
[] |
XeenProof
| 1
|
ets-labs/python-dependency-injector
|
flask
| 820
|
Cached Value
|
I want to have a Singleton which functions as a cache for a method call.
I want the field `file_content` in my container to be initialized once by calling a given method (`reader.read`). From then on, that result should always be returned instead of calling the method again.
I have added a working code example below.
Is there any better way?
Maybe it might be useful if it was rewritten as a new Provider?
```python
from pathlib import Path
from dependency_injector import containers, providers
class Reader:
def read(self, filepath: Path, *args, **kwargs) -> str:
print('read file')
print('args:', args)
print('kwargs:', kwargs)
print()
return filepath.read_text('utf-8')
SingletonAsCache = lambda bound_method, *args, **kwargs: bound_method(*args, **kwargs)
class MyContainer(containers.DeclarativeContainer):
reader = providers.Factory(Reader)
file_content = providers.Singleton(SingletonAsCache, reader.provided.read)
container = MyContainer()
def print_first_line():
c: str = container.file_content(Path(__file__), 'any arg', any_kwarg='any_kwarg_value')
print('first line:', c.splitlines()[0])
print_first_line()
print_first_line()
print_first_line()
```
Output:
```
read file
args: ('any arg',)
kwargs: {'any_kwarg': 'any_kwarg_value'}
first line: from pathlib import Path
first line: from pathlib import Path
first line: from pathlib import Path
```
|
open
|
2024-09-26T11:24:48Z
|
2024-11-13T18:28:16Z
|
https://github.com/ets-labs/python-dependency-injector/issues/820
|
[] |
str-it
| 1
|
pallets-eco/flask-sqlalchemy
|
flask
| 959
|
How do i define the model?
|
I use this URI to connect to Oracle:
`SQLALCHEMY_DATABASE_URI = 'oracle://username:password@ip:port/servername'`
How do I specify the schema when writing a Model?
```python
class MyselfModel(BaseModel):
    __tablename__ = 'user'
    username = db.Column(db.String(32))
```
How do I specify the schema that the `user` table belongs to?
I checked the documentation but did not find it.
Help me, thanks!
|
closed
|
2021-04-23T10:48:52Z
|
2021-05-08T00:03:42Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/959
|
[] |
importTthis
| 0
|
marcomusy/vedo
|
numpy
| 398
|
Find the inner and outer contour of a set of points
|
Hi again @marcomusy. I have a set of points and I want to find their inner and outer contour, i.e. the inner and outer contour of the red points below. The point order isn't organized. I want this so I can compute the deviation between the outer and inner contours and the blue lines. I tried reconstructing a surface from the points with `recoSurface(pts)` and finding the boundaries of the mesh, but I didn't succeed since the mesh was too coarse for my points. I also tried creating the surface with `delaunay2d`. I have tried to reproduce the issue with the code below. In advance, thank you!

```python
from vedo import *
cyl1 = Cylinder(pos=(0,0,0), r=2, height=4, axis=(1,0,0), alpha=.5, cap=0, res=100).triangulate()
cyl2 = Cylinder(pos=(0,0,2), r=1, height=3, axis=(0,0.3,1), alpha=.5, cap=0, res=100).triangulate()
cyl3 = Cylinder(pos=(0,0,2), r=1.1, height=3, axis=(0,0.3,1), alpha=.5, cap=0, res=100).triangulate()
intersect_1 = cyl1.intersectWith(cyl2).join(reset=True).c('b')
intersect_2 = cyl1.intersectWith(cyl3).join(reset=True).c('b')
#show(cyl1,cyl2,cyl3, intersect_1, intersect_2).close()
# Trying to cut out the surface between the two intersection lines
surf = cyl1.clone()
surf.cutWithMesh(cyl3, invert=True)  # These two lines don't work for me; how do I cut out the section between cyl2 and cyl3? Have I done it wrong?
surf.cutWithMesh(cyl2, invert=True)  # I also tried cutWithCylinder instead of cutWithMesh, but the cut ended up the same: an empty mesh.
show(surf, cyl2, cyl3).close()
# Once cutWithMesh works, extract the points of surf and find the contour. Maybe randomize the order of the points.
pts = Points(surf.points())
# find a way to get the inner and outer contour?
```

|
closed
|
2021-05-18T13:49:15Z
|
2021-05-24T07:58:16Z
|
https://github.com/marcomusy/vedo/issues/398
|
[] |
eivindtn
| 12
|
autogluon/autogluon
|
computer-vision
| 4,938
|
Survival Analysis?
|
Is it possible to use this library to train survival analysis models?
|
open
|
2025-02-26T00:11:58Z
|
2025-03-01T01:31:32Z
|
https://github.com/autogluon/autogluon/issues/4938
|
[
"enhancement"
] |
austinmw
| 1
|
serengil/deepface
|
machine-learning
| 720
|
deepface docker build issue
|
Hello,
I get the below error message when I try to build the deepface docker after cloning the repo:
=> ERROR [13/13] RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org -e . 2.2s
------
> [13/13] RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org -e .:
#18 1.697 Obtaining file:///app
#18 1.699 Preparing metadata (setup.py): started
#18 1.906 Preparing metadata (setup.py): finished with status 'error'
#18 1.912 error: subprocess-exited-with-error
#18 1.912
#18 1.912 × python setup.py egg_info did not run successfully.
#18 1.912 │ exit code: 1
#18 1.912 ╰─> [6 lines of output]
#18 1.912 Traceback (most recent call last):
#18 1.912 File "<string>", line 2, in <module>
#18 1.912 File "<pip-setuptools-caller>", line 34, in <module>
#18 1.912 File "/app/setup.py", line 6, in <module>
#18 1.912 with open("requirements.txt", "r", encoding="utf-8") as f:
#18 1.912 FileNotFoundError: [Errno 2] No such file or directory: 'requirements.txt'
#18 1.912 [end of output]
#18 1.912
#18 1.912 note: This error originates from a subprocess, and is likely not a problem with pip.
#18 1.914 error: metadata-generation-failed
#18 1.914
#18 1.914 × Encountered error while generating package metadata.
#18 1.914 ╰─> See above for output.
#18 1.914
#18 1.914 note: This is an issue with the package mentioned above, not pip.
#18 1.914 hint: See above for details.
#18 2.109 WARNING: You are using pip version 22.0.4; however, version 23.0.1 is available.
#18 2.109 You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
------
executor failed running [/bin/sh -c pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org -e .]: exit code: 1
|
closed
|
2023-04-13T00:53:46Z
|
2023-04-13T08:22:58Z
|
https://github.com/serengil/deepface/issues/720
|
[
"bug"
] |
WisamAbbasi
| 1
|
onnx/onnx
|
machine-learning
| 6,140
|
Error While Installing ONNX
|
# Bug Report
### Is the issue related to model conversion?
no
### System information
Ubuntu 14.04
onnx 1.9.0
python 2.7.6
protobuf 2.6.1
cmake 3.28.4
gcc 4.8.4
### Describe the bug
```
Building wheels for collected packages: onnx
Building wheel for onnx (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python /usr/local/lib/python2.7/dist-packages/pip-20.3.4-py2.7.egg/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmpfR3xK2
cwd: /tmp/pip-install-dMWHiR/onnx
Complete output (267 lines):
fatal: Not a git repository (or any of the parent directories): .git
running bdist_wheel
running build
running build_py
running create_version
running cmake_build
Using cmake args: [u'/usr/local/bin/cmake', u'-DPYTHON_INCLUDE_DIR=/usr/include/python2.7', u'-DPYTHON_EXECUTABLE=/usr/bin/python', u'-DBUILD_ONNX_PYTHON=ON', u'-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', u'-DONNX_NAMESPACE=onnx', u'-DPY_EXT_SUFFIX=', u'-DCMAKE_BUILD_TYPE=Release', u'-DONNX_ML=1', '/tmp/pip-install-dMWHiR/onnx']
CMake Deprecation Warning at CMakeLists.txt:2 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
-- The C compiler identification is GNU 4.8.4
-- The CXX compiler identification is GNU 4.8.4
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Warning (dev) at CMakeLists.txt:114 (find_package):
Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules
are removed. Run "cmake --help-policy CMP0148" for policy details. Use
the cmake_policy command to set the policy and suppress this warning.
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found PythonInterp: /usr/bin/python (found version "2.7.6")
CMake Warning (dev) at CMakeLists.txt:115 (find_package):
Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules
are removed. Run "cmake --help-policy CMP0148" for policy details. Use
the cmake_policy command to set the policy and suppress this warning.
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython2.7.so (found version "2.7.6")
-- Found Protobuf: /usr/local/lib/libprotobuf.so (found version "2.6.1")
Generated: /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx-ml.proto
Generated: /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx-operators-ml.proto
Generated: /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx-data.proto
CMake Warning (dev) at /usr/local/share/cmake/pybind11/FindPythonLibsNew.cmake:98 (find_package):
Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules
are removed. Run "cmake --help-policy CMP0148" for policy details. Use
the cmake_policy command to set the policy and suppress this warning.
Call Stack (most recent call first):
/usr/local/share/cmake/pybind11/pybind11Tools.cmake:50 (find_package)
/usr/local/share/cmake/pybind11/pybind11Common.cmake:206 (include)
/usr/local/share/cmake/pybind11/pybind11Config.cmake:250 (include)
CMakeLists.txt:447 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython2.7.so
-- Performing Test HAS_FLTO
-- Performing Test HAS_FLTO - Success
-- Found pybind11: /usr/local/include (found version "2.9.0")
--
-- ******** Summary ********
-- CMake version : 3.28.4
-- CMake command : /usr/local/lib/python2.7/dist-packages/cmake/data/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 4.8.4
-- CXX flags : -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions :
-- CMAKE_PREFIX_PATH :
-- CMAKE_INSTALL_PREFIX : /usr/local
-- CMAKE_MODULE_PATH :
--
-- ONNX version : 1.9.0
-- ONNX NAMESPACE : onnx
-- ONNX_USE_LITE_PROTO : OFF
-- USE_PROTOBUF_SHARED_LIBS : OFF
-- ONNX_DISABLE_EXCEPTIONS : OFF
-- ONNX_WERROR : OFF
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
-- ONNXIFI_ENABLE_EXT : OFF
--
-- Protobuf compiler : /usr/local/bin/protoc
-- Protobuf includes : /usr/local/include
-- Protobuf libraries : /usr/local/lib/libprotobuf.so
-- BUILD_ONNX_PYTHON : ON
-- Python version :
-- Python executable : /usr/bin/python
-- Python includes : /usr/include/python2.7
-- Configuring done (1.1s)
-- Generating done (0.0s)
-- Build files have been written to: /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build
[ 1%] Running gen_proto.py on onnx/onnx.in.proto
[ 3%] Building C object CMakeFiles/onnxifi_loader.dir/onnx/onnxifi_loader.c.o
[ 4%] Building C object CMakeFiles/onnxifi_dummy.dir/onnx/onnxifi_dummy.c.o
Processing /tmp/pip-install-dMWHiR/onnx/onnx/onnx.in.proto
Writing /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx-ml.proto
Writing /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx-ml.proto3
generating /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx_pb.py
[ 6%] Running C++ protocol buffer compiler on /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx-ml.proto
/tmp/pip-install-dMWHiR/onnx/onnx/onnxifi_dummy.c: In function ‘onnxGetExtensionFunctionAddress’:
/tmp/pip-install-dMWHiR/onnx/onnx/onnxifi_dummy.c:177:21: warning: assignment from incompatible pointer type [enabled by default]
*function = &onnxGetExtensionFunctionAddress;
^
/tmp/pip-install-dMWHiR/onnx/onnx/onnxifi_dummy.c:180:21: warning: assignment from incompatible pointer type [enabled by default]
*function = &onnxSetIOAndRunGraph;
^
[ 7%] Linking C static library libonnxifi_loader.a
Writing mypy to onnx/onnx_ml_pb2.pyi
[ 9%] Linking C shared library libonnxifi_dummy.so
[ 9%] Built target onnxifi_loader
[ 9%] Built target gen_onnx_proto
[ 10%] Running gen_proto.py on onnx/onnx-operators.in.proto
[ 12%] Building C object CMakeFiles/onnxifi_wrapper.dir/onnx/onnxifi_wrapper.c.o
[ 12%] Built target onnxifi_dummy
[ 13%] Running gen_proto.py on onnx/onnx-data.in.proto
Processing /tmp/pip-install-dMWHiR/onnx/onnx/onnx-operators.in.proto
Writing /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx-operators-ml.proto
Writing /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx-operators-ml.proto3
generating /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx_operators_pb.py
[ 15%] Running C++ protocol buffer compiler on /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx-operators-ml.proto
Processing /tmp/pip-install-dMWHiR/onnx/onnx/onnx-data.in.proto
Writing /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx-data.proto
Writing /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx-data.proto3
generating /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx_data_pb.py
[ 16%] Running C++ protocol buffer compiler on /tmp/pip-install-dMWHiR/onnx/.setuptools-cmake-build/onnx/onnx-data.proto
Writing mypy to onnx/onnx_operators_ml_pb2.pyi
Writing mypy to onnx/onnx_data_pb2.pyi
[ 18%] Linking C shared module libonnxifi.so
[ 20%] Building CXX object CMakeFiles/onnx_proto.dir/onnx/onnx-ml.pb.cc.o
[ 21%] Building CXX object CMakeFiles/onnx_proto.dir/onnx/onnx-operators-ml.pb.cc.o
[ 23%] Building CXX object CMakeFiles/onnx_proto.dir/onnx/onnx-data.pb.cc.o
[ 23%] Built target onnxifi_wrapper
[ 24%] Linking CXX static library libonnx_proto.a
[ 27%] Built target onnx_proto
[ 30%] Building CXX object CMakeFiles/onnx.dir/onnx/checker.cc.o
[ 30%] Building CXX object CMakeFiles/onnx.dir/onnx/common/assertions.cc.o
[ 32%] Building CXX object CMakeFiles/onnx.dir/onnx/common/interned_strings.cc.o
[ 33%] Building CXX object CMakeFiles/onnx.dir/onnx/common/ir_pb_converter.cc.o
[ 35%] Building CXX object CMakeFiles/onnx.dir/onnx/common/model_helpers.cc.o
[ 36%] Building CXX object CMakeFiles/onnx.dir/onnx/common/path.cc.o
[ 38%] Building CXX object CMakeFiles/onnx.dir/onnx/common/status.cc.o
[ 40%] Building CXX object CMakeFiles/onnx.dir/onnx/defs/attr_proto_util.cc.o
[ 41%] Building CXX object CMakeFiles/onnx.dir/onnx/defs/controlflow/defs.cc.o
[ 43%] Building CXX object CMakeFiles/onnx.dir/onnx/defs/controlflow/old.cc.o
[ 44%] Building CXX object CMakeFiles/onnx.dir/onnx/defs/data_type_utils.cc.o
[ 46%] Building CXX object CMakeFiles/onnx.dir/onnx/defs/function.cc.o
[ 47%] Building CXX object CMakeFiles/onnx.dir/onnx/defs/generator/defs.cc.o
[ 49%] Building CXX object CMakeFiles/onnx.dir/onnx/defs/generator/old.cc.o
[ 50%] Building CXX object CMakeFiles/onnx.dir/onnx/defs/logical/defs.cc.o
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:24:15: error: unterminated raw string
doc = R"DOC(
^
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:23:5: error: stray ‘R’ in program
POPULATE_OP_DOC_STR(
^
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:23:5: error: stray ‘`’ in program
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:23:5: error: stray ‘`’ in program
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:23:5: error: stray ‘`’ in program
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:23:5: error: stray ‘`’ in program
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:23:5: error: stray ‘`’ in program
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:23:5: error: stray ‘`’ in program
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:29:5: warning: missing terminating " character [enabled by default]
)DOC";
^
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:23:5: error: missing terminating " character
POPULATE_OP_DOC_STR(
^
In file included from /tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:7:0:
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc: In lambda function:
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:25:1: error: ‘Returns’ was not declared in this scope
Returns the tensor resulted from performing the `{name}` logical operation
^
/tmp/pip-install-dMWHiR/onnx/onnx/defs/schema.h:1290:5: note: in definition of macro ‘POPULATE_OP_DOC_STR’
DocPopulatorCode \
^
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:25:9: error: expected ‘;’ before ‘the’
Returns the tensor resulted from performing the `{name}` logical operation
^
/tmp/pip-install-dMWHiR/onnx/onnx/defs/schema.h:1290:5: note: in definition of macro ‘POPULATE_OP_DOC_STR’
DocPopulatorCode \
^
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:25:58: error: ‘logical’ was not declared in this scope
Returns the tensor resulted from performing the `{name}` logical operation
^
/tmp/pip-install-dMWHiR/onnx/onnx/defs/schema.h:1290:5: note: in definition of macro ‘POPULATE_OP_DOC_STR’
DocPopulatorCode \
^
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:25:66: error: expected ‘;’ before ‘operation’
Returns the tensor resulted from performing the `{name}` logical operation
^
/tmp/pip-install-dMWHiR/onnx/onnx/defs/schema.h:1290:5: note: in definition of macro ‘POPULATE_OP_DOC_STR’
DocPopulatorCode \
^
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:29:2: error: expected ‘;’ before ‘DOC’
)DOC";
^
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:29:2: error: ‘DOC’ was not declared in this scope
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:30:9: error: expected ‘;’ before ‘ReplaceAll’
ReplaceAll(doc, "{name}", name);
^
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:32:75: error: expected primary-expression before ‘)’ token
doc, "{broadcast_doc}", GenerateBroadcastingDocMul().c_str()););
^
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:32:75: error: expected ‘;’ before ‘)’ token
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc: At global scope:
/tmp/pip-install-dMWHiR/onnx/onnx/defs/logical/defs.cc:20:32: warning: unused parameter ‘name’ [-Wunused-parameter]
std::function<void(OpSchema&)> BinaryLogicDocGenerator(const char* name) {
^
make[2]: *** [CMakeFiles/onnx.dir/onnx/defs/logical/defs.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/onnx.dir/all] Error 2
make: *** [all] Error 2
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/pip-20.3.4-py2.7.egg/pip/_vendor/pep517/_in_process.py", line 280, in <module>
main()
File "/usr/local/lib/python2.7/dist-packages/pip-20.3.4-py2.7.egg/pip/_vendor/pep517/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/usr/local/lib/python2.7/dist-packages/pip-20.3.4-py2.7.egg/pip/_vendor/pep517/_in_process.py", line 205, in build_wheel
metadata_directory)
File "/usr/local/lib/python2.7/dist-packages/setuptools/build_meta.py", line 209, in build_wheel
wheel_directory, config_settings)
File "/usr/local/lib/python2.7/dist-packages/setuptools/build_meta.py", line 194, in _build_with_temp_dir
self.run_setup()
File "/usr/local/lib/python2.7/dist-packages/setuptools/build_meta.py", line 243, in run_setup
self).run_setup(setup_script=setup_script)
File "/usr/local/lib/python2.7/dist-packages/setuptools/build_meta.py", line 142, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 359, in <module>
'backend-test-tools = onnx.backend.test.cmd_tools:main',
File "/usr/local/lib/python2.7/dist-packages/setuptools/__init__.py", line 162, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-kdmVEW/overlay/lib/python2.7/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib/python2.7/distutils/command/build.py", line 128, in run
self.run_command(cmd_name)
File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "setup.py", line 233, in run
self.run_command('cmake_build')
File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "setup.py", line 227, in run
subprocess.check_call(build_args)
File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '[u'/usr/local/bin/cmake', u'--build', '.', u'--', u'-j', '4']' returned non-zero exit status 2
----------------------------------------
ERROR: Failed building wheel for onnx
Failed to build onnx
ERROR: Could not build wheels for onnx which use PEP 517 and cannot be installed directly
```
|
open
|
2024-05-20T06:13:51Z
|
2024-05-20T07:23:14Z
|
https://github.com/onnx/onnx/issues/6140
|
[
"question"
] |
jh97321
| 3
|
yihong0618/running_page
|
data-visualization
| 63
|
[TODO] add type to db
|
Run, bike, walk, ...
The user should be able to select only runs.
|
closed
|
2020-12-16T00:55:17Z
|
2022-01-07T05:24:06Z
|
https://github.com/yihong0618/running_page/issues/63
|
[
"enhancement"
] |
yihong0618
| 0
|
mwaskom/seaborn
|
matplotlib
| 3,330
|
Wrong handles in legend with boxplot
|
When trying to change the labels of the legend on a boxplot, the handles shown in the legend come out wrong.
Here is my minimal code:
```python
import seaborn as sns
import matplotlib.pyplot as plt
import pingouin as pg
data = pg.read_dataset('penguins')
fig, ax = plt.subplots(layout='tight')
fig.set_figwidth(8)
fig.set_figheight(8)
sns.boxplot(data=data, x='island', y='body_mass_g', hue='sex', ax=ax, palette='husl')
ax.set_ylabel('Body mass (g)')
ax.set_xlabel("Islands")
ax.legend(title='Penguins', labels=['toto', 'tata'])
plt.show()
```
I get that figure :

while I was expecting that :

It seems that the legend associates the second label with the matplotlib PathPatch object which defines the first box. This may come from the order in which the artist objects for the boxplots are created.
One workaround is to replace the line ax.legend with :
`plt.legend(handles=ax.get_legend_handles_labels()[0], title='Penguins', labels=['toto', 'tata'])`
|
closed
|
2023-04-19T08:59:19Z
|
2023-04-25T21:41:52Z
|
https://github.com/mwaskom/seaborn/issues/3330
|
[] |
Djost43
| 1
|
AutoGPTQ/AutoGPTQ
|
nlp
| 575
|
[FEATURE] Add support for Phi models
|
Currently "phi" models don't seem to be supported
```
Traceback (most recent call last):
File "/home/mgoin/marlin-example/apply_gptq_save_marlin.py", line 44, in <module>
model = AutoGPTQForCausalLM.from_pretrained(
File "/home/mgoin/venvs/test/lib/python3.10/site-packages/auto_gptq/modeling/auto.py", line 75, in from_pretrained
model_type = check_and_get_model_type(pretrained_model_name_or_path, trust_remote_code)
File "/home/mgoin/venvs/test/lib/python3.10/site-packages/auto_gptq/modeling/_utils.py", line 305, in check_and_get_model_type
raise TypeError(f"{config.model_type} isn't supported yet.")
TypeError: phi isn't supported yet.
```
|
closed
|
2024-03-02T01:09:08Z
|
2024-03-19T06:41:24Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/575
|
[
"enhancement"
] |
mgoin
| 0
|
tfranzel/drf-spectacular
|
rest-api
| 768
|
Spectacular ignores settings
|
**Describe the bug**
I set some parameters in `SPECTACULAR_SETTINGS` in my settings.py, but they won't get picked up.
**To Reproduce**
I placed the example from the docs in my settings.py:
```python
SPECTACULAR_SETTINGS = {
'TITLE': 'Your Project API',
'DESCRIPTION': 'Your project description',
'VERSION': '1.0.0',
'SERVE_INCLUDE_SCHEMA': False,
# OTHER SETTINGS
}
```
I confirmed it's part of the settings using `python manage.py shell` and printing `settings.SPECTACULAR_SETTINGS`, and the settings are there. But when running `python manage.py spectacular` I get an OpenAPI definition without title, description, or version.
**Expected behavior**
The generated specification contains all above specified parameters.
|
closed
|
2022-07-13T13:13:39Z
|
2022-07-15T18:12:19Z
|
https://github.com/tfranzel/drf-spectacular/issues/768
|
[] |
georgkrause
| 4
|
aimhubio/aim
|
data-visualization
| 3,251
|
Failed to initialize Aim DB, Can't locate revision.
|
## 🐛 Bug
I am getting the following error when using `aim up`:
```
ERROR [alembic.util.messaging] Can't locate revision identified by '3d5fd76e8485'
FAILED: Can't locate revision identified by '3d5fd76e8485'
Failed to initialize Aim DB. Please see the logs above for details.
```
### Environment
- Aim 3.25.1
- Python 3.11.9
- Ubuntu 22.04.4 LTS
|
closed
|
2024-11-19T16:52:58Z
|
2025-01-07T12:05:16Z
|
https://github.com/aimhubio/aim/issues/3251
|
[
"type / bug",
"help wanted"
] |
maxbarton15
| 2
|
wkentaro/labelme
|
deep-learning
| 960
|
[BUG]
|
**Describe the bug**
When the Create Rectangle tool is selected and I click on an object, then switch to Create Polygons and make a second click, the app crashes.
Video:
https://youtu.be/kD-cFZ2YO0Y
|
closed
|
2021-11-27T14:18:17Z
|
2022-10-23T12:10:29Z
|
https://github.com/wkentaro/labelme/issues/960
|
[] |
doitauto
| 1
|
yihong0618/running_page
|
data-visualization
| 443
|
python3 scripts/strava_sync.py does not sync Strava data
|
The log looks like this:
Access ok
Start syncing
|
closed
|
2023-07-05T03:48:31Z
|
2023-07-06T07:15:26Z
|
https://github.com/yihong0618/running_page/issues/443
|
[] |
leosj
| 15
|
vastsa/FileCodeBox
|
fastapi
| 127
|
Bug on the admin login page: "Unauthorized or authorization check failed"
|
Reproduction page: https://share.lanol.cn/#/admin
Steps to reproduce: the first login succeeds; after logging out, logging in again reports "Unauthorized or authorization check failed".
The browser console shows the following:

Looking closely at the failing path, the login request URL is missing the /#/ segment.
Original login endpoint: https://share.lanol.cn/#/admin/login
Failing login endpoint: https://share.lanol.cn/admin/login
The same bug is present on your demo site.
Tested browser: Edge, version 120.0.2210.144 (official build) (64-bit)
OS: Windows 11
Edit:
The login page pre-fills a default password, so logging in immediately reports unauthorized. I suggest removing the default password and leaving the field empty.
|
closed
|
2024-01-23T06:52:36Z
|
2024-07-12T05:19:07Z
|
https://github.com/vastsa/FileCodeBox/issues/127
|
[] |
OuOumm
| 4
|
Morizeyao/GPT2-Chinese
|
nlp
| 2
|
generate.py error
|
Line 80 of generate.py should be placed before line 79.
|
closed
|
2019-07-25T16:58:36Z
|
2019-08-06T13:36:54Z
|
https://github.com/Morizeyao/GPT2-Chinese/issues/2
|
[] |
hackerxiaobai
| 1
|
jofpin/trape
|
flask
| 231
|
TRACEBACK error while requirements are installed.
|

|
open
|
2020-04-28T21:40:21Z
|
2020-04-28T21:40:21Z
|
https://github.com/jofpin/trape/issues/231
|
[] |
demaico
| 0
|
mljar/mljar-supervised
|
scikit-learn
| 329
|
Add support for currencies features
|
If a column contains a currency symbol, it should be automatically detected and the symbol removed.
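A minimal pandas sketch of what such preprocessing could look like — this is an illustration, not mljar-supervised's actual implementation, and the symbol set is an assumption:

```python
import pandas as pd

def strip_currency(col: pd.Series) -> pd.Series:
    """Remove common currency symbols and thousands separators,
    then cast to float. A rough sketch, not mljar's implementation."""
    return (
        col.astype(str)
        .str.replace(r"[$€£¥,\s]", "", regex=True)  # drop symbols/separators
        .astype(float)
    )

prices = pd.Series(["$1,200.50", "€99.99", "£3.00"])
print(strip_currency(prices).tolist())  # [1200.5, 99.99, 3.0]
```

Automatic detection could be as simple as checking whether a high fraction of a text column's values match a currency pattern before applying the stripping step.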
|
closed
|
2021-03-03T07:36:58Z
|
2024-09-30T11:34:58Z
|
https://github.com/mljar/mljar-supervised/issues/329
|
[] |
pplonski
| 0
|
graphql-python/graphene-django
|
django
| 710
|
Start warning if `fields` or `exclude` are not defined on `DjangoObjectType`
|
So that model fields aren't accidentally exposed through DjangoObjectType I propose that we start warning if either `fields` or `exclude` aren't defined with the intention to error completely in the future. This would also align the API more with Django Rest Framework which hopefully makes it more familiar to most developers.
|
closed
|
2019-07-12T16:50:31Z
|
2020-07-01T12:07:09Z
|
https://github.com/graphql-python/graphene-django/issues/710
|
[
"✨enhancement",
"v3"
] |
jkimbo
| 6
|
SYSTRAN/faster-whisper
|
deep-learning
| 196
|
Where to put model.bin and related files if I don't want them on the C: drive?
|
https://huggingface.co/guillaumekln/faster-whisper-large-v2/tree/main
How should I set up these files if I don't want to put them on the C: drive? Does anyone know?
faster-whisper-large-v2


|
closed
|
2023-04-29T01:53:47Z
|
2023-05-03T20:11:25Z
|
https://github.com/SYSTRAN/faster-whisper/issues/196
|
[] |
pendave
| 1
|
kizniche/Mycodo
|
automation
| 734
|
Daemon log doesn't display in GUI when logrotate splits it
|
Develop a more reliable method for serving the latest lines from the daemon log.
|
closed
|
2020-01-16T03:58:11Z
|
2020-01-29T20:30:13Z
|
https://github.com/kizniche/Mycodo/issues/734
|
[] |
kizniche
| 0
|
lux-org/lux
|
jupyter
| 412
|
Converting Timestamp: Error
|
Hi,
I am reading a csv into my notebook, calling it df_plot. When I do df_plot.head(), it comes back saying that Timestamp may be temporal. So I followed the suggested template and also tried a suggestion on the lux website. Neither works for me.
See the attached image of the timestamp from my csv file:

So I tried this, as I believed that my timestamp was in the format dd-mm-yyyy hh:mm:ss
```
df_plot['Timestamp'] = pd.to_datetime(df_plot['Timestamp'], format="%d-%m-%y%h:%m:%s")
##df['date'] = pd.to_datetime(df['date'], format="%Y-%m-%d")
```
And get this error
```
ValueError: 'h' is a bad directive in format '%d-%m-%y%h:%m:%s'
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
c:\xxx\capstone_python\venv\lib\site-packages\pandas\core\tools\datetimes.py in _convert_listlike_datetimes(arg, format, name, tz, unit, errors, infer_datetime_format, dayfirst, yearfirst, exact)
455 try:
--> 456 values, tz = conversion.datetime_to_datetime64(arg)
457 dta = DatetimeArray(values, dtype=tz_to_dtype(tz))
pandas\_libs\tslibs\conversion.pyx in pandas._libs.tslibs.conversion.datetime_to_datetime64()
TypeError: Unrecognized value type: <class 'str'>
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_35372/1760194970.py in <module>
----> 1 df_plot['Timestamp'] = pd.to_datetime(df_plot['Timestamp'], format="%d-%m-%y%h:%m:%s")
2 ##df['date'] = pd.to_datetime(df['date'], format="%Y-%m-%d")
c:\xxx\Final_Project\capstone_python\venv\lib\site-packages\pandas\core\tools\datetimes.py in to_datetime(arg, errors, dayfirst, yearfirst, utc, format, exact, unit, infer_datetime_format, origin, cache)
799 result = result.tz_localize(tz)
800 elif isinstance(arg, ABCSeries):
--> 801 cache_array = _maybe_cache(arg, format, cache, convert_listlike)
802 if not cache_array.empty:
803 result = arg.map(cache_array)
c:\xxx\capstone_python\venv\lib\site-packages\pandas\core\tools\datetimes.py in _maybe_cache(arg, format, cache, convert_listlike)
176 unique_dates = unique(arg)
177 if len(unique_dates) < len(arg):
--> 178 cache_dates = convert_listlike(unique_dates, format)
179 cache_array = Series(cache_dates, index=unique_dates)
180 return cache_array
c:\xxx\capstone_python\venv\lib\site-packages\pandas\core\tools\datetimes.py in _convert_listlike_datetimes(arg, format, name, tz, unit, errors, infer_datetime_format, dayfirst, yearfirst, exact)
458 return DatetimeIndex._simple_new(dta, name=name)
459 except (ValueError, TypeError):
--> 460 raise e
461
462 if result is None:
c:\xxx\capstone_python\venv\lib\site-packages\pandas\core\tools\datetimes.py in _convert_listlike_datetimes(arg, format, name, tz, unit, errors, infer_datetime_format, dayfirst, yearfirst, exact)
421 if result is None:
422 try:
--> 423 result, timezones = array_strptime(
424 arg, format, exact=exact, errors=errors
425 )
pandas\_libs\tslibs\strptime.pyx in pandas._libs.tslibs.strptime.array_strptime()
pandas\_libs\tslibs\strptime.pyx in pandas._libs.tslibs.strptime.array_strptime()
ValueError: 'h' is a bad directive in format '%d-%m-%y%h:%m:%s'
```
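The error comes from the format string, not the data: strptime directives are case-sensitive, so `%H`/`%M`/`%S` mean hour/minute/second, `%m` means month, and there is no `%h` directive (hence "'h' is a bad directive"). A corrected call for a `dd-mm-yyyy hh:mm:ss` column might look like this (the sample values below are illustrative; the exact format string depends on the actual values in the screenshot):

```python
import pandas as pd

# %d-%m-%Y %H:%M:%S = day-month-4digit year, 24h time, with a space between.
ts = pd.Series(["19-08-2021 13:31:21", "20-08-2021 09:05:00"])  # sample data
parsed = pd.to_datetime(ts, format="%d-%m-%Y %H:%M:%S")
print(parsed.dt.year.tolist())  # [2021, 2021]
```

If the year really is two digits, use `%y` instead of `%Y`.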
|
closed
|
2021-08-19T13:31:21Z
|
2021-09-07T00:12:27Z
|
https://github.com/lux-org/lux/issues/412
|
[] |
conorwa
| 1
|
tensorlayer/TensorLayer
|
tensorflow
| 539
|
Failed: TensorLayer (7f692946)
|
*Sent by Read the Docs (readthedocs@readthedocs.org). Created by [fire](https://fire.fundersclub.com/).*
---
| TensorLayer build #7116848
---
| 
---
| Build Failed for TensorLayer (latest)
---
You can find out more about this failure here:
[TensorLayer build #7116848](https://readthedocs.org/projects/tensorlayer/builds/7116848/) \- failed
If you have questions, a good place to start is the FAQ:
<https://docs.readthedocs.io/en/latest/faq.html>
You can unsubscribe from these emails in your [Notification Settings](https://readthedocs.org/dashboard/tensorlayer/notifications/)
Keep documenting,
Read the Docs
| Read the Docs
<https://readthedocs.org>
---

|
closed
|
2018-04-30T04:32:46Z
|
2018-05-15T08:59:04Z
|
https://github.com/tensorlayer/TensorLayer/issues/539
|
[] |
fire-bot
| 0
|
proplot-dev/proplot
|
matplotlib
| 383
|
The attribute fontsize in legend can not execute.
|
### Description
When fontsize=300 is set in legend, the fontsize attribute has no effect; the legend fontsize is unchanged.
```python
import numpy as np
import proplot as pplt
labels = ['a', 'bb', 'ccc', 'ddddd', 'eeeee']
fig, axs = pplt.subplots(ncols=2, share=False, axwidth=3)
hs1, hs2 = [], []
state = np.random.RandomState(51423)
for i, label in enumerate(labels):
    data = (state.rand(20) - 0.45).cumsum(axis=0)
    h1 = axs[0].plot(data, lw=4, label=label, legend='ul',
                     legend_kw={'order': 'F', 'title': 'column major'})
    hs1.extend(h1)
    h2 = axs[1].plot(data, lw=4, label=label, legend='r', cycle='Set3',
                     legend_kw={'ncols': 1, 'order': 'F', 'frame': False, 'title': 'No Frame', 'fontsize': 40})
    hs2.extend(h2)
# Outer legends
axs[0].legend(loc='b', ncols=3, facecolor='red', fontsize=300)
```

|
closed
|
2022-08-15T14:25:14Z
|
2023-03-28T23:57:18Z
|
https://github.com/proplot-dev/proplot/issues/383
|
[
"already fixed"
] |
NWPC-Whisperer
| 2
|
gunthercox/ChatterBot
|
machine-learning
| 2,248
|
Parts of speech classification problem.
|
I'm just playing with chatterbot. I trained a model with chatterbot list trainer using the data of a conversation with a real person.
I was discovering how it works by inspecting the contents of the sqlite database (which I used as the storage adapter). When I ran `SELECT * FROM statement` in the sqlite shell, I saw that it classifies the strings as NOUN, PRONOUN, VERB, etc., but most of the classifications were wrong. The input data was in the Bengali language.
|
closed
|
2022-05-12T13:20:06Z
|
2024-02-23T16:22:20Z
|
https://github.com/gunthercox/ChatterBot/issues/2248
|
[] |
SunPodder
| 0
|