| repo_name (string) | topic (string) | issue_number (int64) | title (string) | body (string) | state (string) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|
donnemartin/system-design-primer
|
python
| 734
|
Sysdes
|
open
|
2023-01-23T05:51:21Z
|
2023-03-16T09:55:57Z
|
https://github.com/donnemartin/system-design-primer/issues/734
|
[
"needs-review"
] |
ireensagh
| 1
|
|
ExpDev07/coronavirus-tracker-api
|
rest-api
| 233
|
Retrieving timelines by U.S. county
|
Is it possible to retrieve data by U.S. county by date? Based on a quick exploration it looks like the only date field is `last_updated`. Was thinking this query might allow me to pull data by date and by county: https://coronavirus-tracker-api.herokuapp.com/v2/locations?timelines=1&country_code=US&source=csbs
|
closed
|
2020-03-29T17:32:54Z
|
2020-03-30T20:16:24Z
|
https://github.com/ExpDev07/coronavirus-tracker-api/issues/233
|
[] |
al-codaio
| 2
|
indico/indico
|
flask
| 6,085
|
Fail with a nicer error when uploading non-UTF8 CSV files
|
We have various places (e.g. when inviting people to register) where users can upload CSV files with bulk data. If a file contains non-ASCII data that's not UTF-8 (apparently Excel still thinks it's acceptable to default to something other than UTF-8), users get this amazing error message:
> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe1 in position 86: invalid continuation byte
Non-technical users will have no idea what this error means. Let's catch it and convert it to a custom one like "You need to save your CSV file using the UTF-8 encoding."
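A minimal sketch of the suggested handling (the exception name `InvalidCSVEncodingError` is hypothetical; indico would use its own user-facing error type):

```python
class InvalidCSVEncodingError(ValueError):
    """User-facing upload error (hypothetical name for this sketch)."""

def decode_csv_upload(raw_bytes):
    # Catch the cryptic UnicodeDecodeError and re-raise it as the friendly
    # message suggested in the issue.
    try:
        return raw_bytes.decode('utf-8')
    except UnicodeDecodeError:
        raise InvalidCSVEncodingError(
            'You need to save your CSV file using the UTF-8 encoding.')
```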
|
closed
|
2023-12-07T23:36:46Z
|
2024-03-28T16:01:11Z
|
https://github.com/indico/indico/issues/6085
|
[
"trivial"
] |
ThiefMaster
| 0
|
hbldh/bleak
|
asyncio
| 802
|
Software caused connection to abort error
|
* bleak version: 0.14.2
* Python version: 3.7.5
* Operating System: Linux Ubuntu 18.04
* BlueZ version (`bluetoothctl -v`) in case of Linux: 5.48
### Description
I am running the BLE client inside a Python thread that runs a behavior tree for a robotics application. What's interesting is that it works about 10% of the time (if I keep retrying), but most of the time it gives me `bleak.exc.BleakDBusError: [org.bluez.Error.Failed] Software caused connection abort`. Can someone explain how to address this issue, and why it resolves itself after simply trying again?
P.S. It seems that getting rid of the thread is not a viable solution as already shown from: https://github.com/hbldh/bleak/issues/666.
```
Exception in thread Thread-37:
Traceback (most recent call last):
File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/usr/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "bluetooth_door_tool.py", line 402, in open_bluetooth_door
run = asyncio.run(self.check_door_status())
File "/usr/lib/python3.7/asyncio/runners.py", line 43, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
return future.result()
File "bluetooth_door_tool.py", line 362, in check_door_status
await self.client.connect()
File "/home/BOSDYN/dlee/venv/ble_project/lib/python3.7/site-packages/bleak/backends/bluezdbus/client.py", line 278, in connect
assert_reply(reply)
File "/home/BOSDYN/dlee/venv/ble_project/lib/python3.7/site-packages/bleak/backends/bluezdbus/utils.py", line 23, in assert_reply
raise BleakDBusError(reply.error_name, reply.body)
bleak.exc.BleakDBusError: [org.bluez.Error.Failed] Software caused connection abort
```
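Since the report says retrying eventually succeeds, a generic retry wrapper is one way to paper over the sporadic BlueZ abort (a sketch, not a bleak API; with bleak you would pass something like `client.connect` as the `connect` argument):

```python
import asyncio

async def connect_with_retry(connect, attempts=5, delay=0.1):
    # BlueZ sporadically aborts connections; retrying a few times is a
    # common workaround until the adapter settles.
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            return await connect()
        except Exception as exc:  # BleakDBusError on the BlueZ backend
            last_exc = exc
            await asyncio.sleep(delay)
    raise last_exc
```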
|
closed
|
2022-04-08T17:22:27Z
|
2023-07-19T02:49:20Z
|
https://github.com/hbldh/bleak/issues/802
|
[
"Backend: BlueZ"
] |
dlee640
| 5
|
zihangdai/xlnet
|
nlp
| 260
|
ValueError when running ./gpu_squad_base.sh
|
Hello, I am somewhat new to text-embedding models. I successfully ran `sudo ./prepro_squad.sh` after some troubleshooting and have moved on to the next step in the SQuAD 2.0 training/testing. Unfortunately, I keep receiving the error below and am having a hard time figuring out whether or not it has to do with a `.records` file or something else I am not familiar with yet.
And if it is a `.records` file, how should I go about setting that up properly?
Here is what the output is:
**ValueError: Tensor conversion requested dtype string for Tensor with dtype float32: 'Tensor("arg0:0", shape=(), dtype=float32, device=/device:CPU:0)'**
If someone knows how to troubleshoot this in some way, please let me know. Thank you!!
|
open
|
2020-04-06T15:44:10Z
|
2022-09-03T03:47:36Z
|
https://github.com/zihangdai/xlnet/issues/260
|
[] |
Omnis23
| 3
|
wkentaro/labelme
|
deep-learning
| 455
|
Please add reorder action
|
Reordering, especially z-index reordering, is essential for segmentation labeling with occlusion; please add this feature so we no longer need to delete and re-add labels.
|
closed
|
2019-07-25T00:34:40Z
|
2019-07-31T04:20:20Z
|
https://github.com/wkentaro/labelme/issues/455
|
[] |
MidoriYakumo
| 2
|
piskvorky/gensim
|
machine-learning
| 2,961
|
Documentation of strip_punctuation vs strip_punctuation2 in gensim.parsing.preprocessing
|
Thanks for all the hard work on this fantastic library. I found a small quirk today, not really a bug, just a bit of a rough edge:
In `gensim.parsing` [preprocessing.py ](https://github.com/RaRe-Technologies/gensim/blob/e210f73c42c5df5a511ca27166cbc7d10970eab2/gensim/parsing/preprocessing.py#L121) `strip_punctuation2` is defined: `strip_punctuation2 = strip_punctuation`.
In the [documentation](https://radimrehurek.com/gensim/parsing/preprocessing.html) the description of [`strip_punctuation2`](https://radimrehurek.com/gensim/parsing/preprocessing.html#gensim.parsing.preprocessing.strip_punctuation2) is a duplication of [`strip_punctuation`](https://radimrehurek.com/gensim/parsing/preprocessing.html#gensim.parsing.preprocessing.strip_punctuation) rather than a statement of equality.
I noticed this while reading the documentation and, assuming I was missing an obvious distinction, attempted to hand-diff the docs for the two functions. When I gave up and flipped to the source, it became obvious how the two functions are related.
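The quirk can be reproduced without gensim. Below is a minimal re-implementation sketch of the helper (not the library code itself): punctuation runs are replaced with a single space, and the "2" name is a plain alias rather than a second implementation, which is why the generated docs are duplicated:

```python
import re
import string

RE_PUNCT = re.compile(r'([%s])+' % re.escape(string.punctuation), re.UNICODE)

def strip_punctuation(s):
    # Replace each run of punctuation characters with a single space.
    return RE_PUNCT.sub(' ', s)

# The reported quirk: an alias, not a distinct function.
strip_punctuation2 = strip_punctuation
```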
|
closed
|
2020-09-28T12:54:03Z
|
2021-06-29T01:44:31Z
|
https://github.com/piskvorky/gensim/issues/2961
|
[
"documentation"
] |
sciatro
| 4
|
iperov/DeepFaceLab
|
machine-learning
| 969
|
Hello guys, where can I download older builds?
|
Hello guys, where can I download older builds?
They work better for me. Can somebody help me find where to download them?
Thank you.
|
open
|
2020-12-12T00:08:02Z
|
2023-06-08T21:38:34Z
|
https://github.com/iperov/DeepFaceLab/issues/969
|
[] |
tembel123456
| 1
|
marimo-team/marimo
|
data-science
| 3,288
|
Integration with mito (dataframe-backed spreadsheets)
|
### Description
Integrate with https://github.com/mito-ds/mito, a tool that provides a "spreadsheet look and feel" (with formulas!) in Jupyter and Streamlit and translates it into dataframe Python code.
Why: bridge the gap for the many people who already have pieces of analytical logic developed in Excel and would like a migration path over to Marimo.
Related:
- https://github.com/marimo-team/marimo/discussions/456, cc @sswatson
- https://github.com/marimo-team/marimo/issues/2560
- https://github.com/marimo-team/marimo/discussions/1663
- Narwhals-based data editor has been added https://github.com/marimo-team/marimo/pull/2567
I don't know whether narwhals and mito really integrate with or re-implement parts of each other; they have never been mentioned in each other's repos. However, there are already some projects that [depend on both narwhals and mito](https://github.com/search?q=narwhals+mitosheet&type=code), but I don't understand what they are doing.
### Suggested solution
TBD
### Alternative
_No response_
### Additional context
_No response_
|
open
|
2024-12-24T11:34:26Z
|
2025-01-18T04:23:07Z
|
https://github.com/marimo-team/marimo/issues/3288
|
[
"enhancement",
"upstream"
] |
leventov
| 0
|
plotly/dash
|
plotly
| 2,475
|
Allow modification of position/direction and style of dash_table tooltips
|
**Context**
- The tooltip is always positioned under its corresponding cell, except in the last rows where it's positioned on top. This automatic behaviour cannot be modified.
- Right now it's only possible to modify the _general_ style of _all_ of the tooltips with the `css` argument of `dash_table.DataTable`
**Describe the solution you'd like**
Add an argument similar to `tooltip_position` and a `style` key in `tooltip_conditional`, which could be used like:
```
dash_table.DataTable(
    ...,
    tooltip_position='top',
    tooltip_conditional={
        'if': {
            'filter_query': '{Region} contains "New"'
        },
        'type': 'markdown',
        'value': 'This row is significant.',
        'style': {'background-color': 'red', 'max-width': '100px'}
    }
)
```
**Describe alternatives you've considered**
The tooltip is not a container per cell, but a general container that covers the whole table, and I guess somehow it gets the mouse position and calculates the appropriate position for the visible hover div (the position is specified in css with position: absolute and then top: XXXpx left: XXXpx)
I have explored solutions with the different tooltip properties of dash_table (https://dash.plotly.com/datatable/tooltips#conditional-tooltips) but there are no keys in the tooltip dict to specify the position/direction.
I've explored a workaround by including the table as the children of a [dmc.Tooltip](https://www.dash-mantine-components.com/components/tooltip) and modifying its position based on the hover info of the table, but it didn't work. I will open a feature request so that the Product Team takes this into account for future developments.
|
open
|
2023-03-22T12:02:07Z
|
2024-08-13T19:29:34Z
|
https://github.com/plotly/dash/issues/2475
|
[
"feature",
"dash-data-table",
"P3"
] |
celia-lm
| 0
|
SciTools/cartopy
|
matplotlib
| 2,099
|
`from_cf` constructor inherited from `pyproj` is broken.
|
### Description
For 2 years now, `pyproj` has been able to parse CF "grid mapping" attributes (http://cfconventions.org/Data/cf-conventions/cf-conventions-1.10/cf-conventions.html#coordinate-system) (pyproj4/pyproj#660).
This is done through the `pyproj.CRS.from_cf` method. Sadly, it is a `staticmethod`, not a `classmethod`. Thus, the result is a `pyproj.CRS` object even when we call the method on `cartopy.crs.Projection`, and that does not work as expected with matplotlib.
The code seems to be done this way because even though the `from_cf` method lives in `CRS`, it actually returns children of `CRS`, and the exact type depends on the inputs. Would that mean it is not compatible with cartopy?
It would be interesting if the function could be overridden to actually return a cartopy object. Also, this direct use of `cartopy.crs.Projection` doesn't seem to be well documented. Is it uncommon? In that case, would a `cartopy.crs.from_cf` function be interesting?
I may have time to help on this. My long term goal would be to make [cf-xarray](https://github.com/xarray-contrib/cf-xarray/) aware of these `grid_mappings` CF variables and able to inject the correct cartopy CRS argument to matplotlib's `transform` argument.
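The staticmethod-vs-classmethod distinction at the heart of this can be shown without pyproj or cartopy (class and method names below are purely illustrative):

```python
class Base:
    def __init__(self, attrs):
        self.attrs = attrs

    @staticmethod
    def from_cf_static(attrs):
        # A staticmethod ignores which class it was called on: always Base.
        return Base(attrs)

    @classmethod
    def from_cf_class(cls, attrs):
        # A classmethod builds an instance of the calling (sub)class.
        return cls(attrs)

class Projection(Base):
    pass

assert type(Projection.from_cf_static({})) is Base        # today's behaviour
assert type(Projection.from_cf_class({})) is Projection   # the desired one
```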
#### Code to reproduce
```
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
cf_attrs = {'grid_mapping_name': 'rotated_latitude_longitude',
'grid_north_pole_latitude': 42.5,
'grid_north_pole_longitude': 83.0,
'north_pole_grid_longitude': 0.0}
rp = ccrs.Projection.from_cf(cf_attrs)
fig, ax = plt.subplots(subplot_kw={'projection': rp})
```
#### Traceback
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [15], line 6
1 cf_attrs = {'grid_mapping_name': 'rotated_latitude_longitude',
2 'grid_north_pole_latitude': 42.5,
3 'grid_north_pole_longitude': 83.0,
4 'north_pole_grid_longitude': 0.0}
5 rp = ccrs.Projection.from_cf(cf_attrs)
----> 6 fig, ax = plt.subplots(subplot_kw={'projection': rp})
File /exec/pbourg/.conda/cf_xarray_test/lib/python3.10/site-packages/matplotlib/pyplot.py:1443, in subplots(nrows, ncols, sharex, sharey, squeeze, width_ratios, height_ratios, subplot_kw, gridspec_kw, **fig_kw)
1299 """
1300 Create a figure and a set of subplots.
1301
(...)
1440
1441 """
1442 fig = figure(**fig_kw)
-> 1443 axs = fig.subplots(nrows=nrows, ncols=ncols, sharex=sharex, sharey=sharey,
1444 squeeze=squeeze, subplot_kw=subplot_kw,
1445 gridspec_kw=gridspec_kw, height_ratios=height_ratios,
1446 width_ratios=width_ratios)
1447 return fig, axs
File /exec/pbourg/.conda/cf_xarray_test/lib/python3.10/site-packages/matplotlib/figure.py:894, in FigureBase.subplots(self, nrows, ncols, sharex, sharey, squeeze, width_ratios, height_ratios, subplot_kw, gridspec_kw)
891 gridspec_kw['width_ratios'] = width_ratios
893 gs = self.add_gridspec(nrows, ncols, figure=self, **gridspec_kw)
--> 894 axs = gs.subplots(sharex=sharex, sharey=sharey, squeeze=squeeze,
895 subplot_kw=subplot_kw)
896 return axs
File /exec/pbourg/.conda/cf_xarray_test/lib/python3.10/site-packages/matplotlib/gridspec.py:308, in GridSpecBase.subplots(self, sharex, sharey, squeeze, subplot_kw)
306 subplot_kw["sharex"] = shared_with[sharex]
307 subplot_kw["sharey"] = shared_with[sharey]
--> 308 axarr[row, col] = figure.add_subplot(
309 self[row, col], **subplot_kw)
311 # turn off redundant tick labeling
312 if sharex in ["col", "all"]:
File /exec/pbourg/.conda/cf_xarray_test/lib/python3.10/site-packages/matplotlib/figure.py:743, in FigureBase.add_subplot(self, *args, **kwargs)
740 if (len(args) == 1 and isinstance(args[0], Integral)
741 and 100 <= args[0] <= 999):
742 args = tuple(map(int, str(args[0])))
--> 743 projection_class, pkw = self._process_projection_requirements(
744 *args, **kwargs)
745 ax = subplot_class_factory(projection_class)(self, *args, **pkw)
746 key = (projection_class, pkw)
File /exec/pbourg/.conda/cf_xarray_test/lib/python3.10/site-packages/matplotlib/figure.py:1682, in FigureBase._process_projection_requirements(self, axes_class, polar, projection, *args, **kwargs)
1680 kwargs.update(**extra_kwargs)
1681 else:
-> 1682 raise TypeError(
1683 f"projection must be a string, None or implement a "
1684 f"_as_mpl_axes method, not {projection!r}")
1685 if projection_class.__name__ == 'Axes3D':
1686 kwargs.setdefault('auto_add_to_figure', False)
TypeError: projection must be a string, None or implement a _as_mpl_axes method, not <Derived Geographic 2D CRS: {"$schema": "https://proj.org/schemas/v0.2/projjso ...>
Name: undefined
Axis Info [ellipsoidal]:
- lon[east]: Longitude (degree)
- lat[north]: Latitude (degree)
Area of Use:
- undefined
Coordinate Operation:
- name: Pole rotation (netCDF CF convention)
- method: Pole rotation (netCDF CF convention)
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
```
<details>
<summary>Full environment definition</summary>
<!-- fill in the following information as appropriate -->
### Operating system
Linux 64
### Cartopy version
0.21.0
### pyproj version
3.4.0
</details>
|
open
|
2022-11-07T19:14:42Z
|
2022-11-07T20:37:03Z
|
https://github.com/SciTools/cartopy/issues/2099
|
[] |
aulemahal
| 1
|
tqdm/tqdm
|
jupyter
| 1,097
|
ValueError: too many values to unpack (expected 2)
|
I'm trying to implement lda2vec from https://github.com/sebkim/lda2vec-pytorch with my own data. When I run the preprocessing script `preprocess.py`:
```
from collections import Counter
from tqdm import tqdm

def preprocess(docs):
    """Tokenize, encode documents.

    Arguments:
        docs: A list of tuples (index, string), each string is a document.

    Returns:
        encoded_docs: A list of tuples (index, list), each list is a document
            with words encoded by integer values.
        decoder: A dict, integer -> word.
        word_counts: A list of integers, counts of words that are in decoder.
            word_counts[i] is the number of occurrences of word decoder[i]
            in all documents in docs.
    """
    def tokenize(doc):
        return doc.split()

    tokenized_docs = [(i, tokenize(doc)) for i, doc in tqdm(docs[:])]
    counts = _count_unique_tokens(tokenized_docs)
    encoder, decoder, word_counts = _create_token_encoder(counts)
    encoded_docs = _encode(tokenized_docs, encoder)
    return encoded_docs, decoder, word_counts

def _count_unique_tokens(tokenized_docs):
    tokens = []
    for i, doc in tokenized_docs:
        tokens += doc
    return Counter(tokens)

def _encode(tokenized_docs, encoder):
    return [(i, [encoder[t] for t in doc]) for i, doc in tokenized_docs]

def _create_token_encoder(counts):
    total_tokens_count = sum(
        count for token, count in counts.most_common()
    )
    print('total number of tokens:', total_tokens_count)

    encoder = {}
    decoder = {}
    word_counts = []
    i = 0
    for token, count in counts.most_common():
        # counts.most_common() is in decreasing count order
        encoder[token] = i
        decoder[i] = token
        word_counts.append(count)
        i += 1
    return encoder, decoder, word_counts

def get_windows(doc, hws=5):
    """
    For each word in a document get a window around it.

    Arguments:
        doc: a list of words.
        hws: an integer, half window size.

    Returns:
        a list of tuples, each tuple looks like this
            (word w, window around w),
        window around w equals to
            [hws words that come before w] + [hws words that come after w],
        size of the window around w is 2*hws.
        Number of the tuples = len(doc).
    """
    length = len(doc)
    assert length > 2*hws, 'doc is too short!'

    inside = [(w, doc[(i - hws):i] + doc[(i + 1):(i + hws + 1)])
              for i, w in enumerate(doc[hws:-hws], hws)]

    # for words that are near the beginning or
    # the end of a doc tuples are slightly different
    beginning = [(w, doc[:i] + doc[(i + 1):(2*hws + 1)])
                 for i, w in enumerate(doc[:hws], 0)]
    end = [(w, doc[-(2*hws + 1):i] + doc[(i + 1):])
           for i, w in enumerate(doc[-hws:], length - hws)]

    return beginning + inside + end
```
I'm getting this error: `ValueError: too many values to unpack (expected 2)`.
How do I fix this?
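A likely cause (an assumption, since the calling code isn't shown): `preprocess()` expects `docs` to be a list of `(index, string)` tuples, per its docstring. If a plain list of strings is passed, `for i, doc in tqdm(docs[:])` tries to unpack each *string* into two values and raises exactly this error for strings longer than two characters. The usual fix is to supply the missing indices:

```python
# Plain strings -- what the error suggests is being passed in.
docs = ['first document text', 'second document text']

# Fix: wrap with enumerate() so each item is an (index, string) tuple,
# matching the shape preprocess() documents.
indexed_docs = list(enumerate(docs))

for i, doc in indexed_docs:
    assert isinstance(i, int) and isinstance(doc, str)
```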
|
open
|
2020-12-15T22:22:04Z
|
2020-12-15T22:22:04Z
|
https://github.com/tqdm/tqdm/issues/1097
|
[] |
fathia-ghribi
| 0
|
biolab/orange3
|
pandas
| 6,485
|
Pivot Table widget - wrong filtered data output
|
Hi there,
I am experiencing a problem with the Pivot Table widget, specifically with the "filtered data" output. I want to use this functionality to select specific groups of my pivoted table and analyze them separately. When I select one of the groups as intended, the routing towards the widget's output does not work correctly. As seen in the screenshot, I select the group with 8 instances, but the widget outputs 23 instances, which is the group to the left of it.
(screenshot omitted)
When I select all of the groups, the number of output instances is also wrong; it doesn't include the rightmost column of groups (>=30). The total count of instances in the output should be 94 but is only 50, with 44 instances missing from the rightmost column.
(screenshot omitted)
It seems to me that the selection simply takes all the instances belonging to the group to the left of the one I select with my mouse.
I'm using Orange 3.35 as Standalone on Windows 11.
Thanks a lot and all the best,
Merit
|
closed
|
2023-06-21T07:13:27Z
|
2023-07-04T11:34:41Z
|
https://github.com/biolab/orange3/issues/6485
|
[
"bug report"
] |
meritwagner
| 2
|
ranaroussi/yfinance
|
pandas
| 1,370
|
Known Yahoo rate limiter?
|
Hello,
I'm trying to scan **Ticker.quarterly_income_stmt** income statements using multithreading. After a few minutes I start receiving empty DataFrames, and the Yahoo website shows the issue in the screenshot below.
Is there any known IP-based Yahoo rate limiter?
(screenshot omitted)
|
closed
|
2023-01-28T22:39:05Z
|
2023-07-21T12:06:08Z
|
https://github.com/ranaroussi/yfinance/issues/1370
|
[] |
ganymedenet
| 20
|
LAION-AI/Open-Assistant
|
machine-learning
| 3,763
|
website is not active
|
The website is not active.
|
open
|
2025-01-07T04:26:38Z
|
2025-01-07T04:27:18Z
|
https://github.com/LAION-AI/Open-Assistant/issues/3763
|
[] |
daohuong605
| 0
|
pytorch/vision
|
computer-vision
| 8,401
|
Can't use gaussian_blur if sigma is a tensor on gpu
|
### 🐛 Describe the bug
Admittedly perhaps an unconventional use, but I'm using gaussian_blur in my model to blur attention maps and I want to have the sigma be a parameter.
It would work, except for this function:
https://github.com/pytorch/vision/blob/06ad737628abc3a1e617571dc03cbdd5b36ea96a/torchvision/transforms/_functional_tensor.py#L725
x is not moved to the device that sigma is on.
I believe it is like this in all torchvision versions.
WORKS:
```
import torch
from torchvision.transforms.functional import gaussian_blur
k = 15
s = torch.tensor(0.3 * ((5 - 1) * 0.5 - 1) + 0.8, requires_grad = True)
blurred = gaussian_blur(torch.randn(1, 3, 256, 256), k, [s])
blurred.mean().backward()
print(s.grad)
>>> tensor(-4.6193e-05)
```
DOES NOT:
```
import torch
from torchvision.transforms.functional import gaussian_blur
k = 15
s = torch.tensor(0.3 * ((5 - 1) * 0.5 - 1) + 0.8, requires_grad = True, device='cuda')
blurred = gaussian_blur(torch.randn(1, 3, 256, 256, device='cuda'), k, [s])
blurred.mean().backward()
print(s.grad)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[D:\Temp\ipykernel_39000\3525683463.py](file:///D:/Temp/ipykernel_39000/3525683463.py) in <module>
4 s = torch.tensor(0.3 * ((5 - 1) * 0.5 - 1) + 0.8, requires_grad = True, device='cuda')
5
----> 6 blurred = gaussian_blur(torch.randn(1, 3, 256, 256, device='cuda'), k, [s])
7 blurred.mean().backward()
8 print(s.grad)
[s:\anaconda3\envs\base_pip\lib\site-packages\torchvision\transforms\functional.py](file:///S:/anaconda3/envs/base_pip/lib/site-packages/torchvision/transforms/functional.py) in gaussian_blur(img, kernel_size, sigma)
1361 t_img = pil_to_tensor(img)
1362
-> 1363 output = F_t.gaussian_blur(t_img, kernel_size, sigma)
1364
1365 if not isinstance(img, torch.Tensor):
[s:\anaconda3\envs\base_pip\lib\site-packages\torchvision\transforms\_functional_tensor.py](file:///S:/anaconda3/envs/base_pip/lib/site-packages/torchvision/transforms/_functional_tensor.py) in gaussian_blur(img, kernel_size, sigma)
749
750 dtype = img.dtype if torch.is_floating_point(img) else torch.float32
--> 751 kernel = _get_gaussian_kernel2d(kernel_size, sigma, dtype=dtype, device=img.device)
752 kernel = kernel.expand(img.shape[-3], 1, kernel.shape[0], kernel.shape[1])
753
[s:\anaconda3\envs\base_pip\lib\site-packages\torchvision\transforms\_functional_tensor.py](file:///S:/anaconda3/envs/base_pip/lib/site-packages/torchvision/transforms/_functional_tensor.py) in _get_gaussian_kernel2d(kernel_size, sigma, dtype, device)
736 kernel_size: List[int], sigma: List[float], dtype: torch.dtype, device: torch.device
737 ) -> Tensor:
--> 738 kernel1d_x = _get_gaussian_kernel1d(kernel_size[0], sigma[0]).to(device, dtype=dtype)
739 kernel1d_y = _get_gaussian_kernel1d(kernel_size[1], sigma[1]).to(device, dtype=dtype)
740 kernel2d = torch.mm(kernel1d_y[:, None], kernel1d_x[None, :])
[s:\anaconda3\envs\base_pip\lib\site-packages\torchvision\transforms\_functional_tensor.py](file:///S:/anaconda3/envs/base_pip/lib/site-packages/torchvision/transforms/_functional_tensor.py) in _get_gaussian_kernel1d(kernel_size, sigma)
727
728 x = torch.linspace(-ksize_half, ksize_half, steps=kernel_size)
--> 729 pdf = torch.exp(-0.5 * (x / sigma).pow(2))
730 kernel1d = pdf / pdf.sum()
731
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
~~I don't know about the convention, like whether device should be passed in, but the simplest fix I believe would just be to change:
`728 x = torch.linspace(-ksize_half, ksize_half, steps=kernel_size)`
to:
`728 x = torch.linspace(-ksize_half, ksize_half, steps=kernel_size).to(sigma.device)`~~
Actually, that won't work when sigma is just a float, so I guess there could be a check for whether it's a float or a float tensor.
### Versions
[pip3] efficientunet-pytorch==0.0.6
[pip3] ema-pytorch==0.4.5
[pip3] flake8==6.0.0
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.3
[pip3] numpydoc==1.4.0
[pip3] pytorch-msssim==1.0.0
[pip3] siren-pytorch==0.1.7
[pip3] torch==2.2.2+cu118
[pip3] torch-cluster==1.6.0+pt113cu116
[pip3] torch_geometric==2.4.0
[pip3] torch-scatter==2.1.0+pt113cu116
[pip3] torch-sparse==0.6.16+pt113cu116
[pip3] torch-spline-conv==1.2.1+pt113cu116
[pip3] torch-tools==0.1.5
[pip3] torchaudio==2.2.2+cu118
[pip3] torchbearer==0.5.3
[pip3] torchmeta==1.8.0
[pip3] torchvision==0.17.2+cu118
[pip3] uformer-pytorch==0.0.8
[pip3] vit-pytorch==1.5.0
[conda] blas 1.0 mkl
[conda] efficientunet-pytorch 0.0.6 pypi_0 pypi
[conda] ema-pytorch 0.4.5 pypi_0 pypi
[conda] mkl 2021.4.0 haa95532_640
[conda] mkl-service 2.4.0 py39h2bbff1b_0
[conda] mkl_fft 1.3.1 py39h277e83a_0
[conda] mkl_random 1.2.2 py39hf11a4ad_0
[conda] numpy 1.24.3 pypi_0 pypi
[conda] numpydoc 1.4.0 py39haa95532_0
[conda] pytorch-cuda 11.6 h867d48c_1 pytorch
[conda] pytorch-msssim 1.0.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] siren-pytorch 0.1.7 pypi_0 pypi
[conda] torch 1.13.0 pypi_0 pypi
[conda] torch-cluster 1.6.0+pt113cu116 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-scatter 2.1.0+pt113cu116 pypi_0 pypi
[conda] torch-sparse 0.6.16+pt113cu116 pypi_0 pypi
[conda] torch-spline-conv 1.2.1+pt113cu116 pypi_0 pypi
[conda] torch-tools 0.1.5 pypi_0 pypi
[conda] torchaudio 0.9.1 pypi_0 pypi
[conda] torchbearer 0.5.3 pypi_0 pypi
[conda] torchmeta 1.8.0 pypi_0 pypi
[conda] torchvision 0.17.2+cu118 pypi_0 pypi
[conda] uformer-pytorch 0.0.8 pypi_0 pypi
[conda] vit-pytorch 1.5.0 pypi_0 pypi
|
closed
|
2024-05-01T23:30:56Z
|
2024-05-29T13:03:31Z
|
https://github.com/pytorch/vision/issues/8401
|
[] |
Xact-sniper
| 3
|
keras-team/keras
|
pytorch
| 20,084
|
Layer "dense" expects 1 input(s), but it received 2 input tensors error while loading a keras model
|
**Hello,
I'm not 100% sure it's a bug.
I trained a model and saved it on Google Colab Enterprise.
TensorFlow v2.17.0
Keras v3.4.1
Once I try to load the model using `tf.keras.models.load_model('model_v0-1 (1).keras')` I get the following error:**
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-16-161c7446005a>](https://localhost:8080/#) in <cell line: 1>()
----> 1 model = tf.keras.models.load_model('model_v0-1 (1).keras')
11 frames
[/usr/local/lib/python3.10/dist-packages/keras/src/layers/input_spec.py](https://localhost:8080/#) in assert_input_compatibility(input_spec, inputs, layer_name)
158 inputs = tree.flatten(inputs)
159 if len(inputs) != len(input_spec):
--> 160 raise ValueError(
161 f'Layer "{layer_name}" expects {len(input_spec)} input(s),'
162 f" but it received {len(inputs)} input tensors. "
ValueError: Layer "dense" expects 1 input(s), but it received 2 input tensors. Inputs received: [<KerasTensor shape=(None, 11, 11, 1280), dtype=float32, sparse=False, name=keras_tensor_4552>, <KerasTensor shape=(None, 11, 11, 1280), dtype=float32, sparse=False, name=keras_tensor_4553>]
---------------------------------------------------------------------------
**I trained EffecientNetB0 and added some layers**
```
# Configure the strategy
if len(gpus) > 1:
strategy = tf.distribute.MirroredStrategy()
else:
strategy = tf.distribute.get_strategy()
with strategy.scope():
base_model = tf.keras.applications.EfficientNetB0(include_top=False, input_shape=(336, 336, 3), weights='imagenet')
base_model.trainable = True # Unfreeze the base model
# Freeze the first few layers
for layer in base_model.layers[:15]:
layer.trainable = False
num_neurons = (len(filtered_data['species'].unique()) + 1280) // 2
# Add layers
model = models.Sequential([
base_model,
layers.GlobalAveragePooling2D(),
layers.Dense(num_neurons, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
layers.BatchNormalization(),
layers.Dropout(0.2),
layers.Dense(num_neurons, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
layers.BatchNormalization(),
layers.Dropout(0.2),
layers.Dense(len(filtered_data['species'].unique()), activation='softmax')
])
```
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ efficientnetb0 (Functional) │ (None, 11, 11, 1280) │ 4,049,571 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ global_average_pooling2d │ (None, 1280) │ 0 │
│ (GlobalAveragePooling2D) │ │ │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense (Dense) │ (None, 672) │ 860,832 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ batch_normalization │ (None, 672) │ 2,688 │
│ (BatchNormalization) │ │ │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dropout (Dropout) │ (None, 672) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_1 (Dense) │ (None, 672) │ 452,256 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ batch_normalization_1 │ (None, 672) │ 2,688 │
│ (BatchNormalization) │ │ │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dropout_1 (Dropout) │ (None, 672) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_2 (Dense) │ (None, 65) │ 43,745 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
Total params: 16,142,256 (61.58 MB)
Trainable params: 5,365,237 (20.47 MB)
Non-trainable params: 46,543 (181.81 KB)
Optimizer params: 10,730,476 (40.93 MB)
**Therefore, the only dense layers are the ones I added at the end.
Am I doing something wrong? I read that other people have faced the same issue since TF 2.16 and Keras 3.4, so I guessed it is an issue in Keras, but I'm not sure.**
Thank you for your help/review
|
open
|
2024-08-02T12:05:50Z
|
2024-08-06T14:42:47Z
|
https://github.com/keras-team/keras/issues/20084
|
[
"type:Bug"
] |
BastienPailloux
| 3
|
mljar/mercury
|
data-visualization
| 90
|
add themes to slides
|
closed
|
2022-04-25T08:22:09Z
|
2022-04-27T11:17:00Z
|
https://github.com/mljar/mercury/issues/90
|
[
"enhancement"
] |
pplonski
| 1
|
|
ymcui/Chinese-BERT-wwm
|
tensorflow
| 209
|
Running the official BERT classification code with BERT-wwm as the pre-trained model raises TypeError: __init__() takes 1 positional argument but 3 were given
|
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/training/learning_rate_decay_v2.py:321: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
INFO:tensorflow:Error recorded from training_loop: __init__() takes 1 positional argument but 3 were given
INFO:tensorflow:training_loop marked as finished
WARNING:tensorflow:Reraising captured error
Traceback (most recent call last):
File "/tf/bert/run_classifier.py", line 1064, in <module>
tf.compat.v1.app.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/tf/bert/run_classifier.py", line 962, in main
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2457, in train
rendezvous.raise_errors()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/tpu/python/tpu/error_handling.py", line 128, in raise_errors
six.reraise(typ, value, traceback)
File "/usr/local/lib/python3.5/dist-packages/six.py", line 693, in reraise
raise value
File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2452, in train
saving_listeners=saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 358, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1124, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1154, in _train_model_default
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2251, in _call_model_fn
config)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1112, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2534, in _model_fn
features, labels, is_export_mode=is_export_mode)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1323, in call_without_tpu
return self._call_model_fn(features, labels, is_export_mode=is_export_mode)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1593, in _call_model_fn
estimator_spec = self._model_fn(features=features, **kwargs)
File "/tf/bert/run_classifier.py", line 756, in model_fn
total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu)
File "/tf/bert/optimization.py", line 65, in create_optimizer
exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"])
File "/tf/bert/optimization.py", line 100, in __init__
super(AdamWeightDecayOptimizer, self).__init__(False, name)
TypeError: __init__() takes 1 positional argument but 3 were given
|
closed
|
2021-12-29T09:32:45Z
|
2023-10-25T09:08:27Z
|
https://github.com/ymcui/Chinese-BERT-wwm/issues/209
|
[
"stale"
] |
dolphin-Jia
| 3
|
geex-arts/django-jet
|
django
| 18
|
jquery UI postprocessing issue. Image 'animated-overlay.gif' not found
|
Hi,
On production server, collectstatic is failing as it is unable to find 'animated-overlay.gif'
``` bash
Post-processed 'jet/vendor/jquery-ui-timepicker/include/ui-1.10.0/ui-lightness/images/ui-icons_222222_256x240.png' as 'jet/vendor/jquery-ui-timepicker/include/ui-1.10.0/ui-lightness/images/ui-icons_222222_256x240.ebe6b6902a40.png'
Post-processing 'jet/vendor/jquery-ui-timepicker/include/ui-1.10.0/ui-lightness/jquery-ui-1.10.0.custom.min.css' failed!
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/dheerendra/.Envs/ldapsso/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 350, in execute_from_command_line
utility.execute()
File "/home/dheerendra/.Envs/ldapsso/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 342, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/dheerendra/.Envs/ldapsso/local/lib/python2.7/site-packages/django/core/management/base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/dheerendra/.Envs/ldapsso/local/lib/python2.7/site-packages/django/core/management/base.py", line 399, in execute
output = self.handle(*args, **options)
File "/home/dheerendra/.Envs/ldapsso/local/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 176, in handle
collected = self.collect()
File "/home/dheerendra/.Envs/ldapsso/local/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 128, in collect
raise processed
ValueError: The file 'jet/vendor/jquery-ui-timepicker/include/ui-1.10.0/ui-lightness/images/animated-overlay.gif' could not be found with <django.contrib.staticfiles.storage.CachedStaticFilesStorage object at 0x7f8d62ed6a50>
```
|
closed
|
2015-10-24T20:01:45Z
|
2015-11-07T06:33:34Z
|
https://github.com/geex-arts/django-jet/issues/18
|
[] |
DheerendraRathor
| 2
|
hankcs/HanLP
|
nlp
| 1,340
|
Segmenting "四川发文取缔全部不合规P2P" fails to split "合规" and "P2P" apart
|
<!--
The checklist and version number are required; issues without them will not be answered. To get a reply quickly, please fill in the template carefully. Thank you for your cooperation.
-->
## Checklist
Please confirm the following:
* I have carefully read the following documents and did not find an answer:
- [Home page documentation](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer either.
* I understand that this open-source community is a free community of volunteers driven by interest and bears no responsibility or obligation. I will be polite and thank everyone who helps me.
* [x] I put an x inside these brackets to confirm the above
## Version
<!-- For release builds, give the jar file name without the extension; for the GitHub repo version, state whether it is the master or portable branch -->
The latest version is: 1.7.5
The version I am using is: 1.7.5
<!-- The items above are required; the rest is free-form -->
## My question
<!-- Please describe the problem in detail; the more detail, the more likely it is to be resolved -->
## Reproducing the problem
<!-- What did you do to trigger the problem? For example, did you modify the code, a dictionary, or a model? -->
### Steps
1. First...
2. Then...
3. Next...
### Triggering code
```
String words = "四川发文取缔全部不合规P2P";
System.out.println(NLPTokenizer.segment(words));
```
### Expected output
```
四川 发文 取缔 全部 不 合规 P2P
```
### Actual output
```
[四川/n, 发文/v, 取缔/v, 全部/n, 不/d, 合规p2p/a]
```
## Other information
I added the following bigrams to CoreNatureDictionary.ngram.txt:
合规@p2p 200
合规@P2P 200
but the words are still not split. I wonder whether this is because the training corpus lacks such data (I searched the files under data/test/ and found no "p2p").
I would like to know how to solve this.
|
closed
|
2019-12-05T10:04:57Z
|
2020-01-01T10:48:04Z
|
https://github.com/hankcs/HanLP/issues/1340
|
[
"ignored"
] |
wangtao208208
| 4
|
polyaxon/traceml
|
matplotlib
| 25
|
Error in Pytorch Lightning Callback method.
|
In the function [log_model_summary](https://github.com/polyaxon/traceml/blob/6b6e3a2f575c76edbce65df861d640f4cc8c72bb/traceml/traceml/integrations/pytorch_lightning.py#L144) there is a reference to the non-existent attribute `self.run`.
Using this function will raise error:
```python
rel_path = self.run.get_outputs_path("model_summary.txt")
AttributeError: 'Callback' object has no attribute 'run'
```
Possible fix:
replacing `self.run` with `self.experiment` (as used in other methods) seems to fix the issue.
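A minimal stand-alone illustration of the bug and the proposed fix (the `FakeExperiment` stub and the method names here are hypothetical, used only to mimic the callback's structure, not the actual traceml code):

```python
class FakeExperiment:
    """Stub standing in for the tracked run object."""
    def get_outputs_path(self, name):
        return f"outputs/{name}"


class Callback:
    def __init__(self, experiment):
        # the callback stores the tracked run as `experiment`, never as `run`
        self.experiment = experiment

    def log_model_summary_broken(self, text):
        # mirrors the buggy line: `self.run` was never set anywhere
        return self.run.get_outputs_path("model_summary.txt")  # AttributeError

    def log_model_summary_fixed(self, text):
        # the fix: use the attribute that actually exists
        return self.experiment.get_outputs_path("model_summary.txt")


cb = Callback(FakeExperiment())
print(cb.log_model_summary_fixed("summary"))  # outputs/model_summary.txt
```

Calling `log_model_summary_broken` raises `AttributeError: 'Callback' object has no attribute 'run'`, matching the reported error.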
|
closed
|
2023-03-14T10:13:00Z
|
2023-03-14T10:20:12Z
|
https://github.com/polyaxon/traceml/issues/25
|
[] |
sebastian-sz
| 0
|
eamigo86/graphene-django-extras
|
graphql
| 146
|
"totalCount" field stays camelCase when auto_camelcase == False
|
Our GraphQL schema uses snake_case, so we have `auto_camelcase` switched off:
```python
schema = graphene.Schema(query=Query, mutation=Mutation, auto_camelcase=False)
```
This works fine for everything except for the `totalCount` field since it's hardcoded here:
https://github.com/eamigo86/graphene-django-extras/blob/ddc2ed921bd915be16234f4a9761108508cbd49f/graphene_django_extras/types.py#L313
Which means our queries:
```graphql
{
projects(title_text__icontains: "create") {
totalCount
results {
id
title_text
}
}
}
```
and results:
```json
{
"data": {
"projects": {
"totalCount": 1,
"results": [
{
"id": "1",
"title_text": "Create Notification Stack"
}
]
}
}
}
```
end up with inconsistent styling.
|
open
|
2020-07-14T11:48:35Z
|
2020-07-14T11:48:35Z
|
https://github.com/eamigo86/graphene-django-extras/issues/146
|
[] |
flyte
| 0
|
errbotio/errbot
|
automation
| 1,672
|
Is the errbot project still actively maintained?
|
Hi,
Is the errbot project still actively maintained? I see the last release (v6.1.9) is dated Jun 11, 2022. There are some pull requests opened within the past year that are still pending merge.
So, just curious if the project will keep on-going?
Thanks.
|
closed
|
2023-12-03T04:11:19Z
|
2024-01-04T16:49:55Z
|
https://github.com/errbotio/errbot/issues/1672
|
[
"type: feature"
] |
dtserekhman
| 6
|
modoboa/modoboa
|
django
| 2,553
|
Migrate from 1.17 to 2.01 error
|
# Impacted versions
* OS Type: Ubuntu
* OS Version: 18.04 LTs
* Database Type: MariaDB
* Database version: X.y
* Modoboa: 2.0.1
* installer used: Unknown
* Webserver: Nginx
# Current behavior
I just tried upgrading Modoboa from 1.17 to 2.0.1. I was able to successfully upgrade the modules via the CLI and ran the migrate/static/deploy steps. I did have to reinstall all the modules after updating Python from 3.6 to 3.9, but after doing that I was able to migrate and deploy.
I'm getting the following warning from all the Modoboa-related users' cron jobs:
/srv/modoboa/env/lib/python3.9/site-packages/requests/__init__.py:109: RequestsDependencyWarning: urllib3 (1.21.1) or chardet (5.0.0)/charset_normalizer (2.0.12) doesn't match a supported version!
warnings.warn(
I've tried changing versions of urllib3 to no avail. All come back with the same error.
Here's the debug:
Environment:
Request Method: GET
Request URL: https://my.server.com
Django Version: 2.2.12
Python Version: 3.6.9
Installed Applications:
('django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.sites',
'django.contrib.staticfiles',
'reversion',
'ckeditor',
'ckeditor_uploader',
'rest_framework',
'rest_framework.authtoken',
'django_otp',
'django_otp.plugins.otp_totp',
'django_otp.plugins.otp_static',
'modoboa',
'modoboa.core',
'modoboa.lib',
'modoboa.admin',
'modoboa.transport',
'modoboa.relaydomains',
'modoboa.limits',
'modoboa.parameters',
'modoboa.dnstools',
'modoboa.maillog',
'modoboa_amavis',
'modoboa_pdfcredentials',
'modoboa_postfix_autoreply',
'modoboa_sievefilters',
'modoboa_webmail',
'modoboa_contacts',
'modoboa_radicale',
'webpack_loader')
Installed Middleware:
('x_forwarded_for.middleware.XForwardedForMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django_otp.middleware.OTPMiddleware',
'modoboa.core.middleware.TwoFAMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'modoboa.core.middleware.LocalConfigMiddleware',
'modoboa.lib.middleware.AjaxLoginRedirect',
'modoboa.lib.middleware.CommonExceptionCatcher',
'modoboa.lib.middleware.RequestCatcherMiddleware')
Traceback:
File "/srv/modoboa/env/lib/python3.6/site-packages/rest_framework/settings.py" in import_from_string
177. return import_string(val)
File "/srv/modoboa/env/lib/python3.6/site-packages/django/utils/module_loading.py" in import_string
17. module = import_module(module_path)
File "/usr/lib/python3.6/importlib/__init__.py" in import_module
126. return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>" in _gcd_import
994. <source code not available>
File "<frozen importlib._bootstrap>" in _find_and_load
971. <source code not available>
File "<frozen importlib._bootstrap>" in _find_and_load_unlocked
953. <source code not available>
During handling of the above exception (No module named 'modoboa.core.drf_authentication'), another exception occurred:
File "/srv/modoboa/env/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner
34. response = get_response(request)
File "/srv/modoboa/env/lib/python3.6/site-packages/modoboa/core/middleware.py" in __call__
26. redirect_url = reverse("core:2fa_verify")
File "/srv/modoboa/env/lib/python3.6/site-packages/django/urls/base.py" in reverse
58. app_list = resolver.app_dict[ns]
File "/srv/modoboa/env/lib/python3.6/site-packages/django/urls/resolvers.py" in app_dict
513. self._populate()
File "/srv/modoboa/env/lib/python3.6/site-packages/django/urls/resolvers.py" in _populate
447. for url_pattern in reversed(self.url_patterns):
File "/srv/modoboa/env/lib/python3.6/site-packages/django/utils/functional.py" in __get__
80. res = instance.__dict__[self.name] = self.func(instance)
File "/srv/modoboa/env/lib/python3.6/site-packages/django/urls/resolvers.py" in url_patterns
584. patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/srv/modoboa/env/lib/python3.6/site-packages/django/utils/functional.py" in __get__
80. res = instance.__dict__[self.name] = self.func(instance)
File "/srv/modoboa/env/lib/python3.6/site-packages/django/urls/resolvers.py" in urlconf_module
577. return import_module(self.urlconf_name)
File "/usr/lib/python3.6/importlib/__init__.py" in import_module
126. return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>" in _gcd_import
994. <source code not available>
File "<frozen importlib._bootstrap>" in _find_and_load
971. <source code not available>
File "<frozen importlib._bootstrap>" in _find_and_load_unlocked
955. <source code not available>
File "<frozen importlib._bootstrap>" in _load_unlocked
665. <source code not available>
File "<frozen importlib._bootstrap_external>" in exec_module
678. <source code not available>
File "<frozen importlib._bootstrap>" in _call_with_frames_removed
219. <source code not available>
File "./instance/urls.py" in <module>
4. url(r'', include('modoboa.urls')),
File "/srv/modoboa/env/lib/python3.6/site-packages/django/urls/conf.py" in include
34. urlconf_module = import_module(urlconf_module)
File "/usr/lib/python3.6/importlib/__init__.py" in import_module
126. return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>" in _gcd_import
994. <source code not available>
File "<frozen importlib._bootstrap>" in _find_and_load
971. <source code not available>
File "<frozen importlib._bootstrap>" in _find_and_load_unlocked
955. <source code not available>
File "<frozen importlib._bootstrap>" in _load_unlocked
665. <source code not available>
File "<frozen importlib._bootstrap_external>" in exec_module
678. <source code not available>
File "<frozen importlib._bootstrap>" in _call_with_frames_removed
219. <source code not available>
File "/srv/modoboa/env/lib/python3.6/site-packages/modoboa/urls.py" in <module>
13. from rest_framework.schemas import get_schema_view
File "/srv/modoboa/env/lib/python3.6/site-packages/rest_framework/schemas/__init__.py" in <module>
33. authentication_classes=api_settings.DEFAULT_AUTHENTICATION_CLASSES,
File "/srv/modoboa/env/lib/python3.6/site-packages/rest_framework/settings.py" in __getattr__
220. val = perform_import(val, attr)
File "/srv/modoboa/env/lib/python3.6/site-packages/rest_framework/settings.py" in perform_import
168. return [import_from_string(item, setting_name) for item in val]
File "/srv/modoboa/env/lib/python3.6/site-packages/rest_framework/settings.py" in <listcomp>
168. return [import_from_string(item, setting_name) for item in val]
File "/srv/modoboa/env/lib/python3.6/site-packages/rest_framework/settings.py" in import_from_string
180. raise ImportError(msg)
Exception Type: ImportError at /
Exception Value: Could not import 'modoboa.core.drf_authentication.JWTAuthenticationWith2FA' for API setting 'DEFAULT_AUTHENTICATION_CLASSES'. ModuleNotFoundError: No module named 'modoboa.core.drf_authentication'.
|
closed
|
2022-06-28T16:27:31Z
|
2022-06-29T07:32:39Z
|
https://github.com/modoboa/modoboa/issues/2553
|
[] |
studioluminous
| 4
|
ufoym/deepo
|
jupyter
| 74
|
errors with caffe2
|
Thanks for creating and sharing this needed tool!
I'm trying to use a deepo:caffe2-py36-cu90 docker image as a basis for running Detectron with Python 3.6 (AFAICT all shared Detectron docker images are based on FB's Caffe2 docker images, which are all Python 2.7).
I'm able to build the docker image but it throws caffe2 errors when I run Detectron.
Without getting too much in the weeds with the Detectron code, I get the following basic error with deepo caffe2-py36-cu90:
STEPS:
```bash
docker run --runtime=nvidia --rm -it ufoym/deepo:caffe2-py36-cu90 bash -c "python -c 'from caffe2.python import workspace;print(1)'"
```
EXPECTED
- should print '1'
OBSERVED:
```bash
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'caffe2'
```
NOTE:
The same command works as expected with the official caffe docker image, e.g.
```bash
docker run --runtime=nvidia --rm -it caffe2/caffe2:snapshot-py2-cuda9.0-cudnn7-ubuntu16.04 bash -c "python -c 'from caffe2.python import workspace;print(1)'"
# prints '1'
```
|
closed
|
2019-01-10T17:27:13Z
|
2019-01-11T17:13:07Z
|
https://github.com/ufoym/deepo/issues/74
|
[] |
beatthat
| 2
|
ultralytics/yolov5
|
deep-learning
| 12,988
|
Inference model after convert to tflite file.
|
### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
After converting the YOLOv5 model to a TFLite model using `export.py`, I am attempting to use it for object detection. However, I need to understand how to draw the bounding boxes and what the input and output formats are for this TFLite model. I'm currently facing issues with incorrect bounding box placement or errors in my object detection code. Here's the code I'm using to load the image and perform object detection, but the outcomes are not correct:
FYI, this is how I converted the model
`python3 export.py --weights /home/ubuntu/ssl/yolov5/runs/train/exp9/weights/best.pt --include tflite`
And this is my tensor input and output
```
Input Details: [{'name': 'serving_default_input_1:0', 'index': 0, 'shape': array([ 1, 640, 640, 3], dtype=int32), 'shape_signature': array([ 1, 640, 640, 3], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
```
```
Output Details: [{'name': 'StatefulPartitionedCall:0', 'index': 532, 'shape': array([ 1, 25200, 10], dtype=int32), 'shape_signature': array([ 1, 25200, 10], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
```
### Additional
```
def preprocess_image(image_path, input_size, input_mean=127.5, input_std=127.5):
image = cv2.imread(image_path)
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image_resized = cv2.resize(image_rgb, (input_size, input_size))
input_data = (np.float32(image_resized) - input_mean) / input_std
return image, np.expand_dims(input_data, axis=0)
def detect_objects(interpreter, image_path, threshold=0.25):
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, input_size = input_details[0]['shape'][1], input_details[0]['shape'][2]
image, input_data = preprocess_image(image_path, input_size)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
boxes = interpreter.get_tensor(output_details[1]['index'])[0] # Bounding box coordinates of detected objects
classes = interpreter.get_tensor(output_details[3]['index'])[0] # Class index of detected objects
scores = interpreter.get_tensor(output_details[0]['index'])[0] # Confidence scores
detections = []
# Ensure scores are treated as an array for safe iteration
scores = np.squeeze(scores)
classes = np.squeeze(classes)
boxes = np.squeeze(boxes)
for i in range(len(scores)):
if scores[i] > threshold:
ymin, xmin, ymax, xmax = boxes[i]
# Ensure coordinates are scaled back to original image size
imH, imW, _ = image.shape
xmin = int(max(1, xmin * imW))
xmax = int(min(imW, xmax * imW))
ymin = int(max(1, ymin * imH))
ymax = int(min(imH, ymax * imH))
class_id = int(classes[i])
category_id_mapping = load_category_mapping(labels_path)
# Using category_id_mapping to find the category ID
category_id = category_id_mapping.get(class_id, class_id) + 1 # Fallback to class_id + 1 if not found
detections.append({
"class_id": class_id,
"category_id": category_id,
'bbox': [xmin, ymin, xmax, ymax],
'segmentation': [xmin, ymin, (xmax - xmin), (ymax - ymin)],
"area": (xmax - xmin) * (ymax - ymin),
"score": float(scores[i])
})
return detections
```
Here is the error I get
```
Traceback (most recent call last):
File "/home/sangyoon/dCentralizedSystems/machine-learning/tensorflow/python/testing/thermal/thermal_test.py", line 562, in <module>
main()
File "/home/sangyoon/dCentralizedSystems/machine-learning/tensorflow/python/testing/thermal/thermal_test.py", line 412, in main
detections = detect_objects(interpreter, image_path, threshold=0.25)
File "/home/sangyoon/dCentralizedSystems/machine-learning/tensorflow/python/testing/thermal/thermal_test.py", line 109, in detect_objects
boxes = interpreter.get_tensor(output_details[1]['index'])[0] # Bounding box coordinates of detected objects
IndexError: list index out of range
```
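For reference, since the exported model reports a single output tensor of shape (1, 25200, 10), `output_details` contains only one entry, which is why `output_details[1]` raises `IndexError`; the boxes, scores, and classes all have to be decoded from that one raw tensor. Below is a rough sketch of such a decoding, assuming the standard YOLOv5 head layout `[x, y, w, h, objectness, class scores...]` (an illustrative sketch on random data, not tested against the actual model):

```python
import numpy as np

# Stand-in for interpreter.get_tensor(output_details[0]['index']):
# shape (1, 25200, 10) -> 5 classes here; each row is [x, y, w, h, obj, c0..c4]
pred = np.random.rand(1, 25200, 10).astype(np.float32)[0]

conf = pred[:, 4] * pred[:, 5:].max(axis=1)   # objectness * best class score
keep = conf > 0.25
boxes_xywh = pred[keep, :4]                   # centre-x, centre-y, width, height (normalised)
class_ids = pred[keep, 5:].argmax(axis=1)

# convert xywh -> xyxy before scaling to the original image size
xyxy = np.column_stack([
    boxes_xywh[:, 0] - boxes_xywh[:, 2] / 2,  # xmin
    boxes_xywh[:, 1] - boxes_xywh[:, 3] / 2,  # ymin
    boxes_xywh[:, 0] + boxes_xywh[:, 2] / 2,  # xmax
    boxes_xywh[:, 1] + boxes_xywh[:, 3] / 2,  # ymax
])
# Non-max suppression would still need to be applied before drawing.
```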
|
closed
|
2024-05-07T18:12:34Z
|
2024-10-20T19:45:24Z
|
https://github.com/ultralytics/yolov5/issues/12988
|
[
"question"
] |
sangyo1
| 18
|
ageitgey/face_recognition
|
python
| 847
|
What feature does this project need help with?
|
I'm sorry if this isn't the right place to ask, but I was curious whether this project needs help with any particular features.
|
open
|
2019-06-06T01:34:38Z
|
2019-06-13T10:24:24Z
|
https://github.com/ageitgey/face_recognition/issues/847
|
[] |
hashgupta
| 1
|
keras-team/keras
|
tensorflow
| 20,596
|
Loss functions applied in alphabetical order instead of by dictionary keys in Keras 3.5.0
|
**Environment info**
* Google Colab (CPU or GPU)
* Tensorflow 2.17.0, 2.17.1
* Python 3.10.12
**Problem description**
There seems to be a change in Keras 3.5.0 that has introduced a bug for models with multiple outputs.
The problem is not present in Keras 3.4.1.
Passing a dictionary as `loss` to model.compile() should result in those loss functions being applied to the respective outputs based on output name. But instead they now appear to be applied in alphabetical order of dictionary keys, leading to the wrong loss functions being applied against the model outputs.
For example, in the following snippet, "loss_small" gets applied against "output_big" when it should be applied against "output_small". It appears that the loss dictionary gets 1) re-ordered by alphabetical order of key, and then 2) the dictionary values are read off in the resultant order and applied as an ordered list against the model outputs.
```
...
output_small = Dense(1, activation="sigmoid", name="output_small")(x)
output_big = Dense(64, activation="softmax", name="output_big")(x)
model = Model(inputs=input_layer, outputs=[output_small, output_big])
model.compile(optimizer='adam',
loss={
'output_small': DebugLoss(name='loss_small'),
'output_big': DebugLoss(name='loss_big')
})
```
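The suspected reordering can be sketched in plain Python, independent of Keras (an illustration of the hypothesis, not the actual Keras internals):

```python
# Model outputs in declaration order vs. the loss dict traversed in sorted-key order.
outputs = ["output_small", "output_big"]                 # order the model emits them
loss = {"output_small": "loss_small", "output_big": "loss_big"}

# If the framework sorts the dict keys and then pairs the values positionally...
losses_in_sorted_key_order = [loss[k] for k in sorted(loss)]  # ['loss_big', 'loss_small']
pairing = dict(zip(outputs, losses_in_sorted_key_order))
print(pairing)  # {'output_small': 'loss_big', 'output_big': 'loss_small'} (mismatched!)
```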
This conclusion is the result of flipping the orders of these components and comparing the results. Which is what the following code does...
**Code to reproduce**
```python
import sys
import tensorflow as tf
import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model
print(f"TensorFlow version: {tf.__version__}")
print(f"Keras version: {tf.keras.__version__}")
print(f"Python version: {sys.version}")
print()
print("Problem doesn't occur if model outputs happen to be ordered alphabetically: (big, small)")
# Generate synthetic training data
num_samples = 100
x_train = np.random.normal(size=(num_samples, 10)) # Input data
y_train_output_big = np.eye(64)[np.random.choice(64, size=num_samples)] # Shape (num_samples, 64)
y_train_output_small = np.random.choice([0, 1], size=(num_samples, 1)) # Shape (num_samples, 1)
dataset = tf.data.Dataset.from_tensor_slices((x_train, (y_train_output_big, y_train_output_small)))
dataset = dataset.batch(num_samples)
# Define model with single input and two named outputs
input_layer = Input(shape=(10,))
x = Dense(64, activation="relu")(input_layer)
output_big = Dense(64, activation="softmax", name="output_big")(x) # (100,64)
output_small = Dense(1, activation="sigmoid", name="output_small")(x) # (100,1)
model = Model(inputs=input_layer, outputs=[output_big, output_small])
# Compile with custom loss function for debugging
class DebugLoss(tf.keras.losses.Loss):
def call(self, y_true, y_pred):
print(f"{self.name} - y_true: {y_true.shape}, y_pred: {y_pred.shape}")
return tf.reduce_mean((y_true - y_pred)**2)
model.compile(optimizer='adam',
loss={
'output_big': DebugLoss(name='loss_big'),
'output_small': DebugLoss(name='loss_small')
})
# Train
tf.config.run_functions_eagerly(True)
history = model.fit(dataset, epochs=1, verbose=0)
print()
print("Problem occurs if model outputs happen to be ordered non-alphabetically: (small, big)")
# Generate synthetic training data
num_samples = 100
x_train = np.random.normal(size=(num_samples, 10)) # Input data
y_train_output_small = np.random.choice([0, 1], size=(num_samples, 1)) # Shape (num_samples, 1)
y_train_output_big = np.eye(64)[np.random.choice(64, size=num_samples)] # Shape (num_samples, 64)
dataset = tf.data.Dataset.from_tensor_slices((x_train, (y_train_output_small, y_train_output_big)))
dataset = dataset.batch(num_samples)
# Define model with single input and two named outputs
input_layer = Input(shape=(10,))
x = Dense(64, activation="relu")(input_layer)
output_small = Dense(1, activation="sigmoid", name="output_small")(x) # (100,1)
output_big = Dense(64, activation="softmax", name="output_big")(x) # (100,64)
model = Model(inputs=input_layer, outputs=[output_small, output_big])
# Compile with custom loss function for debugging
class DebugLoss(tf.keras.losses.Loss):
def call(self, y_true, y_pred):
print(f"{self.name} - y_true: {y_true.shape}, y_pred: {y_pred.shape}")
return tf.reduce_mean((y_true - y_pred)**2)
model.compile(optimizer='adam',
loss={
'output_small': DebugLoss(name='loss_small'),
'output_big': DebugLoss(name='loss_big')
})
# Train
tf.config.run_functions_eagerly(True)
history = model.fit(dataset, epochs=1, verbose=0)
```
**Code outputs on various environments**
Current Google Colab env - fails on second ordering:
```
TensorFlow version: 2.17.1
Keras version: 3.5.0
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Problem doesn't occur if model outputs happen to be ordered alphabetically: (big, small)
loss_big - y_true: (100, 64), y_pred: (100, 64)
loss_small - y_true: (100, 1), y_pred: (100, 1)
Problem occurs if model outputs happen to be ordered non-alphabetically: (small, big)
loss_big - y_true: (100, 1), y_pred: (100, 1)
loss_small - y_true: (100, 64), y_pred: (100, 64)
```
Downgraded TF version, no change:
```
TensorFlow version: 2.17.0
Keras version: 3.5.0
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Problem doesn't occur if model outputs happen to be ordered alphabetically: (big, small)
loss_big - y_true: (100, 64), y_pred: (100, 64)
loss_small - y_true: (100, 1), y_pred: (100, 1)
Problem occurs if model outputs happen to be ordered non-alphabetically: (small, big)
loss_big - y_true: (100, 1), y_pred: (100, 1)
loss_small - y_true: (100, 64), y_pred: (100, 64)
```
Downgraded Keras, and now get correct output for both orderings
```
TensorFlow version: 2.17.0
Keras version: 3.4.1
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Problem doesn't occur if model outputs happen to be ordered alphabetically: (big, small)
loss_big - y_true: (100, 64), y_pred: (100, 64)
loss_small - y_true: (100, 1), y_pred: (100, 1)
Problem occurs if model outputs happen to be ordered non-alphabetically: (small, big)
loss_small - y_true: (100, 1), y_pred: (100, 1)
loss_big - y_true: (100, 64), y_pred: (100, 64)
```
**Final remarks**
This seems related to https://github.com/tensorflow/tensorflow/issues/37887, but looks like someone has since tried to fix that bug and introduced another perhaps?
|
closed
|
2024-12-05T06:41:41Z
|
2025-02-13T02:55:54Z
|
https://github.com/keras-team/keras/issues/20596
|
[
"keras-team-review-pending",
"type:Bug"
] |
malcolmlett
| 6
|
freqtrade/freqtrade
|
python
| 11,494
|
Running parallel Bots on one account
|
<!--
Have you searched for similar issues before posting it?
Did you have a VERY good look at the [documentation](https://www.freqtrade.io/en/latest/) and are sure that the question is not explained there
Please do not use the question template to report bugs or to request new features.
-->
## Describe your environment
* Operating system: __linux__
* Python Version: __3.12.3___ (`python -V`)
* CCXT version: ___4.4.65__ (`pip freeze | grep ccxt`)
* Freqtrade Version: ___2025.2_ (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
## Your question
Suppose I searched for multiple sets of parameters using hyperopt: can I open multiple `freqtrade trade` programs to trade on the same account? For example, if I have $300 in funds, I open three programs, each with $100. This is meant to merge at the strategy level, reducing overall volatility and increasing the Sharpe ratio.
|
closed
|
2025-03-12T12:58:00Z
|
2025-03-13T05:31:37Z
|
https://github.com/freqtrade/freqtrade/issues/11494
|
[
"Question"
] |
zhaohui-yang
| 5
|
yeongpin/cursor-free-vip
|
automation
| 58
|
v1.0.10 on Mac: registration fails, human verification (CAPTCHA) check fails
|
<img width="1253" alt="Image" src="https://github.com/user-attachments/assets/46cb90f3-7db2-4dbf-8840-b27454230c6a" />
|
closed
|
2025-02-11T12:05:26Z
|
2025-02-12T07:41:57Z
|
https://github.com/yeongpin/cursor-free-vip/issues/58
|
[
"bug"
] |
hx-1999
| 8
|
ExpDev07/coronavirus-tracker-api
|
rest-api
| 62
|
Just a simple question: why do I need to install it on my computer?
|
Can't I just use the links:
https://coronavirus-tracker-api.herokuapp.com/ all/confirmed/etc'?
Like some kind of web service?
Why is there an installation part in the README?
|
closed
|
2020-03-17T14:09:11Z
|
2020-03-17T14:51:09Z
|
https://github.com/ExpDev07/coronavirus-tracker-api/issues/62
|
[
"question"
] |
NatiBerko
| 2
|
deepset-ai/haystack
|
machine-learning
| 8,912
|
`tools_strict` option in `OpenAIChatGenerator` broken with `ComponentTool`
|
**Describe the bug**
When using `ComponentTool` and setting `tools_strict=True` OpenAI API is complaining that `additionalProperties` of the schema is not `false`.
**Error message**
```
---------------------------------------------------------------------------
BadRequestError Traceback (most recent call last)
Cell In[19], line 1
----> 1 result = generator.run(messages=chat_messages["prompt"], tools=tool_invoker.tools)
2 result
File ~/.local/lib/python3.12/site-packages/haystack/components/generators/chat/openai.py:246, in OpenAIChatGenerator.run(self, messages, streaming_callback, generation_kwargs, tools, tools_strict)
237 streaming_callback = streaming_callback or self.streaming_callback
239 api_args = self._prepare_api_call(
240 messages=messages,
241 streaming_callback=streaming_callback,
(...)
244 tools_strict=tools_strict,
245 )
--> 246 chat_completion: Union[Stream[ChatCompletionChunk], ChatCompletion] = self.client.chat.completions.create(
247 **api_args
248 )
250 is_streaming = isinstance(chat_completion, Stream)
251 assert is_streaming or streaming_callback is None
File ~/.local/lib/python3.12/site-packages/ddtrace/contrib/trace_utils.py:336, in with_traced_module.<locals>.with_mod.<locals>.wrapper(wrapped, instance, args, kwargs)
334 log.debug("Pin not found for traced method %r", wrapped)
335 return wrapped(*args, **kwargs)
--> 336 return func(mod, pin, wrapped, instance, args, kwargs)
File ~/.local/lib/python3.12/site-packages/ddtrace/contrib/internal/openai/patch.py:282, in _patched_endpoint.<locals>.patched_endpoint(openai, pin, func, instance, args, kwargs)
280 resp, err = None, None
281 try:
--> 282 resp = func(*args, **kwargs)
283 return resp
284 except Exception as e:
File ~/.local/lib/python3.12/site-packages/openai/_utils/_utils.py:279, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
277 msg = f"Missing required argument: {quote(missing[0])}"
278 raise TypeError(msg)
--> 279 return func(*args, **kwargs)
File ~/.local/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py:879, in Completions.create(self, messages, model, audio, frequency_penalty, function_call, functions, logit_bias, logprobs, max_completion_tokens, max_tokens, metadata, modalities, n, parallel_tool_calls, prediction, presence_penalty, reasoning_effort, response_format, seed, service_tier, stop, store, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
837 @required_args(["messages", "model"], ["messages", "model", "stream"])
838 def create(
839 self,
(...)
876 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
877 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
878 validate_response_format(response_format)
--> 879 return self._post(
880 "/chat/completions",
881 body=maybe_transform(
882 {
883 "messages": messages,
884 "model": model,
885 "audio": audio,
886 "frequency_penalty": frequency_penalty,
887 "function_call": function_call,
888 "functions": functions,
889 "logit_bias": logit_bias,
890 "logprobs": logprobs,
891 "max_completion_tokens": max_completion_tokens,
892 "max_tokens": max_tokens,
893 "metadata": metadata,
894 "modalities": modalities,
895 "n": n,
896 "parallel_tool_calls": parallel_tool_calls,
897 "prediction": prediction,
898 "presence_penalty": presence_penalty,
899 "reasoning_effort": reasoning_effort,
900 "response_format": response_format,
901 "seed": seed,
902 "service_tier": service_tier,
903 "stop": stop,
904 "store": store,
905 "stream": stream,
906 "stream_options": stream_options,
907 "temperature": temperature,
908 "tool_choice": tool_choice,
909 "tools": tools,
910 "top_logprobs": top_logprobs,
911 "top_p": top_p,
912 "user": user,
913 },
914 completion_create_params.CompletionCreateParams,
915 ),
916 options=make_request_options(
917 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
918 ),
919 cast_to=ChatCompletion,
920 stream=stream or False,
921 stream_cls=Stream[ChatCompletionChunk],
922 )
File ~/.local/lib/python3.12/site-packages/openai/_base_client.py:1290, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1276 def post(
1277 self,
1278 path: str,
(...)
1285 stream_cls: type[_StreamT] | None = None,
1286 ) -> ResponseT | _StreamT:
1287 opts = FinalRequestOptions.construct(
1288 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1289 )
-> 1290 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File ~/.local/lib/python3.12/site-packages/openai/_base_client.py:967, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
964 else:
965 retries_taken = 0
--> 967 return self._request(
968 cast_to=cast_to,
969 options=options,
970 stream=stream,
971 stream_cls=stream_cls,
972 retries_taken=retries_taken,
973 )
File ~/.local/lib/python3.12/site-packages/openai/_base_client.py:1071, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
1068 err.response.read()
1070 log.debug("Re-raising status error")
-> 1071 raise self._make_status_error_from_response(err.response) from None
1073 return self._process_response(
1074 cast_to=cast_to,
1075 options=options,
(...)
1079 retries_taken=retries_taken,
1080 )
BadRequestError: Error code: 400 - {'error': {'message': "Invalid schema for function 'web_search': In context=(), 'additionalProperties' is required to be supplied and to be false.", 'type': 'invalid_request_error', 'param': 'tools[0].function.parameters', 'code': 'invalid_function_parameters'}}
```
**Expected behavior**
`use_strict=True` is working
**Additional context**
Add any other context about the problem here, like document types / preprocessing steps / settings of reader etc.
**To Reproduce**
```python
from haystack.components.generators.chat.openai import OpenAIChatGenerator
from haystack.dataclasses.chat_message import ChatMessage
gen = OpenAIChatGenerator.from_dict({'type': 'haystack.components.generators.chat.openai.OpenAIChatGenerator',
'init_parameters': {'model': 'gpt-4o',
'streaming_callback': None,
'api_base_url': None,
'organization': None,
'generation_kwargs': {},
'api_key': {'type': 'env_var',
'env_vars': ['OPENAI_API_KEY'],
'strict': False},
'timeout': None,
'max_retries': None,
'tools': [{'type': 'haystack.tools.component_tool.ComponentTool',
'data': {'name': 'web_search',
'description': 'Search the web for current information on any topic',
'parameters': {'type': 'object',
'properties': {'query': {'type': 'string',
'description': 'Search query.'}},
'required': ['query']},
'component': {'type': 'haystack.components.websearch.serper_dev.SerperDevWebSearch',
'init_parameters': {'top_k': 10,
'allowed_domains': None,
'search_params': {},
'api_key': {'type': 'env_var',
'env_vars': ['SERPERDEV_API_KEY'],
'strict': False}}}}}],
'tools_strict': True}})
gen.run([ChatMessage.from_user("How is the weather today in Berlin?")])
```
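For context, the 400 error above comes from OpenAI's strict-mode schema validation: with `tools_strict=True`, every object schema in the function parameters must declare `"additionalProperties": false`. A minimal sketch of a compliant parameter schema (whether `ComponentTool` accepts this shape directly is an assumption, not verified here):

```python
# OpenAI strict function calling requires every object schema to set
# "additionalProperties": false; the shape below mirrors the web_search
# tool's parameters from the reproduction above.
parameters = {
    "type": "object",
    "properties": {
        "query": {"type": "string", "description": "Search query."},
    },
    "required": ["query"],
    "additionalProperties": False,
}
```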
**FAQ Check**
- [ ] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS:
- GPU/CPU:
- Haystack version (commit or version number): 2.10.2
- DocumentStore:
- Reader:
- Retriever:
|
closed
|
2025-02-24T13:42:08Z
|
2025-03-03T15:23:26Z
|
https://github.com/deepset-ai/haystack/issues/8912
|
[
"P1"
] |
tstadel
| 0
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 807
|
[Question] TTS - 3 question of generation voices?
|
I have questions about voice generation:
1) TTS: can I change the speed of speech, faster/slower?
2) TTS: can I change the emotion, e.g. Neutral, Expressive, Anger, Fear, Happy, Sad, Shouting?
3) TTS: can I edit a dictionary of word-to-pronunciation equivalents?
Thx :)
|
closed
|
2021-07-23T15:28:37Z
|
2021-08-25T09:07:49Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/807
|
[] |
SaraDark
| 1
|
NullArray/AutoSploit
|
automation
| 628
|
Unhandled Exception (e08e877b3)
|
Autosploit version: `3.0`
OS information: `Linux-4.18.0-10-generic-x86_64-with-Ubuntu-18.10-cosmic`
Running context: `autosploit.py --whitelist ************************`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/home/gabriel/Escritorio/AutoSploit-master/autosploit/main.py", line 103, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/home/gabriel/Escritorio/AutoSploit-master/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
|
closed
|
2019-04-04T12:22:38Z
|
2019-04-18T17:31:34Z
|
https://github.com/NullArray/AutoSploit/issues/628
|
[] |
AutosploitReporter
| 0
|
igorbenav/fastcrud
|
sqlalchemy
| 79
|
Other missing SQLAlchemy features
|
There are a few SQL features that are relevant and currently missing from `FastCRUD`. I'll list some of them, and you guys may add what I forget:
- [ ] Add possibility of `distinct` clause
- [ ] Add `group by`
- [ ] `like`, `ilike`, `between` operators
- [ ] `is_` and `isnot_` operators
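For reference, the requested operators already exist in SQLAlchemy Core, so `FastCRUD` would mostly need to expose them. A small sketch (the table and columns are hypothetical, SQLAlchemy 1.4+ `select()` style assumed):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, func, select

metadata = MetaData()
# Hypothetical table used only for illustration.
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
    Column("age", Integer),
)

# DISTINCT clause
q1 = select(users.c.name).distinct()
# GROUP BY with an aggregate
q2 = select(users.c.name, func.count()).group_by(users.c.name)
# ilike + between operators combined in a WHERE clause
q3 = select(users).where(users.c.name.ilike("a%"), users.c.age.between(18, 65))
# is_ operator (NULL check); is_not is the symmetric counterpart
q4 = select(users).where(users.c.name.is_(None))
```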
|
closed
|
2024-05-08T05:51:53Z
|
2024-06-22T05:27:23Z
|
https://github.com/igorbenav/fastcrud/issues/79
|
[
"enhancement",
"FastCRUD Methods"
] |
igorbenav
| 12
|
deezer/spleeter
|
tensorflow
| 615
|
Error: AttributeError: module 'ffmpeg' has no attribute '_run'
|
- [x] I didn't find a similar issue already open.
- [x] I read the documentation (README AND Wiki)
- [x] I have installed FFMpeg
- [x] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others)
## Description
When running spleeter to extract stems from a song, I keep getting this error message:
`File "/Users/afolabi/Library/Python/3.8/lib/python/site-packages/spleeter/audio/ffmpeg.py", line 102, in load
except ffmpeg._run.Error as e:
AttributeError: module 'ffmpeg' has no attribute '_run'`
I've installed ffmpeg and read the docs. I've googled extensively for this but can't find anything.
|
closed
|
2021-04-22T22:59:31Z
|
2022-10-17T14:05:43Z
|
https://github.com/deezer/spleeter/issues/615
|
[
"bug",
"invalid"
] |
afolabiaji
| 3
|
marshmallow-code/apispec
|
rest-api
| 566
|
Marshmallow plugin bug with string schema ref and schema_name_resolver
|
`resolve_schema_dict` claims to support `string` as its schema parameter type. It can get there for example when you are parsing schema from docstring using the provided yaml util, or using FlaskPlugin to do it for you.
But, when `schema_name_resolver` returns `None` for this case, the schema gets forwarded to `schema2jsonschema` which expects a `marshmallow.Schema` instance and crashes.
Test case is a slightly modified "example application":
<details>
<summary>Show example test case</summary>
```python
import uuid
from apispec import APISpec
from apispec.ext.marshmallow import MarshmallowPlugin
from apispec_webframeworks.flask import FlaskPlugin
from flask import Flask, jsonify
from marshmallow import fields, Schema
def schema_name_resolver(schema):
if schema == "PetSchema":
return None
# Create an APISpec
spec = APISpec(
title="Swagger Petstore",
version="1.0.0",
openapi_version="3.0.2",
plugins=[
FlaskPlugin(),
MarshmallowPlugin(schema_name_resolver=schema_name_resolver),
],
)
# Optional marshmallow support
class CategorySchema(Schema):
id = fields.Int()
name = fields.Str(required=True)
class PetSchema(Schema):
categories = fields.List(fields.Nested(CategorySchema))
name = fields.Str()
# Optional Flask support
app = Flask(__name__)
@app.route("/random")
def random_pet():
"""A cute furry animal endpoint.
---
get:
description: Get a random pet
responses:
200:
description: Return a pet
content:
application/json:
schema: PetSchema
"""
# Hardcoded example data
pet_data = {
"name": "sample_pet_" + str(uuid.uuid1()),
"categories": [{"id": 1, "name": "sample_category"}],
}
return PetSchema().dump(pet_data)
# Register the path and the entities within it
with app.test_request_context():
spec.path(view=random_pet)
@app.route("/spec.json")
def get_spec():
return jsonify(spec.to_dict())
```
</details>
<details>
<summary>Show example stacktrace</summary>
```
Traceback (most recent call last):
File "/home//PycharmProjects/test-apispec/main.py", line 62, in <module>
spec.path(view=random_pet)
File "/home//.local/share/virtualenvs/test-apispec-HubT4B3L/lib/python3.8/site-packages/apispec/core.py", line 280, in path
plugin.operation_helper(path=path, operations=operations, **kwargs)
File "/home//.local/share/virtualenvs/test-apispec-HubT4B3L/lib/python3.8/site-packages/apispec/ext/marshmallow/__init__.py", line 202, in operation_helper
self.resolver.resolve_response(response)
File "/home//.local/share/virtualenvs/test-apispec-HubT4B3L/lib/python3.8/site-packages/apispec/ext/marshmallow/schema_resolver.py", line 113, in resolve_response
self.resolve_schema(response)
File "/home//.local/share/virtualenvs/test-apispec-HubT4B3L/lib/python3.8/site-packages/apispec/ext/marshmallow/schema_resolver.py", line 161, in resolve_schema
content["schema"] = self.resolve_schema_dict(content["schema"])
File "/home//.local/share/virtualenvs/test-apispec-HubT4B3L/lib/python3.8/site-packages/apispec/ext/marshmallow/schema_resolver.py", line 218, in resolve_schema_dict
return self.converter.resolve_nested_schema(schema)
File "/home//.local/share/virtualenvs/test-apispec-HubT4B3L/lib/python3.8/site-packages/apispec/ext/marshmallow/openapi.py", line 96, in resolve_nested_schema
json_schema = self.schema2jsonschema(schema)
File "/home//.local/share/virtualenvs/test-apispec-HubT4B3L/lib/python3.8/site-packages/apispec/ext/marshmallow/openapi.py", line 261, in schema2jsonschema
fields = get_fields(schema)
File "/home//.local/share/virtualenvs/test-apispec-HubT4B3L/lib/python3.8/site-packages/apispec/ext/marshmallow/common.py", line 63, in get_fields
raise ValueError(
ValueError: 'PetSchema' doesn't have either `fields` or `_declared_fields`.
```
</details>
I think it should be enough to just pass `schema_instance` instead of `schema` to `self.schema2jsonschema` in `resolve_nested_schema`
|
closed
|
2020-05-21T11:32:58Z
|
2020-06-06T09:26:41Z
|
https://github.com/marshmallow-code/apispec/issues/566
|
[] |
black3r
| 1
|
dmlc/gluon-cv
|
computer-vision
| 1,416
|
"IndexError: too many indices for array" when training SSD with subset of VOC
|
I would like to train an SSD model with only a subset of VOC: [ "person", "dog" ].
However, I encountered an error with [VOCLike](https://gluon-cv.mxnet.io/build/examples_datasets/detection_custom.html?highlight=voclike#pascal-voc-like) when using a subset of classes. It works if I expand to the full class list:
```
class VOCLike(gdata.VOCDetection):
CLASSES = ['person','dog'] # not working
# CLASSES = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'] # this works
def __init__(self, root, splits, transform=None, index_map=None, preload_label=True):
super(VOCLike, self).__init__(root, splits, transform, index_map, preload_label)
```
Error message:
```
INFO:root:Namespace(amp=False, batch_size=32, dali=False, data_shape=300, dataset='voc', dataset_root='.', epochs=240, gpus='0', horovod=False, log_interval=100, lr=0.001, lr_decay=0.1, lr_decay_epoch='160,200', momentum=0.9, network='resnet34_v1b', num_workers=4, resume='', save_interval=10, save_prefix='ssd_300_resnet34_v1b_voc', seed=233, start_epoch=0, syncbn=False, val_interval=1, wd=0.0005)
INFO:root:Start training from [Epoch 0]
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/Users/richardkang/training/lib/python3.8/site-packages/mxnet/gluon/data/dataloader.py", line 450, in _worker_fn
batch = batchify_fn([_worker_dataset[i] for i in samples])
File "/Users/richardkang/training/lib/python3.8/site-packages/mxnet/gluon/data/dataloader.py", line 450, in <listcomp>
batch = batchify_fn([_worker_dataset[i] for i in samples])
File "/Users/richardkang/training/lib/python3.8/site-packages/mxnet/gluon/data/dataset.py", line 219, in __getitem__
return self._fn(*item)
File "/Users/richardkang/training/lib/python3.8/site-packages/gluoncv/data/transforms/presets/ssd.py", line 153, in __call__
bbox = tbbox.translate(label, x_offset=expand[0], y_offset=expand[1])
File "/Users/richardkang/training/lib/python3.8/site-packages/gluoncv/data/transforms/bbox.py", line 160, in translate
bbox[:, :2] += (x_offset, y_offset)
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "train_ssd.py", line 431, in <module>
train(net, train_data, val_data, eval_metric, ctx, args)
File "train_ssd.py", line 316, in train
for i, batch in enumerate(train_data):
File "/Users/richardkang/training/lib/python3.8/site-packages/mxnet/gluon/data/dataloader.py", line 506, in __next__
batch = pickle.loads(ret.get(self._timeout))
File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/pool.py", line 771, in get
raise self._value
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
```
To reproduce
1. Prepare [VOC dataset](https://gluon-cv.mxnet.io/build/examples_datasets/pascal_voc.html#sphx-glr-build-examples-datasets-pascal-voc-py)
2. Download [train_ssd.py](https://gluon-cv.mxnet.io/build/examples_detection/train_ssd_voc.html#sphx-glr-build-examples-detection-train-ssd-voc-py)
3. Create `VOCLike` and replace `gdata.VOCDetection` in `train_ssd.py::get_dataset()` to use `VOCLike`
4. Define only sub classes in `VOCLike`
5. Run the `train_ssd.py` and the error will appear
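A plausible cause (an assumption here, not verified against the gluon-cv source): images that contain none of the subset classes yield empty, 1-D label arrays, which then break the 2-D slicing in `bbox.translate`. A guard along these lines could be applied to labels before the transform:

```python
import numpy as np

def ensure_2d_label(label):
    # Images with no objects from the subset classes can produce an empty,
    # 1-D label array; bbox transforms then fail on label[:, :2].
    # Reshaping to (0, 6) -- xmin, ymin, xmax, ymax, cls_id, difficult --
    # keeps downstream slicing valid. The 6-column layout is an assumption
    # based on the usual VOC label format.
    label = np.asarray(label)
    if label.ndim == 1:
        label = label.reshape(-1, 6)
    return label
```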
System details:
1. OS: OSX 10.15.5
2. Python: Python 3.8.5, pip details:
```
% python -m pip freeze
certifi==2020.6.20
chardet==3.0.4
cycler==0.10.0
gluoncv==0.8.0
graphviz==0.8.4
idna==2.10
imgviz==1.2.2
kiwisolver==1.2.0
labelme==4.5.6
lxml==4.5.2
matplotlib==3.3.0
mxnet==1.6.0
numpy==1.19.1
opencv-python==4.4.0.40
Pillow==7.2.0
portalocker==2.0.0
pyparsing==2.4.7
PyQt5==5.15.0
PyQt5-sip==12.8.0
python-dateutil==2.8.1
PyYAML==5.3.1
QtPy==1.9.0
requests==2.24.0
scipy==1.5.2
six==1.15.0
termcolor==1.1.0
tqdm==4.48.2
urllib3==1.25.10
```
|
closed
|
2020-08-14T00:58:46Z
|
2021-05-22T06:40:32Z
|
https://github.com/dmlc/gluon-cv/issues/1416
|
[
"Stale"
] |
kangks
| 2
|
psf/requests
|
python
| 6,372
|
InvalidHeader exception when using subclass of str or bytes
|
An `InvalidHeader` exception is raised while preparing a request when a header value is a subclass of `str` or `bytes` instead of `str` or `bytes` themselves.
## Origin
This is due to the type checking in `check_header_validity` in `utils.py`. This type check uses `type` instead of `isinstance`.
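The difference matters because `type` comparison rejects subclasses while `isinstance` accepts them, as a quick sketch shows:

```python
class HeaderValue(str):
    """Hypothetical str subclass carrying extra per-header behavior."""

value = HeaderValue("Bearer abc")

# type() comparison (what the validity check effectively does) rejects it:
assert type(value) is not str
# isinstance() accepts the subclass, which is the behavior the issue asks for:
assert isinstance(value, str)
```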
## Context
The need to use a subclass of `str` instead of `str` comes from the need to have two "Authorization" headers, rather than just one with two tokens separated by a comma.
PS: I didn't build the API I am requesting, don't blame me.
|
closed
|
2023-03-07T14:49:34Z
|
2024-03-07T00:02:47Z
|
https://github.com/psf/requests/issues/6372
|
[] |
hpierre001
| 2
|
BayesWitnesses/m2cgen
|
scikit-learn
| 298
|
Error when compiling exported Java code using javac: Too many constants
|
Hi!
I have problems when compiling the generated Java Code from m2cgen.
I am using an XGBClassifier with n_estimators = 400.
The generated code is ~360k lines long, which is around 17MB.
Unfortunately, this code does not compile with javac, as I get this error:
```
Model.java:1: error: too many constants
public class Model {
^
1 error
```
Using the approach suggested in #297, I have managed to split the score function into subroutines, which unfortunately does not fix my problem.
Is there any workaround for this type of error?
Manually adjusting the file is not really an option since I am planning to create a large number of models and compare them.
|
open
|
2020-08-21T09:57:23Z
|
2020-09-04T21:32:18Z
|
https://github.com/BayesWitnesses/m2cgen/issues/298
|
[] |
kijz
| 2
|
iterative/dvc
|
data-science
| 9,754
|
fetch/push/status: not handling config from other revisions
|
When branches have wildly different remote setups, those configs are not taken into account during fetch/push/status -c --all-tags/branches/etc
Example:
```
#!/bin/bash
set -e
set -x
rm -rf mytest
mkdir mytest
cd mytest
mkdir remote1
mkdir remote2
remote1="$(pwd)/remote1"
remote2="$(pwd)/remote2"
mkdir repo
cd repo
git init
dvc init
git commit -m "init"
git branch branch1
git branch branch2
git checkout branch1
echo foo > foo
dvc add foo
dvc remote add -d myremote1 $remote1
dvc push
git add .gitignore foo.dvc .dvc/config
git commit -m "foo"
git checkout branch2
echo bar > bar
dvc add bar
dvc remote add -d myremote2 $remote2
dvc push
git add .gitignore bar.dvc .dvc/config
git commit -m "bar"
git checkout main
rm -rf .dvc/cache
dvc fetch --all-branches
tree .dvc/cache # will show 0 files
```
Studio uses real `git checkout` to collect objects and has been doing that for years as a workaround, but I couldn't find an issue in dvc yet.
To fix this we should make `config` part of `Index`(same as stages, outs, etc are, don't confuse with DataIndex) and use it to build `Index.data`. This is the easiest to do in `dvc fetch` because it is using `Index.data` already, but might require temporary workarounds for `push/status -c` like manually triggering config reloading in brancher or something.
- [x] fetch (#9758)
- [ ] push (requires https://github.com/iterative/dvc/issues/9333)
- [ ] status -c
|
open
|
2023-07-24T20:51:26Z
|
2024-10-23T08:06:33Z
|
https://github.com/iterative/dvc/issues/9754
|
[
"bug",
"p2-medium",
"A: data-sync"
] |
efiop
| 2
|
ray-project/ray
|
pytorch
| 51,218
|
[Core] get_user_temp_dir() Doesn't Honor the User Specified Temp Dir
|
### What happened + What you expected to happen
There are a couple of places in the code that use `ray._private.utils.get_user_temp_dir()` to get the temporary directory for Ray. However, if the user specifies `--temp-dir` during `ray start`, `get_user_temp_dir()` won't honor the custom temp directory.
The function or the usage of the function needs to be fixed to honor the user specified temp directory.
### Versions / Dependencies
Nightly
### Reproduction script
1. Start ray using: `ray start --head --temp-dir=/tmp/my-temp-dir`
2. Run the following:
```
import ray
ray._private.utils.get_user_temp_dir() # it will return 'tmp' instead of the tmp directory specified in ray start
```
### Issue Severity
Low: It annoys or frustrates me.
|
open
|
2025-03-10T17:45:56Z
|
2025-03-21T23:10:02Z
|
https://github.com/ray-project/ray/issues/51218
|
[
"bug",
"P1",
"core"
] |
MengjinYan
| 0
|
coqui-ai/TTS
|
pytorch
| 3,745
|
[Bug] Anyway to run this as docker-compose ?
|
### Describe the bug
Anyway to run this as docker-compose ?
### To Reproduce
docker-compose up
### Expected behavior
N/A
### Logs
```shell
N/A
```
### Environment
```shell
N/A
```
### Additional context
N/A
|
closed
|
2024-05-16T23:08:26Z
|
2024-07-27T06:33:40Z
|
https://github.com/coqui-ai/TTS/issues/3745
|
[
"bug",
"wontfix"
] |
PeterTucker
| 4
|
BeastByteAI/scikit-llm
|
scikit-learn
| 13
|
`GPTVectorizer().fit_transform(X)` always returns `RuntimeError`
|
Hi! First of all very nice work!
I was trying the embedding utility with something as simple as:
```python
from skllm.datasets import get_classification_dataset
X, _ = get_classification_dataset()
model = GPTVectorizer()
vectors = model.fit_transform(X)
```
however, I always get:
```bash
RuntimeError: Could not obtain the embedding after retrying 3 times.
Last captured error: `<empty message>
```
I also tried with a custom dataset and with some simple strings.
Am I doing something wrong?
|
closed
|
2023-05-23T22:39:44Z
|
2023-05-24T18:27:46Z
|
https://github.com/BeastByteAI/scikit-llm/issues/13
|
[] |
nicola-corbellini
| 12
|
MagicStack/asyncpg
|
asyncio
| 755
|
advice: best bulk upsert method that still allows to track # of affected rows?
|
I've been relying on the newest implementation of executemany() to perform bulk upserts, but it has the shortcoming that it does not allow one to easily determine the number of affected rows by parsing the status message.
The number of effectively upserted rows can easily be less than the number of rows I attempt to upsert, since I qualify my ON CONFLICT clause with a further WHERE clause specifying that the update should only happen if the new and excluded tuples are distinct.
```
INSERT INTO "table_name" AS __destination_row (
id,
other_column
) VALUES ($1, $2)
ON CONFLICT (id)
DO UPDATE SET
id = excluded.id,
other_column = excluded.other_column
WHERE
(__destination_row.id IS DISTINCT FROM excluded.id)
OR
(__destination_row.other_column IS DISTINCT FROM excluded.other_column)
;
```
(regular Postgres would allow for a much terser syntax, but this is the only syntax that is accepted by CockroachDB)
Suppose that at times knowing the exact number of effectively upserted rows is more crucial than the bulk performance, and yet I would prefer not to go to the extreme of upserting one row at a time, what would be the best compromise?
Should I rely on a temporary table and then upserting into the physical tables from that temporary table?
```
INSERT INTO "table_name" AS __destination_row (
id,
other_column
) SELECT (
id,
other_column
) FROM "__temp_table_name"
ON CONFLICT (id)
DO UPDATE SET
id = excluded.id,
other_column = excluded.other_column
WHERE
(__destination_row.id IS DISTINCT FROM excluded.id)
OR
(__destination_row.other_column IS DISTINCT FROM excluded.other_column)
;
```
Should I instead use a transaction with several individual upserts of values once again provided by the client?
Are there other approaches I should explore?
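For whichever variant ends up being a single statement (e.g. the temp-table `INSERT ... SELECT`), asyncpg's `Connection.execute()` returns the server's command tag, and the affected-row count can be parsed from its last token — a small helper sketch (the tag format shown is standard PostgreSQL; whether CockroachDB emits identical tags is an assumption):

```python
def affected_rows(command_tag: str) -> int:
    # PostgreSQL command tags look like "INSERT 0 5" or "UPDATE 3";
    # the final token is the number of rows the statement affected.
    return int(command_tag.rsplit(" ", 1)[-1])
```

Usage would then be along the lines of `status = await conn.execute(upsert_sql)` followed by `n = affected_rows(status)`.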
|
open
|
2021-05-06T06:59:59Z
|
2021-09-30T20:49:49Z
|
https://github.com/MagicStack/asyncpg/issues/755
|
[] |
ale-dd
| 8
|
Gozargah/Marzban
|
api
| 713
|
Incorrect traffic volume calculation
|
Hello,
When a user connects to a node through Cloudflare, the server calculates the traffic volume as several times the actual usage, but a direct connection works fine.
This happens especially when a speed test is run.
|
closed
|
2023-12-26T00:48:26Z
|
2023-12-27T11:01:33Z
|
https://github.com/Gozargah/Marzban/issues/713
|
[
"Bug"
] |
theonewhopassed
| 2
|
ml-tooling/opyrator
|
fastapi
| 45
|
launch-api does not support file upload
|

I've tested the launch-ui and everything works fine.
But when I went to use the API and tried it with Postman, I encountered this problem:
```
{
"detail": [
{
"loc": [
"body"
],
"msg": "value is not a valid dict",
"type": "type_error.dict"
}
]
}
```
I think the definition is not compatible with FastAPI? Should I change that into some Form format?
```
class Input(BaseModel):
file: FileContent = Field(..., mime_type="application/x-sqlite3")
calc_type: str = 'PF'
```
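If `FileContent` is serialized as a base64 string inside the JSON body (an assumption about opyrator's transport, not confirmed here), then Postman would need to send `application/json` rather than multipart form data — roughly:

```python
import base64
import json

# Hypothetical request body: the file bytes are base64-encoded and placed
# directly in the JSON payload alongside the other Input fields.
raw = b"\x00\x01\x02"  # stand-in for the SQLite file bytes
body = json.dumps({
    "file": base64.b64encode(raw).decode("ascii"),
    "calc_type": "PF",
})
# POST this string to /call with Content-Type: application/json
```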
Addendum:
When changing the API URL to `http://127.0.0.1:8080/call/`, the response becomes:
```
WARNING: Invalid HTTP request received.
Traceback (most recent call last):
File "/Users/dragonszy/miniconda3/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 132, in data_received
self.parser.feed_data(data)
File "httptools/parser/parser.pyx", line 212, in httptools.parser.parser.HttpParser.feed_data
httptools.parser.errors.HttpParserInvalidMethodError: Invalid method encountered
INFO: 127.0.0.1:54894 - "POST /call HTTP/1.1" 422 Unprocessable Entity
INFO: 127.0.0.1:54993 - "POST /call HTTP/1.1" 422 Unprocessable Entity
```
|
closed
|
2021-12-31T08:07:42Z
|
2022-04-15T03:15:22Z
|
https://github.com/ml-tooling/opyrator/issues/45
|
[
"bug",
"stale"
] |
loongmxbt
| 2
|
JaidedAI/EasyOCR
|
pytorch
| 1,099
|
multilanguage train
|
Hi, how does detection in two languages work in general? Earlier I tried detection using easyocr with the languages ru and en; in principle it works, but so far it does not detect well, so I decided to annotate my own dataset and retrain the model.
However, I ran into a problem: the texts I need to detect look like P6ОЖ00, 9OОЖ08, 3QОЖ2383, and so on (the first two characters can be either digits or English letters, the next two are Cyrillic, and the rest are digits). I can't train because I get an encoding-related error, so now I don't know how to use my dataset of 4000 texts to train a model.
|
open
|
2023-07-28T10:39:57Z
|
2023-07-28T10:39:57Z
|
https://github.com/JaidedAI/EasyOCR/issues/1099
|
[] |
Sherlock-IT
| 0
|
dynaconf/dynaconf
|
flask
| 1,261
|
[bug] Overrides not working with environments
|
**Describe the bug**
When using environments, a key in non-default environment doesn't override default.
The below example results in
```bash
Working in prod environment
FOO<str> 'DEFAULT'
```
**To Reproduce**
Steps to reproduce the behavior:
1. Having the following folder structure
<!-- Describe or use the command `$ tree -v` and paste below -->
<details>
<summary> Project structure </summary>
```bash
conf
├── config.py
├── default.yaml
└── prod.yaml
```
</details>
2. Having the following config files:
<!-- Please adjust if you are using different files and formats! -->
<details>
<summary> Config files </summary>
**default.yaml**
```yaml
default:
foo: DEFAULT
```
**prod.yaml**
```yaml
prod:
foo: PROD
```
</details>
3. Having the following app code:
<details>
<summary> Code </summary>
**config.py**
```python
from dynaconf import Dynaconf
config = Dynaconf(
settings_files=["*.yaml"],
environments=True,
env_switcher="ENV",
)
```
</details>
4. Executing under the following environment
<details>
<summary> Execution </summary>
```bash
ENV=prod dynaconf -i config.config list
```
</details>
**Expected behavior**
Should result in:
```bash
Working in prod environment
FOO<str> 'PROD'
```
**Environment (please complete the following information):**
- OS: MacOS 15.3.1
- Dynaconf Version 3.2.10
- Frameworks in use: N/A
**Additional context**
It is not completely clear from the documentation if this is the intended behaviour or not.
I assumed a non-default environment would override everything (top-level or nested) from a `default` environment.
[Default-override workflow](https://www.dynaconf.com/faq/#default-override-workflow) says this, which makes me believe this is a bug:
> By default, top-level settings with different keys will both be kept, **while the ones with the same keys will be overridden.**
|
open
|
2025-02-28T11:53:34Z
|
2025-03-07T17:57:39Z
|
https://github.com/dynaconf/dynaconf/issues/1261
|
[
"bug"
] |
rsmeral
| 0
|
mirumee/ariadne
|
api
| 374
|
Proposal: ariadne-relay
|
@rafalp and @patrys, I dropped a comment on #188 a couple of weeks ago, but I think it might have gone under the radar since that issue is closed, so I'm going to make a more formal proposal here.
I've already built a quick-and-dirty implementation for **Relay** connections, based on a decorator that wraps an ObjectType resolver that emits a Sequence. The implementation resolves the connection using [graphql-relay-py](https://github.com/graphql-python/graphql-relay-py). It works well, but it's too specific to my case to be offered as a generalized implementation.
I agree with the assessment in #188 that the right path forward here is for there to be an independent **ariadne-relay** package. It seems reasonable to build it on top of **graphql-relay-py** so it is synchronised closely to the [reference Relay codebase](https://github.com/graphql/graphql-relay-js). I'm willing to take ownership of this project, but I want to introduce the approach I have in mind here for comment, prior to doing further work.
Proposed implementation:
* ~~**ariadne-relay** will provide a `ConnectionType` class, as a subclass of `ariadne.ObjectType`, which takes care of the boilerplate glue between an iterable/sliceable object returned by another resolver and **graphql-relay-py**. This class will have a `set_count_resolver()` method and a `count_resolver` decorator to allow for control over how the count is derived.~~
* **ariadne-relay** will provide a `NodeType` class, as a subclass of `Ariadne.InterfaceType`, which will help with the formation of `ID` values, leveraging the methods in **graphql-relay-py**. This class will have a `set_id_value_resolver()` method and a `id_value_resolver` decorator to allow for control over how ids are derived.
* I haven't worked yet with `graphql_relay.mutation` so I'm not sure how that might be leveraged, but at a glance it seems like it should be possible to do something useful here.
* **ariadne-relay** will adhere to Ariadne's dev and coding standards, so it stays a close cousin.
Does this seem like a reasonable basic direction? It seems from #188 that you guys have put a bit of consideration into this already, so perhaps you have some insight that I'm missing?
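For the `ID` handling specifically, the Relay convention that graphql-relay-py follows is base64-encoding a `"Type:id"` string; a stdlib-only sketch of what a `NodeType` helper would wrap (function names here mirror graphql-relay-py's, but this is an illustration, not its actual implementation):

```python
import base64

def to_global_id(type_name: str, local_id: str) -> tuple:
    # Relay global IDs are conventionally opaque base64 of "Type:id".
    return base64.b64encode(f"{type_name}:{local_id}".encode()).decode()

def from_global_id(global_id: str):
    # Split back into (type_name, local_id) on the first colon.
    type_name, _, local_id = base64.b64decode(global_id).decode().partition(":")
    return type_name, local_id
```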
|
closed
|
2020-05-31T18:46:47Z
|
2021-03-19T12:19:03Z
|
https://github.com/mirumee/ariadne/issues/374
|
[
"discussion"
] |
markedwards
| 4
|
unit8co/darts
|
data-science
| 1,795
|
[BUG] RMSE and MAE have same result when using backtest()
|
**Describe the bug**
When using RMSE and MAE as metrics for the backtest, the results are the same. I do not understand why, as the implemented metrics look correct. I am using the backtest function to perform one-step-ahead forecasting.
**To Reproduce**
```py
import pandas as pd
from pytorch_lightning.callbacks import EarlyStopping
from ray import tune
from ray.tune import CLIReporter
from ray.tune.integration.pytorch_lightning import TuneReportCallback
from ray.tune.schedulers import ASHAScheduler
from torchmetrics import (
MeanAbsoluteError,
MeanSquaredError,
MetricCollection,
)
from darts.dataprocessing.transformers import Scaler
from darts.datasets import AirPassengersDataset
from darts.models import NBEATSModel
from darts.metrics import rmse, mae
def train_model(model_args, callbacks, train, val):
torch_metrics = MetricCollection(
[MeanAbsoluteError(), MeanSquaredError(squared=False)]
)
# Create the model using model_args from Ray Tune
model = NBEATSModel(
input_chunk_length=24,
output_chunk_length=1,
n_epochs=1,
torch_metrics=torch_metrics,
pl_trainer_kwargs={
"callbacks": callbacks,
"enable_progress_bar": False,
"accelerator": "auto",
},
**model_args
)
pred = model.backtest(
series=series,
start=len(train),
forecast_horizon=1,
stride=1,
retrain=True,
verbose=False,
metric=[mae, rmse],
)
tune.report(mae=pred[0], rmse=pred[1])
# Read data:
series = AirPassengersDataset().load()
# Create training and validation sets:
train, val = series.split_after(pd.Timestamp(year=1957, month=12, day=1))
# Normalize the time series (note: we avoid fitting the transformer on the validation set)
transformer = Scaler()
transformer.fit(train)
train = transformer.transform(train)
val = transformer.transform(val)
# set up ray tune callback
tune_callback = TuneReportCallback(
{
"loss": "val_loss",
"mae": "val_MeanAbsoluteError",
"rmse": "val_MeanSquaredError",
},
on="validation_end",
)
# define the hyperparameter space
config = {
"batch_size": tune.choice([16, 32, 64, 128]),
"num_blocks": tune.choice([1, 2, 3, 4, 5]),
"num_stacks": tune.choice([32, 64, 128]),
"dropout": tune.uniform(0, 0.2),
}
reporter = CLIReporter(
parameter_columns=list(config.keys()),
metric_columns=["loss", "rmse", "mae", "training_iteration"],
)
train_fn_with_parameters = tune.with_parameters(
train_model,
callbacks=[tune_callback],
train=train,
val=val,
)
# optimize hyperparameters by minimizing the MAPE on the validation set
analysis = tune.run(
train_fn_with_parameters,
resources_per_trial={"cpu": 12, "gpu": 1},
# Using a metric instead of loss allows for
# comparison between different likelihood or loss functions.
metric="rmse", # any value in TuneReportCallback.
mode="min",
config=config,
num_samples=1,
scheduler=ASHAScheduler(max_t=100, grace_period=3, reduction_factor=2),
progress_reporter=reporter,
name="tune_darts",
)
print("Best hyperparameters found were: ", analysis.best_config)
```
**Expected behavior**
When running the code above, the RMSE and MAE were both 886.74. I understand that the RMSE and MAE can be similar if all values are small, so is this due to the scaler?
**System (please complete the following information):**
- Python version: 3.10.11
- darts version: 0.24.0
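One way RMSE and MAE can legitimately coincide (my own reasoning, not from the darts docs) is when every evaluation window holds a single point: with `forecast_horizon=1`, each window's MAE and RMSE both reduce to `|error|`, so averaging per-window metrics gives identical numbers. Whether this applies here depends on how `backtest` aggregates (e.g. the `last_points_only` behavior), but the arithmetic can be checked with a small numpy sketch using hypothetical per-step errors:

```python
import numpy as np

# Hypothetical per-step forecast errors from a one-step-ahead backtest.
errors = np.array([2.0, -3.0, 1.5, -0.5])

# Per-window metrics when each window holds exactly one point:
mae_per_window = np.abs(errors)          # |e|
rmse_per_window = np.sqrt(errors ** 2)   # sqrt(e^2) == |e|

# Averaged over the windows, the two metrics are identical.
print(mae_per_window.mean(), rmse_per_window.mean())  # 1.75 1.75
```

If darts instead computed the metrics once over the concatenated series, RMSE would generally exceed MAE, so identical values would point at a bug rather than this effect.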
|
closed
|
2023-05-23T17:46:49Z
|
2023-06-12T08:25:39Z
|
https://github.com/unit8co/darts/issues/1795
|
[
"bug",
"triage"
] |
StephanAkkerman
| 5
|
strawberry-graphql/strawberry
|
graphql
| 3,814
|
Resolver error nulls other resolvers
|
While evaluating strawberry-graphql, we noticed that when a single resolver throws an error while the other resolvers in the same query succeed, we do not get a partial response.
Consider the following:
```python
import strawberry
@strawberry.type
class Query:
@strawberry.field
def successful_field(self) -> str:
return "This field works"
@strawberry.field
def error_field(self) -> str:
raise Exception("This field fails")
schema = strawberry.Schema(Query)
```
If we were to write the following query:
```graphql
{
successfulField
errorField
}
```
We would expect:
```json
{
"data": {
"successfulField": "This field works",
"errorField": null
},
"errors": [
{
"message": "This field fails",
"locations": [{"line": 3, "column": 3}],
"path": ["errorField"]
}
]
}
```
but instead I am getting:
```json
{
"data": null,
"errors": [
{
"message": "This field fails",
"locations": [{"line": 3, "column": 3}],
"path": ["errorField"]
}
]
}
```
This doesn't seem to be the expected behavior
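For what it's worth, this looks like the GraphQL spec's null-propagation rule rather than a strawberry-specific quirk: a plain `str` annotation maps to the Non-Null type `String!`, and an error in a Non-Null field nulls the nearest nullable ancestor, here the whole `data` object. A pure-Python sketch of that rule (my own simplification, not strawberry internals):

```python
def raise_error():
    raise Exception("This field fails")

def execute(fields):
    """fields: name -> (resolver, is_non_null). Returns (data, errors)."""
    data, errors = {}, []
    for name, (resolver, non_null) in fields.items():
        try:
            data[name] = resolver()
        except Exception as exc:
            errors.append(str(exc))
            if non_null:
                return None, errors   # error in Non-Null field nulls the whole object
            data[name] = None         # nullable field: error stays local
    return data, errors

# str maps to String! (non-null), so the error nulls all of data:
non_null_fields = {
    "successfulField": (lambda: "This field works", True),
    "errorField": (raise_error, True),
}
# Optional[str] would map to a nullable String, keeping the response partial:
nullable_fields = {
    "successfulField": (lambda: "This field works", True),
    "errorField": (raise_error, False),
}

print(execute(non_null_fields))  # (None, ['This field fails'])
print(execute(nullable_fields))  # ({'successfulField': 'This field works', 'errorField': None}, ['This field fails'])
```

If that is the cause, annotating the failing resolver as `Optional[str]` should restore the partial response shown in the expected output above.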
|
open
|
2025-03-20T22:36:49Z
|
2025-03-21T19:18:53Z
|
https://github.com/strawberry-graphql/strawberry/issues/3814
|
[
"bug"
] |
TheCaffinatedDeveloper
| 2
|
scikit-multilearn/scikit-multilearn
|
scikit-learn
| 93
|
Issue with WEKA multilabel ?
|
Hi. I am new to multilabel classification and I was making some tests, and I want to figure out how MEKA works with scikit-multilearn. I got the YEAST dataset from http://mulan.sourceforge.net/datasets-mlc.html and converted it to .csv using MEKA; after that I tried to run some classification... Does anybody know what went wrong?

|
closed
|
2018-01-04T16:49:47Z
|
2018-06-06T16:46:28Z
|
https://github.com/scikit-multilearn/scikit-multilearn/issues/93
|
[] |
mvenanc
| 6
|
horovod/horovod
|
tensorflow
| 3,474
|
TBB Warning: The number of workers is currently limited to 0
|
**Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet): Pytorch
2. Framework version: 1.7.1
3. Horovod version: 0.21.0
4. MPI version: 3.1.3
5. CUDA version: 11.0
6. NCCL version: 2.7.8
7. Python version: 3.8.11
8. Spark / PySpark version: N/A
9. Ray version: N/A
10. OS and version: RHEL 8.4
11. GCC version: 8.4.1
12. CMake version: 3.18.2
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
I ran pytorch_imagenet_resnet50.py on two nodes with 6 GPUs each and got the following warnings:
[1,5]<stderr>:TBB Warning: The number of workers is currently limited to 0. The request for 15 workers is ignored. Further requests for more workers will be silently ignored until the limit changes.
[1,5]<stderr>:
[1,16]<stderr>:TBB Warning: The number of workers is currently limited to 0. The request for 15 workers is ignored. Further requests for more workers will be silently ignored until the limit changes.
[1,16]<stderr>:
[1,19]<stderr>:TBB Warning: The number of workers is currently limited to 0. The request for 15 workers is ignored. Further requests for more workers will be silently ignored until the limit changes.
[1,19]<stderr>:
[1,13]<stderr>:TBB Warning: The number of workers is currently limited to 0. The request for 15 workers is ignored. Further requests for more workers will be silently ignored until the limit changes.
[1,13]<stderr>:
I am not sure if these are benign and expected. Would they cause any performance issues? Is there a way to set the number of workers to something else or is it hard coded? Thanks.
|
open
|
2022-03-17T20:05:18Z
|
2022-03-17T20:05:18Z
|
https://github.com/horovod/horovod/issues/3474
|
[
"bug"
] |
kkvtran
| 0
|
tflearn/tflearn
|
data-science
| 566
|
No example of tflearn.custom_layer
|
There is no documentation regarding how to specify a custom function when using tflearn.
I also checked and there is no unit test for tflearn.custom_layer either.
|
open
|
2017-01-17T19:41:35Z
|
2017-01-19T06:36:07Z
|
https://github.com/tflearn/tflearn/issues/566
|
[] |
kieran-mace
| 1
|
pytest-dev/pytest-cov
|
pytest
| 85
|
Branch coverage please
|
Please allow turning on branch coverage.
IMO it should be the default too.
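In the meantime, branch coverage can usually be switched on through coverage.py's own configuration, which pytest-cov passes through (assuming a standard `.coveragerc` at the project root):

```ini
# .coveragerc — read by coverage.py; pytest-cov honors it automatically
[run]
branch = True
```

With that in place, `pytest --cov=mypkg` reports branch coverage (`mypkg` here is a placeholder for your package name).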
|
closed
|
2015-08-25T11:15:02Z
|
2017-03-15T23:11:37Z
|
https://github.com/pytest-dev/pytest-cov/issues/85
|
[] |
dimaqq
| 7
|
alpacahq/alpaca-trade-api-python
|
rest-api
| 226
|
Data streaming broken on version 0.48 but not on v0.46 ('AM.*')
|
Just spent way too many hours of my life to figure out what was causing the problem :/
I have a project streaming ['AM.*', 'trade_updates'] on alpaca-trade-api v0.46. Everything works as expected.
On another project, I had the exact same code but only 'trade_updates' were working. The difference was alpaca-trade-api v0.48. Not sure if this is related to Alpaca's new data streaming but something broke between these two versions. Just wanted to make you and others aware.
Note: I was streaming using paper credentials, not sure about live.
My code:
```python
self._on_minute_bars = self.paper_conn.on(r'^AM$')(self._on_minute_bars)
self._on_trade_updates = self.paper_conn.on(r'trade_updates$')(self._on_trade_updates)
```
|
closed
|
2020-06-18T09:25:07Z
|
2020-07-07T11:56:10Z
|
https://github.com/alpacahq/alpaca-trade-api-python/issues/226
|
[] |
duhfrazee
| 6
|
waditu/tushare
|
pandas
| 1,109
|
Inconsistent return values for p_change_min and p_change_max in earnings forecasts when data is missing
|
When calling the earnings-forecast API, the returned p_change_min and p_change_max sometimes have no data. When data is missing, some entries are marked as nan/NaN with type numpy.float64, while others return None with type `<class 'NoneType'>`.
email: hbzong@qq.com
|
closed
|
2019-08-01T15:41:29Z
|
2019-08-06T06:10:08Z
|
https://github.com/waditu/tushare/issues/1109
|
[] |
hbzong
| 1
|
huggingface/text-generation-inference
|
nlp
| 2,681
|
Complex response format leads the container to run forever on CPU
|
### System Info
System:
`Linux 4.18.0-553.22.1.el8_10.x86_64 #1 SMP Wed Sep 25 09:20:43 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux`
`Rocky Linux 8.10`
Model: `mistralai/Mistral-Nemo-Instruct-2407`
Hardware:
* GPU: `NVIDIA A100-SXM4-80GB`
* CPU:
```
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
CPU MHz: 2250.000
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
```
Using text-generation-inference docker containers, the issue is reproduced with TGI 2.3.1 & 2.2.0
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
1. Run a TGI container
``` sh
# Podman
podman run --device nvidia.com/gpu=4 --shm-size 64g -p 7085:80 -v /data/huggingface/hub/:/data ghcr.io/huggingface/text-generation-inference:2.3.1 --model-id /data/models--mistralai--Mistral-Nemo-Instruct-2407/snapshots/e17a136e1dcba9c63ad771f2c85c1c312c563e6b --num-shard 1 --trust-remote-code --env --max-input-length 63999 --max-total-tokens 64000 --max-batch-prefill-tokens 64000 --max-concurrent-requests 128 --cuda-memory-fraction 0.94 --json-output
# Equivalent using docker
docker run --gpus '"device=4"' --shm-size 64g -p 7085:80 -v /data/huggingface/hub/:/data ghcr.io/huggingface/text-generation-inference:2.3.1 --model-id /data/models--mistralai--Mistral-Nemo-Instruct-2407/snapshots/e17a136e1dcba9c63ad771f2c85c1c312c563e6b --num-shard 1 --trust-remote-code --env --max-input-length 63999 --max-total-tokens 64000 --max-batch-prefill-tokens 64000 --max-concurrent-requests 128 --cuda-memory-fraction 0.94 --json-output
```
<details>
<summary>2. Make a chat completion call with a complex regex in the response format:</summary>
``` sh
curl -X 'POST' \
'http://px101.prod.exalead.com:7085/v1/chat/completions' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"model": "mistralai/Mistral-Nemo-Instruct-2407",
"messages": [
{
"role": "user",
"content": "hello",
"tool_calls": null
}
],
"temperature": 0.0,
"top_p": null,
"max_tokens": 2048,
"stop": [],
"stream": true,
"seed": null,
"frequency_penalty": null,
"presence_penalty": null,
"logprobs": false,
"top_logprobs": null,
"stream_options": null,
"response_format": {
"type": "regex",
"whitespace_pattern": null,
"value": "\\{([ ]?\"a\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\]([ ]?,[ ]?\"b\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"c\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"d\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"e\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"f\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"g\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"h\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"i\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"j\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"k\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"l\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?|([ ]?\"a\"[ ]?:[ ]?\\[[ 
]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?[ ]?\"b\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\]([ ]?,[ ]?\"c\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"d\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"e\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"f\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"g\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"h\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"i\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"j\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"k\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"l\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?|([ ]?\"a\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ 
]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"b\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?[ ]?\"c\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\]([ ]?,[ ]?\"d\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"e\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"f\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"g\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"h\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"i\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"j\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"k\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"l\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?|([ ]?\"a\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ 
]?\"b\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"c\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?[ ]?\"d\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\]([ ]?,[ ]?\"e\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"f\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"g\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"h\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"i\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"j\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"k\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"l\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?|([ ]?\"a\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"b\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ 
]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"c\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"d\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?[ ]?\"e\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\]([ ]?,[ ]?\"f\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"g\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"h\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"i\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"j\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"k\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"l\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?|([ ]?\"a\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"b\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ 
]?\"c\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"d\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"e\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?[ ]?\"f\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\]([ ]?,[ ]?\"g\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"h\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"i\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"j\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"k\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"l\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?|([ ]?\"a\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"b\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"c\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ 
]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"d\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"e\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"f\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?[ ]?\"g\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\]([ ]?,[ ]?\"h\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"i\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"j\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"k\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"l\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?|([ ]?\"a\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"b\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"c\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ 
]?\"d\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"e\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"f\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"g\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?[ ]?\"h\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\]([ ]?,[ ]?\"i\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"j\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"k\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"l\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?|([ ]?\"a\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"b\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"c\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"d\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ 
]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"e\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"f\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"g\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"h\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?[ ]?\"i\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\]([ ]?,[ ]?\"j\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"k\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"l\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?|([ ]?\"a\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"b\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"c\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"d\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ 
]?\"e\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"f\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"g\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"h\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"i\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?[ ]?\"j\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\]([ ]?,[ ]?\"k\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?([ ]?,[ ]?\"l\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?|([ ]?\"a\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"b\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"c\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"d\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"e\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ 
]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"f\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"g\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"h\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"i\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"j\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?[ ]?\"k\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\]([ ]?,[ ]?\"l\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?|([ ]?\"a\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"b\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"c\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"d\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"e\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ 
]?\"f\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"g\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"h\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"i\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"j\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?([ ]?\"k\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\][ ]?,)?[ ]?\"l\"[ ]?:[ ]?\\[[ ]?((\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")(,[ ]?(\"([^\"\\\\\\x00-\\x1F\\x7F-\\x9F]|\\\\[\"\\\\])*\")){0,})?[ ]?\\])?[ ]?\\}"
},
"tools": null,
"tool_choice": null,
"logit_bias": null,
"n": null,
"parallel_tool_choice": null,
"functions": null,
"function_call": null
}'
```
</details>
<details>
<summary>JSON schema for above regex</summary>
``` json
{
"properties": {
"a": {"type": "array", "items": {"type": "string"}},
"b": {"type": "array", "items": {"type": "string"}},
"c": {"type": "array", "items": {"type": "string"}},
"d": {"type": "array", "items": {"type": "string"}},
"e": {"type": "array", "items": {"type": "string"}},
"f": {"type": "array", "items": {"type": "string"}},
"g": {"type": "array", "items": {"type": "string"}},
"h": {"type": "array", "items": {"type": "string"}},
"i": {"type": "array", "items": {"type": "string"}},
"j": {"type": "array", "items": {"type": "string"}},
"k": {"type": "array", "items": {"type": "string"}},
"l": {"type": "array", "items": {"type": "string"}}
}
}
```
</details>
Note that the issue is still present when passing the JSON schema instead of the regex, but for various reasons we compute the regex ourselves using `outlines.build_regex_from_schema`.
So when running the above curl, TGI accepts the request and starts processing. CPU usage goes to 99.7% and stays there for a very long time until I decide to kill it. The container no longer accepts new requests and becomes unusable.
### Expected behavior
A timeout that would stop the processing before it runs for too long.
Not sure which kind of timeout is relevant, but since no token is received, a timeout _until the first token is received_ would make sense to me.
<details>
<summary>Here's another schema that also triggers above issue but is more common</summary>
```json
{
"properties": {
"main_intent": {
"type": "string"
},
"expected_output": {
"type": "string"
},
"key_entities": {
"type": "array",
"items": {
"type": "string"
}
},
"key_properties": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"comment": {
"type": "string"
}
}
}
},
"classes": {
"type": "string"
},
"properties": {
"type": "string"
},
"sparql_query_form": {
"type": "string"
},
"variables": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"comment": {
"type": "string"
}
}
}
},
"properties_marked_as_optional": {
"type": "array",
"items": {
"type": "string"
}
},
"triple_patterns_for_relations": {
"type": "array",
"items": {
"type": "string"
}
},
"triple_patterns_for_classes": {
"type": "array",
"items": {
"type": "string"
}
},
"additional_properties": {
"type": "array",
"items": {
"type": "string"
}
},
"filters_and_constraints": {
"type": "array",
"items": {
"type": "string"
}
},
"output_variables": {
"type": "array",
"items": {
"type": "string"
}
},
"sparql_query": {
"type": "string"
}
}
}
```
</details>
|
open
|
2024-10-23T08:10:43Z
|
2024-12-10T16:56:16Z
|
https://github.com/huggingface/text-generation-inference/issues/2681
|
[] |
Rictus
| 1
|
RayVentura/ShortGPT
|
automation
| 35
|
Add Chinese support
|
I really hope support for Chinese can be added.
|
closed
|
2023-07-22T10:37:30Z
|
2023-07-27T12:45:35Z
|
https://github.com/RayVentura/ShortGPT/issues/35
|
[] |
tianheil3
| 4
|
miguelgrinberg/python-socketio
|
asyncio
| 411
|
Invalid async_mode specified while running docker image
|
Hi,
while running the Docker image I am facing the issue below. I have added `--hidden-import=gevent-websocket`, but no luck.
Can anyone help me with this?
**my requirements.txt**
Click==7.0
dnspython==1.16.0
eventlet==0.25.1
Flask==1.1.1
Flask-SocketIO==4.2.1
greenlet==0.4.15
itsdangerous==1.1.0
Jinja2==2.10.3
MarkupSafe==1.1.1
monotonic==1.5
pymongo==3.10.1
python-engineio==3.11.2
python-socketio==4.4.0
gevent-websocket==0.10.1
six==1.13.0
Werkzeug==0.16.0
[6] Failed to execute script app
Traceback (most recent call last):
File "app/app.py", line 12, in <module>
File "flask_socketio/__init__.py", line 185, in __init__
File "flask_socketio/__init__.py", line 245, in init_app
File "socketio/server.py", line 108, in __init__
File "engineio/server.py", line 139, in __init__
ValueError: Invalid async_mode specified
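For what it's worth, the missing module in frozen (PyInstaller) builds is usually engineio's async driver rather than `gevent-websocket` itself: engineio selects the driver at runtime via a dynamic import that PyInstaller cannot see. A hedged sketch of the build flags (the exact module path is an assumption worth checking against the python-engineio docs):

``` shell
# Bundle the async driver matching requirements.txt (eventlet here);
# engineio imports it dynamically, so PyInstaller needs an explicit hint.
pyinstaller app.py \
    --hidden-import engineio.async_drivers.eventlet \
    --hidden-import dns
```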
|
closed
|
2020-01-16T09:17:48Z
|
2020-04-10T14:15:31Z
|
https://github.com/miguelgrinberg/python-socketio/issues/411
|
[
"question"
] |
nmanthena18
| 5
|
pytorch/vision
|
computer-vision
| 8,762
|
[Feature Request] Datasets Should Use New `torchvision.io` Image Loader APIs and Return `TVTensor` Images by Default
|
### 🚀 The feature
1. Add "torchvision" image loader backend based on new `torchvision.io` APIs (See: [Release Notes v0.20](https://github.com/pytorch/vision/releases/tag/v0.20.0)) and enable it by default.
2. VisionDatasets should return `TVTensor` images by default instead of `PIL.Image`.
### Motivation, pitch
1. TorchVision v0.20 introduces new `torchvision.io` APIs that enhance its encoding/decoding capabilities.
2. Current VisionDatasets return `PIL.Image` by default, but the first step of the transform pipeline is usually `transforms.ToImage()`.
3. PIL is slow (see: [Pillow-SIMD](https://python-pillow.org/pillow-perf/)), especially when compared with the new `torchvision.io` APIs.
4. The current TorchVision image loader backends are based on PIL or accimage and do not include the new `torchvision.io` APIs.
### Alternatives
1. The return type of datasets can be `PIL.Image` when using the PIL or the accimage backends, and be `TVTensor` if using new APIs (may lose consistency).
### Additional context
I would like to make a pull request if the community likes this feature.
|
open
|
2024-11-28T10:31:42Z
|
2025-03-19T21:10:46Z
|
https://github.com/pytorch/vision/issues/8762
|
[] |
fang-d
| 9
|
jupyter-widgets-contrib/ipycanvas
|
jupyter
| 24
|
Fill rule support
|
https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Applying_styles_and_colors#Canvas_fill_rules
|
closed
|
2019-09-18T19:05:42Z
|
2019-09-20T14:18:10Z
|
https://github.com/jupyter-widgets-contrib/ipycanvas/issues/24
|
[
"enhancement"
] |
martinRenou
| 0
|
Lightning-AI/pytorch-lightning
|
pytorch
| 20,268
|
rich progress bar shows v_num as 0.000
|
### Bug description
```
Epoch 11/999 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╸━━━ 1498/1638 0:02:25 • 0:00:14 10.20it/s v_num: 0.000 train/loss_step: 0.426
```
The tqdm progress bar correctly shows v_num as an integer.
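A common workaround (hedged: it assumes Lightning's `get_metrics` hook on progress-bar callbacks, which is how v_num display is usually customized) is to re-format the value before rendering. The formatting helper below is runnable; the callback wiring is sketched in comments:

``` python
def format_v_num(v) -> str:
    """Render v_num as an integer string, e.g. '0.000' -> '0'."""
    try:
        return str(int(float(v)))
    except (TypeError, ValueError):
        return str(v)

# Illustrative wiring (assumes Lightning's RichProgressBar / get_metrics API):
#
# class IntVNumBar(RichProgressBar):
#     def get_metrics(self, trainer, pl_module):
#         items = super().get_metrics(trainer, pl_module)
#         if "v_num" in items:
#             items["v_num"] = format_v_num(items["v_num"])
#         return items
```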
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_
|
open
|
2024-09-09T04:09:53Z
|
2024-09-09T04:10:44Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20268
|
[
"bug",
"needs triage",
"ver: 2.4.x"
] |
npuichigo
| 0
|
omnilib/aiomultiprocess
|
asyncio
| 189
|
EOFError exception raised for unknown reason
|
Here is a simple script to test the `apply()` method, but it ends with an `EOFError` exception:
```Python
import asyncio
from aiomultiprocess import Pool
async def worker():
pass
async def main():
async with Pool() as pool:
tasks = [pool.apply(worker) for i in range(5)]
await asyncio.gather(*tasks)
if __name__ == "__main__":
asyncio.run(main())
```
Here is the output:
```
File "/home/babak/web/test/TEST.py", line 14, in <module>
asyncio.run(main())
File "/usr/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/babak/web/test/TEST.py", line 9, in main
async with Pool() as pool:
^^^^^^
File "/home/babak/env_311/lib/python3.11/site-packages/aiomultiprocess/pool.py", line 186, in __init__
self.init()
File "/home/babak/env_311/lib/python3.11/site-packages/aiomultiprocess/pool.py", line 214, in init
self.processes[self.create_worker(qid)] = qid
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/babak/env_311/lib/python3.11/site-packages/aiomultiprocess/pool.py", line 251, in create_worker
process = PoolWorker(
^^^^^^^^^^^
File "/home/babak/env_311/lib/python3.11/site-packages/aiomultiprocess/pool.py", line 59, in __init__
super().__init__(
File "/home/babak/env_311/lib/python3.11/site-packages/aiomultiprocess/core.py", line 109, in __init__
namespace=get_manager().Namespace(),
^^^^^^^^^^^^^
File "/home/babak/env_311/lib/python3.11/site-packages/aiomultiprocess/core.py", line 29, in get_manager
_manager = context.Manager()
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/multiprocessing/context.py", line 57, in Manager
m.start()
File "/usr/lib/python3.11/multiprocessing/managers.py", line 567, in start
self._address = reader.recv()
^^^^^^^^^^^^^
File "/usr/lib/python3.11/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/multiprocessing/connection.py", line 430, in _recv_bytes
buf = self._recv(4)
^^^^^^^^^^^^^
File "/usr/lib/python3.11/multiprocessing/connection.py", line 399, in _recv
raise EOFError
```
What am I doing wrong?
* OS: Ubuntu 20.0.4 running on a cheap VPS with 1 core CPU and about 1.3 GB of free RAM
* Python version: 3.12
* aiomultiprocess version: 0.9.0
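One way to narrow this down: the traceback fails inside `multiprocessing`'s own `Manager` startup, before any aiomultiprocess or user code runs, so checking a bare stdlib Manager isolates whether the environment (e.g. the low-memory VPS) is the culprit:

``` python
import multiprocessing as mp

def manager_starts_ok() -> bool:
    # Reproduces just the step that raises in the traceback: starting a
    # Manager process and talking to it over a pipe.
    try:
        with mp.Manager() as m:
            ns = m.Namespace()
            ns.ping = 1
            return ns.ping == 1
    except EOFError:
        # Same failure mode as the report: the manager child died before
        # sending its address back (often OOM-killed on tiny VPSes).
        return False
```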
|
open
|
2024-01-05T08:40:08Z
|
2024-01-06T14:37:21Z
|
https://github.com/omnilib/aiomultiprocess/issues/189
|
[] |
BabakAmini
| 0
|
nvbn/thefuck
|
python
| 1,257
|
Different rule found when using fish vs bash/zsh
|
The output of `thefuck --version` (something like `The Fuck 3.1 using Python
3.5.0 and Bash 4.4.12(1)-release`):
The Fuck 3.31 using Python 3.10.1 and Fish Shell 3.3.1
Your system (Debian 7, ArchLinux, Windows, etc.):
Fedora Linux 35 (Workstation Edition)
How to reproduce the bug
1. Create a git branch which diverges from upstream such that it would require a force push, e.g. by using `git commit --amend` and changing the commit message.
2. Run a normal push without `--force`.
3. Run `fuck`.
The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):
<details><summary>Log</summary>
```
fuck
DEBUG: Run with settings: {'alter_history': True,
'debug': True,
'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},
'exclude_rules': [],
'excluded_search_path_prefixes': [],
'history_limit': None,
'instant_mode': False,
'no_colors': False,
'num_close_matches': 3,
'priority': {},
'repeat': False,
'require_confirmation': True,
'rules': [<const: All rules enabled>],
'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],
'user_dir': PosixPath('/home/septatrix/.config/thefuck'),
'wait_command': 3,
'wait_slow_command': 15}
DEBUG: Received output: 18:21:50.638803 git.c:455 trace: built-in: git push
18:21:50.641447 run-command.c:666 trace: run_command: unset GIT_PREFIX; ssh -p 20009 gitea@tracker.cde-ev.de 'git-receive-pack '\''/cdedb/cdedb2.git'\'''
18:21:51.551401 run-command.c:666 trace: run_command: .git/hooks/pre-push origin ssh://gitea@tracker.cde-ev.de:20009/cdedb/cdedb2.git
To ssh://tracker.cde-ev.de:20009/cdedb/cdedb2.git
! [rejected] feature/configurable-db-host -> feature/configurable-db-host (non-fast-forward)
error: failed to push some refs to 'ssh://tracker.cde-ev.de:20009/cdedb/cdedb2.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
DEBUG: Call: git push; with env: {'LC_MEASUREMENT': 'en_GB.UTF-8', 'COLORTERM': 'truecolor', 'GNOME_TERMINAL_SCREEN': '/org/gnome/Terminal/screen/20f05158_bd07_42d0_b162_7281d135ce0d', 'LC_MONETARY': 'en_GB.UTF-8', 'XDG_SESSION_CLASS': 'user', 'LANG': 'C', 'VIRTUAL_ENV': '/home/septatrix/Documents/cde/db/cdedb2/.venv', 'VSCODE_GIT_IPC_HANDLE': '/run/user/1000/vscode-git-996b716d83.sock', 'VSCODE_GIT_ASKPASS_MAIN': '/usr/share/code/resources/app/extensions/git/dist/askpass-main.js', 'WAYLAND_DISPLAY': 'wayland-0', 'TERM': 'xterm-256color', '_OLD_VIRTUAL_PATH': '/home/septatrix/.local/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin', 'SHLVL': '3', 'GNOME_TERMINAL_SERVICE': ':1.441', 'TERM_PROGRAM_VERSION': '1.63.2', 'XDG_RUNTIME_DIR': '/run/user/1000', 'VSCODE_GIT_ASKPASS_NODE': '/usr/share/code/code', 'LC_PAPER': 'en_GB.UTF-8', 'PATH': '/home/septatrix/Documents/cde/db/cdedb2/.venv/bin:/home/septatrix/.local/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin', 'USER': 'septatrix', 'XDG_MENU_PREFIX': 'gnome-', 'XDG_CURRENT_DESKTOP': 'GNOME', 'XDG_SESSION_DESKTOP': 'gnome', 'VIRTUAL_ENV_PROMPT': '(.venv) ', 'CHROME_DESKTOP': 'code-url-handler.desktop', '_OLD_FISH_PROMPT_OVERRIDE': '/home/septatrix/Documents/cde/db/cdedb2/.venv', 'PWD': '/home/septatrix/Documents/cde/db/cdedb2', 'DESKTOP_SESSION': 'gnome', 'XMODIFIERS': '@im=ibus', 'SSH_AUTH_SOCK': '/run/user/1000/keyring/ssh', 'SHELL': '/usr/bin/fish', 'XDG_SESSION_TYPE': 'wayland', 'XDG_DATA_DIRS': '/home/septatrix/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share/:/usr/share/', 'HOME': '/home/septatrix', 'VTE_VERSION': '6602', 'SESSION_MANAGER': 'local/unix:@/tmp/.ICE-unix/1443610,unix/unix:/tmp/.ICE-unix/1443610', 'VSCODE_GIT_ASKPASS_EXTRA_ARGS': '--ms-enable-electron-run-as-node', 'LC_NUMERIC': 'en_GB.UTF-8', 'GDK_BACKEND': 'x11', 'SYSTEMD_EXEC_PID': '1443968', 'NO_AT_BRIDGE': '1', 'BREAKPAD_DUMP_LOCATION': '/home/septatrix/.config/Code/exthost Crash Reports', 'LC_TIME': 
'en_GB.UTF-8', 'DISPLAY': ':0', 'THEFUCK_DEBUG': 'true', 'USERNAME': 'septatrix', 'APPLICATION_INSIGHTS_NO_DIAGNOSTIC_CHANNEL': 'true', 'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1000/bus', 'GNOME_SETUP_DISPLAY': ':1', 'TERM_PROGRAM': 'vscode', 'QT_IM_MODULE': 'ibus', 'GDMSESSION': 'gnome', 'ORIGINAL_XDG_CURRENT_DESKTOP': 'GNOME', 'EDITOR': '/usr/bin/nano', 'LOGNAME': 'septatrix', 'GIT_ASKPASS': '/usr/share/code/resources/app/extensions/git/dist/askpass.sh', 'GDM_LANG': 'en_US.UTF-8', 'XAUTHORITY': '/run/user/1000/.mutter-Xwaylandauth.2WCHF1', 'TF_SHELL': 'fish', 'TF_ALIAS': 'fuck', 'PYTHONIOENCODING': 'utf-8', 'LC_ALL': 'C', 'GIT_TRACE': '1'}; is slow: False took: 0:00:00.980850
DEBUG: Importing rule: adb_unknown_command; took: 0:00:00.000164
DEBUG: Importing rule: ag_literal; took: 0:00:00.000307
DEBUG: Importing rule: apt_get; took: 0:00:00.001086
DEBUG: Importing rule: apt_get_search; took: 0:00:00.000286
DEBUG: Importing rule: apt_invalid_operation; took: 0:00:00.000436
DEBUG: Importing rule: apt_list_upgradable; took: 0:00:00.000262
DEBUG: Importing rule: apt_upgrade; took: 0:00:00.000218
DEBUG: Importing rule: aws_cli; took: 0:00:00.000225
DEBUG: Importing rule: az_cli; took: 0:00:00.000186
DEBUG: Importing rule: brew_cask_dependency; took: 0:00:00.000464
DEBUG: Importing rule: brew_install; took: 0:00:00.000121
DEBUG: Importing rule: brew_link; took: 0:00:00.000183
DEBUG: Importing rule: brew_reinstall; took: 0:00:00.000536
DEBUG: Importing rule: brew_uninstall; took: 0:00:00.000240
DEBUG: Importing rule: brew_unknown_command; took: 0:00:00.000141
DEBUG: Importing rule: brew_update_formula; took: 0:00:00.000201
DEBUG: Importing rule: brew_upgrade; took: 0:00:00.000108
DEBUG: Importing rule: cargo; took: 0:00:00.000172
DEBUG: Importing rule: cargo_no_command; took: 0:00:00.000192
DEBUG: Importing rule: cat_dir; took: 0:00:00.000173
DEBUG: Importing rule: cd_correction; took: 0:00:00.000704
DEBUG: Importing rule: cd_cs; took: 0:00:00.000108
DEBUG: Importing rule: cd_mkdir; took: 0:00:00.000209
DEBUG: Importing rule: cd_parent; took: 0:00:00.000116
DEBUG: Importing rule: chmod_x; took: 0:00:00.000129
DEBUG: Importing rule: choco_install; took: 0:00:00.000350
DEBUG: Importing rule: composer_not_command; took: 0:00:00.000238
DEBUG: Importing rule: conda_mistype; took: 0:00:00.000231
DEBUG: Importing rule: cp_create_destination; took: 0:00:00.000223
DEBUG: Importing rule: cp_omitting_directory; took: 0:00:00.000257
DEBUG: Importing rule: cpp11; took: 0:00:00.000220
DEBUG: Importing rule: dirty_untar; took: 0:00:00.001978
DEBUG: Importing rule: dirty_unzip; took: 0:00:00.001140
DEBUG: Importing rule: django_south_ghost; took: 0:00:00.000128
DEBUG: Importing rule: django_south_merge; took: 0:00:00.000147
DEBUG: Importing rule: dnf_no_such_command; took: 0:00:00.000691
DEBUG: Importing rule: docker_image_being_used_by_container; took: 0:00:00.000248
DEBUG: Importing rule: docker_login; took: 0:00:00.000196
DEBUG: Importing rule: docker_not_command; took: 0:00:00.000539
DEBUG: Importing rule: dry; took: 0:00:00.000161
DEBUG: Importing rule: fab_command_not_found; took: 0:00:00.000241
DEBUG: Importing rule: fix_alt_space; took: 0:00:00.000182
DEBUG: Importing rule: fix_file; took: 0:00:00.001661
DEBUG: Importing rule: gem_unknown_command; took: 0:00:00.000345
DEBUG: Importing rule: git_add; took: 0:00:00.000391
DEBUG: Importing rule: git_add_force; took: 0:00:00.000164
DEBUG: Importing rule: git_bisect_usage; took: 0:00:00.000192
DEBUG: Importing rule: git_branch_delete; took: 0:00:00.000183
DEBUG: Importing rule: git_branch_delete_checked_out; took: 0:00:00.000186
DEBUG: Importing rule: git_branch_exists; took: 0:00:00.000199
DEBUG: Importing rule: git_branch_list; took: 0:00:00.000273
DEBUG: Importing rule: git_checkout; took: 0:00:00.000205
DEBUG: Importing rule: git_clone_git_clone; took: 0:00:00.000179
DEBUG: Importing rule: git_commit_amend; took: 0:00:00.000175
DEBUG: Importing rule: git_commit_reset; took: 0:00:00.000192
DEBUG: Importing rule: git_diff_no_index; took: 0:00:00.000183
DEBUG: Importing rule: git_diff_staged; took: 0:00:00.000189
DEBUG: Importing rule: git_fix_stash; took: 0:00:00.000188
DEBUG: Importing rule: git_flag_after_filename; took: 0:00:00.000200
DEBUG: Importing rule: git_help_aliased; took: 0:00:00.000178
DEBUG: Importing rule: git_hook_bypass; took: 0:00:00.000197
DEBUG: Importing rule: git_lfs_mistype; took: 0:00:00.000184
DEBUG: Importing rule: git_merge; took: 0:00:00.000184
DEBUG: Importing rule: git_merge_unrelated; took: 0:00:00.000173
DEBUG: Importing rule: git_not_command; took: 0:00:00.000192
DEBUG: Importing rule: git_pull; took: 0:00:00.000174
DEBUG: Importing rule: git_pull_clone; took: 0:00:00.000269
DEBUG: Importing rule: git_pull_uncommitted_changes; took: 0:00:00.000195
DEBUG: Importing rule: git_push; took: 0:00:00.000176
DEBUG: Importing rule: git_push_different_branch_names; took: 0:00:00.000184
DEBUG: Importing rule: git_push_force; took: 0:00:00.000170
DEBUG: Importing rule: git_push_pull; took: 0:00:00.000226
DEBUG: Importing rule: git_push_without_commits; took: 0:00:00.000305
DEBUG: Importing rule: git_rebase_merge_dir; took: 0:00:00.000184
DEBUG: Importing rule: git_rebase_no_changes; took: 0:00:00.000129
DEBUG: Importing rule: git_remote_delete; took: 0:00:00.000197
DEBUG: Importing rule: git_remote_seturl_add; took: 0:00:00.000154
DEBUG: Importing rule: git_rm_local_modifications; took: 0:00:00.000198
DEBUG: Importing rule: git_rm_recursive; took: 0:00:00.000190
DEBUG: Importing rule: git_rm_staged; took: 0:00:00.000171
DEBUG: Importing rule: git_stash; took: 0:00:00.000192
DEBUG: Importing rule: git_stash_pop; took: 0:00:00.000169
DEBUG: Importing rule: git_tag_force; took: 0:00:00.000230
DEBUG: Importing rule: git_two_dashes; took: 0:00:00.000171
DEBUG: Importing rule: go_run; took: 0:00:00.000197
DEBUG: Importing rule: go_unknown_command; took: 0:00:00.000307
DEBUG: Importing rule: gradle_no_task; took: 0:00:00.000401
DEBUG: Importing rule: gradle_wrapper; took: 0:00:00.000188
DEBUG: Importing rule: grep_arguments_order; took: 0:00:00.000159
DEBUG: Importing rule: grep_recursive; took: 0:00:00.000230
DEBUG: Importing rule: grunt_task_not_found; took: 0:00:00.000456
DEBUG: Importing rule: gulp_not_task; took: 0:00:00.000229
DEBUG: Importing rule: has_exists_script; took: 0:00:00.000176
DEBUG: Importing rule: heroku_multiple_apps; took: 0:00:00.000191
DEBUG: Importing rule: heroku_not_command; took: 0:00:00.000207
DEBUG: Importing rule: history; took: 0:00:00.000110
DEBUG: Importing rule: hostscli; took: 0:00:00.000201
DEBUG: Importing rule: ifconfig_device_not_found; took: 0:00:00.000282
DEBUG: Importing rule: java; took: 0:00:00.000222
DEBUG: Importing rule: javac; took: 0:00:00.000178
DEBUG: Importing rule: lein_not_task; took: 0:00:00.000215
DEBUG: Importing rule: ln_no_hard_link; took: 0:00:00.000196
DEBUG: Importing rule: ln_s_order; took: 0:00:00.000208
DEBUG: Importing rule: long_form_help; took: 0:00:00.000142
DEBUG: Importing rule: ls_all; took: 0:00:00.000202
DEBUG: Importing rule: ls_lah; took: 0:00:00.000194
DEBUG: Importing rule: man; took: 0:00:00.000192
DEBUG: Importing rule: man_no_space; took: 0:00:00.000151
DEBUG: Importing rule: mercurial; took: 0:00:00.000245
DEBUG: Importing rule: missing_space_before_subcommand; took: 0:00:00.000171
DEBUG: Importing rule: mkdir_p; took: 0:00:00.000206
DEBUG: Importing rule: mvn_no_command; took: 0:00:00.000215
DEBUG: Importing rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000209
DEBUG: Importing rule: nixos_cmd_not_found; took: 0:00:00.000470
DEBUG: Importing rule: no_command; took: 0:00:00.000276
DEBUG: Importing rule: no_such_file; took: 0:00:00.000137
DEBUG: Importing rule: npm_missing_script; took: 0:00:00.000431
DEBUG: Importing rule: npm_run_script; took: 0:00:00.000193
DEBUG: Importing rule: npm_wrong_command; took: 0:00:00.000255
DEBUG: Importing rule: omnienv_no_such_command; took: 0:00:00.000407
DEBUG: Importing rule: open; took: 0:00:00.000264
DEBUG: Importing rule: pacman; took: 0:00:00.000620
DEBUG: Importing rule: pacman_invalid_option; took: 0:00:00.000384
DEBUG: Importing rule: pacman_not_found; took: 0:00:00.000178
DEBUG: Importing rule: path_from_history; took: 0:00:00.000202
DEBUG: Importing rule: php_s; took: 0:00:00.000252
DEBUG: Importing rule: pip_install; took: 0:00:00.000255
DEBUG: Importing rule: pip_unknown_command; took: 0:00:00.000276
DEBUG: Importing rule: port_already_in_use; took: 0:00:00.000324
DEBUG: Importing rule: prove_recursively; took: 0:00:00.000276
DEBUG: Importing rule: pyenv_no_such_command; took: 0:00:00.000443
DEBUG: Importing rule: python_command; took: 0:00:00.000235
DEBUG: Importing rule: python_execute; took: 0:00:00.000231
DEBUG: Importing rule: python_module_error; took: 0:00:00.000162
DEBUG: Importing rule: quotation_marks; took: 0:00:00.000151
DEBUG: Importing rule: react_native_command_unrecognized; took: 0:00:00.000498
DEBUG: Importing rule: remove_shell_prompt_literal; took: 0:00:00.000378
DEBUG: Importing rule: remove_trailing_cedilla; took: 0:00:00.000319
DEBUG: Importing rule: rm_dir; took: 0:00:00.000452
DEBUG: Importing rule: rm_root; took: 0:00:00.000350
DEBUG: Importing rule: scm_correction; took: 0:00:00.000286
DEBUG: Importing rule: sed_unterminated_s; took: 0:00:00.000251
DEBUG: Importing rule: sl_ls; took: 0:00:00.000145
DEBUG: Importing rule: ssh_known_hosts; took: 0:00:00.000245
DEBUG: Importing rule: sudo; took: 0:00:00.000156
DEBUG: Importing rule: sudo_command_from_user_path; took: 0:00:00.000261
DEBUG: Importing rule: switch_lang; took: 0:00:00.000273
DEBUG: Importing rule: systemctl; took: 0:00:00.000374
DEBUG: Importing rule: terraform_init; took: 0:00:00.000419
DEBUG: Importing rule: test.py; took: 0:00:00.000190
DEBUG: Importing rule: tmux; took: 0:00:00.000276
DEBUG: Importing rule: touch; took: 0:00:00.000322
DEBUG: Importing rule: tsuru_login; took: 0:00:00.000318
DEBUG: Importing rule: tsuru_not_command; took: 0:00:00.000328
DEBUG: Importing rule: unknown_command; took: 0:00:00.000170
DEBUG: Importing rule: unsudo; took: 0:00:00.000139
DEBUG: Importing rule: vagrant_up; took: 0:00:00.000288
DEBUG: Importing rule: whois; took: 0:00:00.000516
DEBUG: Importing rule: workon_doesnt_exists; took: 0:00:00.000317
DEBUG: Importing rule: yarn_alias; took: 0:00:00.000295
DEBUG: Importing rule: yarn_command_not_found; took: 0:00:00.000631
DEBUG: Importing rule: yarn_command_replaced; took: 0:00:00.000538
DEBUG: Importing rule: yarn_help; took: 0:00:00.000327
DEBUG: Importing rule: yum_invalid_operation; took: 0:00:00.000718
DEBUG: Trying rule: path_from_history; took: 0:00:00.000624
DEBUG: Trying rule: cd_cs; took: 0:00:00.000077
DEBUG: Trying rule: dry; took: 0:00:00.000004
DEBUG: Trying rule: git_hook_bypass; took: 0:00:00.000079
DEBUG: Trying rule: git_stash_pop; took: 0:00:00.000041
DEBUG: Trying rule: test.py; took: 0:00:00.000003
DEBUG: Trying rule: adb_unknown_command; took: 0:00:00.000019
DEBUG: Trying rule: ag_literal; took: 0:00:00.000041
DEBUG: Trying rule: aws_cli; took: 0:00:00.000031
DEBUG: Trying rule: az_cli; took: 0:00:00.000034
DEBUG: Trying rule: brew_link; took: 0:00:00.000056
DEBUG: Trying rule: brew_reinstall; took: 0:00:00.000032
DEBUG: Trying rule: brew_uninstall; took: 0:00:00.000028
DEBUG: Trying rule: brew_update_formula; took: 0:00:00.000033
DEBUG: Trying rule: cargo; took: 0:00:00.000003
DEBUG: Trying rule: cargo_no_command; took: 0:00:00.000032
DEBUG: Trying rule: cat_dir; took: 0:00:00.000035
DEBUG: Trying rule: cd_correction; took: 0:00:00.000049
DEBUG: Trying rule: cd_mkdir; took: 0:00:00.000048
DEBUG: Trying rule: cd_parent; took: 0:00:00.000002
DEBUG: Trying rule: chmod_x; took: 0:00:00.000003
DEBUG: Trying rule: composer_not_command; took: 0:00:00.000030
DEBUG: Trying rule: conda_mistype; took: 0:00:00.000029
DEBUG: Trying rule: cp_create_destination; took: 0:00:00.000023
DEBUG: Trying rule: cp_omitting_directory; took: 0:00:00.000034
DEBUG: Trying rule: cpp11; took: 0:00:00.000031
DEBUG: Trying rule: dirty_untar; took: 0:00:00.000026
DEBUG: Trying rule: dirty_unzip; took: 0:00:00.000028
DEBUG: Trying rule: django_south_ghost; took: 0:00:00.000003
DEBUG: Trying rule: django_south_merge; took: 0:00:00.000003
DEBUG: Trying rule: dnf_no_such_command; took: 0:00:00.000054
DEBUG: Trying rule: docker_image_being_used_by_container; took: 0:00:00.000032
DEBUG: Trying rule: docker_login; took: 0:00:00.000026
DEBUG: Trying rule: docker_not_command; took: 0:00:00.000034
DEBUG: Trying rule: fab_command_not_found; took: 0:00:00.000034
DEBUG: Trying rule: fix_alt_space; took: 0:00:00.000023
DEBUG: Trying rule: fix_file; took: 0:00:00.000181
DEBUG: Trying rule: gem_unknown_command; took: 0:00:00.000059
DEBUG: Trying rule: git_add; took: 0:00:00.000038
DEBUG: Trying rule: git_add_force; took: 0:00:00.000027
DEBUG: Trying rule: git_bisect_usage; took: 0:00:00.000026
DEBUG: Trying rule: git_branch_delete; took: 0:00:00.000023
DEBUG: Trying rule: git_branch_delete_checked_out; took: 0:00:00.000031
DEBUG: Trying rule: git_branch_exists; took: 0:00:00.000021
DEBUG: Trying rule: git_branch_list; took: 0:00:00.000026
DEBUG: Trying rule: git_checkout; took: 0:00:00.000040
DEBUG: Trying rule: git_clone_git_clone; took: 0:00:00.000017
DEBUG: Trying rule: git_commit_amend; took: 0:00:00.000015
DEBUG: Trying rule: git_commit_reset; took: 0:00:00.000021
DEBUG: Trying rule: git_diff_no_index; took: 0:00:00.000025
DEBUG: Trying rule: git_diff_staged; took: 0:00:00.000019
DEBUG: Trying rule: git_fix_stash; took: 0:00:00.000015
DEBUG: Trying rule: git_flag_after_filename; took: 0:00:00.000015
DEBUG: Trying rule: git_help_aliased; took: 0:00:00.000015
DEBUG: Trying rule: git_lfs_mistype; took: 0:00:00.000014
DEBUG: Trying rule: git_merge; took: 0:00:00.000019
DEBUG: Trying rule: git_merge_unrelated; took: 0:00:00.000015
DEBUG: Trying rule: git_not_command; took: 0:00:00.000015
DEBUG: Trying rule: git_pull; took: 0:00:00.000014
DEBUG: Trying rule: git_pull_clone; took: 0:00:00.000015
DEBUG: Trying rule: git_pull_uncommitted_changes; took: 0:00:00.000015
DEBUG: Trying rule: git_push; took: 0:00:00.000015
DEBUG: Trying rule: git_push_different_branch_names; took: 0:00:00.000015
DEBUG: Trying rule: git_push_pull; took: 0:00:00.000017
DEBUG: Trying rule: git_push_without_commits; took: 0:00:00.000015
DEBUG: Trying rule: git_rebase_merge_dir; took: 0:00:00.000015
DEBUG: Trying rule: git_rebase_no_changes; took: 0:00:00.000016
DEBUG: Trying rule: git_remote_delete; took: 0:00:00.000023
DEBUG: Trying rule: git_remote_seturl_add; took: 0:00:00.000022
DEBUG: Trying rule: git_rm_local_modifications; took: 0:00:00.000045
DEBUG: Trying rule: git_rm_recursive; took: 0:00:00.000025
DEBUG: Trying rule: git_rm_staged; took: 0:00:00.000029
DEBUG: Trying rule: git_stash; took: 0:00:00.000035
DEBUG: Trying rule: git_tag_force; took: 0:00:00.000027
DEBUG: Trying rule: git_two_dashes; took: 0:00:00.000025
DEBUG: Trying rule: go_run; took: 0:00:00.000032
DEBUG: Trying rule: go_unknown_command; took: 0:00:00.000025
DEBUG: Trying rule: gradle_no_task; took: 0:00:00.000031
DEBUG: Trying rule: gradle_wrapper; took: 0:00:00.000033
DEBUG: Trying rule: grep_arguments_order; took: 0:00:00.000028
DEBUG: Trying rule: grep_recursive; took: 0:00:00.000026
DEBUG: Trying rule: grunt_task_not_found; took: 0:00:00.000027
DEBUG: Trying rule: gulp_not_task; took: 0:00:00.000029
DEBUG: Trying rule: has_exists_script; took: 0:00:00.000053
DEBUG: Trying rule: heroku_multiple_apps; took: 0:00:00.000030
DEBUG: Trying rule: heroku_not_command; took: 0:00:00.000033
DEBUG: Trying rule: hostscli; took: 0:00:00.000040
DEBUG: Trying rule: ifconfig_device_not_found; took: 0:00:00.000040
DEBUG: Trying rule: java; took: 0:00:00.000032
DEBUG: Trying rule: javac; took: 0:00:00.000025
DEBUG: Trying rule: lein_not_task; took: 0:00:00.000047
DEBUG: Trying rule: ln_no_hard_link; took: 0:00:00.000016
DEBUG: Trying rule: ln_s_order; took: 0:00:00.000015
DEBUG: Trying rule: ls_all; took: 0:00:00.000026
DEBUG: Trying rule: ls_lah; took: 0:00:00.000029
DEBUG: Trying rule: man; took: 0:00:00.000027
DEBUG: Trying rule: mercurial; took: 0:00:00.000025
DEBUG: Trying rule: mkdir_p; took: 0:00:00.000019
DEBUG: Trying rule: mvn_no_command; took: 0:00:00.000026
DEBUG: Trying rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000022
DEBUG: Trying rule: no_such_file; took: 0:00:00.001031
DEBUG: Trying rule: npm_missing_script; took: 0:00:00.000066
DEBUG: Trying rule: npm_run_script; took: 0:00:00.000032
DEBUG: Trying rule: npm_wrong_command; took: 0:00:00.000039
DEBUG: Trying rule: open; took: 0:00:00.000032
DEBUG: Trying rule: pacman_invalid_option; took: 0:00:00.000042
DEBUG: Trying rule: php_s; took: 0:00:00.000026
DEBUG: Trying rule: pip_install; took: 0:00:00.000038
DEBUG: Trying rule: pip_unknown_command; took: 0:00:00.000050
DEBUG: Trying rule: port_already_in_use; took: 0:00:00.000741
DEBUG: Trying rule: prove_recursively; took: 0:00:00.000069
DEBUG: Trying rule: pyenv_no_such_command; took: 0:00:00.000053
DEBUG: Trying rule: python_command; took: 0:00:00.000018
DEBUG: Trying rule: python_execute; took: 0:00:00.000023
DEBUG: Trying rule: python_module_error; took: 0:00:00.000003
DEBUG: Trying rule: quotation_marks; took: 0:00:00.000001
DEBUG: Trying rule: react_native_command_unrecognized; took: 0:00:00.000026
DEBUG: Trying rule: remove_shell_prompt_literal; took: 0:00:00.000003
DEBUG: Trying rule: remove_trailing_cedilla; took: 0:00:00.000004
DEBUG: Trying rule: rm_dir; took: 0:00:00.000016
DEBUG: Trying rule: scm_correction; took: 0:00:00.000029
DEBUG: Trying rule: sed_unterminated_s; took: 0:00:00.000030
DEBUG: Trying rule: sl_ls; took: 0:00:00.000003
DEBUG: Trying rule: ssh_known_hosts; took: 0:00:00.000029
DEBUG: Trying rule: sudo; took: 0:00:00.000043
DEBUG: Trying rule: sudo_command_from_user_path; took: 0:00:00.000037
DEBUG: Trying rule: switch_lang; took: 0:00:00.000003
DEBUG: Trying rule: systemctl; took: 0:00:00.000049
DEBUG: Trying rule: terraform_init; took: 0:00:00.000033
DEBUG: Trying rule: tmux; took: 0:00:00.000035
DEBUG: Trying rule: touch; took: 0:00:00.000033
DEBUG: Trying rule: tsuru_login; took: 0:00:00.000037
DEBUG: Trying rule: tsuru_not_command; took: 0:00:00.000026
DEBUG: Trying rule: unknown_command; took: 0:00:00.000705
DEBUG: Trying rule: unsudo; took: 0:00:00.000006
DEBUG: Trying rule: vagrant_up; took: 0:00:00.000056
DEBUG: Trying rule: whois; took: 0:00:00.000038
DEBUG: Trying rule: workon_doesnt_exists; took: 0:00:00.000040
DEBUG: Trying rule: yarn_alias; took: 0:00:00.000030
DEBUG: Trying rule: yarn_command_not_found; took: 0:00:00.000025
DEBUG: Trying rule: yarn_command_replaced; took: 0:00:00.000025
DEBUG: Trying rule: yarn_help; took: 0:00:00.000029
DEBUG: Trying rule: yum_invalid_operation; took: 0:00:00.000045
DEBUG: Trying rule: man_no_space; took: 0:00:00.000002
DEBUG: Trying rule: no_command; took: 0:00:00.000138
DEBUG: Trying rule: missing_space_before_subcommand; took: 0:00:00.042224
DEBUG: Trying rule: long_form_help; took: 0:00:00.000379
fish -ic "git push" [enter/↑/↓/ctrl+c]
Aborted
DEBUG: Total took: 0:00:02.887527
```
</details>
If the bug only appears with a specific application, the output of that application and its version:
git version 2.33.1
Anything else you think is relevant:
TF_SHELL=fish thefuck git push THEFUCK_ARGUMENT_PLACEHOLDER # does not work
TF_SHELL=bash thefuck git push THEFUCK_ARGUMENT_PLACEHOLDER # works
|
open
|
2022-01-02T17:26:12Z
|
2022-01-05T00:24:04Z
|
https://github.com/nvbn/thefuck/issues/1257
|
[
"fish"
] |
septatrix
| 19
|
litestar-org/litestar
|
pydantic
| 3,088
|
Enhancement: Support "+json" suffixed media types per RFC 6839
|
### Summary
Json encoded responses in Litestar must have a media type which starts with "application/json", as can be seen https://github.com/litestar-org/litestar/blob/main/litestar/response/base.py#L388
```
if media_type.startswith("application/json"):
return encode_json(content, enc_hook)
```
I would like this to change so that it supports "application" media types with the "+json" suffix, as described in [RFC 6839](https://datatracker.ietf.org/doc/html/rfc6839#section-3.1). For example, the lines could change to something like
```
super_type, sub_type = media_type.split("/", 1)
if super_type == "application" and (sub_type.startswith("json") or sub_type.endswith("+json")):
return encode_json(content, enc_hook)
```
So that I can set the media type to, e.g., "application/problem+json" (as described [here](https://webconcepts.info/concepts/media-type/application/problem+json)), and then json encoding would still work.
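For reference, the proposed check can be sketched as a standalone predicate. This is a minimal sketch, not Litestar's actual implementation; the stripping of media type parameters (e.g. `; charset=utf-8`) is my own addition:

```python
def is_json_media_type(media_type: str) -> bool:
    """True for "application/json*" and RFC 6839 "+json"-suffixed types."""
    # Drop media type parameters such as "; charset=utf-8" before matching
    base = media_type.split(";", 1)[0].strip().lower()
    if "/" not in base:
        return False
    super_type, sub_type = base.split("/", 1)
    return super_type == "application" and (
        sub_type.startswith("json") or sub_type.endswith("+json")
    )
```

With this, `is_json_media_type("application/problem+json")` is true, while non-`application` types such as `text/html` stay excluded, matching the current behaviour for everything that already works.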
### Basic Example
I would like to be able to do something like the below and have the json encoding of the response work. If I run the below app now, the schema look alright, but the endpoint gives a server error because the response encoding fails since the media type does not start with "application/json". (It is clear that this is the problem because if I change the media type to "application/json+problem", it works.)
```python
from typing import Optional
import pydantic
import litestar
import litestar.status_codes
class ProblemDetails(pydantic.BaseModel):
title: str
type: str
status: Optional[int] = None
detail: Optional[str] = None
instance: Optional[str] = None
@litestar.get(
status_code=litestar.status_codes.HTTP_418_IM_A_TEAPOT,
media_type="application/problem+json",
)
async def problem_endpoint() -> ProblemDetails:
return ProblemDetails(title="broken", type="bad server")
app = litestar.Litestar(route_handlers=[problem_endpoint])
```
### Drawbacks and Impact
I do not see any drawbacks. If you preserve the behavior for media types which start with "application/json", I think it should all be fine. It just extends the media types which can be encoded in responses, which is very unlikely to be problematic since I doubt many servers rely on response encoding failing.
The "+json" suffix for application media types is not uncommon. For example, the "ProblemDetails" specification is quite common. Since there is an RFC for it (RFC 6839), it is clearly a standard worth supporting.
### Unresolved questions
_No response_
|
closed
|
2024-02-08T11:01:55Z
|
2025-03-20T15:54:25Z
|
https://github.com/litestar-org/litestar/issues/3088
|
[
"Enhancement"
] |
bunny-therapist
| 4
|
microsoft/unilm
|
nlp
| 772
|
Questions about implementing layoutlmv3 objective II MIM
|
Hi,
I have been using the LayoutLM series for some time; thanks for sharing it. I would like to pretrain LayoutLMv3 in my domain, and I am currently implementing the second objective, Masked Image Modeling (MIM). I have a question about how image masking is implemented in the MIM head. I can see BEiT uses a mask_token parameter to mask out image tokens produced by PatchEmbed. LayoutLMv3Model does not have a mask_token parameter, as it does not need it after pretraining. I wonder if a modified version of LayoutLMv3Model (with mask_token included and applied) was used in the MIM head when you implemented it? Could you please give me some tips and a description of how it is implemented, so that I can reference the idea.
From the BEiT paper, I get a general idea of how to implement the MIM head. I will generate image_masked_token_positions and labels (via DALL-E). Then I will use image_masked_token_positions to mask out image tokens produced by PatchEmbed (similar to BEiT). After that, image tokens and text tokens will be passed through the transformer layers (the normal flow in LayoutLMv3Model).
Hi @HYPJUDY, this is Cheng again. Thanks for helping out my previous issue #751. If you have time, please help me also. Thanks.
Thanks,
Cheng
|
closed
|
2022-06-27T01:25:19Z
|
2023-04-23T10:44:06Z
|
https://github.com/microsoft/unilm/issues/772
|
[] |
weicheng113
| 10
|
writer/writer-framework
|
data-visualization
| 222
|
Add serialiser for Polars and Dataframe Interchange Protocol
|
Support Polars and the Dataframe Interchange Protocol natively by serialising to Arrow before sending to the frontend.
- `df.to_arrow()` for Polars
- `pyarrow.interchange.from_dataframe()` for Dataframe Interchange Protocol
|
closed
|
2024-02-02T20:46:00Z
|
2024-03-27T21:15:10Z
|
https://github.com/writer/writer-framework/issues/222
|
[
"enhancement",
"ship soon"
] |
ramedina86
| 1
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 783
|
undetected-chromedriver with pyvirtualdisplay, but webdriver detect failed
|
```
from pyvirtualdisplay import Display
import undetected_chromedriver as uc
display = Display(visible=0, size=(1680, 860))
display.start()
driver = uc.Chrome(headless=False)
driver.get('https://bot.sannysoft.com/')
driver.save_screenshot("sannysoft.png")
display.stop()
```
I'm trying to avoid being detected on Ubuntu by using pyvirtualdisplay, but I always get window.navigator.webdriver = present (failed)

|
open
|
2022-08-17T08:04:31Z
|
2022-08-22T02:18:00Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/783
|
[] |
chenchenchen7
| 0
|
tqdm/tqdm
|
pandas
| 802
|
better `update(n: float)` guidelines
|
- update docs to recommend manual e.g. `{n:.3f}` in `bar_format`
- move to `FloatProgress` as per #471
related to #456
|
closed
|
2019-09-04T16:42:52Z
|
2019-12-01T01:22:09Z
|
https://github.com/tqdm/tqdm/issues/802
|
[
"question/docs ‽",
"to-fix ⌛",
"need-feedback 📢",
"submodule-notebook 📓"
] |
casperdcl
| 0
|
autokey/autokey
|
automation
| 378
|
Invalid literal for int, when using a list menu
|
Now that I am using the Qt version of AutoKey and I can use the keyboard, I noticed that when I hit ESC to close the list, it tells me
```
ValueError: invalid literal for int() with base 10: "
```
Hmm, I realize it means null.
The list menu needs to return -1 if I hit ESC, close, or cancel.
```
Operating System: Manjaro Linux
KDE Plasma Version: 5.17.5
KDE Frameworks Version: 5.66.0
Qt Version: 5.14.1
Kernel Version: 5.4.18-1-MANJARO
OS Type: 64-bit
```
Autokey-QT 0.95.10
|
open
|
2020-02-28T15:31:45Z
|
2023-07-29T19:16:18Z
|
https://github.com/autokey/autokey/issues/378
|
[
"bug",
"enhancement",
"autokey-qt",
"scripting"
] |
kreezxil
| 2
|
coqui-ai/TTS
|
python
| 3,101
|
xtts with use_deepspeed=true on gpu-1 Error[Bug]
|
### Describe the bug
Using the model directly gives an error:
use_deepspeed = True
config = XttsConfig()
config.load_json(tts_path + "config_v1.json")
model = Xtts1.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir=tts_path, use_deepspeed=use_deepspeed, eval=True)
model.cuda(1)
### To Reproduce
Traceback (most recent call last):
File "/data/work/tts/TTS-dev-231007/test1.py", line 67, in <module>
xtts_test()
File "/data/work/tts/TTS-dev-231007/test1.py", line 42, in xtts_test
outputs = model.synthesize(
File "/data/work/tts/TTS-dev-231007/TTS/tts/models/xtts1.py", line 451, in synthesize
return self.inference_with_config(text, config, ref_audio_path=speaker_wav, language=language, **kwargs)
File "/data/work/tts/TTS-dev-231007/TTS/tts/models/xtts1.py", line 473, in inference_with_config
return self.full_inference(text, ref_audio_path, language, **settings)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/data/work/tts/TTS-dev-231007/TTS/tts/models/xtts1.py", line 558, in full_inference
return self.inference(
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/data/work/tts/TTS-dev-231007/TTS/tts/models/xtts1.py", line 620, in inference
gpt_codes = self.gpt.generate(
File "/data/work/tts/TTS-dev-231007/TTS/tts/layers/xtts/gpt.py", line 546, in generate
gen = self.gpt_inference.generate(
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py", line 1648, in generate
return self.sample(
File "/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py", line 2730, in sample
outputs = self(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/work/tts/TTS-dev-231007/TTS/tts/layers/xtts/gpt_inference.py", line 97, in forward
transformer_outputs = self.transformer(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 900, in forward
outputs = block(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_transformer.py", line 171, in forward
self.attention(input,
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/deepspeed/ops/transformer/inference/ds_attention.py", line 154, in forward
qkv_out = self.qkv_func(input=input,
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/deepspeed/ops/transformer/inference/op_binding/qkv_gemm.py", line 82, in forward
output, norm = self.qkv_gemm_func(input, weight, q_scale, bias, gamma, beta, self.config.epsilon, add_bias,
ValueError: Specified device cuda:1 does not match device of data cuda:0
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
linux torch 2.0.1
```
### Additional context
_No response_
|
closed
|
2023-10-23T05:42:29Z
|
2023-12-04T03:45:19Z
|
https://github.com/coqui-ai/TTS/issues/3101
|
[
"bug",
"wontfix"
] |
hcl777
| 1
|
2noise/ChatTTS
|
python
| 768
|
Voice change fails: it always uses the same voice, and trying to change it has no effect
|
```py
spk_stat = torch.load('./asset/spk_stat.pt')
spk = torch.randn(768) * spk_stat.chunk(2)[0] + spk_stat.chunk(2)[1]
params_infer_code = ChatTTS.Chat.InferCodeParams(
spk_emb=spk, # add sampled speaker
temperature=.3, # using custom temperature
top_P=0.7, # top P decode
top_K=20, # top K decode
)
params_refine_text = ChatTTS.Chat.RefineTextParams( prompt='[oral_2][laugh_0][break_6]', )
logger.info("Start inference.")
wavs = chat.infer(
texts,
stream,
params_infer_code = params_infer_code,
params_refine_text=params_refine_text,
use_decoder=True,
)
```
It always produces a very deep, slow voice, and trying to change it has no effect. My code is shown above, but it does not work. Is there something wrong in my code?
|
closed
|
2024-09-29T10:03:47Z
|
2024-10-08T06:09:51Z
|
https://github.com/2noise/ChatTTS/issues/768
|
[
"documentation"
] |
Joseph513shen
| 1
|
scanapi/scanapi
|
rest-api
| 464
|
make change-version: create a readable PEP440-compatible dev version
|
## Feature request
### Description the feature
<!-- A clear and concise description of what the new feature is. -->
Make "make change-version" useful again by keeping the current release version, and adding a human readable and PEP440 compliant timestamp to it
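A minimal sketch of what such a version string could look like (the function name and timestamp format are illustrative assumptions, not scanapi's actual implementation). PEP 440 allows a numeric `.devN` segment, so a UTC timestamp works as `N` while staying human readable:

```python
from datetime import datetime, timezone

def dev_version(release: str) -> str:
    """Append a PEP 440-compliant, human-readable dev segment to a release."""
    # e.g. "1.0.5" -> "1.0.5.dev20240101123000"
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"{release}.dev{stamp}"
```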
|
closed
|
2021-07-31T17:28:33Z
|
2021-07-31T17:34:15Z
|
https://github.com/scanapi/scanapi/issues/464
|
[
"Automation"
] |
camilamaia
| 0
|
gee-community/geemap
|
streamlit
| 2,119
|
Some basemaps do not work.
|
Some of the basemaps listed by the code below do not work.
```
for basemap in geemap.basemaps.keys():
print(basemap)
```
Last July, I was able to use the following layers as basemaps. What changes could have occurred?
```
ESA Worldcover 2021
ESA Worldcover 2021 S2 FCC
ESA Worldcover 2021 S2 TCC
```
|
closed
|
2024-08-27T00:48:37Z
|
2024-08-27T02:16:43Z
|
https://github.com/gee-community/geemap/issues/2119
|
[
"bug"
] |
osgeokr
| 2
|
AutoGPTQ/AutoGPTQ
|
nlp
| 152
|
Using the bigcode/starcoderbase model after quantization by GPTQ, generation quality is poor and only <|endoftext|> is generated
|
Quantize config:
```
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=True,
)
```
Script: generation_speed.py (In examples)
```python
python generation_speed.py --model_name_or_path /data/starcoderbase-4bit --tokenizer_name_or_path bigcode/starcoderbase --use_fast_tokenizer
```
Executing logs:
```
2023-06-10 21:29:23 INFO [__main__] loading model and tokenizer
2023-06-10 21:29:24 INFO [auto_gptq.modeling._base] lm_head not been quantized, will be ignored when make_quant.
2023-06-10 21:29:29 WARNING [auto_gptq.modeling._base] GPTBigCodeGPTQForCausalLM hasn't fused attention module yet, will skip inject fused attention.
2023-06-10 21:29:29 WARNING [auto_gptq.modeling._base] GPTBigCodeGPTQForCausalLM hasn't fused mlp module yet, will skip inject fused mlp.
2023-06-10 21:29:29 INFO [__main__] model and tokenizer loading time: 6.2061s
2023-06-10 21:29:29 INFO [__main__] model quantized: True
2023-06-10 21:29:29 INFO [__main__] quantize config: {'bits': 4, 'group_size': 128, 'damp_percent': 0.01, 'desc_act': False, 'sym': True, 'true_sequential': True, 'model_name_or_path': '/data/workspace/code/generate-demo/models/quant/starcoderbase-4bit', 'model_file_base_name': 'gptq_model-4bit-128g'}
2023-06-10 21:29:29 INFO [__main__] model device map: {'transformer.wte': 0, 'transformer.wpe': 0, 'transformer.drop': 0, 'transformer.h.0': 0, 'transformer.h.1': 0, 'transformer.h.2': 0, 'transformer.h.3': 0, 'transformer.h.4': 0, 'transformer.h.5': 0, 'transformer.h.6': 0, 'transformer.h.7': 0, 'transformer.h.8': 0, 'transformer.h.9': 0, 'transformer.h.10': 0, 'transformer.h.11': 0, 'transformer.h.12': 0, 'transformer.h.13': 0, 'transformer.h.14': 0, 'transformer.h.15': 0, 'transformer.h.16': 0, 'transformer.h.17': 1, 'transformer.h.18': 1, 'transformer.h.19': 1, 'transformer.h.20': 1, 'transformer.h.21': 1, 'transformer.h.22': 1, 'transformer.h.23': 1, 'transformer.h.24': 1, 'transformer.h.25': 1, 'transformer.h.26': 1, 'transformer.h.27': 1, 'transformer.h.28': 1, 'transformer.h.29': 1, 'transformer.h.30': 1, 'transformer.h.31': 1, 'transformer.h.32': 1, 'transformer.h.33': 1, 'transformer.h.34': 1, 'transformer.h.35': 1, 'transformer.h.36': 1, 'transformer.h.37': 1, 'transformer.h.38': 1, 'transformer.h.39': 1, 'transformer.ln_f': 1, 'lm_head': 1}
2023-06-10 21:29:29 INFO [__main__] loading data
Downloading and preparing dataset generator/default to /nas/cache/huggingface/datasets/generator/default-4941558ec0c8d6ec/0.0.0...
Dataset generator downloaded and prepared to /nas/cache/huggingface/datasets/generator/default-4941558ec0c8d6ec/0.0.0. Subsequent calls will reuse this data.
2023-06-10 21:29:30 INFO [__main__] generation config: {'max_length': 20, 'max_new_tokens': 512, 'min_length': 0, 'min_new_tokens': 512, 'early_stopping': False, 'max_time': None, 'do_sample': False, 'num_beams': 1, 'num_beam_groups': 1, 'penalty_alpha': None, 'use_cache': True, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'epsilon_cutoff': 0.0, 'eta_cutoff': 0.0, 'diversity_penalty': 0.0, 'repetition_penalty': 1.0, 'encoder_repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'bad_words_ids': None, 'force_words_ids': None, 'renormalize_logits': False, 'constraints': None, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'forced_decoder_ids': None, 'num_return_sequences': 1, 'output_attentions': False, 'output_hidden_states': False, 'output_scores': False, 'return_dict_in_generate': False, 'pad_token_id': 0, 'bos_token_id': None, 'eos_token_id': None, 'encoder_no_repeat_ngram_size': 0, 'decoder_start_token_id': None, 'generation_kwargs': {}, '_from_model_config': False, 'transformers_version': '4.28.1'}
2023-06-10 21:29:30 INFO [__main__] benchmark generation speed
100%|███████████████████████████████| 10/10 [00:15<00:00, 1.59s/it, num_tokens=0, speed=0.0000tokens/s, time=1.38]
2023-06-10 21:29:46 INFO [__main__] generated 0 tokens using 15.898282289505005 seconds, generation speed: 0.0tokens/s
```
version: auto-gptq 0.3.0.dev0
cuda: 11.7
|
open
|
2023-06-10T13:45:29Z
|
2023-07-21T03:25:20Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/152
|
[
"bug"
] |
liulhdarks
| 6
|
fbdesignpro/sweetviz
|
pandas
| 43
|
Make font color of "missing" red
|
Suggestion:
This is an open suggestion that the font color of "missing" in report HTML would be nicer (IMHO) it has color red.
|
closed
|
2020-07-22T15:55:21Z
|
2020-11-24T15:13:41Z
|
https://github.com/fbdesignpro/sweetviz/issues/43
|
[
"feature request"
] |
bhishanpdl
| 4
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 339
|
Problem with demo_cli.py
|
C:\Users\Test\Desktop\Real-Time-Voice-Cloning-master>python demo_cli.py --low_mem
Traceback (most recent call last):
File "demo_cli.py", line 2, in <module>
from utils.argutils import print_args
File "C:\Users\Test\Desktop\Real-Time-Voice-Cloning-master\utils\argutils.py", line 2, in <module>
import numpy as np
ModuleNotFoundError: No module named 'numpy'
What am I doing wrong?
|
closed
|
2020-05-09T18:24:25Z
|
2020-08-24T04:06:28Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/339
|
[] |
NumpyG
| 3
|
littlecodersh/ItChat
|
api
| 311
|
Why can't the Turing robot's knowledge base be used?
|
itchat version: `1.3.5`.
程序:
```python
import itchat, time, re
from itchat.content import *
import urllib2,urllib
import json
@itchat.msg_register(TEXT,isGroupChat=True)
def text_reply(msg):
info = msg['Text'].encode('UTF-8')
url = 'http://www.tuling123.com/openapi/api'
data = {u"key": "xxxxxxxxxxxxx","info": info,u"loc": "", "userid": ""}
data = urllib.urlencode(data)
url2 = urllib2.Request(url, data)
response = urllib2.urlopen(url2)
apicontent = response.read()
s = json.loads(apicontent, encoding='utf-8')
print 's==',s
if s['code']== 100000:
itchat.send(s['text'], msg['FromUserName'])
itchat.auto_login(enableCmdQR=2,hotReload=True)
itchat.run(debug=True)
```
I enabled group chat when registering the handler, but the following problems occur in group chats:
1. It cannot answer questions using context.
2. It cannot use the Turing robot's knowledge base.
Thanks.
|
closed
|
2017-04-05T14:34:01Z
|
2017-04-25T03:45:09Z
|
https://github.com/littlecodersh/ItChat/issues/311
|
[
"question"
] |
dujbo
| 4
|
netbox-community/netbox
|
django
| 18,769
|
Bulk edit devices: Switching Site fails when devices are racked
|
### Deployment Type
Self-hosted
### NetBox Version
v4.2.4
### Python Version
3.11
### Steps to Reproduce
1. select the checkbox of a bunch of devices, e.g. [by device type](http://netbox.local/dcim/devices/?device_type_id=1234), that are in different sites and are assigned a rack.
2. click "edit selected" and change the site, selecting "set null" for location.
3. click "save".
### Expected Behavior
The Site should be updated for all selected devices, and the Rack should be cleared.
Alternatively, the rack should be able to be cleared manually, but there is no option for this.
### Observed Behavior
The following error toast is displayed in the lower right corner: `Rack oldlocation-rack1 does not belong to site newlocation.`
It is not possible to edit the Rack/set it to null on the bulk-edit devices page.
|
closed
|
2025-02-28T09:40:11Z
|
2025-03-03T08:39:43Z
|
https://github.com/netbox-community/netbox/issues/18769
|
[] |
uedvt359
| 3
|
flairNLP/flair
|
pytorch
| 2,760
|
Custom loss function in training or fine tunning of TARS model
|
Hello, is there a way to use a custom PolyLoss or focal loss for TARS model training?
|
closed
|
2022-05-12T08:17:45Z
|
2022-10-15T21:06:31Z
|
https://github.com/flairNLP/flair/issues/2760
|
[
"question",
"wontfix"
] |
Ujwal2910
| 1
|
comfyanonymous/ComfyUI
|
pytorch
| 6,912
|
Copy/paste bug - Ctrl+V pastes something copied a long time ago
|
### Expected Behavior
Ctrl+C to copy nodes, Ctrl+V to paste nodes.
### Actual Behavior
Ctrl+C to copy nodes, Ctrl+V to paste nodes, except it pastes nodes that were copied a long time ago (many restarts and many days earlier).
### Steps to Reproduce
This seems very difficult to reproduce, as it isn't tied to any workflow or any specific action (apart from everyday copy and paste of nodes).
### Debug Logs
```powershell
No debug logs as this is not logged behavior
```
### Other
_No response_
|
closed
|
2025-02-21T21:37:51Z
|
2025-02-24T00:10:46Z
|
https://github.com/comfyanonymous/ComfyUI/issues/6912
|
[
"Potential Bug"
] |
aikaizen
| 3
|
shroominic/funcchain
|
pydantic
| 32
|
How can I intercept the prompt sent to the backend
|
I'd like to get the exact string sent to the backend, after everything is filled in, processed, formatted, etc. How can I do that, please?
|
open
|
2024-04-14T18:31:43Z
|
2024-04-14T18:31:43Z
|
https://github.com/shroominic/funcchain/issues/32
|
[] |
AriMKatz
| 0
|
sigmavirus24/github3.py
|
rest-api
| 988
|
TypeError: 'NoneType' object is not subscriptable
|
## Version Information
Please provide:
- The version of Python you're using
Python 3.7.3
- The version of pip you used to install github3.py
conda 4.8.2. I used conda-forge to install github3.py
- The version of github3.py, requests, uritemplate, and dateutil installed
github3.py 1.3.0 py_0 conda-forge
requests 2.22.0 py37_1
uritemplate.py 3.0.2 py_1 conda-forge
didn't install dateutil
## Minimum Reproducible Example
Please provide an example of the code that generates the error you're seeing.
```
repo = gh.repository('pytoolz', 'toolz')
events = repo.issue_events()
events = list(events)
```
I am trying to get all issue events for a repo. But I got `TypeError`, I think it's because some json records don't have all the fields. In this case, it doesn’t have user[‘avatar_url’].
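A defensive pattern for this kind of payload (an illustrative plain-dict helper, not github3.py's actual internals) is to fall back to an empty mapping before indexing:

```python
def avatar_url(payload: dict):
    """Safely read payload["user"]["avatar_url"] when "user" is null or missing."""
    # GitHub sometimes returns null for the user object, so guard it
    user = payload.get("user") or {}
    return user.get("avatar_url")
```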


<!-- links -->
[search]: https://github.com/sigmavirus24/github3.py/issues?utf8=%E2%9C%93&q=is%3Aissue
|
open
|
2020-03-17T18:37:22Z
|
2021-01-16T09:45:11Z
|
https://github.com/sigmavirus24/github3.py/issues/988
|
[
"help wanted",
"bug"
] |
sophiamyang
| 2
|
mherrmann/helium
|
web-scraping
| 26
|
Check current URL
|
Hey,
Again great package.
Is there any way to get the current URL? I have a login page; if the user logs in successfully they are redirected, and I want to test that they are redirected to the right URL.
Thanks.
|
closed
|
2020-05-11T19:42:23Z
|
2020-05-12T05:51:20Z
|
https://github.com/mherrmann/helium/issues/26
|
[] |
shlomiLan
| 1
|
pallets/flask
|
flask
| 4,794
|
PEP 484 – Type Hints & DI
|
I think it would be useful to start using core Python functionality like type hints, so IDEs can be smarter about the code; in the end it saves everyone time. It would also be worth integrating some dependency injection.
I think the best example for both of these is FastAPI: it has both features, and because of that it attracted a lot of users and became popular. Both of these things are extremely useful and powerful, so they are worth adding.
There are more examples, like Starlite, etc.
|
closed
|
2022-08-31T15:07:48Z
|
2022-10-11T09:13:20Z
|
https://github.com/pallets/flask/issues/4794
|
[] |
Saphyel
| 2
|
python-gino/gino
|
sqlalchemy
| 174
|
mixins do not work
|
* GINO version: 0.6.1
* Python version: 3.6.0
* Operating System: ubuntu 17.10
### Description
I want to use a simple mixin to add common columns.
The added columns did not resolve to values after a query.
### What I Did
```
import asyncio
from gino import Gino
db = Gino()
class Audit:
created = db.Column(db.DateTime(timezone=True))
class Thing(Audit, db.Model):
__tablename__ = 'thing'
id = db.Column(db.Integer, primary_key=True)
async def main():
async with db.with_bind('postgresql://user:password@localhost:5432/db'):
await Thing.create(id='splunge')
thing = await Thing.query.where(Thing.id == 'splunge').gino.first()
print('name: {} created: {}'.format(thing.id, thing.created))
if __name__ == '__main__':
asyncio.get_event_loop().run_until_complete(main())
```
output:
> name: splunge created: (no name)
Accessing the created attribute returns a Column instance, not the value associated with the column.
|
closed
|
2018-03-23T15:26:08Z
|
2018-03-24T01:39:52Z
|
https://github.com/python-gino/gino/issues/174
|
[
"bug"
] |
aglassdarkly
| 1
|
streamlit/streamlit
|
python
| 10,648
|
adding the help parameter to a (Button?) widget pads it weirdly instead of staying to the left.
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
```python
st.page_link('sub_pages/resources/lesson_plans.py', label='Lesson Plans', icon=':material/docs:', help='View and download lesson plans')
```
## Unexpected Outcome

---
## Expected

### Reproducible Code Example
```Python
import streamlit as st
st.page_link('sub_pages/resources/lesson_plans.py', label='Lesson Plans', icon=':material/docs:', help='View and download lesson plans')
```
### Steps To Reproduce
1. Add help to a button widget
### Expected Behavior
Button keeps help text and is on the left

### Current Behavior
Button keeps help text but is on the right

### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.43.0
- Python version: 3.12.2
- Operating System: Windows 11
- Browser: Chrome
### Additional Information
> Yes, this used to work in a previous version.
1.42.0 works
|
closed
|
2025-03-05T09:56:06Z
|
2025-03-07T21:21:05Z
|
https://github.com/streamlit/streamlit/issues/10648
|
[
"type:bug",
"status:confirmed",
"priority:P1",
"feature:st.download_button",
"feature:st.button",
"feature:st.link_button",
"feature:st.page_link"
] |
thehamish555
| 2
|
ClimbsRocks/auto_ml
|
scikit-learn
| 175
|
test for gs_params and training_params
|
open
|
2017-03-13T20:48:31Z
|
2017-03-13T20:48:31Z
|
https://github.com/ClimbsRocks/auto_ml/issues/175
|
[] |
ClimbsRocks
| 0
|
|
plotly/dash
|
data-science
| 2,496
|
Failure in Hot reload & clientside callbacks
|
Hot reload doesn't work with clientside callbacks, the frontend is reloaded but the new functions are not available and the browser must be refreshed to take effect.
|
open
|
2023-04-04T12:58:53Z
|
2024-08-13T19:30:57Z
|
https://github.com/plotly/dash/issues/2496
|
[
"bug",
"P3"
] |
T4rk1n
| 3
|
InstaPy/InstaPy
|
automation
| 6,024
|
follow_commenters Engine
|
Hi, I'm currently using InstaPy to grow my Instagram account and it's a pretty good solution, but I have some questions!
My first question: is it possible to tell the bot to re-execute some actions?
Situation
I ask my bot to follow 10 users, excluding private and business accounts; in the end my bot followed only 2 users. How can I tell it to "loop", grab other users, and continue?
Secondly, is it possible to add a filter on comments with the 'follow_commenters' function? For example, to select only users whose comments contain a '@'?
Thanks :)
|
open
|
2021-01-14T09:29:41Z
|
2021-07-21T03:19:05Z
|
https://github.com/InstaPy/InstaPy/issues/6024
|
[
"wontfix"
] |
jonathanroze
| 1
|
sinaptik-ai/pandas-ai
|
data-science
| 1,390
|
Misleading license classifier in Python package?
|
Dear devs,
I noticed on PyPI.org that the `MIT` license classifier is set for the pandasai Python package.
The readme on the same page, near the bottom, on the other hand, mentions an exception for the `/ee` subfolder, which is also included in the Python package.
This got me confused...
Shouldn't the classifier be set to `Other/Proprietary License` in that case? Or both, MIT and Other, if that's possible?


|
closed
|
2024-10-10T07:43:15Z
|
2025-01-27T16:00:14Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/1390
|
[] |
dynobo
| 3
|
deeppavlov/DeepPavlov
|
tensorflow
| 1,684
|
Update to pydantic v2
|
I am running into issues using deeppavlov in combination with other packages that use pydantic>2, as deeppavlov has the requirement pydantic<2.
I saw this post https://github.com/deeppavlov/DeepPavlov/pull/1665 which seemed to indicate that the dependency could be updated, but currently the latest release still specifies pydantic<2.
Would it be possible to update the pydantic dependency?
|
open
|
2024-03-22T10:01:00Z
|
2024-03-22T10:01:00Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1684
|
[
"enhancement"
] |
robinderat
| 0
|
trevorstephens/gplearn
|
scikit-learn
| 7
|
Feature Request: Saving / Loading a Model
|
In many cases one invests significant time/effort to fit/train the GP model. Unfortunately, without a method of "saving" the model and "reloading" it from its save state, one is required to re-do the training of the model form its initial state. This forces one to spend the same (significant) time/effort.
It would be ideal to:
(a) save the state of the model to a file
(b) load a model from a (state) file
(c) Incrementally update the loaded model with new data
This is a feature request.
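Since gplearn estimators follow the scikit-learn API, (a) and (b) can usually be covered with standard pickling. This is a sketch under the assumption that a fitted estimator is picklable; (c), incremental updates, would need separate warm-start support:

```python
import pickle

def save_model(model, path: str) -> None:
    # Persist any picklable estimator to disk
    with open(path, "wb") as f:
        pickle.dump(model, f)

def load_model(path: str):
    # Restore the estimator exactly as it was saved
    with open(path, "rb") as f:
        return pickle.load(f)
```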
|
closed
|
2016-03-03T17:44:30Z
|
2017-03-29T09:37:03Z
|
https://github.com/trevorstephens/gplearn/issues/7
|
[
"enhancement"
] |
ericbroda
| 8
|
sczhou/CodeFormer
|
pytorch
| 388
|
Eye iris deformed
|
Thanks to the author for the excellent work. I have issues, though, with the irises, which in some cases are badly formed. Samples:


any idea how to correct them?
Help is so appreciated!
|
open
|
2024-07-10T02:43:33Z
|
2024-09-06T04:51:42Z
|
https://github.com/sczhou/CodeFormer/issues/388
|
[] |
branaway
| 3
|
zappa/Zappa
|
flask
| 740
|
[Migrated] Getting 'AttributeError' Exception Thrown
|
Originally from: https://github.com/Miserlou/Zappa/issues/1864 by [canada4663](https://github.com/canada4663)
I am getting uncaught AttributeError thrown at the line below:
https://github.com/Miserlou/Zappa/blob/3ccf7490a8d8b8fa74a61ee39bf44234f3567739/zappa/handler.py#L309
It seems that the code is designed to catch exceptions where the 'command' key doesn't exist; however, if the message isn't an object with a `get` method and is only a `str`, this fails.
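A defensive type check avoids the `AttributeError` for string payloads. This is a hypothetical sketch of the idea, not Zappa's actual handler code:

```python
# Hypothetical sketch: only call .get() on dict-like messages.
def get_command(message):
    """Return the 'command' value only when the message is dict-like."""
    if isinstance(message, dict):
        return message.get("command")
    return None  # plain strings (or other types) carry no command

print(get_command({"command": "run"}))  # -> run
print(get_command("just a string"))     # -> None
```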
|
closed
|
2021-02-20T12:41:37Z
|
2022-07-16T06:23:36Z
|
https://github.com/zappa/Zappa/issues/740
|
[] |
jneves
| 1
|
flasgger/flasgger
|
api
| 21
|
Hardcoded optional_fields
|
[Optional fields for operation definition are hardcoded.](https://github.com/rochacbruno/flasgger/blob/master/flasgger/base.py#L228)
So, if you want to add some non-standard fields (I want to generate swagger definition for Amazon API Gateway, so I have to use [their extensions to swagger](http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-swagger-extensions.html)) you have to patch it, using inheritance or monkey-patching.
It is very inconvenient. I suggest allowing this list to be overridden in the config, or simply allowing any optional fields without filtering (perhaps by adding a "strict"/"non-strict" switch to the config, where "non-strict" means any optional fields are allowed).
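The proposed switch could behave like the sketch below. This is not flasgger's real API, just an illustration of the suggested "strict"/"non-strict" behaviour, with an example field list:

```python
# Hypothetical whitelist, standing in for flasgger's hardcoded list.
OPTIONAL_FIELDS = ["tags", "consumes", "produces", "schemes", "security", "deprecated"]

def filter_optional(definition, strict=True):
    """Drop non-whitelisted fields when strict; pass everything through otherwise."""
    if strict:
        return {k: v for k, v in definition.items() if k in OPTIONAL_FIELDS}
    return dict(definition)

spec = {"tags": ["pets"], "x-amazon-apigateway-integration": {"type": "aws"}}
print(filter_optional(spec))                # -> {'tags': ['pets']}
print(filter_optional(spec, strict=False))  # keeps the AWS extension too
```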
|
closed
|
2016-04-16T18:12:01Z
|
2017-03-24T20:15:10Z
|
https://github.com/flasgger/flasgger/issues/21
|
[] |
vovanz
| 1
|
ageitgey/face_recognition
|
python
| 1,239
|
face recognition is very slow
|
* face_recognition version: .1.3
* Python version: 3.7
* Operating System: RPI buster
### Description
Use face_recognition for matching a face against a known photo database.
That means trying to find the name of the detected person from the known persons' photo DB.
### What I Did
Code is here
```python
frame = cv2.flip(frame, 1)
rgb = frame
boxes = face_recognition.face_locations(rgb, model="hog")
if len(boxes) > 0:
    encodings = face_recognition.face_encodings(rgb)
else:
    return "Unknown"
names = []
# loop over the facial embeddings
for encoding in encodings:
    matches = face_recognition.compare_faces(self.data["encodings"], encoding)
    name = "Unknown"
    if True in matches:
        matchedIdxs = [i for (i, b) in enumerate(matches) if b]
        counts = {}
        for i in matchedIdxs:
            name = self.data["names"][i]
            counts[name] = counts.get(name, 0) + 1
        name = max(counts, key=counts.get)
    # update the list of names
    names.append(name)
return names[0]
```
Where can we make changes to speed this up?
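Two common speedups for this pattern: pass the already-found `boxes` to `face_encodings` via its `known_face_locations` parameter (so faces are not detected a second time), and run `face_locations` on a downscaled copy of the frame, then scale the boxes back up. The helper below sketches the coordinate rescaling in pure Python; the downscale factor and the surrounding pipeline are assumptions, not code from this repo:

```python
def scale_boxes(boxes, factor):
    """Scale (top, right, bottom, left) boxes found on a downscaled frame
    back to the original frame's coordinates."""
    return [tuple(int(v * factor) for v in box) for box in boxes]

# e.g. detection ran on a frame resized to 1/4 of the original size
small_boxes = [(50, 100, 100, 50)]
print(scale_boxes(small_boxes, 4))  # -> [(200, 400, 400, 200)]
```

With the rescaled boxes in hand, the expensive step becomes `face_recognition.face_encodings(rgb, boxes)` instead of re-running detection inside `face_encodings`.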
|
open
|
2020-10-31T17:48:27Z
|
2021-08-21T13:33:01Z
|
https://github.com/ageitgey/face_recognition/issues/1239
|
[] |
naushadck
| 3
|