| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
pandas-dev/pandas
|
pandas
| 60,429
|
DOC: Missing 'pickleshare' package when running 'sphinx-build' command
|
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://github.com/pandas-dev/pandas/blob/106f33cfce16f4e08f6ca5bd0e6e440ec9a94867/requirements-dev.txt#L26
### Documentation problem
After installing `requirements-dev.txt` and `pandas` from source, I tried to run the `sphinx-build` command to build the pandas docs. However, I noticed many warnings like the following:
```bash
UserWarning: This is now an optional IPython functionality, using bookmarks requires you to install the `pickleshare` library.
bkms = self.shell.db.get('bookmarks',{})
UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir'
```
The following are the commands I used:
```bash
git clone --branch=main https://github.com/pandas-dev/pandas.git pandas-main
cd pandas-main
git log -1 --pretty=format:"%H%n%s"
conda create --prefix ./.venv --channel conda-forge --yes
conda activate ./.venv
conda install python=3.10 --yes
export PYTHONNOUSERSITE=1
python -m pip install --requirement=requirements-dev.txt --progress-bar=off --verbose
python -m pip install . --no-build-isolation --no-deps -Csetup-args=--werror --progress-bar=off --verbose
export LANG=en_US.UTF-8
sphinx-build -b html -v -j 4 -c doc/source doc/source doc/build/html
```
Log file of the above commands:
[log-sphinx-build-pandas-docs.txt](https://github.com/user-attachments/files/17928982/log-sphinx-build-pandas-docs.txt)
### Suggested fix for documentation
It seems that this issue is caused by https://github.com/ipython/ipython/issues/14237. Therefore, my suggested fix is to add the `pickleshare` requirement to `environment.yml` and `requirements-dev.txt`.
Just like NumPy demonstrated here: [requirements/doc_requirements.txt#L12-L14](https://github.com/numpy/numpy/blob/4e8f724fbc136b1bac1c43e24d189ebc45e056eb/requirements/doc_requirements.txt#L12-L14)
### Versions and Platforms
- OS version: Kubuntu 24.04
- Current Branch: [`main`](https://github.com/numpy/numpy/tree/main)
- Latest Commit: 106f33cfce16f4e08f6ca5bd0e6e440ec9a94867
- Conda version: `24.9.2`
- Python version: `3.10.15`
- iPython version: `8.29.0`
|
closed
|
2024-11-27T04:36:42Z
|
2024-12-03T21:21:21Z
|
https://github.com/pandas-dev/pandas/issues/60429
|
[
"Build",
"Docs"
] |
hwhsu1231
| 4
|
mirumee/ariadne
|
graphql
| 80
|
Make Resolver type definition more accurate
|
Our current type for `Resolver` is `Callable[..., Any]`, accepting any arguments and returning anything.
This is because it's currently impossible to type a `Callable` that accepts `*args` or `**kwargs`.
This issue is [known to the MyPy authors](https://github.com/python/typing/issues/264), but at the time of writing no solution is available.
This is related to #79
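If it helps the discussion, here is a hedged sketch (not an agreed design) of how a `Protocol`-based alternative could look; the positional parameter names and the `GraphQLResolveInfo` import below are assumptions for illustration only:
```python
# Hedged sketch only: a callback protocol as a possible tighter Resolver type.
# The (obj, info, **kwargs) shape and the graphql-core info type are assumptions.
from typing import Any, Protocol

from graphql import GraphQLResolveInfo


class Resolver(Protocol):
    def __call__(self, obj: Any, info: GraphQLResolveInfo, **kwargs: Any) -> Any:
        ...
```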
|
open
|
2018-12-13T16:32:26Z
|
2022-02-21T18:53:56Z
|
https://github.com/mirumee/ariadne/issues/80
|
[
"enhancement"
] |
rafalp
| 2
|
ResidentMario/missingno
|
pandas
| 62
|
Suggest adding test coverage and code quality trackers
|
[landscape.io](https://landscape.io) is helpful for linting and tracking the code quality of your package.
[coveralls.io](https://coveralls.io) is helpful for tracking test coverage for your package.
Both are free services for public repos and can be paired with the continuous integration services mentioned in #61.
|
closed
|
2018-01-31T03:23:50Z
|
2020-04-09T21:12:43Z
|
https://github.com/ResidentMario/missingno/issues/62
|
[] |
rhiever
| 0
|
ivy-llc/ivy
|
numpy
| 28,340
|
Fix Frontend Failing Test: jax - math.tensorflow.math.argmin
|
To-do List: https://github.com/unifyai/ivy/issues/27496
|
closed
|
2024-02-20T08:14:20Z
|
2024-02-20T10:21:20Z
|
https://github.com/ivy-llc/ivy/issues/28340
|
[
"Sub Task"
] |
Sai-Suraj-27
| 0
|
scanapi/scanapi
|
rest-api
| 474
|
Move Wiki to Read The Docs
|
## Feature request
### Describe the feature
Move the Wiki to Read the Docs, to have everything in a single place. It should be easier for contributors.
We need to wait for #469 to be done first.
|
open
|
2021-08-01T14:54:58Z
|
2021-08-01T14:55:20Z
|
https://github.com/scanapi/scanapi/issues/474
|
[
"Documentation",
"Multi Contributors",
"Status: Blocked",
"Wiki"
] |
camilamaia
| 0
|
agronholm/anyio
|
asyncio
| 671
|
`ClosedResourceError` vs. `BrokenResourceError` is sometimes backend-dependent, and is sometimes not raised at all (in favor of a non-AnyIO exception type)
|
### Things to check first
- [X] I have searched the existing issues and didn't find my bug already reported there
- [X] I have checked that my bug is still present in the latest release
### AnyIO version
main
### Python version
CPython 3.12.1
### What happened?
see the title. here are some reproducers for some issues that i noticed while working on a fix for #669: https://github.com/agronholm/anyio/compare/c6f0334e67818b90540dac20815cad9e0b2c7eee...b6576ae16a9b055e8109c8d2ca81e00bf439cb3f
the main question that these tests raise is: if a send stream is in a `Broken` state (i.e. our side of the connection learned that it was closed by the peer, or otherwise broken due to something external) and then is explicitly `aclose()`d by our side, should subsequent calls raise `BrokenResourceError` or should they raise `ClosedResourceError`? i.e. if you `aclose()` a `Broken` send stream, does that "clear" its `Broken`ness and convert it to just being `Closed`?
the documentation does not seem to discuss what should happen if the conditions for `ClosedResourceError` and `BrokenResourceError` are _both_ met. i am not sure what the behavior here is intended to be (or if it's intended to be undefined behavior?), as currently different streams do different things in this situation, with the behavior sometimes also varying between backends.
my initial intuition was that the intended behavior was to give precedence to `raise BrokenResourceError` over `raise ClosedResourceError` (this choice prevents the stream from changing from `Broken` to `Closed`). I thought this because this behavior looks like it was rather explicitly chosen when implementing the "important" asyncio-backend streams (TCP and UDP): they explicitly do _not_ set `self._closed` if they are already closing due to an external cause:
* `SocketStream` https://github.com/agronholm/anyio/blob/c6efbe352705529123d55f87d6dbb366a3e0612f/src/anyio/_backends/_asyncio.py#L1173-L1184
* `UDPStream` https://github.com/agronholm/anyio/blob/c6efbe352705529123d55f87d6dbb366a3e0612f/src/anyio/_backends/_asyncio.py#L1459-L1463
so I started to implement it: here is most of an implementation of that behavior: https://github.com/agronholm/anyio/compare/c6f0334e67818b90540dac20815cad9e0b2c7eee...1be403d20f94a8a6522f27f24a1830f2351aab3b [^1]
[^1]: note: github shows these commits out of order as it's sorting based on time rather than doing a correct topological sort, so it may be easier to look at these locally, in order.
however, `MemoryObjectSendStream` is also an "important" stream and it has the opposite behavior, even on the asyncio backend.
### How can we reproduce the bug?
see above
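As a convenience, here is a minimal self-contained probe (my sketch, not taken from the linked branches) that shows the question in code for `MemoryObjectSendStream`; it only prints which exception each attempt raises rather than asserting a particular answer:
```python
# Sketch: probe which exception a MemoryObjectSendStream raises once it is
# broken (peer closed) and then explicitly aclose()d by our side.
import anyio


async def main() -> None:
    send, receive = anyio.create_memory_object_stream(max_buffer_size=1)
    await receive.aclose()  # the peer goes away -> the send side is now "broken"

    for attempt in ("while broken", "after aclose()"):
        try:
            await send.send("item")
        except (anyio.BrokenResourceError, anyio.ClosedResourceError) as exc:
            print(f"send {attempt}: {type(exc).__name__}")
        if attempt == "while broken":
            await send.aclose()  # does this convert Broken into Closed?


anyio.run(main)
```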
|
open
|
2024-01-17T08:26:06Z
|
2025-03-24T11:18:49Z
|
https://github.com/agronholm/anyio/issues/671
|
[
"bug"
] |
gschaffner
| 0
|
JaidedAI/EasyOCR
|
pytorch
| 404
|
Tajik Language
|
Here's the needed data. Can you tell me when your OCR will be able to learn it, so I can use it? Thank you!
[easyocr.zip](https://github.com/JaidedAI/EasyOCR/files/6220472/easyocr.zip)
|
closed
|
2021-03-29T08:59:30Z
|
2021-03-31T01:36:32Z
|
https://github.com/JaidedAI/EasyOCR/issues/404
|
[] |
KhayrulloevDD
| 1
|
deepinsight/insightface
|
pytorch
| 2,461
|
Has the Wild Anti-Spoofing dataset stopped being provided?
|
Hello,
I'm interested in the Anti-Spoofing dataset.
A month ago, I submitted the application to get permission to use the dataset.
However, I haven't received any response.
Do you still provide the dataset?
Thanks.
|
closed
|
2023-10-25T08:12:30Z
|
2023-11-17T01:53:53Z
|
https://github.com/deepinsight/insightface/issues/2461
|
[] |
ralpyna
| 7
|
HumanSignal/labelImg
|
deep-learning
| 515
|
Draw Squares Resets
|
OS: Windows 7 (x64)
PyQt version: Installed labelImg from binary; otherwise PyQt5
labelImg binary version: v1.8.1
Issue: I set the "Draw Squares" setting in 'Edit'. However, when scrolling in (CTRL* + Scroll wheel) the setting is disabled even though it is still checked in 'Edit'. Upon further inspection, I discovered that simply pressing CTRL disables the setting.
If I recheck the setting in 'Edit', the setting is re-enabled until I press CTRL again.
Steps to reproduce: Enable Draw Squares, press CTRL
*CTRL refers to the left CTRL button
|
open
|
2019-10-25T14:27:42Z
|
2019-10-25T14:27:42Z
|
https://github.com/HumanSignal/labelImg/issues/515
|
[] |
JulianOrteil
| 0
|
jupyter/docker-stacks
|
jupyter
| 1,520
|
Failed write to /etc/passwd
|
From @bilke:
> @maresb Does this part got lost?
>
> I am asking because the following does not work anymore:
>
```bash
$ docker run --rm -p 8888:8888 -v $PWD:/home/jovyan/work --user `id -u $USER` \
--group-add users my_image
Running: start.sh jupyter lab
id: cannot find name for user ID 40841
WARNING: container must be started as root to change the desired user's name with NB_USER!
WARNING: container must be started as root to change the desired user's id with NB_UID!
WARNING: container must be started as root to change the desired user's group id with NB_GID!
There is no entry in /etc/passwd for our UID. Attempting to fix...
Renaming old jovyan user to nayvoj (1000:100)
sed: couldn't open temporary file /etc/sedAELey6: Permission denied
```
> Before removal of this section it printed:
```
Adding passwd file entry for 40841
```
_Originally posted by @bilke in https://github.com/jupyter/docker-stacks/pull/1512#discussion_r746462652_
|
closed
|
2021-11-10T11:14:43Z
|
2021-12-15T10:23:30Z
|
https://github.com/jupyter/docker-stacks/issues/1520
|
[] |
maresb
| 4
|
numpy/numpy
|
numpy
| 28,394
|
ENH: Compiled einsum path support
|
### Proposed new feature or change:
`einsum_path` pre-analyzes the best contraction strategy for an Einstein sum; fine. `einsum` can accept that contraction strategy; also fine. I imagine that this works well in the case that there are very few, very large calls to `einsum`.
Where this breaks down is the case where the input is smaller or the calls are more frequent, making the overhead of the Python side of `einsum` dominant when `optimize` is not `False`. This effect can get really bad and easily overwhelm any benefit of the planned `einsum_path` contraction.
Another issue is that the planned contraction implies a specific subscript string, but that string needs to be passed again to `einsum`; this API allows for a mismatch between the subscript and optimize arguments.
I can imagine a few solutions, but the one that's currently my favourite is a simple loop evaluation -> AST -> compiled set of calls to `tensordot` and `c_einsum`. In my testing this solves all of the problems I described above.
For a vile patch-hack that demonstrates the concept and does function correctly but is probably not the best way to implement this (and also does not have `out` support):
```python
import ast
import logging
import typing

import numpy as np

logger = logging.getLogger('contractions')

if typing.TYPE_CHECKING:
    class Contract(typing.Protocol):
        def __call__(self, *operands: np.ndarray) -> np.ndarray: ...


def build_contraction(
        name: str, subscripts: str, *shapes: tuple[int, ...],
        optimize: typing.Literal['greedy', 'optimal'] = 'greedy',
) -> 'Contract':
    """
    This is a wrapper for the numpy.einsum and numpy.einsum_path pair of functions to de-duplicate
    some parameters. It represents an Einstein tensor contraction that has been pre-planned to
    perform as well as possible.
    If we simply call einsum() with no optimization, the performance is "good" because it skips
    straight to a naive but efficient c_einsum.
    If we call einsum_path and then pass the calculated contraction to einsum(), the performance is
    awful because there's a lot of overhead in the contraction stage loops. The only way to
    alleviate this is to have einsum() run the contraction once to sort out its stage logic and
    essentially cache that procedure so that actual runs simply make calls to c_einsum() or
    tensordot().
    """
    def get_index(a: np.ndarray | object) -> int:
        # simple .index() does not work when `a` is an ndarray
        return next(i for i, x in enumerate(operands) if x is a)

    def c_einsum(intermediate_subscripts: str, *intermediate_operands: np.ndarray, **kwargs) -> object:
        """
        This is a patched-in c_einsum that accepts intermediate arguments from the real einsum()
        but, rather than calling the actual sum, constructs a planned call for our compiled
        contraction.
        """
        if kwargs:
            raise NotImplementedError()
        # xi = c_einsum(intermediate_subscripts, *operands)
        body.append(ast.Assign(
            targets=[ast.Name(id=f'x{len(operands)}', ctx=ast.Store())],
            value=ast.Call(
                func=ast.Name(id='c_einsum', ctx=ast.Load()),
                args=[ast.Constant(value=intermediate_subscripts)]
                + [
                    ast.Name(id=f'x{get_index(o)}', ctx=ast.Load())
                    for o in intermediate_operands
                ],
            ),
        ))
        # This is a placeholder sentinel only. It's used to identify inputs to subsequent einsum or tensordot calls.
        out = object()
        operands.append(out)
        return out

    def tensordot(a: np.ndarray, b: np.ndarray, axes: tuple[tuple[int, ...], ...]) -> object:
        """
        This is a patched-in tensordot that accepts intermediate arguments from the real einsum()
        but, rather than calling the actual dot, constructs a planned call for our compiled
        contraction.
        """
        # xi = tensordot(a, b, axes=axes)
        body.append(ast.Assign(
            targets=[ast.Name(id=f'x{len(operands)}', ctx=ast.Store())],
            value=ast.Call(
                func=ast.Name(id='tensordot', ctx=ast.Load()),
                args=[
                    ast.Name(id=f'x{get_index(a)}', ctx=ast.Load()),
                    ast.Name(id=f'x{get_index(b)}', ctx=ast.Load()),
                ],
                keywords=[ast.keyword(arg='axes', value=ast.Constant(value=axes))],
            ),
        ))
        # This is a placeholder sentinel only. It's used to identify inputs to subsequent einsum or tensordot calls.
        out = object()
        operands.append(out)
        return out

    # These are needed for einsum_path() and einsum() to perform planning, but the actual content
    # doesn't matter, only the shape.
    fake_arrays = [np.empty(shape) for shape in shapes]

    # Build and describe the optimized contraction path plan.
    contraction, desc = np.einsum_path(subscripts, *fake_arrays, optimize=optimize)
    logger.debug('%s contraction: \n%s', name, desc)

    # This will be mutated by our monkeypatched functions and assumes that each element is
    # accessible by an indexed-like variable, i.e. x0, x1... in this list's order.
    operands: list[np.ndarray | object] = list(fake_arrays)

    # AST statements in the function body; will be mutated by the monkeypatched functions
    body = []

    # Preserve the old numerical backend functions
    old_c_einsum = np._core.einsumfunc.c_einsum
    old_tensordot = np._core.einsumfunc.tensordot
    try:
        # Monkeypatch in our substitute functions
        np._core.einsumfunc.c_einsum = c_einsum
        np._core.einsumfunc.tensordot = tensordot
        # Run the real einsum() with fake data and fake numerical backend.
        np.einsum(subscripts, *fake_arrays, optimize=contraction)
    finally:
        # Restore the old numerical backend functions
        np._core.einsumfunc.c_einsum = old_c_einsum
        np._core.einsumfunc.tensordot = old_tensordot

    # The AST function representation; will always have the same name
    func = ast.FunctionDef(
        name='contraction',
        args=ast.arguments(
            args=[ast.arg(arg=f'x{i}') for i in range(len(shapes))],
        ),
        body=body + [ast.Return(
            value=ast.Name(id=f'x{len(operands) - 1}', ctx=ast.Load()),
        )],
    )
    ast.fix_missing_locations(func)  # Add line numbers
    code = compile(  # Compile to an in-memory anonymous module
        source=ast.Module(body=[func]), filename='<compiled tensor contraction>',
        mode='exec', flags=0, dont_inherit=1, optimize=2,
    )
    globs = {  # Globals accessible to the module
        '__builtins__': {},  # no built-ins
        'c_einsum': old_c_einsum,  # real numerical backends
        'tensordot': old_tensordot,
    }
    exec(code, globs)  # Evaluate the code, putting the function in globs
    return globs['contraction']  # Return the function reference
```
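For illustration, a hedged usage sketch of the hack above; the subscripts and shapes are arbitrary, and it assumes the `np._core` module layout (NumPy 2.x) used in the patch:
```python
# Hypothetical usage of build_contraction(): plan once, then call many times.
import numpy as np

contract = build_contraction(
    'demo', 'ij,jk,kl->il', (8, 16), (16, 32), (32, 4), optimize='greedy',
)
a, b, c = np.random.rand(8, 16), np.random.rand(16, 32), np.random.rand(32, 4)
result = contract(a, b, c)
assert np.allclose(result, np.einsum('ij,jk,kl->il', a, b, c))
```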
|
open
|
2025-02-26T20:23:41Z
|
2025-03-16T19:31:39Z
|
https://github.com/numpy/numpy/issues/28394
|
[
"01 - Enhancement"
] |
gtoombs-avidrone
| 11
|
Guovin/iptv-api
|
api
| 438
|
Docker container exits immediately
|
Running tv-driver on iKuai (爱快) Docker, with a run command similar to:
```
docker run -v /etc/docker/config:/tv-driver/config -v /etc/docker/output:/tv-driver/output -d -p 8000:8000 guovern/tv-driver
```
It crashes immediately; the log is as follows:
```
cron: unrecognized service
/tv-driver/updates/fofa/request.py:73: TqdmMonitorWarning: tqdm:disabling monitor support (monitor_interval = 0) due to:
can't start new thread
pbar = tqdm_asyncio(
Processing fofa for hotel: 0% 0/45 [00:00<?, ?it/s]Traceback (most recent call last):
File "/tv-driver/main.py", line 273, in <module>
scheduled_task()
File "/tv-driver/main.py", line 259, in scheduled_task
loop.run_until_complete(update_source.start())
File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/tv-driver/main.py", line 244, in start
await self.main()
File "/tv-driver/main.py", line 148, in main
await self.visit_page(channel_names)
File "/tv-driver/main.py", line 117, in visit_page
setattr(self, result_attr, await task)
File "/tv-driver/updates/fofa/request.py", line 155, in get_channels_by_fofa
futures = [
File "/tv-driver/updates/fofa/request.py", line 156, in <listcomp>
executor.submit(process_fofa_channels, fofa_url) for fofa_url in fofa_urls
File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 188, in submit
self._adjust_thread_count()
File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 213, in _adjust_thread_count
t.start()
File "/usr/local/lib/python3.8/threading.py", line 852, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/tv-driver-D9SmWF1i/lib/python3.8/site-packages/tqdm/_monitor.py", line 44, in exit
self.join()
File "/usr/local/lib/python3.8/threading.py", line 1006, in join
raise RuntimeError("cannot join thread before it is started")
RuntimeError: cannot join thread before it is started
Processing fofa for hotel: 0% 0/45 [00:00<?, ?it/s]
/tv_entrypoint.sh: line 15: gunicorn: command not found
cron: unrecognized service
/tv-driver/updates/fofa/request.py:73: TqdmMonitorWarning: tqdm:disabling monitor support (monitor_interval = 0) due to:
can't start new thread
pbar = tqdm_asyncio(
Processing fofa for hotel: 0% 0/45 [00:00<?, ?it/s]Traceback (most recent call last):
File "/tv-driver/main.py", line 273, in <module>
scheduled_task()
File "/tv-driver/main.py", line 259, in scheduled_task
loop.run_until_complete(update_source.start())
File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/tv-driver/main.py", line 244, in start
await self.main()
File "/tv-driver/main.py", line 148, in main
await self.visit_page(channel_names)
File "/tv-driver/main.py", line 117, in visit_page
setattr(self, result_attr, await task)
File "/tv-driver/updates/fofa/request.py", line 155, in get_channels_by_fofa
futures = [
File "/tv-driver/updates/fofa/request.py", line 156, in <listcomp>
executor.submit(process_fofa_channels, fofa_url) for fofa_url in fofa_urls
File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 188, in submit
self._adjust_thread_count()
File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 213, in _adjust_thread_count
t.start()
File "/usr/local/lib/python3.8/threading.py", line 852, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/tv-driver-D9SmWF1i/lib/python3.8/site-packages/tqdm/_monitor.py", line 44, in exit
self.join()
File "/usr/local/lib/python3.8/threading.py", line 1006, in join
raise RuntimeError("cannot join thread before it is started")
RuntimeError: cannot join thread before it is started
Processing fofa for hotel: 0% 0/45 [00:00<?, ?it/s]
/tv_entrypoint.sh: line 15: gunicorn: command not found
cron: unrecognized service
/tv-driver/updates/fofa/request.py:73: TqdmMonitorWarning: tqdm:disabling monitor support (monitor_interval = 0) due to:
can't start new thread
pbar = tqdm_asyncio(
Processing fofa for hotel: 0% 0/45 [00:00<?, ?it/s]Traceback (most recent call last):
File "/tv-driver/main.py", line 273, in <module>
scheduled_task()
File "/tv-driver/main.py", line 259, in scheduled_task
loop.run_until_complete(update_source.start())
File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/tv-driver/main.py", line 244, in start
await self.main()
File "/tv-driver/main.py", line 148, in main
await self.visit_page(channel_names)
File "/tv-driver/main.py", line 117, in visit_page
setattr(self, result_attr, await task)
File "/tv-driver/updates/fofa/request.py", line 155, in get_channels_by_fofa
futures = [
File "/tv-driver/updates/fofa/request.py", line 156, in <listcomp>
executor.submit(process_fofa_channels, fofa_url) for fofa_url in fofa_urls
File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 188, in submit
self._adjust_thread_count()
File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 213, in _adjust_thread_count
t.start()
File "/usr/local/lib/python3.8/threading.py", line 852, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/tv-driver-D9SmWF1i/lib/python3.8/site-packages/tqdm/_monitor.py", line 44, in exit
self.join()
File "/usr/local/lib/python3.8/threading.py", line 1006, in join
raise RuntimeError("cannot join thread before it is started")
RuntimeError: cannot join thread before it is started
Processing fofa for hotel: 0% 0/45 [00:00<?, ?it/s]
/tv_entrypoint.sh: line 15: gunicorn: command not found
```
|
closed
|
2024-10-22T12:33:22Z
|
2024-11-05T08:32:53Z
|
https://github.com/Guovin/iptv-api/issues/438
|
[
"question"
] |
snowdream
| 4
|
ultralytics/yolov5
|
machine-learning
| 12,533
|
!yolo task=detect mode=predict
|
2023-12-21 09:13:37.146006: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-12-21 09:13:37.146066: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-12-21 09:13:37.147291: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-12-21 09:13:38.201310: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Ultralytics YOLOv8.0.20 🚀 Python-3.10.12 torch-2.1.0+cu121 CUDA:0 (Tesla T4, 15102MiB)
Model summary (fused): 218 layers, 25841497 parameters, 0 gradients, 78.7 GFLOPs
WARNING ⚠️ NMS time limit 0.550s exceeded
image 1/1 /content/DJI0009.JPG: 480x640 58 IMs, 54 IPs, 23 ITs, 74.2ms
Speed: 0.9ms pre-process, 74.2ms inference, 840.7ms postprocess per image at shape (1, 3, 640, 640)
Results saved to runs/detect/predict3
Can I increase the NMS time limit in predict mode in Google Colab with YOLOv8? Thank you.
|
closed
|
2023-12-21T09:16:19Z
|
2024-10-20T19:35:02Z
|
https://github.com/ultralytics/yolov5/issues/12533
|
[
"Stale"
] |
SkripsiFighter
| 7
|
nolar/kopf
|
asyncio
| 166
|
Invalid attribute apiVersion in build_object_reference()
|
> <a href="https://github.com/olivier-mauras"><img align="left" height="50" src="https://avatars3.githubusercontent.com/u/1299371?v=4"></a> An issue by [olivier-mauras](https://github.com/olivier-mauras) at _2019-08-06 05:54:25+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/issues/166
>
## Actual Behavior
``` text
[2019-08-06 05:50:32,627] kopf.objects [ERROR ] [xxx-dev/tiller] Handler 'sa_delete' failed with an exception. Will retry.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 387, in _execute
lifecycle=lifecycle, # just a default for the sub-handlers, not used directly.
File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 478, in _call_handler
**kwargs,
File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 66, in invoke
result = await fn(*args, **kwargs)
File "./ns.py", line 235, in sa_delete
kopf.info(ns.to_dict(), reason='SA_DELETED', message='Managed service account got deleted')
File "/usr/local/lib/python3.7/site-packages/kopf/engines/posting.py", line 79, in info
event(obj, type='Normal', reason=reason, message=message)
File "/usr/local/lib/python3.7/site-packages/kopf/engines/posting.py", line 72, in event
ref = hierarchies.build_object_reference(obj)
File "/usr/local/lib/python3.7/site-packages/kopf/structs/hierarchies.py", line 12, in build_object_reference
apiVersion=body['apiVersion'],
KeyError: 'apiVersion'
```
## Steps to Reproduce the Problem
``` python
@kopf.on.delete('', 'v1', 'serviceaccounts', annotations={'custom/created_by': 'namespace-manager'})
async def sa_delete(body, namespace, **kwargs):
try:
api = kubernetes.client.CoreV1Api()
ns = api.read_namespace(name=namespace)
except ApiException as err:
sprint('ERROR', 'Exception when calling CoreV1Api->read_namespace: {}'.format(err))
kopf.info(ns.to_dict(), reason='SA_DELETED', message='Managed service account got deleted')
return
```
#######
Culprit lies here in `hierarchies.py`
``` python
def build_object_reference(body):
"""
Construct an object reference for the events.
"""
return dict(
apiVersion=body['apiVersion'],
kind=body['kind'],
name=body['metadata']['name'],
uid=body['metadata']['uid'],
namespace=body['metadata']['namespace'],
)
```
As described in https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1Namespace.md `apiVersion` attribute should be `api_version`.
Replacing line 12 with `api_version` does indeed make the above code work, but I'm not sure whether there are other implications, so I'm asking before sending you a PR.
---
> <a href="https://github.com/olivier-mauras"><img align="left" height="30" src="https://avatars3.githubusercontent.com/u/1299371?v=4"></a> Commented by [olivier-mauras](https://github.com/olivier-mauras) at _2019-08-07 14:37:17+00:00_
>
Oh apparently it's not the API that is inconsistent, but the body returned by on.xxx() handlers of kopf.
Here's an example:
body returned by on.resume/create
``` text
{'apiVersion': 'v1',
'kind': 'Namespace',
'metadata': {'annotations': {'cattle.io/status': '{"Conditions":[{"Type":"ResourceQuotaInit","Status":"True","Message":"","LastUpdateTime":"2019-04-29T08:26:31Z"},{"Type":"InitialRolesPopulated","Status":"True","Message":"","LastUpdateTime":"2019-04-29T08:26:32Z"}]}',
'kopf.zalando.org/last-handled-configuration': '{"spec": '
'{"finalizers": '
'["kubernetes"]}, '
'"annotations": '
'{"cattle.io/status": '
'"{\\"Conditions\\":[{\\"Type\\":\\"ResourceQuotaInit\\",\\"Status\\":\\"True\\",\\"Message\\":\\"\\",\\"LastUpdateTime\\":\\"2019-04-29T08:26:31Z\\"},{\\"Type\\":\\"InitialRolesPopulated\\",\\"Status\\":\\"True\\",\\"Message\\":\\"\\",\\"LastUpdateTime\\":\\"2019-04-29T08:26:32Z\\"}]}", '
'"lifecycle.cattle.io/create.namespace-auth": '
'"true"}}}',
'lifecycle.cattle.io/create.namespace-auth': 'true''},
'creationTimestamp': '2019-02-04T12:18:54Z',
'finalizers': ['controller.cattle.io/namespace-auth'],
'name': 'trident',
'resourceVersion': '27520896',
'selfLink': '/api/v1/namespaces/trident',
'uid': '0a305ee9-2877-11e9-a525-005056abd413'},
'spec': {'finalizers': ['kubernetes']},
'status': {'phase': 'Active'}}
```
body returned directly by the API with a read_namespace call:
``` text
{'api_version': 'v1',
'kind': 'Namespace',
'metadata': {'annotations': {'cattle.io/status': '{"Conditions":[{"Type":"ResourceQuotaInit","Status":"True","Message":"","LastUpdateTime":"2019-04-29T08:26:31Z"},{"Type":"InitialRolesPopulated","Status":"True","Message":"","LastUpdateTime":"2019-04-29T08:26:32Z"}]}',
'kopf.zalando.org/last-handled-configuration': '{"spec": '
'{"finalizers": '
'["kubernetes"]}, '
'"annotations": '
'{"cattle.io/status": '
'"{\\"Conditions\\":[{\\"Type\\":\\"ResourceQuotaInit\\",\\"Status\\":\\"True\\",\\"Message\\":\\"\\",\\"LastUpdateTime\\":\\"2019-04-29T08:26:31Z\\"},{\\"Type\\":\\"InitialRolesPopulated\\",\\"Status\\":\\"True\\",\\"Message\\":\\"\\",\\"LastUpdateTime\\":\\"2019-04-29T08:26:32Z\\"}]}", '
'"lifecycle.cattle.io/create.namespace-auth": '
'"true"}}}',
'lifecycle.cattle.io/create.namespace-auth': 'true'},
'cluster_name': None,
'creation_timestamp': datetime.datetime(2019, 2, 4, 12, 18, 54, tzinfo=tzlocal()),
'deletion_grace_period_seconds': None,
'deletion_timestamp': None,
'finalizers': ['controller.cattle.io/namespace-auth'],
'generate_name': None,
'generation': None,
'initializers': None,
'name': 'trident',
'namespace': None,
'owner_references': None,
'resource_version': '27520896',
'self_link': '/api/v1/namespaces/trident',
'uid': '0a305ee9-2877-11e9-a525-005056abd413'},
'spec': {'finalizers': ['kubernetes']},
'status': {'phase': 'Active'}}
```
I had taken example from https://kopf.readthedocs.io/en/latest/events/#other-objects
https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1Pod.md is clear that it should return `api_version`
---
> <a href="https://github.com/psycho-ir"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/726875?v=4"></a> Commented by [psycho-ir](https://github.com/psycho-ir) at _2019-08-07 15:40:18+00:00_
>
I think that's because of the difference in naming that `pykube` (camelCase) has from `kubernetes-client` (snake_case).
So, just as an interim solution, if you use `pykube-ng` instead of `kubernetes-client`, this issue should probably be gone.
We probably need to find a proper way to handle these naming mismatches, [nolar](https://github.com/nolar).
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2019-08-07 16:26:45+00:00_
>
Kopf is agnostic of the clients used by the developers (kind of). The only "canonical" reference is the Kubernetes API. The API doc [says the field name is `apiVersion`](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#objectreference-v1-core).
This is how Kopf gets it from the API, and passes it to the handlers. And this is how Kopf expects it from the handlers to be passed to the API.
In the base case, there are no Kubernetes clients at all, just the raw dicts, the json parser-serializer, and an HTTP library.
`api_version` is a naming convention used in the Python Kubernetes client only (in an attempt to follow Python's snake_case convention). I.e., it is a client's issue that it produces and consumes such strange non-API-compatible dicts.
---
Kopf also tries to be friendly to all clients — but to an extent. I'm not sure if bringing the workarounds for literally _all_ clients with their nuances would be a good idea.
Even for the official client alone (just for the [principle of least astonishment](https://en.wikipedia.org/wiki/Principle_of_least_astonishment)), it would mean one of the following:
* Adding the duplicating dict keys (both `apiVersion` & `api_version`), and filtering out the non-canonical forms on the internal use, the canonical forms on the external use — thus increasing the complexity.
* Making special classes for all these structs, with properties as aliases. And so we turn Kopf into a client library with its own K8s object-manipulation DSL — which goes against some principles of API-client neutrality. (Now it runs on plain dicts.)
* Allowing the design and conventions of the official client (sometimes questionable) to dictate the design and conventions of Kopf, making Kopf incompatible with other clients that _do_ follow the canonical API naming.
I don't see any of these as a good solution to the problem.
At the moment, and in my opinion, the best solution is to do nothing, and to let this issue exist (to be solved by the operator developers on the app-level).
---
> <a href="https://github.com/olivier-mauras"><img align="left" height="30" src="https://avatars3.githubusercontent.com/u/1299371?v=4"></a> Commented by [olivier-mauras](https://github.com/olivier-mauras) at _2019-08-07 17:09:16+00:00_
>
[nolar](https://github.com/nolar) Thanks for the reply.
I'm actually super fine to fix that in my code directly. I guess we can close the PR
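For readers landing here, one possible app-level workaround along the lines discussed above (a sketch, not something prescribed in this thread) is to serialize the official client's model back into canonical camelCase keys before handing it to Kopf:
```python
# Sketch of an app-level fix: convert a kubernetes-client model into an
# API-shaped (camelCase) dict so that 'apiVersion' is present again.
import kubernetes


def to_canonical_dict(obj) -> dict:
    return kubernetes.client.ApiClient().sanitize_for_serialization(obj)

# e.g. inside the handler from the reproduction above:
#   ns = api.read_namespace(name=namespace)
#   kopf.info(to_canonical_dict(ns), reason='SA_DELETED', message='...')
```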
|
closed
|
2020-08-18T19:59:31Z
|
2020-08-23T20:48:38Z
|
https://github.com/nolar/kopf/issues/166
|
[
"wontfix",
"archive"
] |
kopf-archiver[bot]
| 0
|
schemathesis/schemathesis
|
graphql
| 1,762
|
Documentation += report format
|
I'm trying to find out how I can view the generated report file. There is neither a file extension nor any hint in the documentation.
Could it be added to the documentation near the option? Is it gzip-compressed data (I did not manage to unzip it yet), or what tool should I use to read it? I only found:
`--report FILENAME Upload test report to Schemathesis.io, or store in a file`
Thank you
|
closed
|
2023-08-18T10:27:40Z
|
2023-08-21T15:28:02Z
|
https://github.com/schemathesis/schemathesis/issues/1762
|
[] |
tsyg
| 2
|
jupyterlab/jupyter-ai
|
jupyter
| 671
|
Change the list of provided Anthropic and Bedrock models
|
### Problem
Anthropic's Claude v1 (`anthropic.claude-v1`) is no longer supported and the Claude-v3-Sonnet model needs to be added to the `%ai list` command. The drop down list in the left panel also needs to be updated for the same model changes.
Update for three providers: `bedrock-chat`, `anthropic`, `anthropic-chat`
|
closed
|
2024-03-04T22:41:39Z
|
2024-03-07T19:38:37Z
|
https://github.com/jupyterlab/jupyter-ai/issues/671
|
[
"enhancement"
] |
srdas
| 2
|
aleju/imgaug
|
deep-learning
| 3
|
Can't load augmenters
|
As described in the README, I copied the required files into my directory and ran
`from imgaug import augmenters as iaa`
but I get an error:
"ImportError: cannot import name 'augmenters'"
|
closed
|
2016-12-10T16:34:31Z
|
2016-12-10T17:52:29Z
|
https://github.com/aleju/imgaug/issues/3
|
[] |
SarthakYadav
| 9
|
deepspeedai/DeepSpeed
|
pytorch
| 7,041
|
ambiguity in Deepspeed Ulysses
|
Hi,
There is a discrepancy between Figure 2 and Section 3.1 in the DeepSpeed-Ulysses paper (https://arxiv.org/abs/2309.14509). My understanding from the text is that the entire method simply partitions the sequence length N across P available devices, and that’s it. However, Figure 3 seems to suggest that there is an additional partitioning happening across attention heads and devices. Is that correct? If so, could you provide an equation to better explain this method?
In addition, if the partitioning happens only in the queries (Q), keys (K), and values (V) embeddings, while attention is still calculated on the full NxN sequence (as the text states that QKV embeddings are gathered into global QKV before computing attention...), then how does this method help with longer sequences? The main challenge in handling long sequences is the quadratic computational complexity of attention, so it is unclear how this method addresses that issue.
I believe writing down the equations to complement the figure would greatly help clarify the ambiguity in this method.
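For what it's worth, here is a hedged sketch of the equations I have in mind (my reading of the paper, not an authoritative statement), which may also clarify what I'm asking:
```latex
% Hedged sketch of my reading of DeepSpeed-Ulysses (not quoted from the paper).
% Device p initially holds a sequence shard of N/P tokens:
%   Q_p, K_p, V_p \in \mathbb{R}^{(N/P) \times d}
% An all-to-all re-partitions from the sequence dimension to the head dimension,
% so device p then holds the full sequence but only h/P of the heads:
\tilde{Q}_p,\ \tilde{K}_p,\ \tilde{V}_p \in \mathbb{R}^{N \times d/P}
% Local attention over the full sequence for those heads:
\mathrm{Attn}_p = \mathrm{softmax}\!\left(\frac{\tilde{Q}_p \tilde{K}_p^{\top}}{\sqrt{d_h}}\right)\tilde{V}_p
% A second all-to-all restores the sequence partitioning, giving a per-device
% attention cost of O(N^2 d / P) rather than O(N^2 d).
```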
|
closed
|
2025-02-16T10:56:21Z
|
2025-02-24T07:45:24Z
|
https://github.com/deepspeedai/DeepSpeed/issues/7041
|
[] |
rasoolfa
| 0
|
graphql-python/graphql-core
|
graphql
| 226
|
How to pass additional information to GraphQLResolveInfo
|
Hi there 😊
I was looking at this PR on Strawberry: https://github.com/strawberry-graphql/strawberry/pull/3461
and I was wondering if there's a nicer way to pass the input extensions data around, so I stumbled on the ExecutionContext class, which can be useful for us to customise the GraphQLResolveInfo data[1], but I haven't figured out a nice way to pass some data to ExecutionContext.build without customising the execute function here: https://github.com/graphql-python/graphql-core/blob/9dcf25e66f6ed36b77de788621cf50bab600d1d3/src/graphql/execution/execute.py#L1844-L1856
Is there a better alternative for something like this?
[1] We also have a custom version of info, so customising the ExecutionContext class will help with that too :D
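In case it helps frame the question, here is a hedged sketch (an assumption about usage, not a recommended pattern) of passing a custom context class via `execution_context_class` and stashing extra, hypothetical state on it; the open question is how to feed that state per request:
```python
# Hedged sketch: a custom ExecutionContext carrying extra, hypothetical state.
from graphql import (
    GraphQLField, GraphQLObjectType, GraphQLSchema, GraphQLString, graphql_sync,
)
from graphql.execution import ExecutionContext


class MyExecutionContext(ExecutionContext):
    # hypothetical extra state; feeding it per-request is exactly the open question
    extra = {"request_id": "abc123"}


def resolve_hello(_root, info):
    return f"hello ({MyExecutionContext.extra['request_id']})"


schema = GraphQLSchema(
    query=GraphQLObjectType(
        "Query", {"hello": GraphQLField(GraphQLString, resolve=resolve_hello)}
    )
)

result = graphql_sync(schema, "{ hello }", execution_context_class=MyExecutionContext)
print(result.data)  # {'hello': 'hello (abc123)'}
```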
|
open
|
2024-08-27T10:24:57Z
|
2025-02-02T09:09:50Z
|
https://github.com/graphql-python/graphql-core/issues/226
|
[] |
patrick91
| 2
|
Ehco1996/django-sspanel
|
django
| 412
|
Startup fails with error: No module named 'django_crontab'
|
**Problem description**
Using the 2.4 release https://github.com/Ehco1996/django-sspanel/releases/tag/2.4
Running the command `docker-compose run --rm web python manage.py collectstatic --noinput` fails with No module named 'django_crontab'
**Project configuration file**
**How to reproduce**
wget https://codeload.github.com/Ehco1996/django-sspanel/zip/2.4
unzip django-sspanel-2.4
cd django-sspanel-2.4
docker-compose run --rm web python manage.py collectstatic --noinput
**Related screenshots/log**

**Other information**
With `docker-compose up`, there is a pile of errors inside >.>
|
closed
|
2020-10-04T03:30:30Z
|
2020-10-04T10:12:52Z
|
https://github.com/Ehco1996/django-sspanel/issues/412
|
[
"bug"
] |
jackjieYYY
| 2
|
gradio-app/gradio
|
python
| 9,977
|
Queuing related guides contain outdated information about `concurrency_count`
|
### Describe the bug
These guides related to queuing still refer to `concurrency_count`:
- [Queuing](https://www.gradio.app/guides/queuing)
- [Setting Up a Demo for Maximum Performance](https://www.gradio.app/guides/setting-up-a-demo-for-maximum-performance)
However, as confirmed in #9463:
> The `concurrency_count` parameter has been removed from `.queue()`. In Gradio 4, this parameter was already deprecated and had no effect. In Gradio 5, this parameter has been removed altogether.
Running the code from [Queuing](https://www.gradio.app/guides/queuing) guide results in the error below:
```
Exception has occurred: TypeError
EventListener._setup.<locals>.event_trigger() got an unexpected keyword argument 'concurrency_count'
File "./test_gradio.py", line 23, in <module>
greet_btn.click(fn=greet, inputs=[tag, output], outputs=[
TypeError: EventListener._setup.<locals>.event_trigger() got an unexpected keyword argument 'concurrency_count'
```
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
# Sample code from https://www.gradio.app/guides/queuing
import gradio as gr
with gr.Blocks() as demo:
prompt = gr.Textbox()
image = gr.Image()
generate_btn = gr.Button("Generate Image")
generate_btn.click(image_gen, prompt, image, concurrency_count=5)
```
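For reference, a hedged sketch of how the guide's sample might look with the current API, assuming the Gradio 4+ per-event `concurrency_limit` parameter is the intended replacement for the removed `concurrency_count`:
```python
# Hedged sketch: the guide's sample updated under the assumption that
# `concurrency_limit` (per event) replaces the removed `concurrency_count`.
import gradio as gr

def image_gen(prompt):
    # placeholder generator for the sketch
    return None

with gr.Blocks() as demo:
    prompt = gr.Textbox()
    image = gr.Image()
    generate_btn = gr.Button("Generate Image")
    generate_btn.click(image_gen, prompt, image, concurrency_limit=5)

demo.launch()
```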
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.6.0
gradio_client version: 1.4.3
```
### Severity
I can work around it
|
closed
|
2024-11-17T23:13:20Z
|
2024-11-24T05:47:38Z
|
https://github.com/gradio-app/gradio/issues/9977
|
[
"good first issue",
"docs/website"
] |
the-eddie
| 3
|
sqlalchemy/alembic
|
sqlalchemy
| 595
|
Adding a unique constraint doesn't work when more than one column is given.
|
So I tried adding a unique constraint for two columns of a table at the same time and migrating to it. This works if I drop the database and create a new one using these changes, but I cannot migrate to the new database version using Alembic.
Here's the class I want to add the constraint to:
```
class UserItems(BASE):
__tablename__ = 'user_items'
id = Column(Integer, primary_key=True)
user_id = Column(Integer, ForeignKey('users.id'))
user = relationship('Users', back_populates='items')
item_id = Column(Integer, ForeignKey('items.id'), nullable=False)
item= relationship('Items', back_populates='users')
# The combination of a user and an item should be unique.
__table_args__ = (UniqueConstraint('user_id', 'item_id',
name='unique_association'), )
```
And here's the code that is supposed to upgrade the database:
```
def upgrade():
with op.batch_alter_table("user_items") as batch_op:
batch_op.create_unique_constraint("unique_association", "user_items", ["user_id', 'item_id"])
```
When I run the `python3 migrate.py db upgrade` command, I get this error:
```
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> 740399c84e0f, empty message
Traceback (most recent call last):
File "migrate.py", line 13, in <module>
manager.run()
File "/usr/local/lib/python3.5/dist-packages/flask_script/__init__.py", line 417, in run
result = self.handle(argv[0], argv[1:])
File "/usr/local/lib/python3.5/dist-packages/flask_script/__init__.py", line 386, in handle
res = handle(*args, **config)
File "/usr/local/lib/python3.5/dist-packages/flask_script/commands.py", line 216, in __call__
return self.run(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/flask_migrate/__init__.py", line 95, in wrapped
f(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/flask_migrate/__init__.py", line 280, in upgrade
command.upgrade(config, revision, sql=sql, tag=tag)
File "/usr/local/lib/python3.5/dist-packages/alembic/command.py", line 276, in upgrade
script.run_env()
File "/usr/local/lib/python3.5/dist-packages/alembic/script/base.py", line 475, in run_env
util.load_python_file(self.dir, "env.py")
File "/usr/local/lib/python3.5/dist-packages/alembic/util/pyfiles.py", line 90, in load_python_file
module = load_module_py(module_id, path)
File "/usr/local/lib/python3.5/dist-packages/alembic/util/compat.py", line 156, in load_module_py
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 665, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "migrations/env.py", line 96, in <module>
run_migrations_online()
File "migrations/env.py", line 90, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/usr/local/lib/python3.5/dist-packages/alembic/runtime/environment.py", line 839, in run_migrations
self.get_context().run_migrations(**kw)
File "/usr/local/lib/python3.5/dist-packages/alembic/runtime/migration.py", line 361, in run_migrations
step.migration_fn(**kw)
File "/vagrant/Project/migrations/versions/740399c84e0f_.py", line 26, in upgrade
["user_id", "item_id"])
TypeError: <flask_script.commands.Command object at 0x7fe108a46fd0>: create_unique_constraint() takes 3 positional arguments but 4 were given
```
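For comparison, a hedged sketch of how this batch operation is usually written (my reading of the Alembic batch API, not a confirmed fix for the traceback above): inside `batch_alter_table` the table name is implied, so only the constraint name and the column list are passed.
```python
# Hedged sketch: batch-mode create_unique_constraint takes the constraint name
# and the column list; the table is implied by batch_alter_table().
from alembic import op


def upgrade():
    with op.batch_alter_table("user_items") as batch_op:
        batch_op.create_unique_constraint(
            "unique_association", ["user_id", "item_id"]
        )
```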
|
closed
|
2019-08-25T08:44:50Z
|
2019-09-17T16:55:26Z
|
https://github.com/sqlalchemy/alembic/issues/595
|
[
"question"
] |
Nikitas-io
| 4
|
TracecatHQ/tracecat
|
automation
| 7
|
CrowdStrike Integration
|
You can start with my CS_BADGER; it's an API integration for CrowdStrike's Splunk that uses MFA / persistent tokens:
I know it's gross ... I want to rewrite it with Ping auth, but I'm not good with SAML in Python.
https://github.com/freeload101/SCRIPTS/blob/master/Bash/CS_BADGER/CS_BADGER.sh
|
closed
|
2024-03-20T19:17:05Z
|
2024-06-16T19:14:51Z
|
https://github.com/TracecatHQ/tracecat/issues/7
|
[
"enhancement",
"integrations"
] |
freeload101
| 5
|
deepset-ai/haystack
|
nlp
| 8,912
|
`tools_strict` option in `OpenAIChatGenerator` broken with `ComponentTool`
|
**Describe the bug**
When using `ComponentTool` and setting `tools_strict=True`, the OpenAI API complains that `additionalProperties` in the schema is not `false`.
**Error message**
```
---------------------------------------------------------------------------
BadRequestError Traceback (most recent call last)
Cell In[19], line 1
----> 1 result = generator.run(messages=chat_messages["prompt"], tools=tool_invoker.tools)
2 result
File ~/.local/lib/python3.12/site-packages/haystack/components/generators/chat/openai.py:246, in OpenAIChatGenerator.run(self, messages, streaming_callback, generation_kwargs, tools, tools_strict)
237 streaming_callback = streaming_callback or self.streaming_callback
239 api_args = self._prepare_api_call(
240 messages=messages,
241 streaming_callback=streaming_callback,
(...)
244 tools_strict=tools_strict,
245 )
--> 246 chat_completion: Union[Stream[ChatCompletionChunk], ChatCompletion] = self.client.chat.completions.create(
247 **api_args
248 )
250 is_streaming = isinstance(chat_completion, Stream)
251 assert is_streaming or streaming_callback is None
File ~/.local/lib/python3.12/site-packages/ddtrace/contrib/trace_utils.py:336, in with_traced_module.<locals>.with_mod.<locals>.wrapper(wrapped, instance, args, kwargs)
334 log.debug("Pin not found for traced method %r", wrapped)
335 return wrapped(*args, **kwargs)
--> 336 return func(mod, pin, wrapped, instance, args, kwargs)
File ~/.local/lib/python3.12/site-packages/ddtrace/contrib/internal/openai/patch.py:282, in _patched_endpoint.<locals>.patched_endpoint(openai, pin, func, instance, args, kwargs)
280 resp, err = None, None
281 try:
--> 282 resp = func(*args, **kwargs)
283 return resp
284 except Exception as e:
File ~/.local/lib/python3.12/site-packages/openai/_utils/_utils.py:279, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
277 msg = f"Missing required argument: {quote(missing[0])}"
278 raise TypeError(msg)
--> 279 return func(*args, **kwargs)
File ~/.local/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py:879, in Completions.create(self, messages, model, audio, frequency_penalty, function_call, functions, logit_bias, logprobs, max_completion_tokens, max_tokens, metadata, modalities, n, parallel_tool_calls, prediction, presence_penalty, reasoning_effort, response_format, seed, service_tier, stop, store, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
837 @required_args(["messages", "model"], ["messages", "model", "stream"])
838 def create(
839 self,
(...)
876 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
877 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
878 validate_response_format(response_format)
--> 879 return self._post(
880 "/chat/completions",
881 body=maybe_transform(
882 {
883 "messages": messages,
884 "model": model,
885 "audio": audio,
886 "frequency_penalty": frequency_penalty,
887 "function_call": function_call,
888 "functions": functions,
889 "logit_bias": logit_bias,
890 "logprobs": logprobs,
891 "max_completion_tokens": max_completion_tokens,
892 "max_tokens": max_tokens,
893 "metadata": metadata,
894 "modalities": modalities,
895 "n": n,
896 "parallel_tool_calls": parallel_tool_calls,
897 "prediction": prediction,
898 "presence_penalty": presence_penalty,
899 "reasoning_effort": reasoning_effort,
900 "response_format": response_format,
901 "seed": seed,
902 "service_tier": service_tier,
903 "stop": stop,
904 "store": store,
905 "stream": stream,
906 "stream_options": stream_options,
907 "temperature": temperature,
908 "tool_choice": tool_choice,
909 "tools": tools,
910 "top_logprobs": top_logprobs,
911 "top_p": top_p,
912 "user": user,
913 },
914 completion_create_params.CompletionCreateParams,
915 ),
916 options=make_request_options(
917 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
918 ),
919 cast_to=ChatCompletion,
920 stream=stream or False,
921 stream_cls=Stream[ChatCompletionChunk],
922 )
File ~/.local/lib/python3.12/site-packages/openai/_base_client.py:1290, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1276 def post(
1277 self,
1278 path: str,
(...)
1285 stream_cls: type[_StreamT] | None = None,
1286 ) -> ResponseT | _StreamT:
1287 opts = FinalRequestOptions.construct(
1288 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1289 )
-> 1290 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File ~/.local/lib/python3.12/site-packages/openai/_base_client.py:967, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
964 else:
965 retries_taken = 0
--> 967 return self._request(
968 cast_to=cast_to,
969 options=options,
970 stream=stream,
971 stream_cls=stream_cls,
972 retries_taken=retries_taken,
973 )
File ~/.local/lib/python3.12/site-packages/openai/_base_client.py:1071, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
1068 err.response.read()
1070 log.debug("Re-raising status error")
-> 1071 raise self._make_status_error_from_response(err.response) from None
1073 return self._process_response(
1074 cast_to=cast_to,
1075 options=options,
(...)
1079 retries_taken=retries_taken,
1080 )
BadRequestError: Error code: 400 - {'error': {'message': "Invalid schema for function 'web_search': In context=(), 'additionalProperties' is required to be supplied and to be false.", 'type': 'invalid_request_error', 'param': 'tools[0].function.parameters', 'code': 'invalid_function_parameters'}}
```
**Expected behavior**
`tools_strict=True` works.
**Additional context**
Add any other context about the problem here, like document types / preprocessing steps / settings of reader etc.
**To Reproduce**
```python
from haystack.components.generators.chat.openai import OpenAIChatGenerator
from haystack.dataclasses.chat_message import ChatMessage
gen = OpenAIChatGenerator.from_dict({'type': 'haystack.components.generators.chat.openai.OpenAIChatGenerator',
'init_parameters': {'model': 'gpt-4o',
'streaming_callback': None,
'api_base_url': None,
'organization': None,
'generation_kwargs': {},
'api_key': {'type': 'env_var',
'env_vars': ['OPENAI_API_KEY'],
'strict': False},
'timeout': None,
'max_retries': None,
'tools': [{'type': 'haystack.tools.component_tool.ComponentTool',
'data': {'name': 'web_search',
'description': 'Search the web for current information on any topic',
'parameters': {'type': 'object',
'properties': {'query': {'type': 'string',
'description': 'Search query.'}},
'required': ['query']},
'component': {'type': 'haystack.components.websearch.serper_dev.SerperDevWebSearch',
'init_parameters': {'top_k': 10,
'allowed_domains': None,
'search_params': {},
'api_key': {'type': 'env_var',
'env_vars': ['SERPERDEV_API_KEY'],
'strict': False}}}}}],
'tools_strict': True}})
gen.run([ChatMessage.from_user("How is the weather today in Berlin?")])
```
**FAQ Check**
- [ ] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS:
- GPU/CPU:
- Haystack version (commit or version number): 2.10.2
- DocumentStore:
- Reader:
- Retriever:
|
closed
|
2025-02-24T13:42:08Z
|
2025-03-03T15:23:26Z
|
https://github.com/deepset-ai/haystack/issues/8912
|
[
"P1"
] |
tstadel
| 0
|
sgl-project/sglang
|
pytorch
| 4,013
|
[Bug] Qwen2.5-32B-Instruct-GPTQ-Int4 answers end abnormally; Qwen2.5-32B-Instruct answers are OK; using vLLM is OK
|
### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [ ] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
sglang 0.4.3.post2
start command: python3 -m sglang.launch_server --model-path /Qwen2.5-32B-Instruct-GPTQ-Int4 --host 0.0.0.0 --port 8088 --tensor-parallel-size 2

Qwen2.5-32B-Instruct is ok
start command: python3 -m sglang.launch_server --model-path /Qwen2.5-32B-Instruct --host 0.0.0.0 --port 8088 --tensor-parallel-size 2

use vllm 0.7.3 is ok
start command: vllm serve /Qwen2.5-32B-Instruct-GPTQ-Int4 --host 0.0.0.0 --port 8088 --served-model-name qwen -tp 2

### Reproduction
-
### Environment
-
|
closed
|
2025-03-03T06:32:39Z
|
2025-03-05T07:41:11Z
|
https://github.com/sgl-project/sglang/issues/4013
|
[] |
Flynn-Zh
| 5
|
piskvorky/gensim
|
nlp
| 3,234
|
Update release instructions
|
https://github.com/RaRe-Technologies/gensim/wiki/Maintainer-page is out of date
- We don't use gensim-wheels repo anymore - everything happens in the main gensim repo
- Wheel building is less of a pain now - GHA takes care of most things
- Some release scripts appear out of date, e.g. prepare.sh
- A general description of what we're doing and why with respect to the wheel builds (the interplay between manylinux, multibuild, etc.) would be helpful
|
open
|
2021-09-14T13:45:07Z
|
2021-09-14T13:45:17Z
|
https://github.com/piskvorky/gensim/issues/3234
|
[
"documentation",
"housekeeping"
] |
mpenkov
| 0
|
plotly/dash
|
plotly
| 2,761
|
[BUG] Extending a trace in a callback using the extendData property doesn't work for a figure with a multi-level axis
|
Hello, plotly community,
I'm seeking your help in resolving a trace extension issue for a multi-level axis figure.
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
-
```
dash 2.14.1
dash-bootstrap-components 1.5.0
dash-core-components 2.0.0
dash-extendable-graph 1.3.0
dash-html-components 2.0.0
dash-table 5.0.0
dash-treeview-antd 0.0.1
```
- if frontend related, tell us your Browser, Version and OS
- OS: Windows 10
- Browser: Microsoft Edge
**Describe the bug**
Extending a trace in a callback using the extendData property doesn't work for a figure with a multi-level axis.
```
import dash
from dash.dependencies import Input, Output, State
import dash_html_components as html
import dash_core_components as dcc
import random
app = dash.Dash(__name__)
app.layout = html.Div([
html.Div([
dcc.Graph(
id='graph-extendable',
figure=dict(
data=[{'x': [0, 1, 2, 3, 4],
'y': [[0,0,0,0], [0,0,0,0]],
'mode':'lines+markers'
}],
)
),
]),
dcc.Interval(
id='interval-graph-update',
interval=1000,
n_intervals=0),
])
@app.callback(Output('graph-extendable', 'extendData'),
[Input('interval-graph-update', 'n_intervals')],
[State('graph-extendable', 'figure')])
def update_extend_traces_traceselect(n_intervals, existing):
print("")
print(existing['data'][0])
x_new = existing['data'][0]['x'][-1] + 1
d = dict(x=[[x_new]], y=[[[0],[0]]])
print(d)
return d, [0]
if __name__ == '__main__':
app.run_server(debug=True)
```
produces the following sequence of the trace data updates:
`{'x': [0, 1, 2, 3, 4], 'y': [[0, 0, 0, 0], [0, 0, 0, 0]], 'mode': 'lines+markers'}
{'x': [[5]], 'y': [[[0], [0]]]}
{'x': [0, 1, 2, 3, 4, **5**], 'y': [[0, 0, 0, 0], [0, 0, 0, 0]**, [0], [0]**], 'mode': 'lines+markers'}
{'x': [[6]], 'y': [[[0], [0]]]}
{'x': [0, 1, 2, 3, 4, **5, 6**], 'y': [[0, 0, 0, 0], [0, 0, 0, 0], **[0], [0], [0], [0]**], 'mode': 'lines+markers'}
{'x': [[7]], 'y': [[[0], [0]]]}
{'x': [0, 1, 2, 3, 4, **5, 6, 7**], 'y': [[0, 0, 0, 0], [0, 0, 0, 0], **[0], [0], [0], [0], [0], [0]**], 'mode': 'lines+markers'}
{'x': [[8]], 'y': [[[0], [0]]]}`
**Expected behavior**
it's expected that both existing levels of y axis are extended 'y': [[0, 0, 0, 0, **0**], [0, 0, 0, 0, **0**]]
instead of adding new levels 'y': [[0, 0, 0, 0], [0, 0, 0, 0], **[0], [0]**]
|
closed
|
2024-02-16T00:06:01Z
|
2024-07-25T13:13:30Z
|
https://github.com/plotly/dash/issues/2761
|
[] |
dmitrii-erkin
| 2
|
ploomber/ploomber
|
jupyter
| 922
|
lint failing when using `ploomber nb --remove`
|
From Slack:
> Currently, in ploomber, after running ploomber nb remove, the parameters cell is empty. This causes checkers to complain as product variable doesn’t exist.
It would be great if, instead of being empty, it was a TypedDict.
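A minimal sketch of what such a generated parameters cell could look like; the `Product` TypedDict and its keys are hypothetical placeholders, not Ploomber's actual output:
```python
# Hypothetical contents of an injected "parameters" cell after `ploomber nb --remove`.
from typing import TypedDict


class Product(TypedDict):
    nb: str
    data: str


# Placeholder values so linters see the names; Ploomber would overwrite these at runtime.
upstream: dict = {}
product: Product = {"nb": "output/report.ipynb", "data": "output/data.csv"}
```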
|
open
|
2022-07-19T15:01:57Z
|
2022-07-19T15:02:05Z
|
https://github.com/ploomber/ploomber/issues/922
|
[] |
edublancas
| 0
|
ydataai/ydata-profiling
|
data-science
| 1,514
|
Bug Report
|
### Current Behaviour
IndexError: list index out of range
### Expected Behaviour
The data is getting summarized but during generating report structure it gives IndexError: list index out of range
### Data Description
Dataset link - https://www.kaggle.com/datasets/himanshupoddar/zomato-bangalore-restaurants
### Code that reproduces the bug
_No response_
### pandas-profiling version
3.2.0
### Dependencies
```Text
pandas
numpy
```
### OS
windows
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
|
open
|
2023-12-05T14:15:17Z
|
2024-01-18T07:24:14Z
|
https://github.com/ydataai/ydata-profiling/issues/1514
|
[
"information requested ❔"
] |
Shashankb1910
| 2
|
tortoise/tortoise-orm
|
asyncio
| 1,774
|
Enums not quoted (bug of pypika-torotise)
|
**Describe the bug**
Reported here: https://github.com/tortoise/pypika-tortoise/issues/7
**To Reproduce**
```py
from tortoise import Model, fields, run_async
from tortoise.contrib.test import init_memory_sqlite
from tortoise.fields.base import StrEnum
class MyModel(Model):
id = fields.IntField(pk=True)
name = fields.TextField()
class MyEnum(StrEnum):
A = "a"
@init_memory_sqlite
async def do():
await MyModel.create(name="a")
qs = MyModel.filter(name=MyEnum.A)
qs._make_query()
print(qs.query)
# expected SELECT "id","name" FROM "mymodel" WHERE "name"='a'
# actual SELECT "id","name" FROM "mymodel" WHERE "name"=a
print(await MyModel.filter(name=MyEnum.A).exists())
# expected True
# actual raises tortoise.exceptions.OperationalError: no such column: a
run_async(do())
```
**Expected behavior**
Support filter by enum
**Additional context**
Will be fixed after this PR(https://github.com/tortoise/pypika-tortoise/pull/10) merged.
|
closed
|
2024-11-17T19:10:02Z
|
2024-11-19T08:55:17Z
|
https://github.com/tortoise/tortoise-orm/issues/1774
|
[] |
waketzheng
| 0
|
mwaskom/seaborn
|
data-visualization
| 3,318
|
Add read-only permissions to ci.yaml GitHub workflow
|
Seaborn's ci.yaml workflow currently run with write-all permissions. This is dangerous, since it opens the project up to supply-chain attacks. [GitHub itself](https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#using-secrets) recommends ensuring all workflows run with minimal permissions.
I've taken a look at the workflow, and it doesn't seem to require any permissions other than `contents: read`.
This issue can be solved in two ways:
- add top-level read-only permissions to ci.yaml; and/or
- set the default token permissions to read-only in the repo settings.
I'll be sending a PR along with this issue that sets the top-level permissions. If you instead (or also) wish to modify the default token permissions:
1. Open the repo settings
2. Go to [Actions > General](https://github.com/mwaskom/seaborn/settings/actions)
3. Under "Workflow permissions", set them to "Read repository contents and packages permissions"
---
**Disclosure:** My name is Pedro and I work with Google and the [Open Source Security Foundation (OpenSSF)](https://www.openssf.org/) to improve the supply-chain security of the open-source ecosystem.
|
closed
|
2023-04-12T14:08:36Z
|
2023-04-12T23:19:14Z
|
https://github.com/mwaskom/seaborn/issues/3318
|
[] |
pnacht
| 0
|
flasgger/flasgger
|
rest-api
| 544
|
swag_from with specs arguments not working properly
|
When swag_from is used with the specs argument multiple times for one endpoint with different methods, the generated documentation text gets overwritten.
```python
@bp.route("/<string:order_number>", methods=["GET", "PUT", "DELETE"])
@swag_from(specs=doc_shipment_delete, methods=["DELETE"])
@swag_from(specs=doc_shipment_put, methods=["PUT"])
@swag_from(specs=doc_shipment_get, methods=["GET"])
@api_basic_authentication
def shipment(order_number: str) -> bp.route:
(....)
```
the dicts doc_shipment_delete, doc_shipment_put and doc_shipment_get are all different, but on generated webpage there is just the content of the "delete" dict.

|
open
|
2022-08-04T11:59:54Z
|
2022-08-08T06:38:08Z
|
https://github.com/flasgger/flasgger/issues/544
|
[] |
Zbysekz
| 3
|
reloadware/reloadium
|
django
| 64
|
Won't work with remote developing.
|
**Describe the bug**
**To Reproduce**
Steps to reproduce the behavior:
1. Menu -> Tools -> Deployment -> Configuration: add a configuration to connect to your server and map the project folder between the local disk and the server.
2. Menu -> File -> Settings -> Project: Name -> Python Interpreter -> Add Interpreter: add a Python interpreter on the server.
3. Menu -> Tools -> Deployment -> Download from default server: download the code from the server to the local disk.
4. Open a *.py file, add a breakpoint, and click 'Debug with Reloadium'.
5. The thread stops on the breakpoint; modify the code and press Ctrl+S to upload it to the server, but nothing changes.
**Expected behavior**
What I expect: when I modify the code and press Ctrl+S to upload it to the server, the value of the variable should change.
**Screenshots**
[Imgur](https://i.imgur.com/xjGn0LA.gifv)
**Desktop (please complete the following information):**
- OS: PyCharm on Windows 10 and python 3.10 on Ubuntu 20.0.4
- OS version: 10
- Reloadium package version: 0.9.4
- PyCharm plugin version: 0.8.8
- Editor: PyCharm 2022.2.3
- Run mode: Debug
|
closed
|
2022-11-09T14:52:37Z
|
2022-11-23T08:01:08Z
|
https://github.com/reloadware/reloadium/issues/64
|
[] |
00INDEX
| 3
|
modelscope/modelscope
|
nlp
| 867
|
cpu memory leak
|
After I call the skin retouch pipeline, CPU memory increases by about 100 MB per call, and subsequent calls keep increasing it further. Could this be related to a CPU memory leak?
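One generic way to confirm the per-call growth, independent of ModelScope's own APIs, is to record the process RSS around each call; the pipeline invocation below is a placeholder, not the actual skin-retouch call:
```python
# Measure per-call RSS growth of the current process (requires `pip install psutil`).
import os

import psutil


def rss_mb() -> float:
    return psutil.Process(os.getpid()).memory_info().rss / 1e6


def run_once() -> None:
    pass  # placeholder for the skin-retouch pipeline call


for i in range(5):
    before = rss_mb()
    run_once()
    print(f"call {i}: +{rss_mb() - before:.1f} MB (total {rss_mb():.1f} MB)")
```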
|
closed
|
2024-05-25T09:59:23Z
|
2024-07-21T01:56:15Z
|
https://github.com/modelscope/modelscope/issues/867
|
[
"Stale"
] |
garychan22
| 4
|
anselal/antminer-monitor
|
dash
| 47
|
Support for PORT
|
Hi there,
I use the same network IP for all my miners and only a different port for each miner. If I fill in, for example, 192.168.1.200:4030, it won't work. I know this is not wrong or anything, but would it be possible to support ports as well to solve this quite simple issue?
Thank you
PS. Perhaps don't add a separate input field for the port, but simply auto-detect all ports on a given IP, or allow people to add :<port> in the IP field. If no port is given, change nothing (same as now); if a port is given, use only that one :)
|
closed
|
2018-01-08T15:48:32Z
|
2018-01-09T21:44:14Z
|
https://github.com/anselal/antminer-monitor/issues/47
|
[
":dancing_men: duplicate"
] |
webhunter69
| 9
|
horovod/horovod
|
tensorflow
| 3,303
|
Is there an example of distributed training using only CPU
|
I want to try distributed training across multiple machines using only CPUs. What command should I use to start it? Is there an example for reference?
|
closed
|
2021-12-08T10:08:25Z
|
2021-12-08T10:33:15Z
|
https://github.com/horovod/horovod/issues/3303
|
[] |
liiitleboy
| 0
|
flasgger/flasgger
|
rest-api
| 239
|
instructions for apispec example are underspecified, causing TypeError
|
New to flasgger and was interested in the apispec example, but unfortunately the instructions seem out of date?
Quoting https://github.com/rochacbruno/flasgger#readme
> Flasgger also supports Marshmallow APISpec as base template for specification, if you are using APISPec from Marshmallow take a look at [apispec example](https://github.com/rochacbruno/flasgger/blob/master/examples/apispec_example.py).
> ...
> NOTE: If you want to use Marshmallow Schemas you also need to run `pip install marshmallow apispec`
Is some non-latest version of one of these required? Following the instructions as written results in apispec-0.39.0 and marshmallow-2.15.4, which results in TypeError when running the apispec example:
```
jab@pro ~> python3 -m virtualenv tmpvenv
Using base prefix '/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7'
/usr/local/lib/python3.7/site-packages/virtualenv.py:1041: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
New python executable in /Users/jab/tmpvenv/bin/python3.7
Also creating executable in /Users/jab/tmpvenv/bin/python
Installing setuptools, pip, wheel...done.
jab@pro ~> cd tmpvenv
jab@pro ~/tmpvenv> . bin/activate.fish
(tmpvenv) jab@pro ~/tmpvenv> pip install flasgger
Collecting flasgger
Using cached https://files.pythonhosted.org/packages/59/25/d25af3ebe1f04f47530028647e3476b829b1950deab14237948fe3aea552/flasgger-0.9.0-py2.py3-none-any.whl
Collecting PyYAML>=3.0 (from flasgger)
Collecting six>=1.10.0 (from flasgger)
Using cached https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
Collecting mistune (from flasgger)
Using cached https://files.pythonhosted.org/packages/c8/8c/87f4d359438ba0321a2ae91936030110bfcc62fef752656321a72b8c1af9/mistune-0.8.3-py2.py3-none-any.whl
Collecting jsonschema>=2.5.1 (from flasgger)
Using cached https://files.pythonhosted.org/packages/77/de/47e35a97b2b05c2fadbec67d44cfcdcd09b8086951b331d82de90d2912da/jsonschema-2.6.0-py2.py3-none-any.whl
Collecting Flask>=0.10 (from flasgger)
Using cached https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl
Collecting Werkzeug>=0.14 (from Flask>=0.10->flasgger)
Using cached https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl
Collecting itsdangerous>=0.24 (from Flask>=0.10->flasgger)
Collecting click>=5.1 (from Flask>=0.10->flasgger)
Using cached https://files.pythonhosted.org/packages/34/c1/8806f99713ddb993c5366c362b2f908f18269f8d792aff1abfd700775a77/click-6.7-py2.py3-none-any.whl
Collecting Jinja2>=2.10 (from Flask>=0.10->flasgger)
Using cached https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl
Collecting MarkupSafe>=0.23 (from Jinja2>=2.10->Flask>=0.10->flasgger)
Installing collected packages: PyYAML, six, mistune, jsonschema, Werkzeug, itsdangerous, click, MarkupSafe, Jinja2, Flask, flasgger
Successfully installed Flask-1.0.2 Jinja2-2.10 MarkupSafe-1.0 PyYAML-3.13 Werkzeug-0.14.1 click-6.7 flasgger-0.9.0 itsdangerous-0.24 jsonschema-2.6.0 mistune-0.8.3 six-1.11.0
(tmpvenv) jab@pro ~/tmpvenv> pip install marshmallow apispec
Collecting marshmallow
Using cached https://files.pythonhosted.org/packages/67/7d/5435c399acecd4398d77ef31ea80e02cee5368599ce6a980f9014e8ec5fd/marshmallow-2.15.4-py2.py3-none-any.whl
Collecting apispec
Using cached https://files.pythonhosted.org/packages/55/81/9f54520d3cb03ffb207ccef01298c037dcec83a111ec838aed971c7f9bf2/apispec-0.39.0-py2.py3-none-any.whl
Requirement already satisfied: PyYAML>=3.10 in ./lib/python3.7/site-packages (from apispec) (3.13)
Installing collected packages: marshmallow, apispec
Successfully installed apispec-0.39.0 marshmallow-2.15.4
(tmpvenv) jab@pro ~/tmpvenv> curl -O 'https://raw.githubusercontent.com/rochacbruno/flasgger/master/examples/apispec_example.py'
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1527 100 1527 0 0 17232 0 --:--:-- --:--:-- --:--:-- 17352
(tmpvenv) jab@pro ~/tmpvenv> python apispec_example.py
Traceback (most recent call last):
File "apispec_example.py", line 15, in <module>
'apispec.ext.marshmallow',
TypeError: APISpec() takes no arguments
```
|
closed
|
2018-09-12T00:31:02Z
|
2018-09-13T04:40:31Z
|
https://github.com/flasgger/flasgger/issues/239
|
[
"duplicate"
] |
jab
| 1
|
serengil/deepface
|
deep-learning
| 773
|
Question : Datasets used for training Emotion Recognition
|
Hi serengil!!
Thanks for the excellent work you have done. It is really helpful. I would really like to use your model for emotion recognition; I just had one doubt: could you please mention exactly `which datasets the emotion recognition model was trained on`?
Thanks for your reply in advance!!!
|
closed
|
2023-06-06T10:38:06Z
|
2023-06-06T10:43:58Z
|
https://github.com/serengil/deepface/issues/773
|
[
"question"
] |
Aayush2003Gupta
| 1
|
microsoft/qlib
|
deep-learning
| 1,019
|
Download csi300 info failed
|
## 🐛 Bug Description
It seems the original website changed its path.
<img width="840" alt="image" src="https://user-images.githubusercontent.com/4394975/160270928-ffc6828d-fe40-43ef-868e-7bf6b4fa15ba.png">
## To Reproduce
Steps to reproduce the behavior:
1.
1.
1.
## Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
## Screenshot
<!-- A screenshot of the error message or anything shouldn't appear-->
## Environment
**Note**: User could run `cd scripts && python collect_info.py all` under project directory to get system information
and paste them here directly.
- Qlib version:
- Python version:
- OS (`Windows`, `Linux`, `MacOS`):
- Commit number (optional, please provide it if you are using the dev version):
## Additional Notes
<!-- Add any other information about the problem here. -->
|
closed
|
2022-03-27T07:15:10Z
|
2023-10-23T12:51:32Z
|
https://github.com/microsoft/qlib/issues/1019
|
[
"bug"
] |
liujianliuku
| 2
|
ipython/ipython
|
data-science
| 14,166
|
Run tutorials in Manimgl with error `NoneType`
|
<!-- This is the repository for IPython command line, if you can try to make sure this question/bug/feature belong here and not on one of the Jupyter repositories.
If it's a generic Python/Jupyter question, try other forums or discourse.jupyter.org.
If you are unsure, it's ok to post here, though, there are few maintainer so you might not get a fast response.
-->
I have installed the latest version of ManimGL and ran the interactive tutorial at `https://3b1b.github.io/manim/getting_started/quickstart.html`; after I added `self.embed()` to the file and entered `touch()` as input, the IPython command line shows the following log:

Issue #13966 seems similar to my problem, and I replaced `self.embed()` with `embed()` after importing the relevant library. The `NoneType` error disappears, but the commands I enter do not work.
Maybe it is wrong to replace `self.embed()` with `embed()`, but I certainly don't know how to fix this problem.
|
open
|
2023-09-17T05:56:04Z
|
2023-09-17T05:56:04Z
|
https://github.com/ipython/ipython/issues/14166
|
[] |
prexhu
| 0
|
InstaPy/InstaPy
|
automation
| 5,859
|
ERROR: Command errored out with exit status 1 [ PLEASE HELP ME WITH INSTAPY INSTALLATION ]
|
running build_ext
error: [WinError 2] The system cannot find the file specified
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\jose\appdata\local\programs\python\python39\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\jose\\AppData\\Local\\Temp\\pip-install-kjmx69yt\\grpcio\\setup.py'"'"'; __file__='"'"'C:\\Users\\jose\\AppData\\Local\\Temp\\pip-install-kjmx69yt\\grpcio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\jose\AppData\Local\Temp\pip-record-36s0jlxn\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\jose\appdata\local\programs\python\python39\Include\grpcio' Check the logs for full command output.
C:\>pip install grpcio==1.26.0rc1
Collecting grpcio==1.26.0rc1
Downloading grpcio-1.26.0rc1.tar.gz (15.4 MB)
|████████████████████████████████| 15.4 MB 182 kB/s
Requirement already satisfied: six>=1.5.2 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from grpcio==1.26.0rc1) (1.15.0)
Using legacy 'setup.py install' for grpcio, since package 'wheel' is not installed.
Installing collected packages: grpcio
Running setup.py install for grpcio ... error
ERROR: Command errored out with exit status 1:
command: 'c:\users\jose\appdata\local\programs\python\python39\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\jose\\AppData\\Local\\Temp\\pip-install-r67hznce\\grpcio\\setup.py'"'"'; __file__='"'"'C:\\Users\\jose\\AppData\\Local\\Temp\\pip-install-r67hznce\\grpcio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\jose\AppData\Local\Temp\pip-record-3ct7kuu3\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\jose\appdata\local\programs\python\python39\Include\grpcio'
cwd: C:\Users\jose\AppData\Local\Temp\pip-install-r67hznce\grpcio\
Complete output (67 lines):
Found cython-generated files...
running install
running build
running build_py
running build_project_metadata
creating python_build
creating python_build\lib.win-amd64-3.9
creating python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_auth.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_channel.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_common.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_compression.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_grpcio_metadata.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_interceptor.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_plugin_wrapping.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_server.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_utilities.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\__init__.py -> python_build\lib.win-amd64-3.9\grpc
creating python_build\lib.win-amd64-3.9\grpc\beta
copying src\python\grpcio\grpc\beta\implementations.py -> python_build\lib.win-amd64-3.9\grpc\beta
copying src\python\grpcio\grpc\beta\interfaces.py -> python_build\lib.win-amd64-3.9\grpc\beta
copying src\python\grpcio\grpc\beta\utilities.py -> python_build\lib.win-amd64-3.9\grpc\beta
copying src\python\grpcio\grpc\beta\_client_adaptations.py -> python_build\lib.win-amd64-3.9\grpc\beta
copying src\python\grpcio\grpc\beta\_metadata.py -> python_build\lib.win-amd64-3.9\grpc\beta
copying src\python\grpcio\grpc\beta\_server_adaptations.py -> python_build\lib.win-amd64-3.9\grpc\beta
copying src\python\grpcio\grpc\beta\__init__.py -> python_build\lib.win-amd64-3.9\grpc\beta
creating python_build\lib.win-amd64-3.9\grpc\experimental
copying src\python\grpcio\grpc\experimental\gevent.py -> python_build\lib.win-amd64-3.9\grpc\experimental
copying src\python\grpcio\grpc\experimental\session_cache.py -> python_build\lib.win-amd64-3.9\grpc\experimental
copying src\python\grpcio\grpc\experimental\__init__.py -> python_build\lib.win-amd64-3.9\grpc\experimental
creating python_build\lib.win-amd64-3.9\grpc\framework
copying src\python\grpcio\grpc\framework\__init__.py -> python_build\lib.win-amd64-3.9\grpc\framework
creating python_build\lib.win-amd64-3.9\grpc\_cython
copying src\python\grpcio\grpc\_cython\__init__.py -> python_build\lib.win-amd64-3.9\grpc\_cython
creating python_build\lib.win-amd64-3.9\grpc\experimental\aio
copying src\python\grpcio\grpc\experimental\aio\_call.py -> python_build\lib.win-amd64-3.9\grpc\experimental\aio
copying src\python\grpcio\grpc\experimental\aio\_channel.py -> python_build\lib.win-amd64-3.9\grpc\experimental\aio
copying src\python\grpcio\grpc\experimental\aio\_server.py -> python_build\lib.win-amd64-3.9\grpc\experimental\aio
copying src\python\grpcio\grpc\experimental\aio\__init__.py -> python_build\lib.win-amd64-3.9\grpc\experimental\aio
creating python_build\lib.win-amd64-3.9\grpc\framework\common
copying src\python\grpcio\grpc\framework\common\cardinality.py -> python_build\lib.win-amd64-3.9\grpc\framework\common
copying src\python\grpcio\grpc\framework\common\style.py -> python_build\lib.win-amd64-3.9\grpc\framework\common
copying src\python\grpcio\grpc\framework\common\__init__.py -> python_build\lib.win-amd64-3.9\grpc\framework\common
creating python_build\lib.win-amd64-3.9\grpc\framework\foundation
copying src\python\grpcio\grpc\framework\foundation\abandonment.py -> python_build\lib.win-amd64-3.9\grpc\framework\foundation
copying src\python\grpcio\grpc\framework\foundation\callable_util.py -> python_build\lib.win-amd64-3.9\grpc\framework\foundation
copying src\python\grpcio\grpc\framework\foundation\future.py -> python_build\lib.win-amd64-3.9\grpc\framework\foundation
copying src\python\grpcio\grpc\framework\foundation\logging_pool.py -> python_build\lib.win-amd64-3.9\grpc\framework\foundation
copying src\python\grpcio\grpc\framework\foundation\stream.py -> python_build\lib.win-amd64-3.9\grpc\framework\foundation
copying src\python\grpcio\grpc\framework\foundation\stream_util.py -> python_build\lib.win-amd64-3.9\grpc\framework\foundation
copying src\python\grpcio\grpc\framework\foundation\__init__.py -> python_build\lib.win-amd64-3.9\grpc\framework\foundation
creating python_build\lib.win-amd64-3.9\grpc\framework\interfaces
copying src\python\grpcio\grpc\framework\interfaces\__init__.py -> python_build\lib.win-amd64-3.9\grpc\framework\interfaces
creating python_build\lib.win-amd64-3.9\grpc\framework\interfaces\base
copying src\python\grpcio\grpc\framework\interfaces\base\base.py -> python_build\lib.win-amd64-3.9\grpc\framework\interfaces\base
copying src\python\grpcio\grpc\framework\interfaces\base\utilities.py -> python_build\lib.win-amd64-3.9\grpc\framework\interfaces\base
copying src\python\grpcio\grpc\framework\interfaces\base\__init__.py -> python_build\lib.win-amd64-3.9\grpc\framework\interfaces\base
creating python_build\lib.win-amd64-3.9\grpc\framework\interfaces\face
copying src\python\grpcio\grpc\framework\interfaces\face\face.py -> python_build\lib.win-amd64-3.9\grpc\framework\interfaces\face
copying src\python\grpcio\grpc\framework\interfaces\face\utilities.py -> python_build\lib.win-amd64-3.9\grpc\framework\interfaces\face
copying src\python\grpcio\grpc\framework\interfaces\face\__init__.py -> python_build\lib.win-amd64-3.9\grpc\framework\interfaces\face
creating python_build\lib.win-amd64-3.9\grpc\_cython\_cygrpc
copying src\python\grpcio\grpc\_cython\_cygrpc\__init__.py -> python_build\lib.win-amd64-3.9\grpc\_cython\_cygrpc
creating python_build\lib.win-amd64-3.9\grpc\_cython\_credentials
copying src\python\grpcio\grpc\_cython\_credentials\roots.pem -> python_build\lib.win-amd64-3.9\grpc\_cython\_credentials
running build_ext
error: [WinError 2] The system cannot find the file specified
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\jose\appdata\local\programs\python\python39\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\jose\\AppData\\Local\\Temp\\pip-install-r67hznce\\grpcio\\setup.py'"'"'; __file__='"'"'C:\\Users\\jose\\AppData\\Local\\Temp\\pip-install-r67hznce\\grpcio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\jose\AppData\Local\Temp\pip-record-3ct7kuu3\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\jose\appdata\local\programs\python\python39\Include\grpcio' Check the logs for full command output.
C:\>pip install instapy -U
Collecting instapy
Using cached instapy-0.6.12-py2.py3-none-any.whl (240 kB)
Requirement already satisfied, skipping upgrade: requests>=2.20.1 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (2.24.0)
Requirement already satisfied, skipping upgrade: plyer>=1.3.1 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (1.4.3)
Requirement already satisfied, skipping upgrade: webdriverdownloader in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (1.1.0.3)
Collecting clarifai>=2.4.1
Using cached clarifai-2.6.2.tar.gz (125 kB)
Requirement already satisfied, skipping upgrade: future>=0.17.1 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (0.18.2)
Requirement already satisfied, skipping upgrade: chardet>=3.0.4 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (3.0.4)
Requirement already satisfied, skipping upgrade: selenium>=3.141.0 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (3.141.0)
Collecting grpcio>=1.16.1
Using cached grpcio-1.33.2.tar.gz (20.9 MB)
Requirement already satisfied, skipping upgrade: EasyProcess>=0.2.3 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (0.3)
Requirement already satisfied, skipping upgrade: idna>=2.7 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (2.10)
Requirement already satisfied, skipping upgrade: certifi>=2018.10.15 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (2020.6.20)
Requirement already satisfied, skipping upgrade: MeaningCloud-python>=1.1.1 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (2.0.0)
Requirement already satisfied, skipping upgrade: protobuf>=3.6.1 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (3.13.0)
Requirement already satisfied, skipping upgrade: emoji>=0.5.1 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (0.6.0)
Requirement already satisfied, skipping upgrade: configparser>=3.5.0 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (5.0.1)
Requirement already satisfied, skipping upgrade: regex>=2018.11.22 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (2020.10.28)
Requirement already satisfied, skipping upgrade: python-telegram-bot>=12.0.0 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (13.0)
Requirement already satisfied, skipping upgrade: urllib3>=1.24.1 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (1.25.11)
Requirement already satisfied, skipping upgrade: PyYAML>=3.13 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (5.3.1)
Requirement already satisfied, skipping upgrade: jsonschema<3,>=2.6.0 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (2.6.0)
Requirement already satisfied, skipping upgrade: six>=1.11.0 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from instapy) (1.15.0)
Requirement already satisfied, skipping upgrade: tqdm in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from webdriverdownloader->instapy) (4.51.0)
Requirement already satisfied, skipping upgrade: beautifulsoup4 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from webdriverdownloader->instapy) (4.9.3)
Collecting googleapis-common-protos<2,>=1.5.0
Using cached googleapis_common_protos-1.52.0-py2.py3-none-any.whl (100 kB)
Requirement already satisfied, skipping upgrade: setuptools in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from protobuf>=3.6.1->instapy) (49.2.1)
Requirement already satisfied, skipping upgrade: APScheduler==3.6.3 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from python-telegram-bot>=12.0.0->instapy) (3.6.3)
Requirement already satisfied, skipping upgrade: cryptography in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from python-telegram-bot>=12.0.0->instapy) (3.2.1)
Requirement already satisfied, skipping upgrade: decorator>=4.4.0 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from python-telegram-bot>=12.0.0->instapy) (4.4.2)
Requirement already satisfied, skipping upgrade: tornado>=5.1 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from python-telegram-bot>=12.0.0->instapy) (6.1)
Requirement already satisfied, skipping upgrade: soupsieve>1.2; python_version >= "3.0" in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from beautifulsoup4->webdriverdownloader->instapy) (2.0.1)
Requirement already satisfied, skipping upgrade: pytz in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from APScheduler==3.6.3->python-telegram-bot>=12.0.0->instapy) (2020.1)
Requirement already satisfied, skipping upgrade: tzlocal>=1.2 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from APScheduler==3.6.3->python-telegram-bot>=12.0.0->instapy) (2.1)
Requirement already satisfied, skipping upgrade: cffi!=1.11.3,>=1.8 in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from cryptography->python-telegram-bot>=12.0.0->instapy) (1.14.3)
Requirement already satisfied, skipping upgrade: pycparser in c:\users\jose\appdata\local\programs\python\python39\lib\site-packages (from cffi!=1.11.3,>=1.8->cryptography->python-telegram-bot>=12.0.0->instapy) (2.20)
Using legacy 'setup.py install' for clarifai, since package 'wheel' is not installed.
Using legacy 'setup.py install' for grpcio, since package 'wheel' is not installed.
Installing collected packages: grpcio, googleapis-common-protos, clarifai, instapy
Running setup.py install for grpcio ... error
ERROR: Command errored out with exit status 1:
command: 'c:\users\jose\appdata\local\programs\python\python39\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\jose\\AppData\\Local\\Temp\\pip-install-__lghjt7\\grpcio\\setup.py'"'"'; __file__='"'"'C:\\Users\\jose\\AppData\\Local\\Temp\\pip-install-__lghjt7\\grpcio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\jose\AppData\Local\Temp\pip-record-9wy96of4\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\jose\appdata\local\programs\python\python39\Include\grpcio'
cwd: C:\Users\jose\AppData\Local\Temp\pip-install-__lghjt7\grpcio\
Complete output (79 lines):
ASM Builds for BoringSSL currently not supported on: win-amd64
Found cython-generated files...
running install
running build
running build_py
running build_project_metadata
creating python_build
creating python_build\lib.win-amd64-3.9
creating python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_auth.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_channel.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_common.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_compression.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_grpcio_metadata.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_interceptor.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_plugin_wrapping.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_runtime_protos.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_server.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_simple_stubs.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\_utilities.py -> python_build\lib.win-amd64-3.9\grpc
copying src\python\grpcio\grpc\__init__.py -> python_build\lib.win-amd64-3.9\grpc
creating python_build\lib.win-amd64-3.9\grpc\aio
copying src\python\grpcio\grpc\aio\_base_call.py -> python_build\lib.win-amd64-3.9\grpc\aio
copying src\python\grpcio\grpc\aio\_base_channel.py -> python_build\lib.win-amd64-3.9\grpc\aio
copying src\python\grpcio\grpc\aio\_base_server.py -> python_build\lib.win-amd64-3.9\grpc\aio
copying src\python\grpcio\grpc\aio\_call.py -> python_build\lib.win-amd64-3.9\grpc\aio
copying src\python\grpcio\grpc\aio\_channel.py -> python_build\lib.win-amd64-3.9\grpc\aio
copying src\python\grpcio\grpc\aio\_interceptor.py -> python_build\lib.win-amd64-3.9\grpc\aio
copying src\python\grpcio\grpc\aio\_metadata.py -> python_build\lib.win-amd64-3.9\grpc\aio
copying src\python\grpcio\grpc\aio\_server.py -> python_build\lib.win-amd64-3.9\grpc\aio
copying src\python\grpcio\grpc\aio\_typing.py -> python_build\lib.win-amd64-3.9\grpc\aio
copying src\python\grpcio\grpc\aio\_utils.py -> python_build\lib.win-amd64-3.9\grpc\aio
copying src\python\grpcio\grpc\aio\__init__.py -> python_build\lib.win-amd64-3.9\grpc\aio
creating python_build\lib.win-amd64-3.9\grpc\beta
copying src\python\grpcio\grpc\beta\implementations.py -> python_build\lib.win-amd64-3.9\grpc\beta
copying src\python\grpcio\grpc\beta\interfaces.py -> python_build\lib.win-amd64-3.9\grpc\beta
copying src\python\grpcio\grpc\beta\utilities.py -> python_build\lib.win-amd64-3.9\grpc\beta
copying src\python\grpcio\grpc\beta\_client_adaptations.py -> python_build\lib.win-amd64-3.9\grpc\beta
copying src\python\grpcio\grpc\beta\_metadata.py -> python_build\lib.win-amd64-3.9\grpc\beta
copying src\python\grpcio\grpc\beta\_server_adaptations.py -> python_build\lib.win-amd64-3.9\grpc\beta
copying src\python\grpcio\grpc\beta\__init__.py -> python_build\lib.win-amd64-3.9\grpc\beta
creating python_build\lib.win-amd64-3.9\grpc\experimental
copying src\python\grpcio\grpc\experimental\gevent.py -> python_build\lib.win-amd64-3.9\grpc\experimental
copying src\python\grpcio\grpc\experimental\session_cache.py -> python_build\lib.win-amd64-3.9\grpc\experimental
copying src\python\grpcio\grpc\experimental\__init__.py -> python_build\lib.win-amd64-3.9\grpc\experimental
creating python_build\lib.win-amd64-3.9\grpc\framework
copying src\python\grpcio\grpc\framework\__init__.py -> python_build\lib.win-amd64-3.9\grpc\framework
creating python_build\lib.win-amd64-3.9\grpc\_cython
copying src\python\grpcio\grpc\_cython\__init__.py -> python_build\lib.win-amd64-3.9\grpc\_cython
creating python_build\lib.win-amd64-3.9\grpc\experimental\aio
copying src\python\grpcio\grpc\experimental\aio\__init__.py -> python_build\lib.win-amd64-3.9\grpc\experimental\aio
creating python_build\lib.win-amd64-3.9\grpc\framework\common
copying src\python\grpcio\grpc\framework\common\cardinality.py -> python_build\lib.win-amd64-3.9\grpc\framework\common
copying src\python\grpcio\grpc\framework\common\style.py -> python_build\lib.win-amd64-3.9\grpc\framework\common
copying src\python\grpcio\grpc\framework\common\__init__.py -> python_build\lib.win-amd64-3.9\grpc\framework\common
creating python_build\lib.win-amd64-3.9\grpc\framework\foundation
copying src\python\grpcio\grpc\framework\foundation\abandonment.py -> python_build\lib.win-amd64-3.9\grpc\framework\foundation
copying src\python\grpcio\grpc\framework\foundation\callable_util.py -> python_build\lib.win-amd64-3.9\grpc\framework\foundation
copying src\python\grpcio\grpc\framework\foundation\future.py -> python_build\lib.win-amd64-3.9\grpc\framework\foundation
copying src\python\grpcio\grpc\framework\foundation\logging_pool.py -> python_build\lib.win-amd64-3.9\grpc\framework\foundation
copying src\python\grpcio\grpc\framework\foundation\stream.py -> python_build\lib.win-amd64-3.9\grpc\framework\foundation
copying src\python\grpcio\grpc\framework\foundation\stream_util.py -> python_build\lib.win-amd64-3.9\grpc\framework\foundation
copying src\python\grpcio\grpc\framework\foundation\__init__.py -> python_build\lib.win-amd64-3.9\grpc\framework\foundation
creating python_build\lib.win-amd64-3.9\grpc\framework\interfaces
copying src\python\grpcio\grpc\framework\interfaces\__init__.py -> python_build\lib.win-amd64-3.9\grpc\framework\interfaces
creating python_build\lib.win-amd64-3.9\grpc\framework\interfaces\base
copying src\python\grpcio\grpc\framework\interfaces\base\base.py -> python_build\lib.win-amd64-3.9\grpc\framework\interfaces\base
copying src\python\grpcio\grpc\framework\interfaces\base\utilities.py -> python_build\lib.win-amd64-3.9\grpc\framework\interfaces\base
copying src\python\grpcio\grpc\framework\interfaces\base\__init__.py -> python_build\lib.win-amd64-3.9\grpc\framework\interfaces\base
creating python_build\lib.win-amd64-3.9\grpc\framework\interfaces\face
copying src\python\grpcio\grpc\framework\interfaces\face\face.py -> python_build\lib.win-amd64-3.9\grpc\framework\interfaces\face
copying src\python\grpcio\grpc\framework\interfaces\face\utilities.py -> python_build\lib.win-amd64-3.9\grpc\framework\interfaces\face
copying src\python\grpcio\grpc\framework\interfaces\face\__init__.py -> python_build\lib.win-amd64-3.9\grpc\framework\interfaces\face
creating python_build\lib.win-amd64-3.9\grpc\_cython\_cygrpc
copying src\python\grpcio\grpc\_cython\_cygrpc\__init__.py -> python_build\lib.win-amd64-3.9\grpc\_cython\_cygrpc
creating python_build\lib.win-amd64-3.9\grpc\_cython\_credentials
copying src\python\grpcio\grpc\_cython\_credentials\roots.pem -> python_build\lib.win-amd64-3.9\grpc\_cython\_credentials
running build_ext
error: [WinError 2] The system cannot find the file specified
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\jose\appdata\local\programs\python\python39\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\jose\\AppData\\Local\\Temp\\pip-install-__lghjt7\\grpcio\\setup.py'"'"'; __file__='"'"'C:\\Users\\jose\\AppData\\Local\\Temp\\pip-install-__lghjt7\\grpcio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\jose\AppData\Local\Temp\pip-record-9wy96of4\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\jose\appdata\local\programs\python\python39\Include\grpcio' Check the logs for full command output.
|
closed
|
2020-10-31T07:51:30Z
|
2020-11-01T03:24:06Z
|
https://github.com/InstaPy/InstaPy/issues/5859
|
[] |
alpha-digger
| 0
|
Netflix/metaflow
|
data-science
| 1,380
|
Replace deprecated imp module with importlib
|
This project uses the [`imp` module](https://docs.python.org/3/library/imp.html) which has been deprecated since Python 3.4 and removed in 3.12:
* Raised `PendingDeprecationWarning` since [3.4 (2014)](https://github.com/python/cpython/commit/e4f41deccf94ccc798b1eb1f44657ade66669a60)
* Raised `DeprecationWarning` since [3.5 (2015)](https://github.com/python/cpython/commit/c0d91aff9a3b91307b26e8b7c34dfbf27bbdd43a)
* Updated `DeprecationWarning` to say removal in 3.12 since [3.10 (2021)](https://github.com/python/cpython/commit/dc6d3e1e4c0c1e4b2210edab8fb4762569dc2936)
* Removal planned for [3.12](https://github.com/python/cpython/issues/98040) [(2023)](https://github.com/python/cpython/pull/98573)
Python 3.12 is [set for release on 2023-10-02](https://devguide.python.org/versions/) and this library is one of the [top 5,000 most-downloaded from PyPI](https://hugovk.github.io/top-pypi-packages/).
Please could you upgrade to use `importlib`? The [`imp` docs](https://docs.python.org/3.11/library/imp.html) have suggestions on what to use to replace each function and constant.
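As an illustration of the migration this issue asks for, the most common `imp` usage has a direct `importlib` equivalent; this is a generic sketch, not the exact call sites in Metaflow:
```python
# importlib equivalent of the deprecated imp.load_source(name, path).
import importlib.util


def load_module_from_path(name: str, path: str):
    """Load a module from a file path without using the removed imp module."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```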
|
closed
|
2023-04-29T12:36:42Z
|
2023-05-08T14:25:25Z
|
https://github.com/Netflix/metaflow/issues/1380
|
[] |
hugovk
| 0
|
Kanaries/pygwalker
|
matplotlib
| 124
|
[Feat] Workflow for programmatically exporting a plot
|
A typical workflow is probably to explore the dataset using PyGWalker, create some plots and finally export the plot as an image.
What would be the best way forward to recreate the plot using an exported visualization configuration and programmatically export the plot?
In my opinion, such an interaction with PyGWalker could look like the following, where I would prefer the export as a Vega configuration option.
```python
gwalker = pyg.walk(dataframe, spec=vis_spec)
# Export as Vega lite configuration:
vega_config = gwalker.export_vega()
# Export as JPEG:
gwalker.export(path="plot.jpeg")
```
Do you have a feature like that in mind, and what are your thoughts about such functionality?
|
closed
|
2023-06-02T07:20:58Z
|
2023-08-03T01:17:12Z
|
https://github.com/Kanaries/pygwalker/issues/124
|
[
"enhancement",
"P1"
] |
Julius-Plehn
| 10
|
Johnserf-Seed/TikTokDownload
|
api
| 131
|
Video file naming issue: strip emoji
|
When downloading a video, the saved filename can contain Unicode emoji characters. Could you add a pattern match to strip them out directly? Thanks.

|
closed
|
2022-04-16T01:38:24Z
|
2022-04-16T11:12:27Z
|
https://github.com/Johnserf-Seed/TikTokDownload/issues/131
|
[
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] |
sym9518
| 2
|
fastapi/sqlmodel
|
pydantic
| 533
|
Data Integrity: Raise error on attempt to delete an object required via a Relationship
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
class Contact(SQLModel, table=True):
"""An entry in the address book."""
id: Optional[int] = Field(default=None, primary_key=True)
first_name: Optional[str]
last_name: Optional[str]
company: Optional[str]
email: Optional[str]
address_id: Optional[int] = Field(default=None, foreign_key="address.id")
address: Optional[Address] = Relationship(
back_populates="contacts", sa_relationship_kwargs={"lazy": "subquery"}
)
invoicing_contact_of: List["Client"] = Relationship(
back_populates="invoicing_contact", sa_relationship_kwargs={"lazy": "subquery"}
)
class Client(SQLModel, table=True):
"""A client the freelancer has contracted with."""
id: Optional[int] = Field(default=None, primary_key=True)
name: str = Field(default="")
# Client 1:1 invoicing Contact
invoicing_contact_id: int = Field(default=None, foreign_key="contact.id")
invoicing_contact: Contact = Relationship(
back_populates="invoicing_contact_of",
sa_relationship_kwargs={"lazy": "subquery"},
)
contracts: List["Contract"] = Relationship(
back_populates="client", sa_relationship_kwargs={"lazy": "subquery"}
)
```
### Description
(As far as I know the documentation does not handle data integrity topics - please point me to the chapter if I am wrong.)
Consider these two model classes `Contact` and `Client`. To keep the integrity of the data model, I need the following behavior:
An exception is raised if there is an attempt to delete a `Contact` that is still the invoicing contact of an existing `Client`.
Does SQLModel support this, perhaps via SQLAlchemy?
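Not an authoritative answer, but one way this kind of constraint is commonly enforced is at the database level, by declaring the foreign key with `ondelete="RESTRICT"` through a SQLAlchemy `Column` passed to SQLModel's `Field`. The sketch below reuses the field names from the example above; note that some backends (e.g. SQLite) only enforce it with foreign-key support enabled:
```python
# Sketch: a RESTRICT foreign key so deleting a referenced Contact fails at the database level.
from typing import Optional

from sqlalchemy import Column, ForeignKey, Integer
from sqlmodel import Field, SQLModel


class Client(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str = ""
    invoicing_contact_id: Optional[int] = Field(
        default=None,
        sa_column=Column(Integer, ForeignKey("contact.id", ondelete="RESTRICT")),
    )
```
With such a constraint in place, attempting to delete a `Contact` row that is still referenced by a `Client` should raise an integrity error instead of silently succeeding.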
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.8
### Python Version
3.10
### Additional Context
_No response_
|
open
|
2023-01-18T20:25:03Z
|
2023-02-09T15:50:03Z
|
https://github.com/fastapi/sqlmodel/issues/533
|
[
"question"
] |
clstaudt
| 8
|
PaddlePaddle/models
|
nlp
| 5,329
|
How can hot-word (custom vocabulary) support be added?
|
How can I modify the source code to define my own hot words and increase the recognition weight of those words?
|
open
|
2021-07-20T15:36:24Z
|
2021-07-25T03:55:04Z
|
https://github.com/PaddlePaddle/models/issues/5329
|
[] |
shanmon110
| 1
|
graphql-python/graphene-django
|
graphql
| 1,530
|
Error debugging
|
**problem**
The code raised an error, and Graphene-Django returned the error only through the response, which makes debugging harder.
**django setting**
Raising the error directly when Django's global DEBUG setting is enabled (i.e., in the development environment) would be a good choice.
|
open
|
2024-07-25T06:24:30Z
|
2024-07-25T06:24:30Z
|
https://github.com/graphql-python/graphene-django/issues/1530
|
[
"✨enhancement"
] |
HuLight
| 0
|
scikit-learn/scikit-learn
|
machine-learning
| 30,817
|
sample_weight is silently ignored in LogisticRegressionCV.score when metadata routing is enabled
|
### Describe the bug
I'm not sure if it is a proper bug, or my lack of understanding of the metadata routing API ;)
When `enable_metadata_routing=True`, the `score` method of a `LogisticRegressionCV` estimator will ignore `sample_weight`.
```python
set_config(enable_metadata_routing=True)
logreg_cv = LogisticRegressionCV().fit(X, y)
logreg_cv.score(X, y, sample_weight=sw)==logreg_cv.score(X, y) #unweighted accuracy
```
I found it surprising, because the `score` method works fine when `enable_metadata_routing=False`, so the same piece of code behaves differently depending on the metadata routing config.
```python
set_config(enable_metadata_routing=False)
logreg_cv = LogisticRegressionCV().fit(X, y)
logreg_cv.score(X, y, sample_weight=sw) #weighted accuracy
```
If I understood the metadata routing API correctly, to make the `score` method `sample_weight` aware we need to explicitly pass a scorer that request it:
```python
set_config(enable_metadata_routing=True)
weighted_accuracy = make_scorer(accuracy_score).set_score_request(sample_weight=True)
logreg_cv = LogisticRegressionCV(scoring=weighted_accuracy).fit(X, y)
logreg_cv.score(X, y, sample_weight=sw) #weighted accuracy
```
If it's the intended behavior of the metadata routing API, maybe we should warn the user or raise an error in the first case, instead of silently ignoring `sample_weight` ?
### Steps/Code to Reproduce
```python
from sklearn import set_config
from sklearn.metrics import make_scorer, accuracy_score
from sklearn.linear_model import LogisticRegressionCV
import numpy as np
rng = np.random.RandomState(22)
n_samples, n_features = 10, 4
X = rng.rand(n_samples, n_features)
y = rng.randint(0, 2, size=n_samples)
sw = rng.randint(0, 5, size=n_samples)
set_config(enable_metadata_routing=True)
logreg_cv = LogisticRegressionCV()
logreg_cv.fit(X, y)
# sample_weight is silently ignored in logreg_cv.score
assert logreg_cv.score(X, y) == logreg_cv.score(X, y, sample_weight=sw)
assert not logreg_cv.score(X, y, sample_weight=sw)==accuracy_score(logreg_cv.predict(X),y, sample_weight=sw)
```
### Expected Results
Either `logreg_cv.score(X, y, sample_weight=sw)` raises an error/warning or the assertions are false.
### Actual Results
The assertions are true.
### Versions
```shell
sklearn: 1.7.dev0
```
|
open
|
2025-02-12T15:49:01Z
|
2025-02-16T23:23:32Z
|
https://github.com/scikit-learn/scikit-learn/issues/30817
|
[
"Bug",
"Metadata Routing"
] |
antoinebaker
| 5
|
Urinx/WeixinBot
|
api
| 120
|
Can a received message be identified as coming from a muted group?
|
Can a received message be identified as coming from a muted group?
|
open
|
2016-10-29T10:11:47Z
|
2016-10-29T10:11:47Z
|
https://github.com/Urinx/WeixinBot/issues/120
|
[] |
luckysta
| 0
|
pydantic/pydantic-ai
|
pydantic
| 497
|
Feature Request: Support for OpenAI o1 Model
|
The OpenAI o1 model introduces several new features, including:
- Structured responses
- Tool calling
- Developer (formerly system) prompts
Additionally, it offers improved pricing compared to the currently supported o1-preview model in pydantic-ai.
These enhancements could significantly improve the library’s functionality and make it more cost-effective. It would be great to see support for this model added to the library.
|
closed
|
2024-12-19T14:10:45Z
|
2025-01-31T00:40:37Z
|
https://github.com/pydantic/pydantic-ai/issues/497
|
[
"new models"
] |
lucasprim
| 8
|
public-apis/public-apis
|
api
| 3,195
|
Cat-Facts
|
closed
|
2022-06-18T09:39:48Z
|
2022-07-15T21:26:35Z
|
https://github.com/public-apis/public-apis/issues/3195
|
[] |
shoxnazar
| 0
|
|
ScottfreeLLC/AlphaPy
|
pandas
| 39
|
Error During save_predictions
|
**Describe the bug**
Attempting to run the MarketFlow tutorial, the code throws an error after plot generation, during what seems like its attempt to write the data.
**To Reproduce**
Steps to reproduce the behavior:
1. ~> mflow --train 2019-01-01
2. ~> mflow --predict 2020-07-21
both train and predict flags seem to exit at the same place.
**Expected behavior**
I expected the code to finish and write out the file to the ./output/ directory
**Desktop (please complete the following information):**
- OS: Linux Mint 18.3
- Python: v3.7.7
**Traceback**
```
[07/20/20 21:06:02] INFO Writing feature map to ./model/feature_map_20200720.pkl
[07/20/20 21:06:02] INFO Loading data from ./input/test_20200720.csv
Traceback (most recent call last):
File "/home/michael/miniconda3/bin/mflow", line 8, in <module>
sys.exit(main())
File "/home/michael/miniconda3/lib/python3.7/site-packages/alphapy/market_flow.py", line 430, in main
model = market_pipeline(model, market_specs)
File "/home/michael/miniconda3/lib/python3.7/site-packages/alphapy/market_flow.py", line 292, in market_pipeline
run_analysis(a, lag_period, forecast_period, leaders, predict_history)
File "/home/michael/miniconda3/lib/python3.7/site-packages/alphapy/analysis.py", line 270, in run_analysis
analysis.model = main_pipeline(model)
File "/home/michael/miniconda3/lib/python3.7/site-packages/alphapy/__main__.py", line 426, in main_pipeline
model = training_pipeline(model)
File "/home/michael/miniconda3/lib/python3.7/site-packages/alphapy/__main__.py", line 289, in training_pipeline
save_model(model, 'BEST', Partition.test)
File "/home/michael/miniconda3/lib/python3.7/site-packages/alphapy/model.py", line 1315, in save_model
preds, probas = save_predictions(model, tag, partition)
File "/home/michael/miniconda3/lib/python3.7/site-packages/alphapy/model.py", line 1208, in save_predictions
pd_indices = pf[pf.date >= predict_date].index.tolist()
File "/home/michael/miniconda3/lib/python3.7/site-packages/pandas/core/generic.py", line 5274, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'date'
```
|
open
|
2020-07-21T01:12:05Z
|
2021-09-22T19:03:46Z
|
https://github.com/ScottfreeLLC/AlphaPy/issues/39
|
[
"bug"
] |
minger88
| 2
|
benbusby/whoogle-search
|
flask
| 1,015
|
Option to block webcrawlers (robots.txt)
|
**Describe the feature you'd like to see added**
In order to block webcrawlers and indexers, Whoogle should serve a `/robots.txt` similar to the following
```
User-agent: *
Disallow: /
```
This could be toggled with a new environment variable like `WHOOGLE_DISABLE_INDEXING` that is turned off by default to match the existing behavior
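A minimal sketch of how such a route could look in a Flask app; the module layout and the exact environment-variable handling are assumptions, not Whoogle's actual code:
```python
# Serve /robots.txt, disallowing crawlers when WHOOGLE_DISABLE_INDEXING is set.
import os

from flask import Flask, Response

app = Flask(__name__)


@app.route("/robots.txt")
def robots() -> Response:
    if os.getenv("WHOOGLE_DISABLE_INDEXING", "0") == "1":
        body = "User-agent: *\nDisallow: /\n"
    else:
        body = "User-agent: *\nAllow: /\n"
    return Response(body, mimetype="text/plain")
```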
|
closed
|
2023-06-03T06:53:32Z
|
2023-06-26T22:18:10Z
|
https://github.com/benbusby/whoogle-search/issues/1015
|
[
"enhancement"
] |
ufUNnxagpM
| 0
|
gradio-app/gradio
|
python
| 10,512
|
Links in Dataframe do not open on click
|
### Describe the bug
Hi, I am not able to click on the links defined in a gr.Dataframe with 5.14.0. It was possible with 5.9.1.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks() as app:
with gr.Accordion(""):
df = gr.Dataframe([[f"""<a href=\"https://www.google.com/">{'google'}</a>"""]],
headers=["link"], elem_classes='dataframe',
datatype='markdown', interactive=False)
if __name__ == "__main__":
app.launch()
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.14.0
gradio_client version: 1.7.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.3.0
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.3.2
gradio-client==1.7.0 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.1.3
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.9.14
packaging: 23.2
pandas: 2.2.2
pillow: 10.2.0
pydantic: 2.6.1
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.1
ruff: 0.9.4
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit: 0.12.0
typer: 0.12.5
typing-extensions: 4.10.0
urllib3: 2.2.1
uvicorn: 0.27.1
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.2.0
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 23.2
typing-extensions: 4.10.0
websockets: 11.0.3
```
### Severity
I can work around it
|
closed
|
2025-02-05T11:01:49Z
|
2025-02-06T13:14:12Z
|
https://github.com/gradio-app/gradio/issues/10512
|
[
"bug",
"Priority",
"💾 Dataframe",
"Regression"
] |
cuccatodavide
| 1
|
amidaware/tacticalrmm
|
django
| 1,577
|
Search by Windows Version
|
Hi,
We would greatly appreciate having an option to search Windows devices by their Windows Version (e.g., _21H2_, _22H2_, etc.). This feature would help us quickly verify whether updates were successfully installed on the devices.
Thank you for considering our request.
Best regards, Alex
|
closed
|
2023-07-28T05:26:56Z
|
2024-03-12T14:16:21Z
|
https://github.com/amidaware/tacticalrmm/issues/1577
|
[] |
Linux-Alex
| 4
|
ultralytics/ultralytics
|
machine-learning
| 19,513
|
pt model to an ONNX model, different result
|
When using the official export tool to convert a PyTorch (.pt) model to an ONNX model, inference on the same image with the .pt model and the ONNX model produces inconsistent results.
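A generic way to quantify such a discrepancy, independent of the Ultralytics exporter, is to export a model with `torch.onnx.export` and compare raw outputs against ONNX Runtime on the same input; this is a sketch with a toy network, not the YOLO export path:
```python
# Compare PyTorch and ONNX Runtime outputs on the same input tensor.
import numpy as np
import onnxruntime as ort
import torch

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
x = torch.randn(1, 3, 64, 64)
torch.onnx.export(model, x, "model.onnx", input_names=["images"], output_names=["out"])

with torch.no_grad():
    ref = model(x).numpy()

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
out = sess.run(None, {"images": x.numpy()})[0]
print("max abs diff:", np.abs(ref - out).max())  # small numerical differences are expected
```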
|
open
|
2025-03-04T09:16:58Z
|
2025-03-12T22:44:15Z
|
https://github.com/ultralytics/ultralytics/issues/19513
|
[
"non-reproducible",
"exports"
] |
GXDD666
| 7
|
unit8co/darts
|
data-science
| 2,427
|
How to access and use functionalities of the underlying / wrapped regression models?
|
Is it possible to use methods/functions of the models which Darts wraps around?
For example for LGBM, is it possible to create a Darts LGBM model object but still use this [plot_importance](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.plot_importance.html#lightgbm.plot_importance) function from the original LGBM?
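A sketch of one way this is sometimes done, under the assumption that the fitted Darts model exposes the wrapped LightGBM estimator via its `.model` attribute (worth verifying for your Darts version; with multi-step output the estimator may be wrapped differently):
```python
# Plot LightGBM feature importance from the estimator wrapped by Darts (assumption: `.model`).
import lightgbm as lgb
import matplotlib.pyplot as plt
from darts.datasets import AirPassengersDataset
from darts.models import LightGBMModel

series = AirPassengersDataset().load()
darts_model = LightGBMModel(lags=12)
darts_model.fit(series)

inner = darts_model.model  # assumed to be a plain LGBMRegressor here
lgb.plot_importance(inner)
plt.show()
```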
|
closed
|
2024-06-24T14:08:36Z
|
2024-07-15T09:22:41Z
|
https://github.com/unit8co/darts/issues/2427
|
[
"question",
"q&a"
] |
ETTAN93
| 1
|
plotly/plotly.py
|
plotly
| 4,141
|
Area charts: area() got an unexpected keyword argument 'stackgaps'
|
Hi, I have issues with different area charts which I'm using to monitor the presence of different types of events.
My data in my area charts are going all over the place because values seem by default to be interpolated for the traces which have no data at one "location" (for a given month, in my case)


In these examples, each time there is no data, the curves should go back to zero.
Instead, the area extends for each trace to the next point where there is some data.
I would like to insert a zero at locations at which other traces in this group have data but this one does not
from https://github.com/plotly/plotly.js/issues/1217
and from https://plotly.com/python/reference/scatter/#scatter-line
I understood that I should use `stackgaps="infer zero"` to avoid having my data in my area chart going all over the place because values are by default interpolated for the traces which have no data at one "location" (for a given month, in my case).
I'm declaring my chart as a plotly express area chart.
It seems that under the hood it is a scatter plot, and it links to the reference for scatter plots.
So I don't really undertand why my `stackgaps` parameter is not recognized, either in the `fig = px.area` definition or in the `fig.update_traces(stackgaps="infer zero",)`
Is that a bug? What am I doing wrong?
|
closed
|
2023-04-02T21:38:20Z
|
2024-07-11T14:08:59Z
|
https://github.com/plotly/plotly.py/issues/4141
|
[] |
Skrattoune
| 2
|
rasbt/watermark
|
jupyter
| 74
|
Watermark fails to recognize all project used libraries
|
I find Colab's pre-installed libraries convenient, as well as AWS SageMaker's pre-defined kernels, but this convenience becomes very annoying when gathering the requirements.txt file, as I end up with many libraries I have not actually used in my project. I know I could create a virtual environment at the very beginning, but I am wondering if there is a way to avoid it.
I have recently discovered watermark which partially solves this issue. Nevertheless, for this solution to be a perfect fit it still has two issues that I will exemplify below and that you can easily reproduce in Colab.
```
!pip install fastai --upgrade
!pip install voila
!jupyter serverextension enable voila --sys-prefix
!pip install watermark
from fastai.vision.all import *
from fastai.vision.widgets import *
%load_ext watermark
%watermark --iversion
```
Neither fastai nor voila appears in the output, as I am not running `import fastai` and voila is only loaded as an extension.
`%watermark -p fastai`
This would return the correct output for e.g. fastai, but I would like to be able to generate the list automatically without having to manually check for the missing packages.
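A possible fallback (a sketch using the standard library, not a watermark feature): query the versions of an explicit list of packages, so packages used via extensions or star-imports are still covered. The package names below are just the ones from this example.
```python
# Hedged sketch: report versions for a hand-picked list of packages, whether or not
# they were imported with a plain `import` statement.
from importlib.metadata import PackageNotFoundError, version

for pkg in ["fastai", "voila", "watermark"]:
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```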
|
closed
|
2021-01-11T20:40:33Z
|
2024-09-22T20:08:21Z
|
https://github.com/rasbt/watermark/issues/74
|
[
"help wanted"
] |
gloriamacia
| 3
|
twopirllc/pandas-ta
|
pandas
| 81
|
Parameters in strategy()'s ta
|
Hi @twopirllc,
I'm playing with the new improved `strategy()` method; it's quite an improvement, nice work! You should push it to `pip` ;)
In my workflow, I'm computing a bunch of data on the indicators and then select the relevant ones. However, I don't know _a priori_ which ones nor their parameters, and they are not always the same.
Since the indicators don't all have the same number of parameters, they would be passed as a list of floats instead of by their explicit names. For example:
CustomStrategy = ta.Strategy(name="MyStrat",
ta=[{'kind':'sma', 'params':[30]},
{'kind':'bbands', 'params':[20,2]},
{'kind':'macd', 'params':[8,21]},
],
)
This way, the code would be more modular. Is it possible to implement that?
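For illustration only, a rough sketch of how a positional `params` list could be mapped onto an indicator's keyword arguments by inspecting its signature; the helper name and the filtering of the price/volume series arguments are assumptions on my side, not pandas-ta behaviour:
```python
# Hedged sketch: turn a positional `params` list into kwargs for a pandas-ta indicator,
# skipping the price/volume series arguments. Parameter order follows the signature.
import inspect

import pandas_ta as ta

def kwargs_from_params(kind, params):
    fn = getattr(ta, kind)
    names = [n for n in inspect.signature(fn).parameters
             if n not in ("open_", "high", "low", "close", "volume", "kwargs")]
    return dict(zip(names, params))

print(kwargs_from_params("bbands", [20, 2]))   # expected: {'length': 20, 'std': 2}
```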
|
closed
|
2020-07-26T20:24:42Z
|
2020-09-29T16:56:00Z
|
https://github.com/twopirllc/pandas-ta/issues/81
|
[
"enhancement"
] |
drpaprika
| 16
|
microsoft/nlp-recipes
|
nlp
| 583
|
Nlp
|
closed
|
2020-04-10T07:55:10Z
|
2020-04-13T22:43:53Z
|
https://github.com/microsoft/nlp-recipes/issues/583
|
[] |
sagarkumar2804
| 1
|
|
MaartenGr/BERTopic
|
nlp
| 1,320
|
BaseRepresentation in Multiple Representations raises an exception
|
Hi,
I'm trying to use BaseRepresentation in Multiple Representations in the following way:
```
self.representation_model = {
'Main': BaseRepresentation,
'KeyBert': KeyBERTInspired(top_n_words=self.top_n_words, random_state=42),
'KeyBertMMR': [
KeyBERTInspired(top_n_words=self.top_n_words, random_state=42),
MaximalMarginalRelevance(diversity=self.mmr_diversity, top_n_words=self.top_n_words)
],
'MMR': MaximalMarginalRelevance(diversity=self.mmr_diversity, top_n_words=self.top_n_words)
}
```
The parameters are defined when the object is created:
```
my_topics = Topics(corpus=corpus,
dates=dates,
stopwords=stopwords,
nr_topics=20,
spacy_model='ru_core_news_lg',
top_n_words=30,
mmr_diversity=0.7,
text_gen_model='bigscience/mt0-large',
n_gram_range=(1, 3),
nr_bins=70,
repr_model=repr_model,
embedding_model_name='paraphrase-multilingual-mpnet-base-v2',
nr_topics_reduced=20,
outliers_reduction_strategy='embeddings'
)
```
When running the code I got:
`TypeError: BaseRepresentation.extract_topics() missing 1 required positional argument: 'topics'`
Traceback below:
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ C:\Users\piotr\onedrive\PycharmProjects\tematy-bert-310\slupsk-ria-tass.py:177 in <module> │
│ │
│ 174 │ logger.info(f'{Fore.GREEN}Quitting{Style.RESET_ALL}') │
│ 175 │
│ 176 if __name__ =='__main__': │
│ ❱ 177 │ main() │
│ 178 │
│ │
│ C:\Users\piotr\onedrive\PycharmProjects\tematy-bert-310\slupsk-ria-tass.py:169 in main │
│ │
│ 166 │ │ │ │ │ │ │ │ f'{Fore.BLUE}LEMMA/ORIG:{Style.RESET_ALL} {lemorig}') │
│ 167 │ │ │ │ │ filename = re.sub(r'\.', '_', src) + f'_{lemorig}' │
│ 168 │ │ │ │ │ stopwords = Filters.stopwords_ru + ['новости', 'новость', 'риа', 'та │
│ ❱ 169 │ │ │ │ │ make_topics(corp_by_src[src][lemorig], corp_by_src[src]['date'], sto │
│ 170 │ │ │ │ │ │ │ │ f'{infilename}') │
│ 171 │ # my_topics = Topics() │
│ 172 │ # my_topics.load_model(f'{basepath}Ria_ru_lemma_Base_outliers_reduced.model') │
│ │
│ C:\Users\piotr\onedrive\PycharmProjects\tematy-bert-310\slupsk-ria-tass.py:107 in make_topics │
│ │
│ 104 │ │ │ │ │ outliers_reduction_strategy='embeddings' │
│ 105 │ │ │ │ │ ) │
│ 106 │ logger.info(f'{Fore.GREEN}Topics search{Style.RESET_ALL}') │
│ ❱ 107 │ my_topics.topics_from_corpus(f'{filename}') │
│ 108 │ # logger.info(f'{Fore.GREEN}Reducing outliers{Style.RESET_ALL}') │
│ 109 │ # my_topics.reduce_outliers(f'{filename}') │
│ 110 │ # logger.info(f'{Fore.GREEN}Reducing topics{Style.RESET_ALL}') │
│ │
│ C:\Users\piotr\OneDrive\python\pgc\nlp\topics.py:227 in topics_from_corpus │
│ │
│ 224 │ │
│ 225 │ def topics_from_corpus(self, filename): │
│ 226 │ │ self.logger.info(f'{Fore.GREEN}Transforming corpus{Style.RESET_ALL}') │
│ ❱ 227 │ │ self.topics, self.probs = self.topic_model.fit_transform(self.corpus) │
│ 228 │ │ # self.generate_and_set_topic_labels() │
│ 229 │ │ self.save_model_and_topics(filename, 'initial') │
│ 230 │
│ │
│ C:\Users\piotr\OneDrive\PycharmProjects\tematy-bert-310\lib\site-packages\bertopic\_bertopic.py: │
│ 411 in fit_transform │
│ │
│ 408 │ │ │ self._save_representative_docs(custom_documents) │
│ 409 │ │ else: │
│ 410 │ │ │ # Extract topics by calculating c-TF-IDF │
│ ❱ 411 │ │ │ self._extract_topics(documents, embeddings=embeddings) │
│ 412 │ │ │ │
│ 413 │ │ │ # Reduce topics │
│ 414 │ │ │ if self.nr_topics: │
│ │
│ C:\Users\piotr\OneDrive\PycharmProjects\tematy-bert-310\lib\site-packages\bertopic\_bertopic.py: │
│ 3296 in _extract_topics │
│ │
│ 3293 │ │ """ │
│ 3294 │ │ documents_per_topic = documents.groupby(['Topic'], as_index=False).agg({'Documen │
│ 3295 │ │ self.c_tf_idf_, words = self._c_tf_idf(documents_per_topic) │
│ ❱ 3296 │ │ self.topic_representations_ = self._extract_words_per_topic(words, documents) │
│ 3297 │ │ self._create_topic_vectors(documents=documents, embeddings=embeddings, mappings= │
│ 3298 │ │ self.topic_labels_ = {key: f"{key}_" + "_".join([word[0] for word in values[:4]] │
│ 3299 │ │ │ │ │ │ │ for key, values in │
│ │
│ C:\Users\piotr\OneDrive\PycharmProjects\tematy-bert-310\lib\site-packages\bertopic\_bertopic.py: │
│ 3573 in _extract_words_per_topic │
│ │
│ 3570 │ │ │ topics = self.representation_model.extract_topics(self, documents, c_tf_idf, │
│ 3571 │ │ elif isinstance(self.representation_model, dict): │
│ 3572 │ │ │ if self.representation_model.get("Main"): │
│ ❱ 3573 │ │ │ │ topics = self.representation_model["Main"].extract_topics(self, document │
│ 3574 │ │ topics = {label: values[:self.top_n_words] for label, values in topics.items()} │
│ 3575 │ │ │
│ 3576 │ │ # Extract additional topic aspects │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: BaseRepresentation.extract_topics() missing 1 required positional argument: 'topics'
```
Please advise: what am I doing wrong? The problem does not occur when there is no `BaseRepresentation` in the `representation_model` dictionary.
Bests
Piotr
|
closed
|
2023-06-06T08:43:58Z
|
2023-06-06T22:41:18Z
|
https://github.com/MaartenGr/BERTopic/issues/1320
|
[] |
piotrcelinski
| 2
|
agronholm/anyio
|
asyncio
| 422
|
Cannot mark tests programmatically for anyio
|
I used to mark all my tests for `pytest-asyncio` using this hook in my `conftest.py` file (here adopted for anyio already):
```py
import inspect
import pytest
def pytest_collection_modifyitems(session, config, items): # noqa
for item in items:
# pylint: disable=protected-access
if (
isinstance(item, pytest.Function)
and inspect.iscoroutinefunction(item.function)
):
print(f"\nMarking {item}...")
item.add_marker(pytest.mark.anyio)
```
If I run this test file, only the second test gets executed, which I consider a bug given that the documentation states a test is picked up by anyio by marking it.
If it's not a bug, what am I missing?
```py
import pytest
@pytest.fixture(scope="session")
def anyio_backend():
return 'asyncio'
async def test_not_executed():
assert True
@pytest.mark.anyio
async def test_executed():
assert True
```
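In case the collection hook and the plugin's own marker handling interact badly, a module-level `pytestmark` is a plain-pytest alternative worth trying; this is a sketch of the idea, not a statement about how the anyio plugin orders its hooks:
```py
# Hedged sketch: mark every coroutine test in a module at once via pytestmark,
# instead of adding the marker per function or in a collection hook.
import pytest

pytestmark = pytest.mark.anyio


@pytest.fixture(scope="session")
def anyio_backend():
    return 'asyncio'


async def test_now_executed():
    assert True
```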
|
open
|
2022-02-28T13:25:13Z
|
2024-02-26T19:55:29Z
|
https://github.com/agronholm/anyio/issues/422
|
[
"documentation",
"pytest plugin"
] |
chbndrhnns
| 8
|
MaartenGr/BERTopic
|
nlp
| 2,255
|
RuntimeError: Failed to import transformers.integrations.integration_utils because of the following error (look up to see its traceback): Failed to import transformers.modeling_utils because of the following error (look up to see its traceback): partially initialized module 'torch._dynamo' has no attribute 'external_utils' (most likely due to a circular import)
|
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Desribe the bug
!pip install bertopic
from bertopic import BERTopic
RuntimeError: Failed to import transformers.integrations.integration_utils because of the following error (look up to see its traceback):
Failed to import transformers.modeling_utils because of the following error (look up to see its traceback):
partially initialized module 'torch._dynamo' has no attribute 'external_utils' (most likely due to a circular import)

### Reproduction
```python
from bertopic import BERTopic
```
### BERTopic Version
I don't know
|
closed
|
2024-12-24T16:26:53Z
|
2024-12-30T16:22:07Z
|
https://github.com/MaartenGr/BERTopic/issues/2255
|
[
"bug"
] |
tempdeltavalue
| 3
|
horovod/horovod
|
machine-learning
| 3,223
|
docker build error
|
**Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet)
2. Framework version:
3. Horovod version:
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version:
8. Spark / PySpark version:
9. Ray version:
10. OS and version: Ubuntu 18.04.3 LTS
11. GCC version:
12. CMake version:
13. Docker version: 19.03.13, build 4484c46d9d
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
I want to build my own Docker image using the Dockerfile provided in the project, but an error happened. I changed nothing.
```
Running setup.py install for horovod: started
Running command /usr/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-4j_mtnr3/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-4j_mtnr3/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-hn4z8609/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.7/horovod
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.7
creating build/lib.linux-x86_64-3.7/horovod
copying horovod/__init__.py -> build/lib.linux-x86_64-3.7/horovod
creating build/lib.linux-x86_64-3.7/horovod/torch
copying horovod/torch/sync_batch_norm.py -> build/lib.linux-x86_64-3.7/horovod/torch
copying horovod/torch/compression.py -> build/lib.linux-x86_64-3.7/horovod/torch
copying horovod/torch/functions.py -> build/lib.linux-x86_64-3.7/horovod/torch
copying horovod/torch/optimizer.py -> build/lib.linux-x86_64-3.7/horovod/torch
copying horovod/torch/__init__.py -> build/lib.linux-x86_64-3.7/horovod/torch
copying horovod/torch/mpi_ops.py -> build/lib.linux-x86_64-3.7/horovod/torch
creating build/lib.linux-x86_64-3.7/horovod/tensorflow
copying horovod/tensorflow/sync_batch_norm.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
copying horovod/tensorflow/compression.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
copying horovod/tensorflow/functions.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
copying horovod/tensorflow/gradient_aggregation.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
copying horovod/tensorflow/util.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
copying horovod/tensorflow/__init__.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
copying horovod/tensorflow/mpi_ops.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
copying horovod/tensorflow/elastic.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
copying horovod/tensorflow/gradient_aggregation_eager.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
creating build/lib.linux-x86_64-3.7/horovod/common
copying horovod/common/exceptions.py -> build/lib.linux-x86_64-3.7/horovod/common
copying horovod/common/process_sets.py -> build/lib.linux-x86_64-3.7/horovod/common
copying horovod/common/util.py -> build/lib.linux-x86_64-3.7/horovod/common
copying horovod/common/basics.py -> build/lib.linux-x86_64-3.7/horovod/common
copying horovod/common/__init__.py -> build/lib.linux-x86_64-3.7/horovod/common
copying horovod/common/elastic.py -> build/lib.linux-x86_64-3.7/horovod/common
creating build/lib.linux-x86_64-3.7/horovod/ray
copying horovod/ray/utils.py -> build/lib.linux-x86_64-3.7/horovod/ray
copying horovod/ray/runner.py -> build/lib.linux-x86_64-3.7/horovod/ray
copying horovod/ray/driver_service.py -> build/lib.linux-x86_64-3.7/horovod/ray
copying horovod/ray/strategy.py -> build/lib.linux-x86_64-3.7/horovod/ray
copying horovod/ray/__init__.py -> build/lib.linux-x86_64-3.7/horovod/ray
copying horovod/ray/ray_logger.py -> build/lib.linux-x86_64-3.7/horovod/ray
copying horovod/ray/elastic.py -> build/lib.linux-x86_64-3.7/horovod/ray
copying horovod/ray/worker.py -> build/lib.linux-x86_64-3.7/horovod/ray
creating build/lib.linux-x86_64-3.7/horovod/spark
copying horovod/spark/gloo_run.py -> build/lib.linux-x86_64-3.7/horovod/spark
copying horovod/spark/conf.py -> build/lib.linux-x86_64-3.7/horovod/spark
copying horovod/spark/runner.py -> build/lib.linux-x86_64-3.7/horovod/spark
copying horovod/spark/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark
copying horovod/spark/mpi_run.py -> build/lib.linux-x86_64-3.7/horovod/spark
creating build/lib.linux-x86_64-3.7/horovod/mxnet
copying horovod/mxnet/compression.py -> build/lib.linux-x86_64-3.7/horovod/mxnet
copying horovod/mxnet/functions.py -> build/lib.linux-x86_64-3.7/horovod/mxnet
copying horovod/mxnet/__init__.py -> build/lib.linux-x86_64-3.7/horovod/mxnet
copying horovod/mxnet/mpi_ops.py -> build/lib.linux-x86_64-3.7/horovod/mxnet
creating build/lib.linux-x86_64-3.7/horovod/keras
copying horovod/keras/callbacks.py -> build/lib.linux-x86_64-3.7/horovod/keras
copying horovod/keras/__init__.py -> build/lib.linux-x86_64-3.7/horovod/keras
copying horovod/keras/elastic.py -> build/lib.linux-x86_64-3.7/horovod/keras
creating build/lib.linux-x86_64-3.7/horovod/_keras
copying horovod/_keras/callbacks.py -> build/lib.linux-x86_64-3.7/horovod/_keras
copying horovod/_keras/__init__.py -> build/lib.linux-x86_64-3.7/horovod/_keras
copying horovod/_keras/elastic.py -> build/lib.linux-x86_64-3.7/horovod/_keras
creating build/lib.linux-x86_64-3.7/horovod/runner
copying horovod/runner/run_task.py -> build/lib.linux-x86_64-3.7/horovod/runner
copying horovod/runner/gloo_run.py -> build/lib.linux-x86_64-3.7/horovod/runner
copying horovod/runner/js_run.py -> build/lib.linux-x86_64-3.7/horovod/runner
copying horovod/runner/task_fn.py -> build/lib.linux-x86_64-3.7/horovod/runner
copying horovod/runner/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner
copying horovod/runner/launch.py -> build/lib.linux-x86_64-3.7/horovod/runner
copying horovod/runner/mpi_run.py -> build/lib.linux-x86_64-3.7/horovod/runner
creating build/lib.linux-x86_64-3.7/horovod/data
copying horovod/data/data_loader_base.py -> build/lib.linux-x86_64-3.7/horovod/data
copying horovod/data/__init__.py -> build/lib.linux-x86_64-3.7/horovod/data
creating build/lib.linux-x86_64-3.7/horovod/torch/mpi_lib_impl
copying horovod/torch/mpi_lib_impl/__init__.py -> build/lib.linux-x86_64-3.7/horovod/torch/mpi_lib_impl
creating build/lib.linux-x86_64-3.7/horovod/torch/mpi_lib
copying horovod/torch/mpi_lib/__init__.py -> build/lib.linux-x86_64-3.7/horovod/torch/mpi_lib
creating build/lib.linux-x86_64-3.7/horovod/torch/elastic
copying horovod/torch/elastic/state.py -> build/lib.linux-x86_64-3.7/horovod/torch/elastic
copying horovod/torch/elastic/sampler.py -> build/lib.linux-x86_64-3.7/horovod/torch/elastic
copying horovod/torch/elastic/__init__.py -> build/lib.linux-x86_64-3.7/horovod/torch/elastic
creating build/lib.linux-x86_64-3.7/horovod/tensorflow/keras
copying horovod/tensorflow/keras/callbacks.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow/keras
copying horovod/tensorflow/keras/__init__.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow/keras
copying horovod/tensorflow/keras/elastic.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow/keras
creating build/lib.linux-x86_64-3.7/horovod/spark/torch
copying horovod/spark/torch/remote.py -> build/lib.linux-x86_64-3.7/horovod/spark/torch
copying horovod/spark/torch/util.py -> build/lib.linux-x86_64-3.7/horovod/spark/torch
copying horovod/spark/torch/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark/torch
copying horovod/spark/torch/estimator.py -> build/lib.linux-x86_64-3.7/horovod/spark/torch
creating build/lib.linux-x86_64-3.7/horovod/spark/driver
copying horovod/spark/driver/rendezvous.py -> build/lib.linux-x86_64-3.7/horovod/spark/driver
copying horovod/spark/driver/mpirun_rsh.py -> build/lib.linux-x86_64-3.7/horovod/spark/driver
copying horovod/spark/driver/rsh.py -> build/lib.linux-x86_64-3.7/horovod/spark/driver
copying horovod/spark/driver/driver_service.py -> build/lib.linux-x86_64-3.7/horovod/spark/driver
copying horovod/spark/driver/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark/driver
copying horovod/spark/driver/job_id.py -> build/lib.linux-x86_64-3.7/horovod/spark/driver
copying horovod/spark/driver/host_discovery.py -> build/lib.linux-x86_64-3.7/horovod/spark/driver
creating build/lib.linux-x86_64-3.7/horovod/spark/data_loaders
copying horovod/spark/data_loaders/pytorch_data_loaders.py -> build/lib.linux-x86_64-3.7/horovod/spark/data_loaders
copying horovod/spark/data_loaders/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark/data_loaders
creating build/lib.linux-x86_64-3.7/horovod/spark/lightning
copying horovod/spark/lightning/legacy.py -> build/lib.linux-x86_64-3.7/horovod/spark/lightning
copying horovod/spark/lightning/datamodule.py -> build/lib.linux-x86_64-3.7/horovod/spark/lightning
copying horovod/spark/lightning/remote.py -> build/lib.linux-x86_64-3.7/horovod/spark/lightning
copying horovod/spark/lightning/util.py -> build/lib.linux-x86_64-3.7/horovod/spark/lightning
copying horovod/spark/lightning/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark/lightning
copying horovod/spark/lightning/estimator.py -> build/lib.linux-x86_64-3.7/horovod/spark/lightning
creating build/lib.linux-x86_64-3.7/horovod/spark/common
copying horovod/spark/common/params.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
copying horovod/spark/common/serialization.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
copying horovod/spark/common/store.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
copying horovod/spark/common/util.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
copying horovod/spark/common/constants.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
copying horovod/spark/common/cache.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
copying horovod/spark/common/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
copying horovod/spark/common/_namedtuple_fix.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
copying horovod/spark/common/backend.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
copying horovod/spark/common/estimator.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
creating build/lib.linux-x86_64-3.7/horovod/spark/task
copying horovod/spark/task/gloo_exec_fn.py -> build/lib.linux-x86_64-3.7/horovod/spark/task
copying horovod/spark/task/mpirun_exec_fn.py -> build/lib.linux-x86_64-3.7/horovod/spark/task
copying horovod/spark/task/task_info.py -> build/lib.linux-x86_64-3.7/horovod/spark/task
copying horovod/spark/task/task_service.py -> build/lib.linux-x86_64-3.7/horovod/spark/task
copying horovod/spark/task/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark/task
creating build/lib.linux-x86_64-3.7/horovod/spark/keras
copying horovod/spark/keras/tensorflow.py -> build/lib.linux-x86_64-3.7/horovod/spark/keras
copying horovod/spark/keras/remote.py -> build/lib.linux-x86_64-3.7/horovod/spark/keras
copying horovod/spark/keras/optimizer.py -> build/lib.linux-x86_64-3.7/horovod/spark/keras
copying horovod/spark/keras/util.py -> build/lib.linux-x86_64-3.7/horovod/spark/keras
copying horovod/spark/keras/bare.py -> build/lib.linux-x86_64-3.7/horovod/spark/keras
copying horovod/spark/keras/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark/keras
copying horovod/spark/keras/estimator.py -> build/lib.linux-x86_64-3.7/horovod/spark/keras
creating build/lib.linux-x86_64-3.7/horovod/runner/driver
copying horovod/runner/driver/driver_service.py -> build/lib.linux-x86_64-3.7/horovod/runner/driver
copying horovod/runner/driver/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/driver
creating build/lib.linux-x86_64-3.7/horovod/runner/http
copying horovod/runner/http/http_server.py -> build/lib.linux-x86_64-3.7/horovod/runner/http
copying horovod/runner/http/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/http
copying horovod/runner/http/http_client.py -> build/lib.linux-x86_64-3.7/horovod/runner/http
creating build/lib.linux-x86_64-3.7/horovod/runner/common
copying horovod/runner/common/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/common
creating build/lib.linux-x86_64-3.7/horovod/runner/util
copying horovod/runner/util/network.py -> build/lib.linux-x86_64-3.7/horovod/runner/util
copying horovod/runner/util/lsf.py -> build/lib.linux-x86_64-3.7/horovod/runner/util
copying horovod/runner/util/streams.py -> build/lib.linux-x86_64-3.7/horovod/runner/util
copying horovod/runner/util/remote.py -> build/lib.linux-x86_64-3.7/horovod/runner/util
copying horovod/runner/util/threads.py -> build/lib.linux-x86_64-3.7/horovod/runner/util
copying horovod/runner/util/cache.py -> build/lib.linux-x86_64-3.7/horovod/runner/util
copying horovod/runner/util/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/util
creating build/lib.linux-x86_64-3.7/horovod/runner/task
copying horovod/runner/task/task_service.py -> build/lib.linux-x86_64-3.7/horovod/runner/task
copying horovod/runner/task/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/task
creating build/lib.linux-x86_64-3.7/horovod/runner/elastic
copying horovod/runner/elastic/registration.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
copying horovod/runner/elastic/rendezvous.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
copying horovod/runner/elastic/settings.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
copying horovod/runner/elastic/discovery.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
copying horovod/runner/elastic/constants.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
copying horovod/runner/elastic/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
copying horovod/runner/elastic/driver.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
copying horovod/runner/elastic/worker.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
creating build/lib.linux-x86_64-3.7/horovod/runner/common/service
copying horovod/runner/common/service/task_service.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/service
copying horovod/runner/common/service/driver_service.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/service
copying horovod/runner/common/service/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/service
creating build/lib.linux-x86_64-3.7/horovod/runner/common/util
copying horovod/runner/common/util/env.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
copying horovod/runner/common/util/network.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
copying horovod/runner/common/util/tiny_shell_exec.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
copying horovod/runner/common/util/safe_shell_exec.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
copying horovod/runner/common/util/hosts.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
copying horovod/runner/common/util/settings.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
copying horovod/runner/common/util/codec.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
copying horovod/runner/common/util/host_hash.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
copying horovod/runner/common/util/secret.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
copying horovod/runner/common/util/timeout.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
copying horovod/runner/common/util/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
copying horovod/runner/common/util/config_parser.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
running build_ext
-- Could not find CCache. Consider installing CCache to speed up compilation.
-- The CXX compiler identification is GNU 7.5.0
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Build architecture flags: -mf16c -mavx -mfma
-- Using command /usr/bin/python
-- Found MPI_CXX: /usr/local/lib/libmpi.so (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- Could NOT find NVTX (missing: NVTX_INCLUDE_DIR)
CMake Error at CMakeLists.txt:265 (add_subdirectory):
add_subdirectory given source "third_party/gloo" which is not an existing
directory.
CMake Error at CMakeLists.txt:267 (target_compile_definitions):
Cannot specify compile definitions for target "gloo" which is not built by
this project.
Tensorflow_LIBRARIES := -L/usr/local/lib/python3.7/dist-packages/tensorflow -l:libtensorflow_framework.so.2
-- Found Tensorflow: -L/usr/local/lib/python3.7/dist-packages/tensorflow -l:libtensorflow_framework.so.2 (found suitable version "2.5.0", minimum required is "1.15.0")
-- Found Pytorch: 1.8.1+cu102 (found suitable version "1.8.1+cu102", minimum required is "1.2.0")
-- Found Mxnet: /usr/local/lib/python3.7/dist-packages/mxnet/libmxnet.so (found suitable version "1.8.0", minimum required is "1.4.0")
CMake Error at CMakeLists.txt:327 (file):
file COPY cannot find "/tmp/pip-req-build-4j_mtnr3/third_party/gloo".
CMake Error at CMakeLists.txt:331 (add_subdirectory):
The source directory
/tmp/pip-req-build-4j_mtnr3/third_party/compatible_gloo
does not contain a CMakeLists.txt file.
CMake Error at CMakeLists.txt:332 (target_compile_definitions):
Cannot specify compile definitions for target "compatible_gloo" which is
not built by this project.
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
/tmp/pip-req-build-4j_mtnr3/horovod/mxnet/TF_FLATBUFFERS_INCLUDE_PATH
used as include directory in directory /tmp/pip-req-build-4j_mtnr3/horovod/mxnet
/tmp/pip-req-build-4j_mtnr3/horovod/tensorflow/TF_FLATBUFFERS_INCLUDE_PATH
used as include directory in directory /tmp/pip-req-build-4j_mtnr3/horovod/tensorflow
/tmp/pip-req-build-4j_mtnr3/horovod/torch/TF_FLATBUFFERS_INCLUDE_PATH
used as include directory in directory /tmp/pip-req-build-4j_mtnr3/horovod/torch
-- Configuring incomplete, errors occurred!
See also "/tmp/pip-req-build-4j_mtnr3/build/temp.linux-x86_64-3.7/RelWithDebInfo/CMakeFiles/CMakeOutput.log".
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-req-build-4j_mtnr3/setup.py", line 211, in <module>
'horovodrun = horovod.runner.launch:run_commandline'
File "/usr/local/lib/python3.7/dist-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.7/dist-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/usr/lib/python3.7/distutils/command/install.py", line 589, in run
self.run_command('build')
File "/usr/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/lib/python3.7/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.7/dist-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/usr/lib/python3.7/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/tmp/pip-req-build-4j_mtnr3/setup.py", line 99, in build_extensions
cwd=cmake_build_dir)
File "/usr/lib/python3.7/subprocess.py", line 363, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/tmp/pip-req-build-4j_mtnr3', '-DCMAKE_BUILD_TYPE=RelWithDebInfo', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO=/tmp/pip-req-build-4j_mtnr3/build/lib.linux-x86_64-3.7', '-DPYTHON_EXECUTABLE:FILEPATH=/usr/bin/python']' returned non-zero exit status 1.
Running setup.py install for horovod: finished with status 'error'
ERROR: Command errored out with exit status 1: /usr/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-4j_mtnr3/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-4j_mtnr3/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-hn4z8609/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.7/horovod Check the logs for full command output.
The command '/bin/bash -cu python setup.py sdist && bash -c "HOROVOD_WITH_TENSORFLOW=1 HOROVOD_WITH_PYTORCH=1 HOROVOD_WITH_MXNET=1 pip install --no-cache-dir -v $(ls /horovod/dist/horovod-*.tar.gz)[spark,ray]" && horovodrun --check-build' returned a non-zero code: 1
```
|
closed
|
2021-10-14T09:37:24Z
|
2021-10-15T14:19:39Z
|
https://github.com/horovod/horovod/issues/3223
|
[
"bug"
] |
behome
| 2
|
rougier/matplotlib-tutorial
|
matplotlib
| 2
|
A typo in your blog "Matplotlib tutorial"
|
I understand that I should not post a typo in your blog here, but this is the only way I can contact you, since your blog points here.
In your blog "Matplotlib tutorial", chapter "Simple Plot", section "Instantiating defaults", `# savefig("../figures/exercice_2.png",dpi=72)` could be changed to `plt.savefig("../figures/exercice_2.png",dpi=72)`. You might have just missed `plt` in your code.
I really appreciate your tutorial. It's the best I could find on the web.
|
closed
|
2015-10-08T15:40:41Z
|
2021-03-29T05:43:05Z
|
https://github.com/rougier/matplotlib-tutorial/issues/2
|
[] |
JohnCoconut
| 2
|
marshmallow-code/apispec
|
rest-api
| 35
|
Should marshmallow helpers respect `only`, `exclude`?
|
When introspecting `Nested` fields, we don't check for the `only` or `exclude` attributes. For example:
``` python
class FooSchema(Schema):
bar = fields.Str()
baz = fields.Str()
class BobSchema(Schema):
foo = fields.Nested(FooSchema, only=('bar', ))
```
If we call `schema2jsonschema(BobSchema)`, the nested `foo` will include `bar` and `baz` fields, even though `baz` will never be included in this case. Which isn't necessarily a problem, unless `baz` is required:
``` python
class FooSchema(Schema):
bar = fields.Str()
baz = fields.Str(required=True)
```
In this case, users of apispec will likely return JSON that wouldn't validate against the definition schema, since there's going to be a missing required field. I haven't actually encountered this situation, and I don't know how often it's going to come up--just wanted to raise for discussion.
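One hedged idea for how the helpers could respect this (a sketch against marshmallow itself, not current apispec code): introspect the Nested field's bound `schema` instance, which marshmallow builds with `only`/`exclude` already applied, instead of the raw schema class.
``` python
# Hedged sketch: the bound schema of a Nested field already filters by `only`,
# so its `.fields` mapping is a better introspection target than the raw class.
from marshmallow import Schema, fields

class FooSchema(Schema):
    bar = fields.Str()
    baz = fields.Str(required=True)

class BobSchema(Schema):
    foo = fields.Nested(FooSchema, only=('bar',))

nested_field = BobSchema().fields['foo']
print(list(nested_field.schema.fields))   # expected: ['bar'], 'baz' filtered out
```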
|
closed
|
2015-10-24T05:09:46Z
|
2019-01-25T14:12:20Z
|
https://github.com/marshmallow-code/apispec/issues/35
|
[] |
jmcarp
| 6
|
noirbizarre/flask-restplus
|
flask
| 209
|
Updating from v.0.9.0 to v.0.9.2 with issues
|
Hi,
I've got an application using Flask-RESTPlus version 0.9.0. I wanted to upgrade it to the newest version, 0.9.2. My unit tests failed because I didn't include the "Content-Type: application/json" header. Is it now required? It may break integration with my clients. Is there a way to disable it and trust the payload without a "Content-Type" header? I couldn't find anything about it in the documentation...
Here is my working unit test before upgrading (v.0.9.0):
``` python
@mock.patch('path.to.my.mocked.method')
def test_some_endpoint(self, m_mocked_method):
m_mocked_method.return_value = True
result = self.app.post("/my/test/endpoint/", data='{"test": 123}')
self.assertEquals(result.status_code, 200)
```
And after upgrade to v.0.9.2:
``` python
@mock.patch('path.to.my.mocked.method')
def test_some_endpoint(self, m_mocked_method):
m_mocked_method.return_value = True
result = self.app.post("/my/test/endpoint/", data='{"test": 123}', headers={'content-type': 'application/json'})
self.assertEquals(result.status_code, 200)
```
I receive status code 400 instead of 200 if I don't add this header. The same story if I try to call my API with Postman.
Thanks in advance!
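Two hedged sketches of possible workarounds, shown against a bare Flask app rather than your actual Flask-RESTPlus resources: let the test client set the JSON content type for you, or parse the payload with `force=True` so the header is not required at all.
``` python
# Hedged sketch on a plain Flask app (route path borrowed from the test above):
# - the test client can set Content-Type via `content_type=...`
# - request.get_json(force=True) parses JSON even without the header
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/my/test/endpoint/", methods=["POST"])
def endpoint():
    payload = request.get_json(force=True)
    return jsonify(ok=True, got=payload)

with app.test_client() as client:
    r1 = client.post("/my/test/endpoint/", data='{"test": 123}')
    r2 = client.post("/my/test/endpoint/", data='{"test": 123}',
                     content_type="application/json")
    print(r1.status_code, r2.status_code)   # both 200 thanks to force=True
```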
|
open
|
2016-10-20T14:51:49Z
|
2018-09-27T11:40:33Z
|
https://github.com/noirbizarre/flask-restplus/issues/209
|
[
"bug"
] |
jpowie01
| 0
|
deepset-ai/haystack
|
machine-learning
| 8,356
|
Rename internal mentions of `from_socket` and `to_socket` to `sender_socket` and `receiver_socket`
|
The internal data of the `Pipeline` graph stores some information about its edges, like the names of the inputs and outputs of the connected Components.
In [newer parts](https://github.com/deepset-ai/haystack/blob/3016c5ca93b2532836f3ffd2d4bd31114e8ddfa3/haystack/core/component/types.py#L48) of the code, like `InputSocket` and `OutputSocket`, we use the terms `sender` and `receiver` instead of `from` and `to`, both when talking about sockets and Components.
[Older parts](https://github.com/deepset-ai/haystack/blob/3016c5ca93b2532836f3ffd2d4bd31114e8ddfa3/haystack/core/pipeline/base.py#L105-L106) still use `from` and `to`.
This is an internal change only and doesn't affect the end user, though it's best to keep things consistent.
|
open
|
2024-09-11T15:39:09Z
|
2025-01-10T09:28:22Z
|
https://github.com/deepset-ai/haystack/issues/8356
|
[
"type:refactor",
"topic:DX",
"P3"
] |
silvanocerza
| 0
|
deepakpadhi986/AI-Resume-Analyzer
|
streamlit
| 4
|
What's this error, and how do I sort it out? Please explain step by step, as I'm just a beginner.
|

|
closed
|
2023-11-27T05:54:24Z
|
2023-11-28T11:33:55Z
|
https://github.com/deepakpadhi986/AI-Resume-Analyzer/issues/4
|
[] |
Sudharsan912
| 1
|
pyeve/eve
|
flask
| 847
|
Different MONGO_QUERY_BLACKLIST for GET vs POST
|
Is it possible, or could it be possible, to have a different `MONGO_QUERY_BLACKLIST` for different HTTP methods?
The main reason I ask is, I would like to be able to search using `$regex` for read only queries, without exposing any ability to alter the database by regex matching.
|
closed
|
2016-03-30T12:12:22Z
|
2016-03-31T07:20:25Z
|
https://github.com/pyeve/eve/issues/847
|
[] |
harryjubb
| 2
|
coqui-ai/TTS
|
pytorch
| 4,018
|
[Bug] Installing TTS fails
|
### Describe the bug
Traceback (most recent call last):
File "/root/miniconda3/envs/xtts_v2/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/root/miniconda3/envs/xtts_v2/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/root/miniconda3/envs/xtts_v2/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "/tmp/pip-build-env-omv17cfm/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 332, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
File "/tmp/pip-build-env-omv17cfm/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 302, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-omv17cfm/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 318, in run_setup
exec(code, locals())
File "<string>", line 224, in <module>
File "<string>", line 211, in setup_package
File "/tmp/pip-build-env-omv17cfm/overlay/lib/python3.10/site-packages/Cython/Build/Dependencies.py", line 1154, in cythonize
cythonize_one(*args)
File "/tmp/pip-build-env-omv17cfm/overlay/lib/python3.10/site-packages/Cython/Build/Dependencies.py", line 1321, in cythonize_one
raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: spacy/kb.pyx
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
### To Reproduce
Traceback (most recent call last):
File "/root/miniconda3/envs/xtts_v2/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/root/miniconda3/envs/xtts_v2/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/root/miniconda3/envs/xtts_v2/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "/tmp/pip-build-env-omv17cfm/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 332, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
File "/tmp/pip-build-env-omv17cfm/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 302, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-omv17cfm/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 318, in run_setup
exec(code, locals())
File "<string>", line 224, in <module>
File "<string>", line 211, in setup_package
File "/tmp/pip-build-env-omv17cfm/overlay/lib/python3.10/site-packages/Cython/Build/Dependencies.py", line 1154, in cythonize
cythonize_one(*args)
File "/tmp/pip-build-env-omv17cfm/overlay/lib/python3.10/site-packages/Cython/Build/Dependencies.py", line 1321, in cythonize_one
raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: spacy/kb.pyx
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
python3.10 linux
```
### Additional context
_No response_
|
closed
|
2024-10-10T08:28:19Z
|
2024-12-28T11:58:06Z
|
https://github.com/coqui-ai/TTS/issues/4018
|
[
"bug",
"wontfix"
] |
Alex-DeepL
| 12
|
mlfoundations/open_clip
|
computer-vision
| 13
|
training perf for single GPU is not good
|
Hi, I was training CLIP using a single GPU. After profiling, I noticed that the performance of CLIP training was not good, as we can see from the picture below: GPU idle time is almost twice the GPU active time due to sem_timedwait blocking on the CPU. Any idea how we can remove this unnecessary block? Thanks!

|
closed
|
2021-09-06T09:28:24Z
|
2022-04-06T00:18:12Z
|
https://github.com/mlfoundations/open_clip/issues/13
|
[] |
cyy857
| 5
|
svc-develop-team/so-vits-svc
|
pytorch
| 53
|
RuntimeError: output with shape [1, 256] doesn't match the broadcast shape [200, 256]
|
When running on a rented GPU on the Featurize platform (Linux), training fails with RuntimeError: output with shape [1, 256] doesn't match the broadcast shape [200, 256]. I've been trying all afternoon; what should I do?
|
closed
|
2023-03-18T16:09:48Z
|
2023-04-12T15:08:43Z
|
https://github.com/svc-develop-team/so-vits-svc/issues/53
|
[] |
thekingofcircus
| 15
|
plotly/dash-table
|
dash
| 892
|
Page selection disappears when number of pages decreases to one
|
dash-table version: 4.11.3
Browsers tested: Chromium 78.0.3904.70, Edge 83.0.478.58
On tables where the number of entries (and in consequence the number of pages) may change dynamically, dash-table will remain on the currently selected page. If I am on page 4/8 and the number of pages reduces to 2, I will then be on page 4/2.
When this happens, I can use the page selectors to navigate back to previous pages. However, when the number of pages decreases to 1, I am stuck on another page without any apparent way to navigate to the actual content.
Here is a simple example illustrating the issue:

And here is the code used to produce the example above:
```
import dash
import dash_table
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
app = dash.Dash(__name__)
app.layout = html.Div([
dcc.Input(
id='input', type='number',
),
dash_table.DataTable(
id='table', page_size= 10,
columns=[{'name':'i', 'id':'i'}, {'name':'square', 'id':'square'}],
)
])
@app.callback(Output('table', 'data'),
Input('input', 'value'))
def update_table_data(val):
if not val: val = 9
return [{'i':i, 'square':i**2} for i in range(val+1)]
if __name__ == '__main__':
app.run_server(debug=True)
```
This is especially bothersome if the number of table entries cannot be controlled by the user, because the user is then stuck on the non-existing page.
Possible solutions:
- Automatically change pages when the currently selected page exceeds the new total number of pages: User is on page 4/8, number of pages reduces to 3 -> User is on page 3/3 or 1/3.
- Do not hide the page selectors if the user is on another page than page 1, even when there is only 1 page in total. This would allow the user to navigate back to the content even in that case.
I am currently handling this by checking myself and updating ```page_current``` as needed, but a solution on the side of Dash would be preferred, as I don't think the current behaviour is intended.
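For reference, a sketch of that workaround, extending the example above (the page size constant and the callback wiring are assumptions added on top of the original snippet): clamp `page_current` in the same callback that refreshes the data.
```
import math
from dash.dependencies import Input, Output, State

PAGE_SIZE = 10  # must match page_size on the table

@app.callback([Output('table', 'data'), Output('table', 'page_current')],
              [Input('input', 'value')],
              [State('table', 'page_current')])
def update_table_data(val, page_current):
    if not val: val = 9
    data = [{'i': i, 'square': i**2} for i in range(val + 1)]
    last_page = max(0, math.ceil(len(data) / PAGE_SIZE) - 1)
    return data, min(page_current or 0, last_page)
```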
|
closed
|
2021-04-20T11:44:07Z
|
2021-06-25T15:28:01Z
|
https://github.com/plotly/dash-table/issues/892
|
[] |
mhnrchs
| 0
|
jupyter-incubator/sparkmagic
|
jupyter
| 334
|
Issue with basic SELECT/or JSON file
|
Hi,
Running a simple PySpark command in SparkMagic:
%%sql
SELECT 1, 2, 3
causes the Error: Cannot parse object as JSON: '['b\'{"1":1,"2":2,"3":3}\'']'
This same error occurs when loading a simple JSON file with content `{"name":"test"}`:
CREATE TEMPORARY TABLE Test
USING org.apache.spark.sql.json
OPTIONS (path '/test.json')
Per checking the code in sparkmagic/livyclientlib/sqlquery.py, we think that this line of code:
command = u'for {} in {}: print({})'.format(constants.LONG_RANDOM_VARIABLE_NAME,
command,
print_command)
causes the problem, because it adds the 'b' in front of the text and parsing that text triggers an exception.
Any idea how to fix this?
PS: scala works fine.
Thanks!
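As a self-contained illustration of the suspected failure mode (an assumption about the root cause, not a patch to sparkmagic): in Python 3, printing a bytes object yields a b'...'-prefixed string that the JSON parser then rejects, while decoding first keeps it parseable.
```python
# Hedged sketch of the underlying issue: bytes vs. str when printing rows in Python 3.
import json

row = b'{"1":1,"2":2,"3":3}'
print(str(row))                        # b'{"1":1,"2":2,"3":3}' -> not valid JSON text
print(row.decode("utf-8"))             # {"1":1,"2":2,"3":3}    -> parseable
print(json.loads(row.decode("utf-8")))
```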
|
closed
|
2017-02-28T17:16:42Z
|
2017-03-09T00:16:04Z
|
https://github.com/jupyter-incubator/sparkmagic/issues/334
|
[] |
daithang1111
| 10
|
glumpy/glumpy
|
numpy
| 9
|
glumpy.test() returns: AttributeError: 'module' object has no attribute 'test'
|
Hi,
I just installed glumpy but can't even run the test.
``` Python
>>> import glumpy
>>> glumpy.test()
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-8-f27e1516c606> in <module>()
----> 1 glumpy.test()
AttributeError: 'module' object has no attribute 'test'
>>> glumpy.version.version
'0.2.1'
```
Best regards,
Arne
|
closed
|
2014-12-06T17:56:00Z
|
2014-12-14T13:41:10Z
|
https://github.com/glumpy/glumpy/issues/9
|
[] |
arne-cl
| 3
|
blacklanternsecurity/bbot
|
automation
| 2,252
|
When using BBot as a Python Library the Scan Name Directory Generated Uses Random Name Instead of User-Specified Name
|
**Description**
When running a scan using `Scanner()`, BBOT creates an output folder with a randomly generated name, even when the user explicitly sets the `name` parameter.
This makes it difficult to track scans, automate processes, and manage output files predictably.
**Code Used**
```Python
from bbot.scanner import Scanner
if __name__ == "__main__":
modules_list = [
"anubisdb", "asn", "azure_realm", "azure_tenant", "certspotter",
"crt", "digitorus", "dnsbimi", "dnscaa", "dnsdumpster", "dnstlsrpt",
"github_org", "hackertarget", "internetdb", "ipneighbor", "leakix", "myssl", "otx",
"rapiddns", "sitedossier", "social", "subdomaincenter",
"urlscan", "wayback",
]
scan = Scanner(
"evilcorp.com",
modules=modules_list,
output_dir=".", # Generates result in directory running the Python script.
name="custom_name", # `name` is "custom_name", not "unmitigated_jesse" as below
)
for event in scan.start():
print(event)
```
Generates random name "`unmitigated_jesse`" instead:
```
[SCAN("{'id': 'SCAN:76146c1aee15ebfbc02ee7c634ba5cd4de5012bd', 'name': 'unmitigated_jes...", module=TARGET, tags=set()),<SNIP>
<SNIP>
[INFO] output.csv: Saved CSV output to /home/kali/bbot_test/unmitigated_jesse/output.csv
[INFO] output.json: Saved JSON output to /home/kali/bbot_test/unmitigated_jesse/output.json
[INFO] output.txt: Saved TXT output to /home/kali/bbot_test/unmitigated_jesse/output.txt
[SUCC] Scan unmitigated_jesse completed in 24 seconds with status FINISHED
```
This is different to the CLI command which works as intended:
```Bash
bbot -t evilcorp.com -n custom_name -o .
```
Generates desired name "`custom_name`":
```
[INFO] output.csv: Saved CSV output to /home/kali/bbot_test/custom_name/output.csv
[INFO] output.json: Saved JSON output to /home/kali/bbot_test/custom_name/output.json
[INFO] output.txt: Saved TXT output to /home/kali/bbot_test/custom_name/output.txt
[SUCC] Scan custom_name completed in 24 seconds with status FINISHED
```
**OS, BBOT Installation Method + Version**
OS is `Linux kali 6.11.2-amd64 #1 SMP PREEMPT_DYNAMIC Kali 6.11.2-1kali1 (2024-10-15) x86_64 GNU/Linux`
BBot Version is `v2.3.2`
Installation method is `pip`.
If I'm missing something obvious I'm happy to be told.
|
open
|
2025-02-06T06:29:12Z
|
2025-03-12T19:30:53Z
|
https://github.com/blacklanternsecurity/bbot/issues/2252
|
[
"bug"
] |
vaarg
| 3
|
CTFd/CTFd
|
flask
| 1,813
|
Submissions should link directly to the user that submitted
|
Submissions don't link directly to the user in team mode which means you need to search to see what user submitted for a given team.
|
closed
|
2021-02-27T15:09:18Z
|
2021-03-16T19:32:38Z
|
https://github.com/CTFd/CTFd/issues/1813
|
[
"completed"
] |
ColdHeat
| 0
|
remsky/Kokoro-FastAPI
|
fastapi
| 218
|
TensorRT
|
**Describe the feature you'd like**
Deploying the model with TensorRT would speed it up further. Perhaps we can consider adding a new option to enable a TensorRT model on NVIDIA GPUs.
|
closed
|
2025-03-04T06:41:10Z
|
2025-03-05T02:39:07Z
|
https://github.com/remsky/Kokoro-FastAPI/issues/218
|
[] |
wangjia184
| 1
|
streamlit/streamlit
|
data-science
| 10,346
|
Zoom with the px.scatter_mapbox() component from plotly.express doesn't work with streamlit 1.42.0 but is OK with 1.38.0
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Hi,
I think there is a bug in the new streamlit versions 1.42.0, 1.41.0 and 1.40.0:
we can't zoom the map component.
```
fig = px.scatter_mapbox(df_region_count,
mapbox_style="carto-positron",
lat="LATITUDE",
lon="LONGITUDE",
size="Nombre de compagnies",
color_discrete_sequence=["magenta"],
zoom=10,
height=600)
st.plotly_chart(fig, use_container_width=True)
```
But with streamlit version 1.38.0 or 1.39.0, it's OK.
I used plotly==5.18.0 and plotly==6.0.0 for my tests.
Thanks
### Reproducible Code Example
```Python
# streamlit==1.38.0/1.42.0
# plotly==5.18.0
# pandas==2.0.3
# version python 3.11
import streamlit as st
import pandas as pd
import plotly.express as px
df = pd.DataFrame(
{
"City": ["Buenos Aires", "Brasilia", "Santiago", "Bogota", "Caracas"],
"Country": ["Argentina", "Brazil", "Chile", "Colombia", "Venezuela"],
"Latitude": [-34.58, -15.78, -33.45, 4.60, 10.48],
"Longitude": [-58.66, -47.91, -70.66, -74.08, -66.86],
"Nb_company": [25, 45, 65, 84, 35]
}
)
fig = px.scatter_mapbox(df,
mapbox_style="carto-positron",
lat="Latitude",
lon="Longitude",
size="Nb_company",
zoom=1
)
st.plotly_chart(fig, use_container_width=True)
```
### Steps To Reproduce
_No response_
### Expected Behavior
_No response_
### Current Behavior
_No response_
### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version:
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_
|
closed
|
2025-02-05T09:20:33Z
|
2025-02-11T21:36:52Z
|
https://github.com/streamlit/streamlit/issues/10346
|
[
"type:bug",
"feature:st.plotly_chart"
] |
tmosc
| 9
|
dynaconf/dynaconf
|
flask
| 690
|
[bug] Dynamic variables are evaluated before environment variables populate values
|
**Describe the bug**
When using dynamic variables, we have noticed that they receive the value set in the settings file(s) and are not updated when those referenced settings are overridden by an environment variable.
**To Reproduce**
Steps to reproduce the behavior:
my_settings.yaml
```yaml
SERVER:
VERSION:
RELEASE: "6.10"
SNAP: 22
DEPLOY_WORKFLOW: "deploy-sat-jenkins"
DEPLOY_ARGUMENTS:
deploy_sat_version: "@format {this.server.version.release}"
deploy_snap_version: "@format {this.server.version.snap}"
deploy_rhel_version: '7'
```
my_app.py
```python
from dynaconf import Dynaconf
settings = Dynaconf(
envvar_prefix="MYAPP",
core_loaders=["YAML"],
settings_file="my_settings.yaml",
)
```
loading my_app in ipython **without** the environment variable
```
ipython -i my_app.py
...
In [1]: settings.server.deploy_arguments
Out[1]: <Box: {'deploy_sat_version': '6.10', 'deploy_snap_version': '22', 'deploy_rhel_version': '7'}>
```
loading my_app in ipython **with** the environment variable. Notice how the non-dynamic setting correctly reflects the value of the environment variable.
```
MYAPP_SERVER__VERSION__RELEASE="7.2" ipython -i my_app.py
...
In [1]: settings.server.deploy_arguments
Out[1]: <Box: {'deploy_sat_version': '6.10', 'deploy_snap_version': '22', 'deploy_rhel_version': '7'}>
In [2]: settings.server.version.release
Out[2]: 7.2
```
**Expected behavior**
The dynamic variable should be evaluated **after** environment variables populate settings. This will allow for people to use both in conjunction with one another.
**Environment (please complete the following information):**
- OS: Fedora 35
- Python: 3.9.7
- Dynaconf Version: 3.1.7
|
closed
|
2021-11-15T20:24:02Z
|
2023-07-13T19:11:02Z
|
https://github.com/dynaconf/dynaconf/issues/690
|
[
"bug",
"LazyIssue"
] |
JacobCallahan
| 2
|
pinry/pinry
|
django
| 148
|
feature request: support for other db engines
|
The documentation suggests that you can use db engines other than sqlite by editing local_settings.py, but this is not accurate, as the image does not include the db clients necessary to use those engines.
|
closed
|
2019-06-01T14:59:38Z
|
2019-06-01T20:39:38Z
|
https://github.com/pinry/pinry/issues/148
|
[] |
norweeg
| 1
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,579
|
I can't bypass cloudflare
|
```
from time import sleep
from selenium.common import TimeoutException
from selenium.webdriver.support import expected_conditions as EC
import undetected_chromedriver as uc # the driver
from selenium.webdriver.common.by import By # to find various items on the page
from selenium.webdriver.chrome.options import Options # to have devtools open
from selenium.webdriver.support.wait import WebDriverWait
from undetected_chromedriver import WebElement
def wait_elem(driver, selector, m=EC.presence_of_element_located, method=By.XPATH, tmt=5,
click=False) -> WebElement | None:
try:
el = WebDriverWait(driver, tmt).until(m((method, selector)))
if click and el:
el.click()
return el
except TimeoutException:
return None
except Exception as f:
print(f)
return None
def cloudflare(driver):
for _ in range(5):
if not any([i in driver.page_source for i in ['site connection is secure', 'are a human']]):
return False
iframe = wait_elem(driver, 'iframe[src *= "cloudflare.com"]', tmt=15, method=By.CSS_SELECTOR)
if not iframe:
return False
driver.switch_to.frame(iframe)
cb = wait_elem(driver, 'input[type=checkbox]', method=By.CSS_SELECTOR)
if cb:
cb.click()
driver.switch_to.default_content()
sleep(3)
if __name__ == '__main__':
url = 'https://www.propertyguru.com.sg/property-for-sale?agent_id=195876&market=residential'
options = Options()
options.add_argument('--auto-open-devtools-for-tabs')
driver = uc.Chrome(options=options, use_subprocess=True)
driver.execute_script(f'window.open("{url}", "_blank");')
driver.switch_to.window(driver.window_handles[-1])
cloudflare(driver)
sleep(10)
driver.quit() # quit
```
What am I doing wrong?
|
open
|
2023-09-24T21:52:46Z
|
2023-12-18T12:44:15Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1579
|
[] |
mcsham
| 4
|
huggingface/datasets
|
tensorflow
| 6,917
|
WinError 32 The process cannot access the file during load_dataset
|
### Describe the bug
When I try to load the opus_books dataset from Hugging Face (following the [guide on the website](https://huggingface.co/docs/transformers/main/en/tasks/translation))
```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"])
```
I get an error:
`PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow'
`
<details><summary>Full stacktrace</summary>
<p>
```python
AttributeError Traceback (most recent call last)
File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1858, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
[1857](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1857) _time = time.time()
-> [1858](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1858) for _, table in generator:
[1859](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1859) if max_shard_size is not None and writer._num_bytes > max_shard_size:
File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\packaged_modules\parquet\parquet.py:59, in Parquet._generate_tables(self, files)
[58](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/packaged_modules/parquet/parquet.py:58) def _generate_tables(self, files):
---> [59](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/packaged_modules/parquet/parquet.py:59) schema = self.config.features.arrow_schema if self.config.features is not None else None
[60](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/packaged_modules/parquet/parquet.py:60) if self.config.features is not None and self.config.columns is not None:
AttributeError: 'list' object has no attribute 'arrow_schema'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1882, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
[1881](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1881) num_shards = shard_id + 1
-> [1882](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1882) num_examples, num_bytes = writer.finalize()
[1883](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/builder.py:1883) writer.close()
File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\arrow_writer.py:584, in ArrowWriter.finalize(self, close_stream)
[583](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/arrow_writer.py:583) # If schema is known, infer features even if no examples were written
--> [584](file:///C:/Users/Me/.conda/envs/ia/lib/site-packages/datasets/arrow_writer.py:584) if self.pa_writer is None and self.schema:
...
--> [627](file:///C:/Users/Me/.conda/envs/ia/lib/shutil.py:627) os.unlink(fullname)
[628](file:///C:/Users/Me/.conda/envs/ia/lib/shutil.py:628) except OSError:
[629](file:///C:/Users/Me/.conda/envs/ia/lib/shutil.py:629) onerror(os.unlink, fullname, sys.exc_info())
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow'
```
</p>
</details>
### Steps to reproduce the bug
Just execute these lines:
```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"])
```
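For comparison, here is a sketch that passes `features` as a `datasets.Features` object rather than a plain list; the traceback above shows the parquet builder calling `self.config.features.arrow_schema`, which a list does not provide. This is an observation drawn from the traceback, not a confirmed fix, and the feature types are an assumption about the opus_books schema.
```python
# Sketch only: pass `features` as a Features object instead of a list.
from datasets import Features, Value, load_dataset
from datasets.features import Translation

features = Features({
    "id": Value("string"),
    "translation": Translation(languages=["en", "fr"]),
})
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=features)
```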
### Expected behavior
I expect the dataset to be loaded without any errors.
### Environment info
| Package| Version|
|--------|--------|
| transformers| 4.37.2|
| python| 3.9.19|
| pytorch| 2.3.0|
| datasets|2.12.0 |
| arrow | 1.2.3|
I am using Conda on Windows 11.
|
open
|
2024-05-24T07:54:51Z
|
2024-05-24T07:54:51Z
|
https://github.com/huggingface/datasets/issues/6917
|
[] |
elwe-2808
| 0
|
nltk/nltk
|
nlp
| 2,451
|
Mesure Arabic Text similarity
|
I need to compute the similarity between many documents that contain Arabic plain text.
Example: input: text1="اهلا وسهلا", text2="اهلا وسهلا"
Output: Similarity = 1.0
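A minimal sketch of one possible approach (not part of the original question, and using scikit-learn rather than NLTK): cosine similarity over TF-IDF vectors, which works for Arabic as long as the texts are tokenized consistently.
```python
# Sketch: cosine similarity between two Arabic texts via TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

text1 = "اهلا وسهلا"
text2 = "اهلا وسهلا"

vectors = TfidfVectorizer().fit_transform([text1, text2])
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(similarity)  # 1.0 for identical texts
```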
|
closed
|
2019-10-31T16:15:42Z
|
2020-01-21T09:10:01Z
|
https://github.com/nltk/nltk/issues/2451
|
[
"invalid"
] |
MohammadModallal
| 0
|
jupyterlab/jupyter-ai
|
jupyter
| 883
|
Some providers (e.g. HuggingFace) not working in chat nor in streaming completer
|
## Description
```
Traceback (most recent call last):
File "/jupyter_ai/chat_handlers/base.py", line 170, in on_message
await self.process_message(message)
File "/jupyter_ai/chat_handlers/default.py", line 104, in process_message
async for chunk in self.llm_chain.astream(
File "/langchain_core/runnables/base.py", line 4698, in astream
async for item in self.bound.astream(
File "/langchain_core/runnables/base.py", line 4698, in astream
async for item in self.bound.astream(
File "/langchain_core/runnables/base.py", line 2900, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
File "/langchain_core/runnables/base.py", line 2883, in atransform
async for chunk in self._atransform_stream_with_config(
File "/langchain_core/runnables/base.py", line 1984, in _atransform_stream_with_config
chunk = cast(Output, await py_anext(iterator))
File "/langchain_core/runnables/base.py", line 2853, in _atransform
async for output in final_pipeline:
File "/langchain_core/runnables/base.py", line 4734, in atransform
async for item in self.bound.atransform(
File "/langchain_core/runnables/base.py", line 2883, in atransform
async for chunk in self._atransform_stream_with_config(
File "/langchain_core/runnables/base.py", line 1984, in _atransform_stream_with_config
chunk = cast(Output, await py_anext(iterator))
File "/langchain_core/runnables/base.py", line 2853, in _atransform
async for output in final_pipeline:
File "/langchain_core/runnables/base.py", line 1333, in atransform
async for output in self.astream(final, config, **kwargs):
File "/langchain_core/language_models/llms.py", line 492, in astream
raise e
File "/langchain_core/language_models/llms.py", line 475, in astream
async for chunk in self._astream(
File "/langchain_community/llms/huggingface_endpoint.py", line 345, in _astream
async for response in await self.async_client.text_generation(
AttributeError: 'NoneType' object has no attribute 'text_generation'
```
## Reproduce
Set any model from the Hugging Face Hub, then try to use chat or completion with streaming enabled.
## Expected behavior
Hugging Face Hub models work.
## Context
`main` at [2019571](https://github.com/jupyterlab/jupyter-ai/commit/2019571d916549262d04b3c02ec13e043935a7d4). The issue with streaming in the inline completer is pre-existing, but chat got broken only recently, since:
- https://github.com/jupyterlab/jupyter-ai/pull/859
This might be an upstream issue, or maybe we should have a way to detect when a model does not support streaming.
|
closed
|
2024-07-09T11:07:50Z
|
2024-07-12T17:59:20Z
|
https://github.com/jupyterlab/jupyter-ai/issues/883
|
[
"bug",
"regression"
] |
krassowski
| 8
|
robotframework/robotframework
|
automation
| 5,319
|
Process-based (Popen) concurrency in Robot Framework
|
Several approaches to making Robot Framework capable of concurrency are currently in the making.
[Adding threads](https://github.com/robotframework/robotframework/issues/5294) is a very ambitious one.
Others are about concurrent keywords, or [logging](https://github.com/robotframework/robotbackgroundlogger) from threads...
I would like to propose considering an alternative, which imposes no limitations on pre-existing Robot Framework and keyword library code bases:
[the process-start-based parallelism from this repo](https://github.com/robotframework/robotframework-concurrent)
This is a solution that uses pipes and Popen to start a new Robot Framework process on demand, which can then be communicated with over said pipe (a minimal sketch follows after the two lists below).
It brings the limitations that come with process separation and pipes:
- file handles can not be shared
- all data needs to be serialized and deserialized
- at most one write access at a time (= max one thread per process can write to the pipe)
- at most one read access at a time (= max one thread per process can read from the pipe)
- no shared mutable data
In return, it brings these advantages:
- NO deadlocks
- NO race conditions
- NO GIL or other locks from python/robotframework
- Technically this can be extended to work across machines via TCP
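A minimal sketch of the Popen/pipe pattern described above, seen from the parent side. All names here (`worker_script`, `send_message`, `recv_message`, the JSON message format) are illustrative assumptions, not APIs of Robot Framework or the linked repository.
```python
# Parent side: spawn a worker process and exchange JSON-serialized messages
# over its stdin/stdout pipes. No shared mutable state; everything that
# crosses the pipe must be serializable.
import json
import subprocess
import sys


def start_worker(worker_script: str) -> subprocess.Popen:
    return subprocess.Popen(
        [sys.executable, worker_script],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )


def send_message(proc: subprocess.Popen, payload: dict) -> None:
    proc.stdin.write(json.dumps(payload) + "\n")
    proc.stdin.flush()


def recv_message(proc: subprocess.Popen) -> dict:
    return json.loads(proc.stdout.readline())


if __name__ == "__main__":
    worker = start_worker("concurrent_keyword_worker.py")  # hypothetical worker script
    send_message(worker, {"keyword": "Log", "args": ["hello from the parent"]})
    print(recv_message(worker))  # e.g. {"status": "PASS"}
    worker.stdin.close()
    worker.wait()
```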
While the functionality as presented can be implemented as an external library, changes to the framework core allow for significant improvements, which in my view warrants inclusion in the framework core:
- embed process log/report/json/xml into the "main" log/report/json/xml
- link process start log entries to the process log, and do the same for send_message/recv_message log entries
- propagate log level etc... from the initial robotframework sequence to the started processes
- allow for configuration of log level per process
|
closed
|
2025-01-16T20:57:30Z
|
2025-01-16T21:25:01Z
|
https://github.com/robotframework/robotframework/issues/5319
|
[] |
franzhaas
| 0
|
Evil0ctal/Douyin_TikTok_Download_API
|
fastapi
| 75
|
Cannot extract a link from a Douyin share code
|
1e--.:/ 复制打开抖音,看看【栗子熟了的作品】只存在了38年的最强王朝# 隋朝 # 杨坚 # 杨... ЭGaNlj6lqVQVFv8Э
|
closed
|
2022-09-14T11:51:52Z
|
2022-11-09T21:02:29Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/75
|
[
"help wanted"
] |
LukyBruce
| 4
|
JaidedAI/EasyOCR
|
machine-learning
| 320
|
Can it be used in PHP?
|
Can EasyOCR be used from PHP?
|
closed
|
2020-12-02T06:33:20Z
|
2022-03-02T09:24:09Z
|
https://github.com/JaidedAI/EasyOCR/issues/320
|
[] |
netwons
| 0
|
google-research/bert
|
tensorflow
| 942
|
Pre-Training BERT from scratch using Tokenized input file and custom vocabulary file for Khmer language
|
Hi!
I would like to know whether it's possible to use tokenized/segmented documents as my input file to `create_pretraining_data.py`. The main reason is that segmentation/tokenization for the Khmer language is different from that of English.
```
Original:
វាមានមកជាមួយនូវ
Segmented:
វា មាន មក ជាមួយ នូវ
Translation (Direct from Google translate):
It comes with
```
I tried something on my own and managed to get some results from running the `run_pretraining.py` script. However, I'm not sure if what I'm doing can be considered correct.
Any feedback/comments are highly appreciated!
## Script Modifications
The modifications that I did were:
#### 1. Make input file in a list format
Instead of normal plain text, my input file comes from my custom Khmer tokenization output, which I then convert into a list format mimicking the structure I get when running the sample English text.
```
[[['ដំណាំ', 'សាវម៉ាវ', 'ជា', 'ប្រភេទ', 'ឈើ', 'ហូប', 'ផ្លែ'],
['វា', 'ផ្តល់', 'ផប្រយោជន៍', 'យ៉ាង', 'ច្រើន', 'ដល់', 'សុខភាព']],
[['cmt', '$', '270', 'នាំ', 'លាភ', 'នាំ', 'សំណាង', 'ហេង', 'ហេង']]]
```
*\* The outer bracket indicates a source file, the first nested bracket a document, and the second nested bracket a sentence; exactly the same structure as the variable `all_documents` inside the `create_training_instances()` function.*
#### 2. Vocab file from unique segmented words
This is the part I have serious doubts about. To create my vocab file, all I did was collect the unique tokens from the whole corpus and then add the required special tokens `[CLS]`, `[SEP]`, `[UNK]` and `[MASK]`. I'm not sure if this is the correct way to do it.
Feedback on this part is highly appreciated!
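For reference, a minimal sketch of the vocab-building step described above. The nested list mirrors the structure shown earlier (documents, then sentences, then tokens); the inclusion of `[PAD]`, the token order, and the output filename are assumptions, not taken from the original scripts.
```python
# Sketch: collect unique tokens from the nested document structure and write
# a vocab file with the special tokens first.
all_documents = [
    [["ដំណាំ", "សាវម៉ាវ", "ជា"], ["វា", "ផ្តល់", "ដល់"]],
    [["cmt", "$", "270"]],
]

special_tokens = ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"]
unique_tokens = sorted({
    token
    for document in all_documents
    for sentence in document
    for token in sentence
})

with open("khmer_vocab.txt", "w", encoding="utf-8") as f:
    for token in special_tokens + unique_tokens:
        f.write(token + "\n")
```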
#### 3. Skip tokenization step inside the create_training_instances() function
Since my input file already matches the structure of the variable `all_documents`, I skip lines 183 to 207 and replace them with reading my input as-is:
```
import ast  # required for literal_eval (import not shown in the original snippet)

for input_file in input_files:
    with tf.gfile.GFile(input_file, "r") as reader:
        lines = reader.read()
        all_documents = ast.literal_eval(lines)
```
## Results/Output
The raw input file (before custom tokenization) is from random web-scraping.
Some information on the raw and vocab file:
```
Number of documents/articles: 5
Number of sentences: 78
Number of vocabs: 649 (including [CLS], [SEP] etc.)
```
Below is the output (tail end of it) after running the `create_pretraining_data.py`

And this is what I get after running the `run_pretraining.py`

As shown in the diagram above, I'm getting very low accuracy, hence my concern about whether I'm doing this correctly.
To be fair, I got quite a **low accuracy** from MLM as well **using the English sample text file** and the vocab file from `uncased_L-24_H-1024_A-16`. If I'm not mistaken, I only get around 6% or so. Is this expected, or did I also make some mistakes with the sample English file?
|
open
|
2019-11-27T07:10:25Z
|
2019-11-28T01:49:23Z
|
https://github.com/google-research/bert/issues/942
|
[] |
nikmuhammadnaim
| 1
|
tflearn/tflearn
|
tensorflow
| 782
|
wide and deep help.
|
Running with everything on defaults:
Training
---------------------------------
Run id: wide+deep
Log directory: /tmp/tflearn_logs/
INFO:tensorflow:Summary name BinaryAccuracy/wide_regression_0 (raw) is illegal; using BinaryAccuracy/wide_regression_0__raw_ instead.
INFO:tensorflow:Summary name BinaryAccuracy_1/deep_regression_0 (raw) is illegal; using BinaryAccuracy_1/deep_regression_0__raw_ instead.
INFO:tensorflow:Summary name BinaryAccuracy_2/central_bias_regression_0 (raw) is illegal; using BinaryAccuracy_2/central_bias_regression_0__raw_ instead.
INFO:tensorflow:Summary name BinaryAccuracy_3/wide_regression_1 (raw) is illegal; using BinaryAccuracy_3/wide_regression_1__raw_ instead.
INFO:tensorflow:Summary name BinaryAccuracy_4/deep_regression_1 (raw) is illegal; using BinaryAccuracy_4/deep_regression_1__raw_ instead.
INFO:tensorflow:Summary name BinaryAccuracy_5/central_bias_regression_1 (raw) is illegal; using BinaryAccuracy_5/central_bias_regression_1__raw_ instead.
---------------------------------
Training samples: 195366
Validation samples: 97686
--
Traceback (most recent call last):
File "<ipython-input-2-36cb65ca3534>", line 1, in <module>
runfile('/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py', wdir='/Users/shaotang/OneDrive/ebay_project/wide_n_deep')
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py", line 426, in <module>
CommandLine()
File "/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py", line 374, in CommandLine
twad.train(n_epoch=FLAGS.n_epoch, snapshot_step=FLAGS.snapshot_step)
File "/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py", line 315, in train
run_id=self.name,
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tflearn/models/dnn.py", line 215, in fit
callbacks=callbacks)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tflearn/helpers/trainer.py", line 336, in fit
show_metric)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tflearn/helpers/trainer.py", line 777, in _train
feed_batch)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 997, in _run
feed_dict_string, options, run_metadata)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1132, in _do_run
target_list, options, run_metadata)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _do_call
raise type(e)(node_def, op, message)
InvalidArgumentError: Shape [-1,5] has negative dimensions
[[Node: wide_X_1/X = Placeholder[dtype=DT_FLOAT, shape=[?,5], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op 'wide_X_1/X', defined at:
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/spyder/utils/ipython/start_kernel.py", line 227, in <module>
main()
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/spyder/utils/ipython/start_kernel.py", line 223, in main
kernel.start()
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/ipykernel/kernelapp.py", line 474, in start
ioloop.IOLoop.instance().start()
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/zmq/eventloop/ioloop.py", line 177, in start
super(ZMQIOLoop, self).start()
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tornado/ioloop.py", line 887, in start
handler_func(fd_obj, events)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
self._handle_recv()
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
self._run_callback(callback, msg)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
callback(*args, **kwargs)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 276, in dispatcher
return self.dispatch_shell(stream, msg)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 228, in dispatch_shell
handler(stream, idents, msg)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 390, in execute_request
user_expressions, allow_stdin)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/ipykernel/ipkernel.py", line 196, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/ipykernel/zmqshell.py", line 501, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2717, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2827, in run_ast_nodes
if self.run_code(code, result):
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-36cb65ca3534>", line 1, in <module>
runfile('/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py', wdir='/Users/shaotang/OneDrive/ebay_project/wide_n_deep')
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py", line 426, in <module>
CommandLine()
File "/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py", line 369, in CommandLine
checkpoints_dir=FLAGS.checkpoints_dir)
File "/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py", line 57, in __init__
self.build_model([wide_learning_rate, deep_learning_rate])
File "/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py", line 88, in build_model
wide_inputs = tflearn.input_data(shape=input_shape, name="wide_X")
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tflearn/layers/core.py", line 81, in input_data
placeholder = tf.placeholder(shape=shape, dtype=dtype, name="X")
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1530, in placeholder
return gen_array_ops._placeholder(dtype=dtype, shape=shape, name=name)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1954, in _placeholder
name=name)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1269, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): Shape [-1,5] has negative dimensions
[[Node: wide_X_1/X = Placeholder[dtype=DT_FLOAT, shape=[?,5], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Any fix?
|
open
|
2017-06-06T03:54:06Z
|
2017-06-06T03:54:06Z
|
https://github.com/tflearn/tflearn/issues/782
|
[] |
lancerts
| 0
|
holoviz/panel
|
jupyter
| 7,118
|
Flaky UI tests for ToggleIcon and Tabulator on MacOS
|
With the latest 9404b4348a80a190b3dcda7f270ba3e5b3c10210 on macOS I get a few UI test failures.
In one run (see the [full log](https://gist.github.com/cdeil/3cc8f61ffea180d0f06bbc6ea07bd0cb)) these fail:
```
FAILED panel/tests/ui/pane/test_textual.py::test_textual_app - TimeoutError: wait_until timed out in 5000 milliseconds
FAILED panel/tests/ui/widgets/test_tabulator.py::test_tabulator_patch_no_height_resize - TimeoutError: wait_until timed out in 5000 milliseconds
FAILED panel/tests/ui/widgets/test_tabulator.py::test_selection_indices_on_paginated_sorted_and_filtered_data[remote] - TimeoutError: wait_until timed out in 5000 milliseconds
FAILED panel/tests/ui/widgets/test_tabulator.py::test_tabulator_edit_event_and_header_filters_same_column[index-True] - playwright._impl._errors.TimeoutError: Locator.fill: Timeout 20000ms exceeded.
FAILED panel/tests/ui/widgets/test_tabulator.py::test_tabulator_edit_event_and_header_filters_same_column[index-False] - playwright._impl._errors.TimeoutError: Locator.fill: Timeout 20000ms exceeded.
FAILED panel/tests/ui/widgets/test_tabulator.py::test_tabulator_edit_event_and_header_filters_same_column[foo-False] - playwright._impl._errors.TimeoutError: Locator.fill: Timeout 20000ms exceeded.
FAILED panel/tests/ui/widgets/test_tabulator.py::test_tabulator_edit_event_and_header_filters_same_column[foo-True] - playwright._impl._errors.TimeoutError: Locator.fill: Timeout 20000ms exceeded.
```
In another run (see the [full log](https://gist.github.com/cdeil/39a54d6da3af536a9feee04e4ddd4538)) these fail:
```
FAILED panel/tests/ui/io/test_reload.py::test_reload_app_on_local_module_change - TimeoutError: wait_until timed out in 5000 milliseconds
FAILED panel/tests/ui/pane/test_textual.py::test_textual_app - TimeoutError: wait_until timed out in 5000 milliseconds
FAILED panel/tests/ui/widgets/test_tabulator.py::test_tabulator_patch_no_height_resize - TimeoutError: wait_until timed out in 5000 milliseconds
FAILED panel/tests/ui/widgets/test_tabulator.py::test_selection_indices_on_paginated_sorted_and_filtered_data[remote] - TimeoutError: wait_until timed out in 5000 milliseconds
FAILED panel/tests/ui/widgets/test_tabulator.py::test_tabulator_edit_event_and_header_filters_same_column[index-True] - playwright._impl._errors.TimeoutError: Locator.fill: Timeout 20000ms exceeded.
=
```
I tried turning xdist off via
```
$ pixi run -e test-ui pytest --ui panel/tests/ui/widgets/test_icon.py -v --browser chromium -n logical --dist no -n 0
```
but still got test failures (see the [full log](https://gist.github.com/cdeil/ed1e5119da6d371411099bd560f538a3)):
```
FAILED panel/tests/ui/pane/test_textual.py::test_textual_app - TimeoutError: wait_until timed out in 5000 milliseconds
FAILED panel/tests/ui/pane/test_vizzu.py::test_vizzu_click - TimeoutError: wait_until timed out in 5000 milliseconds
FAILED panel/tests/ui/template/test_editabletemplate.py::test_editable_template_drag_item - TimeoutError: wait_until timed out in 5000 milliseconds
FAILED panel/tests/ui/widgets/test_icon.py::test_toggle_icon_width_height - TimeoutError: wait_until timed out in 5000 milliseconds
FAILED panel/tests/ui/widgets/test_icon.py::test_toggle_icon_size - TimeoutError: wait_until timed out in 5000 milliseconds
FAILED panel/tests/ui/widgets/test_tabulator.py::test_tabulator_patch_no_height_resize - TimeoutError: wait_until timed out in 5000 milliseconds
FAILED panel/tests/ui/widgets/test_tabulator.py::test_tabulator_header_filter_no_horizontal_rescroll[remote] - AssertionError: assert {'height': 20...: 714, 'y': 9} == {'height': 20...: 264, 'y': 9}
FAILED panel/tests/ui/widgets/test_tabulator.py::test_tabulator_edit_event_and_header_filters_same_column[index-True] - playwright._impl._errors.TimeoutError: Locator.fill: Timeout 20000ms exceeded.
FAILED panel/tests/ui/widgets/test_tabulator.py::test_tabulator_edit_event_and_header_filters_same_column[index-False] - playwright._impl._errors.TimeoutError: Locator.fill: Timeout 20000ms exceeded.
FAILED panel/tests/ui/widgets/test_tabulator.py::test_tabulator_edit_event_and_header_filters_same_column[foo-True] - playwright._impl._errors.TimeoutError: Locator.fill: Timeout 20000ms exceeded.
FAILED panel/tests/ui/widgets/test_tabulator.py::test_tabulator_edit_event_and_header_filters_same_column[foo-False] - playwright._impl._errors.TimeoutError: Locator.fill: Timeout 20000ms exceeded.
FAILED panel/tests/ui/widgets/test_tabulator.py::test_selection_indices_on_paginated_sorted_and_filtered_data[remote] - TimeoutError: wait_until timed out in 5000 milliseconds
ERROR panel/tests/ui/widgets/test_tabulator.py::test_tabulator_header_filter_no_horizontal_rescroll[None] - pluggy.PluggyTeardownRaisedWarning: A plugin raised an exception during an old-style hookwrapper teardown.
```
The textual failure is due to a recent breaking API change - see #7117
The others are flaky tests, I think, although this one now seems to fail for me consistently:
```
panel $ pixi run -e test-ui pytest --ui panel/tests/ui/widgets/test_icon.py -k test_toggle_icon_size -v
=============================================================================================== test session starts ================================================================================================
platform darwin -- Python 3.12.5, pytest-7.4.4, pluggy-1.5.0 -- /Users/cdeil/code/oss/panel/.pixi/envs/test-ui/bin/python3.12
cachedir: .pytest_cache
rootdir: /Users/cdeil/code/oss/panel
configfile: pyproject.toml
plugins: asyncio-0.23.8, cov-5.0.0, github-actions-annotate-failures-0.2.0, playwright-0.5.0, rerunfailures-14.0, anyio-4.4.0, base-url-2.1.0, xdist-3.6.1
asyncio: mode=Mode.AUTO
collected 17 items / 16 deselected / 1 selected
panel/tests/ui/widgets/test_icon.py::test_toggle_icon_size FAILED [100%]
===================================================================================================== FAILURES =====================================================================================================
______________________________________________________________________________________________ test_toggle_icon_size _______________________________________________________________________________________________
page = <Page url='http://localhost:65348/'>
def test_toggle_icon_size(page):
icon = ToggleIcon(size="120px")
serve_component(page, icon)
# test defaults
assert icon.icon == "heart"
assert not icon.value
icon_element = page.locator(".ti-heart")
> wait_until(lambda: icon_element.bounding_box()["width"] == 120)
E TimeoutError: wait_until timed out in 5000 milliseconds
panel/tests/ui/widgets/test_icon.py:66: TimeoutError
----------------------------------------------------------------------------------------------- Captured stdout call -----------------------------------------------------------------------------------------------
Launching server at http://localhost:65348
----------------------------------------------------------------------------------------------- Captured stderr call -----------------------------------------------------------------------------------------------
INFO:bokeh.server.server:Starting Bokeh server version 3.5.1 (running on Tornado 6.4.1)
INFO:bokeh.server.tornado:User authentication hooks NOT provided (default user enabled)
INFO:bokeh.server.views.ws:WebSocket connection opened
INFO:bokeh.server.views.ws:ServerConnection created
------------------------------------------------------------------------------------------------ Captured log call -------------------------------------------------------------------------------------------------
INFO tornado.access:web.py:2348 200 GET /liveness (127.0.0.1) 0.39ms
INFO tornado.access:web.py:2348 200 GET / (::1) 17.99ms
INFO tornado.access:web.py:2348 200 GET /static/js/bokeh.min.js?v=276377ed021e1611c60311b355033c865900f31a918aa4565aba37a78700f17b017100a8a618bded4140c6ad247a0b0237d3a02bee9fd722ce67a459479522dc (::1) 1.99ms
INFO tornado.access:web.py:2348 200 GET /static/extensions/panel/bundled/reactiveesm/es-module-shims@%5E1.10.0/dist/es-module-shims.min.js (::1) 2.09ms
INFO tornado.access:web.py:2348 200 GET /static/js/bokeh-gl.min.js?v=70bc1a9856b732e888ed6b2a8e9b6382bf538fee3ec9f1145b8db1778158fd51e478dbe0600650e30d5a0083b12fc43961bc7b2ef3e9f366000199b83b9a1644 (::1) 0.38ms
INFO tornado.access:web.py:2348 200 GET /static/extensions/panel/panel.min.js?v=a91daab4668e3299f59ed231b5da2e657f5e65d10a1d501ff0a660306b1fdb79 (::1) 4.22ms
INFO tornado.access:web.py:2348 200 GET /static/js/bokeh-widgets.min.js?v=8541420c1bb1dbde534df1d9b2be7c8248f61fca353a821ffc4d459b08b79c4b39f0ea1dd6960aa3b734bea988cf822dc6993c786de844db80e4f258dd90727f (::1) 1.91ms
INFO tornado.access:web.py:2348 200 GET /static/js/bokeh-tables.min.js?v=26281191594de496d010d87b3a56c1679330da29fcf72d3dab91ac4a45479c16b36e82ce4325f4217df4614fad13927fd7f1e1be64cf838e4a18a60852e2be0e (::1) 2.00ms
INFO tornado.access:web.py:2348 101 GET /ws (::1) 0.32ms
INFO tornado.access:web.py:2348 200 GET /static/extensions/panel/css/loading.css?v=1.5.0-b.3 (::1) 1.10ms
INFO tornado.access:web.py:2348 200 GET /static/extensions/panel/css/icon.css?v=1.5.0-b.3 (::1) 1.52ms
INFO tornado.access:web.py:2348 200 GET /static/extensions/panel/bundled/theme/default.css?v=1.5.0-b.3 (::1) 4.11ms
INFO tornado.access:web.py:2348 200 GET /static/extensions/panel/bundled/theme/native.css?v=1.5.0-b.3 (::1) 9.85ms
--------------------------------------------------------------------------------------------- Captured stderr teardown ---------------------------------------------------------------------------------------------
INFO:bokeh.server.views.ws:WebSocket connection closed: code=1001, reason=None
============================================================================================= short test summary info ==============================================================================================
FAILED panel/tests/ui/widgets/test_icon.py::test_toggle_icon_size - TimeoutError: wait_until timed out in 5000 milliseconds
========================================================================================= 1 failed, 16 deselected in 6.08s =========================================================================================
```
|
open
|
2024-08-10T18:07:37Z
|
2024-08-13T10:01:58Z
|
https://github.com/holoviz/panel/issues/7118
|
[] |
cdeil
| 4
|
explosion/spaCy
|
machine-learning
| 12,299
|
SpanCategorizer gives 0 scores for Danish language
|
<!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
I have converted the Danish datasets to the spaCy SpanCategorizer format. Here is a screenshot of the format:

Here is debug data command output.
```shell
python -m spacy debug data config.cfg --ignore-warnings --verbose --no-format --paths.train corpus/train.spacy --paths.dev corpus/dev.spacy
Data file validation
Pipeline can be initialized with data
Corpus is loadable
Training stats
Language: da
Training pipeline: tok2vec, spancat
148 training docs
40 evaluation docs
It's recommended to use at least 2000 examples (minimum 100)
Vocab & Vectors
7130 total word(s) in the data (522 unique)
10 most common words: 'de' (318), '...' (280), 'det' (196), 'er' (174), 'og' (174), '.' (172), ',' (168), 'har' (166), 'en' (130), 'deres' (116)
No word vectors present in the package
Span Categorization
Spans Key Labels
--------- ------------------------------
sc {'Price', 'People', 'Proces', 'Product', 'Place'}
Label counts in train data:
Key: sc, 'Product' (70), 'Place' (6), 'People' (66), 'Proces' (4), 'Price' (2)
Span characteristics for spans_key 'sc'
SD = Span Distinctiveness, BD = Boundary Distinctiveness
Span Type Length SD BD N
------------ ------ ---- ---- --
Product 2.20 1.98 1.79 70
Place 3.00 3.56 3.06 6
People 2.19 2.19 1.98 66
Proces 1.73 4.60 4.90 4
Price 1.00 8.18 5.52 2
------------ ------ ---- ---- --
Wgt. Average 2.20 2.29 2.06 -
Over 90% of spans have lengths of 1 -- 4 (min=1, max=5). The most common span lengths are: 1 (22.97%), 2 (28.38%), 3 (33.78%), 4 (10.81%). If you are using the n-gram suggester, note that omitting infrequent n-gram lengths can greatly improve speed and memory usage.
Full distribution of span lengths: 1 (22.97%), 2 (28.38%), 3 (33.78%), 4 (10.81%), 5 (4.05%)
Spans are distinct from the rest of the corpus
10 most common span tokens: 'omkostninger', 'det', 'ikke', 'interesse', 'en', 'med', 'højsæson', 'materialet', 'deres', 'og'
Boundary tokens are distinct from the rest of the corpus
10 most common span boundary tokens: ',', 'har', 'masse', 'som', 'lige', 'igang', 'sendt', 'videre', '...', '.'
To train a new span type, your data should include at least 50 instances of the new label
Examples without ocurrences available for all labels
Summary
5 checks passed
```
My config.cfg
```text
[paths]
train = null
dev = null
vectors = null
init_tok2vec = null
[system]
gpu_allocator = null
seed = 0
[nlp]
lang = "da"
pipeline = ["tok2vec","spancat"]
batch_size = 1000
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
[components]
[components.spancat]
factory = "spancat"
max_positive = null
scorer = {"@scorers":"spacy.spancat_scorer.v1"}
spans_key = "sc"
threshold = 0.5
[components.spancat.model]
@architectures = "spacy.SpanCategorizer.v1"
[components.spancat.model.reducer]
@layers = "spacy.mean_max_reducer.v1"
hidden_size = 128
[components.spancat.model.scorer]
@layers = "spacy.LinearLogistic.v1"
nO = null
nI = null
[components.spancat.model.tok2vec]
@architectures = "spacy.Tok2VecListener.v1"
width = ${components.tok2vec.model.encode.width}
upstream = "*"
[components.spancat.suggester]
@misc = "spacy.ngram_suggester.v1"
sizes = [1,2,3,4,5,6,7,8]
[components.tok2vec]
factory = "tok2vec"
[components.tok2vec.model]
@architectures = "spacy.Tok2Vec.v2"
[components.tok2vec.model.embed]
@architectures = "spacy.MultiHashEmbed.v2"
width = ${components.tok2vec.model.encode.width}
attrs = ["NORM","PREFIX","SUFFIX","SHAPE"]
rows = [5000,1000,2500,2500]
include_static_vectors = false
[components.tok2vec.model.encode]
@architectures = "spacy.MaxoutWindowEncoder.v2"
width = 96
depth = 4
window_size = 1
maxout_pieces = 3
[corpora]
[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null
[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null
[training]
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
accumulate_gradient = 1
patience = 1600
max_epochs = 0
max_steps = 20000
eval_frequency = 200
frozen_components = []
annotating_components = []
before_to_disk = null
before_update = null
[training.batcher]
@batchers = "spacy.batch_by_words.v1"
discard_oversize = false
tolerance = 0.2
get_length = null
[training.batcher.size]
@schedules = "compounding.v1"
start = 100
stop = 1000
compound = 1.001
t = 0.0
[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false
[training.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = false
eps = 0.00000001
learn_rate = 0.001
[training.score_weights]
spans_sc_f = 1.0
spans_sc_p = 0.0
spans_sc_r = 0.0
[pretraining]
[initialize]
vectors = ${paths.vectors}
init_tok2vec = ${paths.init_tok2vec}
vocab_data = null
lookups = null
before_init = null
after_init = null
[initialize.components]
[initialize.tokenizer]
```
Model training logs
```shell
python -m spacy train config.cfg --output ./output --paths.train corpus/train.spacy --paths.dev corpus/dev.spacy
ℹ Saving to output directory: output
ℹ Using CPU
=========================== Initializing pipeline ===========================
[2023-02-17 22:30:10,359] [INFO] Set up nlp object from config
[2023-02-17 22:30:10,368] [INFO] Pipeline: ['tok2vec', 'spancat']
[2023-02-17 22:30:10,371] [INFO] Created vocabulary
[2023-02-17 22:30:10,371] [INFO] Finished initializing nlp object
[2023-02-17 22:30:10,576] [INFO] Initialized pipeline components: ['tok2vec', 'spancat']
✔ Initialized pipeline
============================= Training pipeline =============================
ℹ Pipeline: ['tok2vec', 'spancat']
ℹ Initial learn rate: 0.001
E # LOSS TOK2VEC LOSS SPANCAT SPANS_SC_F SPANS_SC_P SPANS_SC_R SCORE
--- ------ ------------ ------------ ---------- ---------- ---------- ------
0 0 67.87 806.15 0.13 0.06 40.00 0.00
2 200 121.03 1831.67 0.00 0.00 0.00 0.00
5 400 0.76 458.56 0.00 0.00 0.00 0.00
9 600 0.28 484.45 0.00 0.00 0.00 0.00
13 800 0.20 539.33 0.00 0.00 0.00 0.00
19 1000 0.22 607.30 0.00 0.00 0.00 0.00
26 1200 0.31 661.08 0.00 0.00 0.00 0.00
35 1400 0.40 764.32 0.00 0.00 0.00 0.00
47 1600 0.77 837.79 0.00 0.00 0.00 0.00
✔ Saved pipeline to output directory
output/model-last
```
But the model is giving very strange spans.
```python
import spacy
from spacy import displacy
nlp = spacy.load("output/model-best")
doc = nlp("Slet ikke behov for er bare en lille forening med ikke mange indtægter eller udgifter.")
doc.spans
```
```shell
{'sc': [Slet, Slet, Slet, Slet, ikke, ikke, behov, behov, for, for, er, bare, en, lille, lille, lille, forening, forening, forening, forening, med, mange, mange, mange, indtægter, eller, eller, eller, udgifter, udgifter, udgifter, udgifter, ., ., ., ikke behov, behov for, behov for, for er, bare en, en lille, en lille, lille forening, lille forening, forening med, ikke mange, mange indtægter, mange indtægter, indtægter eller, eller udgifter, eller udgifter, udgifter., Slet ikke behov, ikke behov for, ikke behov for, for er bare, for er bare, er bare en, er bare en, bare en lille, bare en lille, en lille forening, lille forening med, forening med ikke, med ikke mange, mange indtægter eller, indtægter eller udgifter, eller udgifter., Slet ikke behov for, ikke behov for er, ikke behov for er, behov for er bare, for er bare en, er bare en lille, bare en lille forening, bare en lille forening, en lille forening med, forening med ikke mange, forening med ikke mange, ikke mange indtægter eller, mange indtægter eller udgifter, indtægter eller udgifter., indtægter eller udgifter., ikke behov for er bare, behov for er bare en, for er bare en lille, er bare en lille forening, bare en lille forening med, lille forening med ikke mange, lille forening med ikke mange, lille forening med ikke mange, ikke mange indtægter eller udgifter, mange indtægter eller udgifter., mange indtægter eller udgifter., mange indtægter eller udgifter., ikke behov for er bare en, ikke behov for er bare en, for er bare en lille forening, for er bare en lille forening, er bare en lille forening med, bare en lille forening med ikke, en lille forening med ikke mange, lille forening med ikke mange indtægter, forening med ikke mange indtægter eller, Slet ikke behov for er bare en, ikke behov for er bare en lille, ikke behov for er bare en lille, behov for er bare en lille forening, for er bare en lille forening med, er bare en lille forening med ikke, bare en lille forening med ikke mange, Slet ikke behov for er bare en lille, Slet ikke behov for er bare en lille, ikke behov for er bare en lille forening, ikke behov for er bare en lille forening, ikke behov for er bare en lille forening, for er bare en lille forening med ikke, er bare en lille forening med ikke mange, er bare en lille forening med ikke mange, er bare en lille forening med ikke mange, er bare en lille forening med ikke mange, bare en lille forening med ikke mange indtægter, lille forening med ikke mange indtægter eller udgifter]}
```
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
## Info about spaCy
- **spaCy version:** 3.5.0
- **Platform:** Linux-5.4.0-139-generic-x86_64-with-glibc2.27
- **Python version:** 3.8.9
- **Pipelines:** da_core_news_md (3.5.0)
|
closed
|
2023-02-17T17:35:34Z
|
2023-02-20T14:08:18Z
|
https://github.com/explosion/spaCy/issues/12299
|
[] |
mirfan899
| 2
|
custom-components/pyscript
|
jupyter
| 336
|
Issue installing opencv-python using requirements.txt
|
I understand this is somewhat complicated and perhaps not within the scope of pyscript. I added `opencv-python` to `requirements.txt` and reloaded the integration. I then get this log in Home Assistant:
```
Collecting opencv-python
Using cached opencv-python-4.5.5.64.tar.gz (89.9 MB)
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [157 lines of output]
Ignoring numpy: markers 'python_version == "3.6" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment
Ignoring numpy: markers 'python_version == "3.7" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment
Ignoring numpy: markers 'python_version == "3.8" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment
Ignoring numpy: markers 'python_version <= "3.9" and sys_platform == "linux" and platform_machine == "aarch64"' don't match your environment
Ignoring numpy: markers 'python_version <= "3.9" and sys_platform == "darwin" and platform_machine == "arm64"' don't match your environment
Ignoring numpy: markers 'python_version >= "3.10"' don't match your environment
Collecting setuptools
Using cached setuptools-60.10.0-py3-none-any.whl (1.1 MB)
Collecting wheel
Using cached wheel-0.37.1-py2.py3-none-any.whl (35 kB)
Collecting scikit-build
Using cached scikit_build-0.13.1-py2.py3-none-any.whl (75 kB)
Collecting cmake
Using cached cmake-3.22.3-py2.py3-none-musllinux_1_1_x86_64.whl (24.0 MB)
Collecting pip
Using cached pip-22.0.4-py3-none-any.whl (2.1 MB)
Collecting numpy==1.19.3
Using cached numpy-1.19.3.zip (7.3 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [116 lines of output]
Running from numpy source directory.
setup.py:480: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates
run_build = parse_setuppy_commands()
Processing numpy/random/_bounded_integers.pxd.in
Processing numpy/random/_mt19937.pyx
Processing numpy/random/_generator.pyx
Processing numpy/random/_philox.pyx
Processing numpy/random/_bounded_integers.pyx.in
Processing numpy/random/_common.pyx
Processing numpy/random/_sfc64.pyx
Processing numpy/random/bit_generator.pyx
Processing numpy/random/_pcg64.pyx
Processing numpy/random/mtrand.pyx
/tmp/pip-install-3zzrzqc0/numpy_629a38ae54df4eebb95633a858886eed/numpy/distutils/system_info.py:1914: UserWarning:
Optimized (vendor) Blas libraries are not found.
Falls back to netlib Blas library which has worse performance.
A better performance should be easily gained by switching
Blas library.
if self._calc_info(blas):
/tmp/pip-install-3zzrzqc0/numpy_629a38ae54df4eebb95633a858886eed/numpy/distutils/system_info.py:1914: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
if self._calc_info(blas):
/tmp/pip-install-3zzrzqc0/numpy_629a38ae54df4eebb95633a858886eed/numpy/distutils/system_info.py:1914: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
if self._calc_info(blas):
/tmp/pip-install-3zzrzqc0/numpy_629a38ae54df4eebb95633a858886eed/numpy/distutils/system_info.py:1748: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
return getattr(self, '_calc_info_{}'.format(name))()
/tmp/pip-install-3zzrzqc0/numpy_629a38ae54df4eebb95633a858886eed/numpy/distutils/system_info.py:1748: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
return getattr(self, '_calc_info_{}'.format(name))()
/tmp/pip-build-env-amuiiku3/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py:275: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
Cythonizing sources
non-existing path in 'numpy/distutils': 'site.cfg'
running dist_info
running build_src
creating build
creating build/src.linux-x86_64-3.9
creating build/src.linux-x86_64-3.9/numpy
creating build/src.linux-x86_64-3.9/numpy/distutils
Could not locate executable gfortran
Could not locate executable f95
Could not locate executable ifort
Could not locate executable ifc
Could not locate executable lf95
Could not locate executable pgfortran
Could not locate executable nvfortran
Could not locate executable f90
Could not locate executable f77
Could not locate executable fort
Could not locate executable efort
Could not locate executable efc
Could not locate executable g77
Could not locate executable g95
Could not locate executable pathf95
Could not locate executable nagfor
don't know how to compile Fortran code on platform 'posix'
Traceback (most recent call last):
File "/tmp/tmp1q1ncy0x_in_process.py", line 363, in <module>
main()
File "/tmp/tmp1q1ncy0x_in_process.py", line 345, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/tmp/tmp1q1ncy0x_in_process.py", line 164, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/tmp/pip-build-env-amuiiku3/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 157, in prepare_metadata_for_build_wheel
self.run_setup()
File "/tmp/pip-build-env-amuiiku3/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 248, in run_setup
super(_BuildMetaLegacyBackend,
File "/tmp/pip-build-env-amuiiku3/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 142, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 508, in <module>
setup_package()
File "setup.py", line 500, in setup_package
setup(**metadata)
File "/tmp/pip-install-3zzrzqc0/numpy_629a38ae54df4eebb95633a858886eed/numpy/distutils/core.py", line 169, in setup
return old_setup(**new_attr)
File "/tmp/pip-build-env-amuiiku3/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 165, in setup
return distutils.core.setup(**attrs)
File "/tmp/pip-build-env-amuiiku3/overlay/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 148, in setup
dist.run_commands()
File "/tmp/pip-build-env-amuiiku3/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/tmp/pip-build-env-amuiiku3/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-amuiiku3/overlay/lib/python3.9/site-packages/setuptools/command/dist_info.py", line 31, in run
egg_info.run()
File "/tmp/pip-install-3zzrzqc0/numpy_629a38ae54df4eebb95633a858886eed/numpy/distutils/command/egg_info.py", line 24, in run
self.run_command("build_src")
File "/tmp/pip-build-env-amuiiku3/overlay/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-amuiiku3/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/tmp/pip-install-3zzrzqc0/numpy_629a38ae54df4eebb95633a858886eed/numpy/distutils/command/build_src.py", line 144, in run
self.build_sources()
File "/tmp/pip-install-3zzrzqc0/numpy_629a38ae54df4eebb95633a858886eed/numpy/distutils/command/build_src.py", line 155, in build_sources
self.build_library_sources(*libname_info)
File "/tmp/pip-install-3zzrzqc0/numpy_629a38ae54df4eebb95633a858886eed/numpy/distutils/command/build_src.py", line 288, in build_library_sources
sources = self.generate_sources(sources, (lib_name, build_info))
File "/tmp/pip-install-3zzrzqc0/numpy_629a38ae54df4eebb95633a858886eed/numpy/distutils/command/build_src.py", line 378, in generate_sources
source = func(extension, build_dir)
File "numpy/core/setup.py", line 663, in get_mathlib_info
raise RuntimeError("Broken toolchain: cannot link a simple C program")
RuntimeError: Broken toolchain: cannot link a simple C program
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
WARNING: You are using pip version 22.0.3; however, version 22.0.4 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
WARNING: You are using pip version 22.0.3; however, version 22.0.4 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
```
|
open
|
2022-03-23T01:52:01Z
|
2022-03-23T01:52:01Z
|
https://github.com/custom-components/pyscript/issues/336
|
[] |
NickM-27
| 0
|
huggingface/transformers
|
deep-learning
| 36,830
|
Build for Windows and VS 2022 does not compile CUDA sources
|
### System Info
I am trying to build the **transformers** package **v4.48.1** from source
for platform *Windows 10*
with python *v3.12*
*pytorch 2.5.1+cu124*
and *CUDA 12.4*
along with *Visual Studio 2022*.
I have executed `vcvarsall.bat x64`; furthermore, all the known environment variables, such as
* `set CUDA_HOME=...`
* `set DISTUTILS_USE_SDK=...`
* `set CXX=...`
* `set TORCH_DONT_CHECK_COMPILER_ABI=...`
* `set TORCH_CUDA_ARCH_LIST=...`
are in place.
`python setup.py build` just copies the CUDA files to the build directory, e.g.
``` bash
copying src\transformers\kernels\deformable_detr\cuda\ms_deform_attn_cuda.cu -> build\lib\transformers\kernels\deformable_detr\cuda
```
but the *.cu* files are never handled by *nvcc*, as the compiler is never called.
How can I compile the transformers package sources on Windows for CUDA 12.4?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1) set CUDA_HOME=\<path to cuda 12.4 toolkit\>
2) set CXX=\<path to vs2022 cl.exe\>
3) make sure pip version is *v25.0*
4) make sure setuptools version is *v70.0.0*
5) execute `python setup.py build`
6) grep build output for *cuda* or *nvcc*
### Expected behavior
The .cu files should be compiled for the architectures in the given arch list.
|
open
|
2025-03-19T16:18:50Z
|
2025-03-23T18:37:12Z
|
https://github.com/huggingface/transformers/issues/36830
|
[
"bug"
] |
JRGit4UE
| 4
|
Farama-Foundation/Gymnasium
|
api
| 1,162
|
[Proposal] AsyncVectorEnv with Graph observation spaces
|
### Proposal
Add support for environments with `Graph` observation spaces in `AsyncVectorEnv` in Gymnasium. Currently, `AsyncVectorEnv` assumes observations can be stacked in a typical array-like format. However, environments that return graph-based observations (e.g., adjacency matrices, node/edge features) are incompatible with the default vectorized operations. This proposal seeks to introduce native support for vectorizing such environments, allowing the parallelization of environments that utilize `Graph`-like spaces for their observations.
### Motivation
The current limitation of `AsyncVectorEnv` is that it can only handle environments with simple observation spaces. For graphs (e.g., adjacency matrices or node/edge data), the default handling of observation stacking fails, leading to errors and the inability to utilize vectorized environments.
### Pitch
Modify `AsyncVectorEnv` to natively handle graph-structured observation spaces, by allowing users to define custom stacking or to return observations as lists of graphs instead of arrays (a minimal sketch follows the list below). This could be done via:
- Adding a specific check for `Graph`-type spaces in the observation stacking method.
- Providing an option for environments with complex observations to return observations as lists, or by defining a custom merging mechanism for such spaces.
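A minimal sketch of the first bullet above. The helper name `stack_observations` is hypothetical and not an existing Gymnasium function; it only illustrates special-casing `Graph` spaces during observation stacking.
```python
# Sketch: stack per-env observations, falling back to a tuple for Graph
# spaces, whose per-env node/edge counts may differ.
from typing import Any, Sequence

import numpy as np
from gymnasium.spaces import Box, Graph, Space


def stack_observations(single_space: Space, obs: Sequence[Any]) -> Any:
    if isinstance(single_space, Graph):
        # Graph observations cannot be stacked into one array; pass them through.
        return tuple(obs)
    return np.stack(obs)


# Example with a plain Box space, which stacks as usual.
box = Box(low=0.0, high=1.0, shape=(3,))
print(stack_observations(box, [box.sample() for _ in range(4)]).shape)  # (4, 3)
```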
### Alternatives
_No response_
### Additional context
Many modern reinforcement learning tasks involve graph-structured data, particularly in domains such as networked systems, biology, and operations research. A robust way to handle graph-based environments in a vectorized setup would greatly enhance Gymnasium's flexibility and application scope.
Additionally, this proposal would align with Gymnasium's mission to support a wide range of environment types, particularly as more graph-based tasks emerge in reinforcement learning.
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
|
open
|
2024-09-17T06:04:04Z
|
2024-09-17T11:00:48Z
|
https://github.com/Farama-Foundation/Gymnasium/issues/1162
|
[
"enhancement"
] |
ramondalmau
| 1
|
d2l-ai/d2l-en
|
data-science
| 2,523
|
pip install d2l==1.0.0b0 Fails to Install on Linux Mint/Ubuntu 22.04
|
Error Message:
Collecting d2l==1.0.0b0
Using cached d2l-1.0.0b0-py3-none-any.whl (141 kB)
Collecting jupyter (from d2l==1.0.0b0)
Using cached jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB)
Requirement already satisfied: numpy in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (1.24.3)
Requirement already satisfied: matplotlib in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (3.7.1)
Requirement already satisfied: matplotlib-inline in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (0.1.6)
Requirement already satisfied: requests in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (2.31.0)
Requirement already satisfied: pandas in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (1.5.3)
Collecting gym==0.21.0 (from d2l==1.0.0b0)
Using cached gym-0.21.0.tar.gz (1.5 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [1 lines of output]
error in gym setup command: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Thank you!
|
closed
|
2023-07-01T17:56:41Z
|
2023-07-01T18:12:09Z
|
https://github.com/d2l-ai/d2l-en/issues/2523
|
[] |
k7e7n7t
| 1
|
matterport/Mask_RCNN
|
tensorflow
| 2,788
|
Could not build a TypeSpec for KerasTensor
|
Hi Everybody
I've encountered this error while creating the training model:
TypeError: Could not build a TypeSpec for KerasTensor(type_spec=TensorSpec(shape=(None, None, 4), dtype=tf.float32, name=None), name='tf.math.truediv_1/truediv:0', description="created by layer 'tf.math.truediv_1'") of unsupported type <class 'keras.engine.keras_tensor.KerasTensor'>.
Any help, please?
|
open
|
2022-03-08T05:03:25Z
|
2022-03-16T07:40:23Z
|
https://github.com/matterport/Mask_RCNN/issues/2788
|
[] |
Isamalatby
| 5
|
pydata/xarray
|
pandas
| 10,094
|
Make `compat="override"` more strict.
|
### What happened?
The `compat="override"` option in the various combining functions avoids comparing values for compatibility; it simply picks the first occurrence of a variable and inserts it in the result.
This can be dangerous if the values are of differing dimensionality:
```python
ds1 = xr.Dataset({"x": 0})
ds2 = xr.Dataset({"x": ("y", [0, 0])})
```
Now the dimensionality of `x` in the output depends on the order of arguments (example below).
I propose that `compat="override"` at least check that `ndim` is the same for a variable across all provided objects.
### Example
```
xr.merge([ds1, ds2], compat="override")
```
```
<xarray.Dataset> Size: 8B
Dimensions: ()
Data variables:
x int64 8B 0
```
```
xr.merge([ds2, ds1], compat="override")
```
```
<xarray.Dataset> Size: 16B
Dimensions: (y: 2)
Dimensions without coordinates: y
Data variables:
x (y) int64 16B 0 0
```
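A minimal sketch of the proposed `ndim` check (illustrative only, not xarray's actual merge internals):
```python
# Sketch: reject compat="override" when a variable appears with differing ndim.
import xarray as xr


def check_override_ndim(name, variables):
    ndims = {var.ndim for var in variables}
    if len(ndims) > 1:
        raise ValueError(
            f"conflicting dimensionality for variable {name!r} with "
            f"compat='override': found ndim values {sorted(ndims)}"
        )


try:
    check_override_ndim("x", [xr.Variable((), 0), xr.Variable(("y",), [0, 0])])
except ValueError as err:
    print(err)
```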
|
open
|
2025-03-03T20:08:31Z
|
2025-03-17T16:35:02Z
|
https://github.com/pydata/xarray/issues/10094
|
[
"bug",
"topic-combine"
] |
dcherian
| 1
|