| repo_name (string) | topic (string) | issue_number (int64) | title (string) | body (string) | state (string) | created_at (string) | updated_at (string) | url (string) | labels (list of strings) | user_login (string) | comments_count (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|
pywinauto/pywinauto
|
automation
| 887
|
Error after 'from pywinauto.application import Application'
|
## Expected Behavior
```
from pywinauto.application import Application
```
## Actual Behavior
```
Traceback (most recent call last):
File "d:\Devel\python\lib\ctypes\__init__.py", line 121, in WINFUNCTYPE
return _win_functype_cache[(restype, argtypes, flags)]
KeyError: (<class 'ctypes.HRESULT'>, (<class 'ctypes.c_long'>, <class 'comtypes.automation.tagVARIANT'>, <class 'comtypes.LP_POINTER(IUIAutomationCondition)'>), 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "d:\Devel\python\lib\site-packages\pywinauto\__init__.py", line 89, in <module>
from . import findwindows
File "d:\Devel\python\lib\site-packages\pywinauto\findwindows.py", line 42, in <module>
from . import controls
File "d:\Devel\python\lib\site-packages\pywinauto\controls\__init__.py", line 36, in <module>
from . import uiawrapper # register "uia" back-end (at the end of uiawrapper module)
File "d:\Devel\python\lib\site-packages\pywinauto\controls\uiawrapper.py", line 47, in <module>
from ..uia_defines import IUIA
File "d:\Devel\python\lib\site-packages\pywinauto\uia_defines.py", line 181, in <module>
pattern_ids = _build_pattern_ids_dic()
File "d:\Devel\python\lib\site-packages\pywinauto\uia_defines.py", line 169, in _build_pattern_ids_dic
if hasattr(IUIA().ui_automation_client, cls_name):
File "d:\Devel\python\lib\site-packages\pywinauto\uia_defines.py", line 50, in __call__
cls._instances[cls] = super(_Singleton, cls).__call__(*args, **kwargs)
File "d:\Devel\python\lib\site-packages\pywinauto\uia_defines.py", line 60, in __init__
self.UIA_dll = comtypes.client.GetModule('UIAutomationCore.dll')
File "d:\Devel\python\lib\site-packages\comtypes\client\_generate.py", line 110, in GetModule
mod = _CreateWrapper(tlib, pathname)
File "d:\Devel\python\lib\site-packages\comtypes\client\_generate.py", line 184, in _CreateWrapper
mod = _my_import(fullname)
File "d:\Devel\python\lib\site-packages\comtypes\client\_generate.py", line 24, in _my_import
return __import__(fullname, globals(), locals(), ['DUMMY'])
File "d:\Devel\python\lib\site-packages\comtypes\gen\_944DE083_8FB8_45CF_BCB7_C477ACB2F897_0_1_0.py", line 1870, in <module>
( ['out', 'retval'], POINTER(POINTER(IUIAutomationElement)), 'element' )),
File "d:\Devel\python\lib\site-packages\comtypes\__init__.py", line 329, in __setattr__
self._make_methods(value)
File "d:\Devel\python\lib\site-packages\comtypes\__init__.py", line 698, in _make_methods
prototype = WINFUNCTYPE(restype, *argtypes)
File "d:\Devel\python\lib\ctypes\__init__.py", line 123, in WINFUNCTYPE
class WinFunctionType(_CFuncPtr):
TypeError: item 2 in _argtypes_ passes a union by value, which is unsupported.
```
## Steps to Reproduce the Problem
## Short Example of Code to Demonstrate the Problem
## Specifications
- Pywinauto version: comtypes-1.1.7 pywin32-227 pywinauto-0.6.8 six-1.14.0
- Python version and bitness: Python 3.7.6, 32bit
- Platform and OS: Win 10
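For reference, this traceback is the classic symptom of the stricter ctypes checks shipped in Python 3.7.6/3.8.1, which began rejecting unions passed by value, so comtypes' previously generated COM wrappers fail to import. A common comtypes troubleshooting step (a sketch only, not the fix confirmed in this thread — the "duplicate" / "3rd-party issue" labels point at comtypes itself, so upgrading comtypes or moving off Python 3.7.6 may also be needed) is to delete the generated-wrapper cache so it is rebuilt against the current Python/comtypes pair:
```python
import os
import shutil

import comtypes.client

# Delete comtypes' generated COM wrapper modules; they are regenerated
# automatically on the next GetModule()/CreateObject() call.
gen_dir = comtypes.client.gen_dir
shutil.rmtree(gen_dir)
os.makedirs(gen_dir)
```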
|
closed
|
2020-02-03T15:30:10Z
|
2020-02-13T17:17:44Z
|
https://github.com/pywinauto/pywinauto/issues/887
|
[
"duplicate",
"3rd-party issue"
] |
arozehnal
| 5
|
scikit-optimize/scikit-optimize
|
scikit-learn
| 1,076
|
ImportError when using skopt with scikit-learn 1.0
|
When importing Scikit-optimize, the following ImportError is raised:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.7/dist-packages/skopt/__init__.py", line 55, in <module>
    from .searchcv import BayesSearchCV
  File "/usr/local/lib/python3.7/dist-packages/skopt/searchcv.py", line 16, in <module>
    from sklearn.utils.fixes import MaskedArray
ImportError: cannot import name 'MaskedArray' from 'sklearn.utils.fixes' (/usr/local/lib/python3.7/dist-packages/sklearn/utils/fixes.py)
```

This issue started occurring when upgrading from Scikit-learn 0.24.2 to 1.0.
System dependencies:
Python 3.7.12
scikit-image 0.18.3
scikit-learn 1.0
scikit-optimize 0.8.1
sklearn 0.0
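For background, `MaskedArray` in `sklearn.utils.fixes` was only a re-export of NumPy's masked array, and scikit-learn 1.0 removed the alias (newer scikit-optimize releases target scikit-learn 1.0). A minimal sketch of the direct NumPy import the removed alias pointed at — an illustration of the root cause, not an official skopt patch:
```python
# sklearn.utils.fixes.MaskedArray used to re-export numpy.ma.MaskedArray;
# the class itself still lives in NumPy.
from numpy.ma import MaskedArray

masked = MaskedArray([1.0, 2.0, 3.0], mask=[False, True, False])
print(masked.mean())  # 2.0 -- the masked middle value is ignored
```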
|
closed
|
2021-10-05T12:30:14Z
|
2021-10-12T14:41:36Z
|
https://github.com/scikit-optimize/scikit-optimize/issues/1076
|
[] |
SenneDeproost
| 3
|
s3rius/FastAPI-template
|
fastapi
| 96
|
Request object isn't passed as argument
|
Thanks for this package. I have created a GraphQL app using the template, but I'm getting the error below. It seems FastAPI doesn't pass the request object.
```log
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 184, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
return await self.app(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/applications.py", line 261, in __call__
await super().__call__(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/starlette/middleware/errors.py", line 146, in __call__
await self.app(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/starlette/exceptions.py", line 58, in __call__
await self.app(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 656, in __call__
await route.handle(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 315, in handle
await self.app(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 77, in app
await func(session)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/routing.py", line 264, in app
solved_result = await solve_dependencies(
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 498, in solve_dependencies
solved_result = await solve_dependencies(
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 498, in solve_dependencies
solved_result = await solve_dependencies(
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 498, in solve_dependencies
solved_result = await solve_dependencies(
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 523, in solve_dependencies
solved = await solve_generator(
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 443, in solve_generator
cm = asynccontextmanager(call)(**sub_values)
File "/Users/test/.pyenv/versions/3.10.2/lib/python3.10/contextlib.py", line 314, in helper
return _AsyncGeneratorContextManager(func, args, kwds)
File "/Users/test/.pyenv/versions/3.10.2/lib/python3.10/contextlib.py", line 103, in __init__
self.gen = func(*args, **kwds)
TypeError: get_db_session() missing 1 required positional argument: 'request'
INFO: connection open
INFO: connection closed
```
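The traceback goes through uvicorn's WebSocket handler (`websockets_impl.py`), which suggests this fires on GraphQL subscriptions: FastAPI cannot inject `Request` into WebSocket routes, so a `get_db_session(request)` dependency receives no argument there. A minimal request-free sketch (hypothetical names and DSN; the template's real wiring differs):
```python
from collections.abc import AsyncGenerator

from sqlalchemy.ext.asyncio import (
    AsyncSession,
    async_sessionmaker,
    create_async_engine,
)

# Hypothetical sketch: the session factory is bound at module level instead of
# being read off `request.app.state`, so the dependency takes no `request`
# parameter and also resolves on WebSocket (subscription) routes.
engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/db")
session_factory = async_sessionmaker(engine, expire_on_commit=False)


async def get_db_session() -> AsyncGenerator[AsyncSession, None]:
    async with session_factory() as session:  # one session per connection
        yield session
```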
|
closed
|
2022-07-05T07:01:34Z
|
2022-10-13T21:26:26Z
|
https://github.com/s3rius/FastAPI-template/issues/96
|
[] |
devNaresh
| 16
|
unionai-oss/pandera
|
pandas
| 1,261
|
Fix to_script description
|
#### Location of the documentation
[DataFrameSchema.to_script](https://github.com/unionai-oss/pandera/blob/62bc4840508ff1ac0df595b57b2152737a1228a2/pandera/api/pandas/container.py#L1251)
#### Documentation problem
This method has the description for `from_yaml`.
#### Suggested fix for documentation
Something like "Write `DataFrameSchema` to script".
|
closed
|
2023-07-15T19:25:28Z
|
2023-07-17T17:47:29Z
|
https://github.com/unionai-oss/pandera/issues/1261
|
[
"docs"
] |
tmcclintock
| 1
|
onnx/onnx
|
pytorch
| 5,926
|
Add TopK node to a pretrained Brevitas model
|
We are working with FINN-ONNX, and we want the pretrained Brevitas models that classify the MNIST images to output the index (class) instead of a 1x10 probability tensor. To our knowledge, the node responsible for this is TopK.
Where do we have to add this layer, and what function can we add so that 'export_qonnx' would understand it as a TopK node?
The desired block is shown in an image attached to the issue (not preserved in this dump).
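For what it's worth, appending a `TopK` (k=1) node to an exported ONNX graph can be done with `onnx.helper`. The sketch below uses placeholder names ("model.onnx", "probabilities" for the current 1x10 output), which must be replaced by the real names from the QONNX export; whether FINN then accepts the node is a separate question:
```python
import onnx
from onnx import TensorProto, helper

model = onnx.load("model.onnx")  # hypothetical path to the exported model

# k=1 as an initializer feeding TopK's second input.
k_init = helper.make_tensor("topk_k", TensorProto.INT64, dims=[1], vals=[1])
model.graph.initializer.append(k_init)

# TopK over the class axis: emits the best score and, crucially, its index.
topk = helper.make_node(
    "TopK",
    inputs=["probabilities", "topk_k"],  # "probabilities" = current graph output
    outputs=["top_score", "class_index"],
    axis=1,
)
model.graph.node.append(topk)

# Expose the index tensor as the sole graph output.
del model.graph.output[:]
model.graph.output.append(
    helper.make_tensor_value_info("class_index", TensorProto.INT64, [1, 1])
)
onnx.checker.check_model(model)
onnx.save(model, "model_topk.onnx")
```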
|
open
|
2024-02-09T17:21:55Z
|
2024-02-13T10:04:09Z
|
https://github.com/onnx/onnx/issues/5926
|
[
"question"
] |
abedbaltaji
| 1
|
flasgger/flasgger
|
flask
| 443
|
Compatibility Proposal for OpenAPI 3
|
This issue is to discuss OpenAPI 3 compatibility in Flasgger. Currently, the code differentiates the two specifications at runtime and mixes up their processing; in the long term, I believe this lowers code quality and makes the code harder to maintain. Please raise any suggestions or plans to make Flasgger work better with OpenAPI 3 and 2 at the same time.
|
open
|
2020-11-21T18:15:27Z
|
2021-11-14T08:53:02Z
|
https://github.com/flasgger/flasgger/issues/443
|
[] |
billyrrr
| 3
|
nteract/papermill
|
jupyter
| 575
|
Some weird error messages when executing a notebook involving pytorch
|
I have a notebook for training a model using PyTorch. The notebook runs fine if I run it from the browser, but I ran into the following problem when executing it via papermill:
```
Generating grammar tables from /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/blib2to3/Grammar.txt
Writing grammar tables to /home/ubuntu/.cache/black/20.8b1/Grammar3.6.10.final.0.pickle
Writing failed: [Errno 2] No such file or directory: '/home/ubuntu/.cache/black/20.8b1/tmp0twtlmvs'
Generating grammar tables from /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/blib2to3/PatternGrammar.txt
Writing grammar tables to /home/ubuntu/.cache/black/20.8b1/PatternGrammar3.6.10.final.0.pickle
Writing failed: [Errno 2] No such file or directory: '/home/ubuntu/.cache/black/20.8b1/tmp2_jvdud_'
Executing: 0%| | 0/23 [00:00<?, ?cell/s]Executing notebook with kernel: pytorch_p36
Executing: 22%|████████████████▎ | 5/23 [00:03<00:13, 1.38cell/s]
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/pytorch_p36/bin/papermill", line 8, in <module>
sys.exit(papermill())
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/papermill/cli.py", line 256, in papermill
execution_timeout=execution_timeout,
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/papermill/execute.py", line 118, in execute_notebook
raise_for_execution_errors(nb, output_path)
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/papermill/execute.py", line 230, in raise_for_execution_errors
raise error
papermill.exceptions.PapermillExecutionError:
---------------------------------------------------------------------------
Exception encountered at "In [2]":
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-3916aaf64ab2> in <module>
----> 1 from torchvision import datasets, transforms
2
3 datasets.MNIST('data', download=True, transform=transforms.Compose([
4 transforms.ToTensor(),
5 transforms.Normalize((0.1307,), (0.3081,))
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torchvision/__init__.py in <module>
1 import warnings
2
----> 3 from torchvision import models
4 from torchvision import datasets
5 from torchvision import ops
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torchvision/models/__init__.py in <module>
10 from .shufflenetv2 import *
11 from . import segmentation
---> 12 from . import detection
13 from . import video
14 from . import quantization
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torchvision/models/detection/__init__.py in <module>
----> 1 from .faster_rcnn import *
2 from .mask_rcnn import *
3 from .keypoint_rcnn import *
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torchvision/models/detection/faster_rcnn.py in <module>
11
12 from .generalized_rcnn import GeneralizedRCNN
---> 13 from .rpn import AnchorGenerator, RPNHead, RegionProposalNetwork
14 from .roi_heads import RoIHeads
15 from .transform import GeneralizedRCNNTransform
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torchvision/models/detection/rpn.py in <module>
9 from torchvision.ops import boxes as box_ops
10
---> 11 from . import _utils as det_utils
12 from .image_list import ImageList
13
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torchvision/models/detection/_utils.py in <module>
17
18 @torch.jit.script
---> 19 class BalancedPositiveNegativeSampler(object):
20 """
21 This class samples batches, ensuring that they contain a fixed proportion of positives
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/jit/__init__.py in script(obj, optimize, _frames_up, _rcb)
1217 if _rcb is None:
1218 _rcb = _jit_internal.createResolutionCallback(_frames_up + 1)
-> 1219 _compile_and_register_class(obj, _rcb, qualified_name)
1220 return obj
1221 else:
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/jit/__init__.py in _compile_and_register_class(obj, rcb, qualified_name)
1074 def _compile_and_register_class(obj, rcb, qualified_name):
1075 ast = get_jit_class_def(obj, obj.__name__)
-> 1076 _jit_script_class_compile(qualified_name, ast, rcb)
1077 _add_script_class(obj, qualified_name)
1078
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/jit/_recursive.py in try_compile_fn(fn, loc)
220 # object
221 rcb = _jit_internal.createResolutionCallbackFromClosure(fn)
--> 222 return torch.jit.script(fn, _rcb=rcb)
223
224
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/jit/__init__.py in script(obj, optimize, _frames_up, _rcb)
1224 if _rcb is None:
1225 _rcb = _gen_rcb(obj, _frames_up)
-> 1226 fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
1227 # Forward docstrings
1228 fn.__doc__ = obj.__doc__
RuntimeError:
builtin cannot be used as a value:
at /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torchvision/models/detection/_utils.py:14:56
def zeros_like(tensor, dtype):
# type: (Tensor, int) -> Tensor
return torch.zeros_like(tensor, dtype=dtype, layout=tensor.layout,
~~~~~~~~~~~~~ <--- HERE
device=tensor.device, pin_memory=tensor.is_pinned())
'zeros_like' is being compiled since it was called from '__torch__.torchvision.models.detection._utils.BalancedPositiveNegativeSampler.__call__'
at /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torchvision/models/detection/_utils.py:72:12
# randomly select positive and negative examples
perm1 = torch.randperm(positive.numel(), device=positive.device)[:num_pos]
perm2 = torch.randperm(negative.numel(), device=negative.device)[:num_neg]
pos_idx_per_image = positive[perm1]
neg_idx_per_image = negative[perm2]
# create binary mask from indices
pos_idx_per_image_mask = zeros_like(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~... <--- HERE
matched_idxs_per_image, dtype=torch.uint8
)
neg_idx_per_image_mask = zeros_like(
matched_idxs_per_image, dtype=torch.uint8
)
pos_idx_per_image_mask[pos_idx_per_image] = torch.tensor(1, dtype=torch.uint8)
neg_idx_per_image_mask[neg_idx_per_image] = torch.tensor(1, dtype=torch.uint8)
```
|
closed
|
2021-01-26T23:16:54Z
|
2021-02-04T17:06:27Z
|
https://github.com/nteract/papermill/issues/575
|
[
"question"
] |
hongshanli23
| 12
|
twopirllc/pandas-ta
|
pandas
| 807
|
SyntaxWarning: invalid escape sequence '\g' return re_.sub("([a-z])([A-Z])","\g<1> \g<2>", x).title()
|
I got this warning after upgrading Python from 3.11 to 3.12:
```
pandas_ta\utils\_core.py:14: SyntaxWarning: invalid escape sequence '\g'
  return re_.sub("([a-z])([A-Z])","\g<1> \g<2>", x).title()
```
python 3.12.4
pandas_ta 0.3.14b
Do I need to downgrade to Python 3.11 for now?
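No downgrade should be needed for this particular warning: Python 3.12 promoted invalid escape sequences in plain string literals from a `DeprecationWarning` to a `SyntaxWarning`, and a raw string fixes it. A sketch of the general fix (the same pattern as the flagged line, not an actual pandas_ta patch):
```python
import re

def camel_to_title(x: str) -> str:
    # The raw string keeps \g<1> as a literal backreference for re.sub,
    # silencing the SyntaxWarning without changing behaviour.
    return re.sub("([a-z])([A-Z])", r"\g<1> \g<2>", x).title()

print(camel_to_title("pandasTa"))  # -> "Pandas Ta"
```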
|
closed
|
2024-07-02T05:57:48Z
|
2024-12-01T22:06:59Z
|
https://github.com/twopirllc/pandas-ta/issues/807
|
[
"bug"
] |
kopes18
| 1
|
deepfakes/faceswap
|
deep-learning
| 1,356
|
Training collapse
|
*Note: For general usage questions and help, please use either our [FaceSwap Forum](https://faceswap.dev/forum)
or [FaceSwap Discord server](https://discord.gg/FC54sYg). General usage questions are liable to be closed without
response.*
**Crash reports MUST be included when reporting bugs.**
**Describe the bug**
Please take a screenshot
**To Reproduce**
Steps to reproduce the behavior:
1. input A
2. input B
3. input model
4. click train
**Expected behavior**
normal operation
**Screenshots**
(Screenshots attached to the issue are not preserved in this dump.)
**Desktop (please complete the following information):**
- OS: Windows 11 23H2 22635.2486
- Python Version [e.g. 3.5, 3.6]
- Conda Version [e.g. 4.5.12]
- Commit ID [e.g. e83819f]
-
**Additional context**
============ System Information ============
backend: nvidia
encoding: cp936
git_branch: master
git_commits: 8e6c6c3 patch writer: Sort the json file by key
gpu_cuda: 12.3
gpu_cudnn: No global version found. Check Conda packages for Conda cuDNN
gpu_devices: GPU_0: NVIDIA GeForce RTX 3050 Laptop GPU
gpu_devices_active: GPU_0
gpu_driver: 545.84
gpu_vram: GPU_0: 4096MB (3977MB free)
os_machine: AMD64
os_platform: Windows-10-10.0.22635-SP0
os_release: 10
py_command: d:\faceswap/faceswap.py gui
py_conda_version: conda 23.9.0
py_implementation: CPython
py_version: 3.10.13
py_virtual_env: True
sys_cores: 20
sys_processor: Intel64 Family 6 Model 154 Stepping 3, GenuineIntel
sys_ram: Total: 16076MB, Available: 4159MB, Used: 11916MB, Free: 4159MB
=============== Pip Packages ===============
absl-py==2.0.0
astunparse==1.6.3
cachetools==5.3.1
certifi==2023.7.22
charset-normalizer==3.3.0
colorama @ file:///C:/b/abs_a9ozq0l032/croot/colorama_1672387194846/work
contourpy @ file:///C:/b/abs_d5rpy288vc/croots/recipe/contourpy_1663827418189/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work
fastcluster @ file:///D:/bld/fastcluster_1695650232190/work
ffmpy @ file:///home/conda/feedstock_root/build_artifacts/ffmpy_1659474992694/work
flatbuffers==23.5.26
fonttools==4.25.0
gast==0.4.0
google-auth==2.23.3
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.59.0
h5py==3.10.0
idna==3.4
imageio @ file:///C:/b/abs_3eijmwdodc/croot/imageio_1695996500830/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1694632425602/work
joblib @ file:///C:/b/abs_1anqjntpan/croot/joblib_1685113317150/work
keras==2.10.0
Keras-Preprocessing==1.1.2
kiwisolver @ file:///C:/b/abs_88mdhvtahm/croot/kiwisolver_1672387921783/work
libclang==16.0.6
Markdown==3.5
MarkupSafe==2.1.3
matplotlib @ file:///C:/b/abs_085jhivdha/croot/matplotlib-suite_1693812524572/work
mkl-fft @ file:///C:/b/abs_19i1y8ykas/croot/mkl_fft_1695058226480/work
mkl-random @ file:///C:/b/abs_edwkj1_o69/croot/mkl_random_1695059866750/work
mkl-service==2.4.0
munkres==1.1.4
numexpr @ file:///C:/b/abs_5fucrty5dc/croot/numexpr_1696515448831/work
numpy @ file:///C:/b/abs_9fu2cs2527/croot/numpy_and_numpy_base_1695830496596/work/dist/numpy-1.26.0-cp310-cp310-win_amd64.whl#sha256=11367989d61b64039738e0c68c95c6b797a41c4c75ec2147c0541b21163786eb
nvidia-ml-py @ file:///home/conda/feedstock_root/build_artifacts/nvidia-ml-py_1693425331741/work
oauthlib==3.2.2
opencv-python==4.8.1.78
opt-einsum==3.3.0
packaging @ file:///C:/b/abs_28t5mcoltc/croot/packaging_1693575224052/work
Pillow @ file:///C:/b/abs_153xikw91n/croot/pillow_1695134603563/work
ply==3.11
protobuf==3.19.6
psutil @ file:///C:/Windows/Temp/abs_b2c2fd7f-9fd5-4756-95ea-8aed74d0039flsd9qufz/croots/recipe/psutil_1656431277748/work
pyasn1==0.5.0
pyasn1-modules==0.3.0
pyparsing @ file:///C:/Users/BUILDE~1/AppData/Local/Temp/abs_7f_7lba6rl/croots/recipe/pyparsing_1661452540662/work
PyQt5==5.15.7
PyQt5-sip @ file:///C:/Windows/Temp/abs_d7gmd2jg8i/croots/recipe/pyqt-split_1659273064801/work/pyqt_sip
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
pywin32==305.1
pywinpty @ file:///C:/ci_310/pywinpty_1644230983541/work/target/wheels/pywinpty-2.0.2-cp310-none-win_amd64.whl
requests==2.31.0
requests-oauthlib==1.3.1
rsa==4.9
scikit-learn @ file:///C:/b/abs_55olq_4gzc/croot/scikit-learn_1690978955123/work
scipy==1.11.3
sip @ file:///C:/Windows/Temp/abs_b8fxd17m2u/croots/recipe/sip_1659012372737/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
tensorboard==2.10.1
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.10.1
tensorflow-estimator==2.10.0
tensorflow-io-gcs-filesystem==0.31.0
termcolor==2.3.0
threadpoolctl @ file:///Users/ktietz/demo/mc3/conda-bld/threadpoolctl_1629802263681/work
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tornado @ file:///C:/b/abs_0cbrstidzg/croot/tornado_1696937003724/work
tqdm @ file:///C:/b/abs_f76j9hg7pv/croot/tqdm_1679561871187/work
typing_extensions==4.8.0
urllib3==2.0.7
Werkzeug==3.0.0
wrapt==1.15.0
============== Conda Packages ==============
# packages in environment at C:\Users\cui19\MiniConda3\envs\faceswap:
#
# Name Version Build Channel
absl-py 2.0.0 pypi_0 pypi
astunparse 1.6.3 pypi_0 pypi
blas 1.0 mkl
brotli 1.0.9 h2bbff1b_7
brotli-bin 1.0.9 h2bbff1b_7
bzip2 1.0.8 he774522_0
ca-certificates 2023.08.22 haa95532_0
cachetools 5.3.1 pypi_0 pypi
certifi 2023.7.22 pypi_0 pypi
charset-normalizer 3.3.0 pypi_0 pypi
colorama 0.4.6 py310haa95532_0
contourpy 1.0.5 py310h59b6b97_0
cudatoolkit 11.8.0 hd77b12b_0
cudnn 8.9.2.26 cuda11_0
cycler 0.11.0 pyhd3eb1b0_0
decorator 5.1.1 pyhd3eb1b0_0
fastcluster 1.2.6 py310hecd3228_3 conda-forge
ffmpeg 4.3.1 ha925a31_0 conda-forge
ffmpy 0.3.0 pyhb6f538c_0 conda-forge
flatbuffers 23.5.26 pypi_0 pypi
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.12.1 ha860e81_0
gast 0.4.0 pypi_0 pypi
giflib 5.2.1 h8cc25b3_3
git 2.40.1 haa95532_1
glib 2.69.1 h5dc1a3c_2
google-auth 2.23.3 pypi_0 pypi
google-auth-oauthlib 0.4.6 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
grpcio 1.59.0 pypi_0 pypi
h5py 3.10.0 pypi_0 pypi
icc_rt 2022.1.0 h6049295_2
icu 58.2 ha925a31_3
idna 3.4 pypi_0 pypi
imageio 2.31.4 py310haa95532_0
imageio-ffmpeg 0.4.9 pyhd8ed1ab_0 conda-forge
intel-openmp 2023.1.0 h59b6b97_46319
joblib 1.2.0 py310haa95532_0
jpeg 9e h2bbff1b_1
keras 2.10.0 pypi_0 pypi
keras-preprocessing 1.1.2 pypi_0 pypi
kiwisolver 1.4.4 py310hd77b12b_0
krb5 1.20.1 h5b6d351_0
lerc 3.0 hd77b12b_0
libbrotlicommon 1.0.9 h2bbff1b_7
libbrotlidec 1.0.9 h2bbff1b_7
libbrotlienc 1.0.9 h2bbff1b_7
libclang 16.0.6 pypi_0 pypi
libclang13 14.0.6 default_h8e68704_1
libdeflate 1.17 h2bbff1b_1
libffi 3.4.4 hd77b12b_0
libiconv 1.16 h2bbff1b_2
libpng 1.6.39 h8cc25b3_0
libpq 12.15 h906ac69_1
libtiff 4.5.1 hd77b12b_0
libwebp 1.3.2 hbc33d0d_0
libwebp-base 1.3.2 h2bbff1b_0
libxml2 2.10.4 h0ad7f3c_1
libxslt 1.1.37 h2bbff1b_1
libzlib 1.2.13 hcfcfb64_5 conda-forge
libzlib-wapi 1.2.13 hcfcfb64_5 conda-forge
lz4-c 1.9.4 h2bbff1b_0
markdown 3.5 pypi_0 pypi
markupsafe 2.1.3 pypi_0 pypi
matplotlib 3.7.2 py310haa95532_0
matplotlib-base 3.7.2 py310h4ed8f06_0
mkl 2023.1.0 h6b88ed4_46357
mkl-service 2.4.0 py310h2bbff1b_1
mkl_fft 1.3.8 py310h2bbff1b_0
mkl_random 1.2.4 py310h59b6b97_0
munkres 1.1.4 py_0
numexpr 2.8.7 py310h2cd9be0_0
numpy 1.26.0 py310h055cbcc_0
numpy-base 1.26.0 py310h65a83cf_0
nvidia-ml-py 12.535.108 pyhd8ed1ab_0 conda-forge
oauthlib 3.2.2 pypi_0 pypi
opencv-python 4.8.1.78 pypi_0 pypi
openssl 3.0.11 h2bbff1b_2
opt-einsum 3.3.0 pypi_0 pypi
packaging 23.1 py310haa95532_0
pcre 8.45 hd77b12b_0
pillow 9.4.0 py310hd77b12b_1
pip 23.3 py310haa95532_0
ply 3.11 py310haa95532_0
protobuf 3.19.6 pypi_0 pypi
psutil 5.9.0 py310h2bbff1b_0
pyasn1 0.5.0 pypi_0 pypi
pyasn1-modules 0.3.0 pypi_0 pypi
pyparsing 3.0.9 py310haa95532_0
pyqt 5.15.7 py310hd77b12b_0
pyqt5-sip 12.11.0 py310hd77b12b_0
python 3.10.13 he1021f5_0
python-dateutil 2.8.2 pyhd3eb1b0_0
python_abi 3.10 2_cp310 conda-forge
pywin32 305 py310h2bbff1b_0
pywinpty 2.0.2 py310h5da7b33_0
qt-main 5.15.2 h879a1e9_9
qt-webengine 5.15.9 h5bd16bc_7
qtwebkit 5.212 h2bbfb41_5
requests 2.31.0 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
rsa 4.9 pypi_0 pypi
scikit-learn 1.3.0 py310h4ed8f06_0
scipy 1.11.3 py310h309d312_0
setuptools 68.0.0 py310haa95532_0
sip 6.6.2 py310hd77b12b_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.41.2 h2bbff1b_0
tbb 2021.8.0 h59b6b97_0
tensorboard 2.10.1 pypi_0 pypi
tensorboard-data-server 0.6.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
tensorflow 2.10.1 pypi_0 pypi
tensorflow-estimator 2.10.0 pypi_0 pypi
tensorflow-io-gcs-filesystem 0.31.0 pypi_0 pypi
termcolor 2.3.0 pypi_0 pypi
threadpoolctl 2.2.0 pyh0d69192_0
tk 8.6.12 h2bbff1b_0
toml 0.10.2 pyhd3eb1b0_0
tornado 6.3.3 py310h2bbff1b_0
tqdm 4.65.0 py310h9909e9c_0
typing-extensions 4.8.0 pypi_0 pypi
tzdata 2023c h04d1e81_0
ucrt 10.0.22621.0 h57928b3_0 conda-forge
urllib3 2.0.7 pypi_0 pypi
vc 14.2 h21ff451_1
vc14_runtime 14.36.32532 hdcecf7f_17 conda-forge
vs2015_runtime 14.36.32532 h05e6639_17 conda-forge
werkzeug 3.0.0 pypi_0 pypi
wheel 0.41.2 py310haa95532_0
winpty 0.4.3 4
wrapt 1.15.0 pypi_0 pypi
xz 5.4.2 h8cc25b3_0
zlib 1.2.13 hcfcfb64_5 conda-forge
zlib-wapi 1.2.13 hcfcfb64_5 conda-forge
zstd 1.5.5 hd43e919_0
================= Configs ==================
--------- .faceswap ---------
backend: nvidia
--------- convert.ini ---------
[color.color_transfer]
clip: True
preserve_paper: True
[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0
[color.match_hist]
threshold: 99.0
[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0
erosion_top: 0.0
erosion_bottom: 0.0
erosion_left: 0.0
erosion_right: 0.0
[scaling.sharpen]
method: none
amount: 150
radius: 0.3
threshold: 5.0
[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto
skip_mux: False
[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False
[writer.opencv]
format: png
draw_transparent: False
separate_mask: False
jpg_quality: 75
png_compress_level: 3
[writer.patch]
start_index: 0
index_offset: 0
number_padding: 6
include_filename: True
face_index_location: before
origin: bottom-left
empty_frames: blank
json_output: False
separate_mask: False
bit_depth: 16
format: png
png_compress_level: 3
tiff_compression_method: lzw
[writer.pillow]
format: png
draw_transparent: False
separate_mask: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate
--------- extract.ini ---------
[global]
allow_growth: False
aligner_min_scale: 0.07
aligner_max_scale: 2.0
aligner_distance: 22.5
aligner_roll: 45.0
aligner_features: True
filter_refeed: True
save_filtered: False
realign_refeeds: True
filter_realign: True
[align.fan]
batch-size: 12
[detect.cv2_dnn]
confidence: 50
[detect.mtcnn]
minsize: 20
scalefactor: 0.709
batch-size: 8
cpu: True
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
[detect.s3fd]
confidence: 70
batch-size: 4
[mask.bisenet_fp]
batch-size: 8
cpu: False
weights: faceswap
include_ears: False
include_hair: False
include_glasses: True
[mask.custom]
batch-size: 8
centering: face
fill: False
[mask.unet_dfl]
batch-size: 8
[mask.vgg_clear]
batch-size: 6
[mask.vgg_obstructed]
batch-size: 2
[recognition.vgg_face2]
batch-size: 16
cpu: False
--------- gui.ini ---------
[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True
--------- train.ini ---------
[global]
centering: face
coverage: 87.5
icnr_init: False
conv_aware_init: False
optimizer: adam
learning_rate: 5e-05
epsilon_exponent: -7
save_optimizer: exit
lr_finder_iterations: 1000
lr_finder_mode: set
lr_finder_strength: default
autoclip: False
reflect_padding: False
allow_growth: False
mixed_precision: True
nan_protection: True
convert_batchsize: 16
[global.loss]
loss_function: ssim
loss_function_2: mse
loss_weight_2: 100
loss_function_3: None
loss_weight_3: 0
loss_function_4: None
loss_weight_4: 0
mask_loss_function: mse
eye_multiplier: 3
mouth_multiplier: 2
penalized_mask_loss: True
mask_type: extended
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False
[model.dfaker]
output_size: 128
[model.dfl_h128]
lowmem: False
[model.dfl_sae]
input_size: 128
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False
[model.dlight]
features: best
details: good
output_size: 256
[model.original]
lowmem: False
[model.phaze_a]
output_size: 128
shared_fc: None
enable_gblock: True
split_fc: True
split_gblock: False
split_decoders: False
enc_architecture: fs_original
enc_scaling: 7
enc_load_weights: True
bottleneck_type: dense
bottleneck_norm: None
bottleneck_size: 1024
bottleneck_in_encoder: True
fc_depth: 1
fc_min_filters: 1024
fc_max_filters: 1024
fc_dimensions: 4
fc_filter_slope: -0.5
fc_dropout: 0.0
fc_upsampler: upsample2d
fc_upsamples: 1
fc_upsample_filters: 512
fc_gblock_depth: 3
fc_gblock_min_nodes: 512
fc_gblock_max_nodes: 512
fc_gblock_filter_slope: -0.5
fc_gblock_dropout: 0.0
dec_upscale_method: subpixel
dec_upscales_in_fc: 0
dec_norm: None
dec_min_filters: 64
dec_max_filters: 512
dec_slope_mode: full
dec_filter_slope: -0.45
dec_res_blocks: 1
dec_output_kernel: 5
dec_gaussian: True
dec_skip_last_residual: True
freeze_layers: keras_encoder
load_layers: encoder
fs_original_depth: 4
fs_original_min_filters: 128
fs_original_max_filters: 1024
fs_original_use_alt: False
mobilenet_width: 1.0
mobilenet_depth: 1
mobilenet_dropout: 0.001
mobilenet_minimalistic: False
[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512
[model.unbalanced]
input_size: 128
lowmem: False
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512
[model.villain]
lowmem: False
[trainer.original]
preview_images: 14
mask_opacity: 30
mask_color: #ff0000
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
**Crash Report**
The crash report generated in the root of your Faceswap folder
|
closed
|
2023-10-22T05:54:22Z
|
2023-10-23T00:03:58Z
|
https://github.com/deepfakes/faceswap/issues/1356
|
[] |
Cashew-wood
| 1
|
alteryx/featuretools
|
scikit-learn
| 2,156
|
release Featuretools v1.11.0
|
- We can release **once these are merged in**
- https://github.com/alteryx/featuretools/pull/2136
- https://github.com/alteryx/featuretools/pull/2157
- The instructions for releasing:
- https://github.com/alteryx/featuretools/blob/main/release.md
|
closed
|
2022-06-29T15:46:14Z
|
2022-06-30T23:07:11Z
|
https://github.com/alteryx/featuretools/issues/2156
|
[] |
gsheni
| 0
|
iperov/DeepFaceLab
|
machine-learning
| 826
|
Save ERROR
|
The SAEHD and Quick96 training runs as expected; however, it crashes every time I press save, which makes all of the previous progress useless. This error message pops up:
```
2020-07-12 09:48:02.749598: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 517.44M (542572544 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
[09:48:09][#000002][0912ms][5.9417][4.6986]
Error: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\USER\\Downloads\\DeepFaceLab_NVIDIA\\workspace\\model\\ _SAEHD_encoder.npy.tmp' -> 'C:\\Users\\USER\\Downloads\\DeepFaceLab_NVIDIA\\workspace\\model\\ _SAEHD_encoder.npy'
Traceback (most recent call last):
File "C:\Users\USER\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 178, in trainerThread
model_save()
File "C:\Users\USER\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 68, in model_save
model.save()
File "C:\Users\USER\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 374, in save
self.onSave()
File "C:\Users\USER\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 604, in onSave
model.save_weights ( self.get_strpath_storage_for_file(filename) )
File "C:\Users\USER\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Saveable.py", line 60, in save_weights
pathex.write_bytes_safe ( Path(filename), d_dumped )
File "C:\Users\USER\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\pathex.py", line 14, in write_bytes_safe
p_tmp.rename (p)
File "pathlib.py", line 1309, in rename
File "pathlib.py", line 393, in wrapped
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\USER\\Downloads\\DeepFaceLab_NVIDIA\\workspace\\model\\ _SAEHD_encoder.npy.tmp' -> 'C:\\Users\\USER\\Downloads\\DeepFaceLab_NVIDIA\\workspace\\model\\ _SAEHD_encoder.npy'
```
This did not happen to me on last year's version, so I tried the previous build (06_27_2020 instead of 07_04_2020), but that didn't fix the problem at all. Furthermore, I tried deleting the new `_SAEHD_encoder.npy.tmp`, but it was recreated every time I pressed save. I then removed the file-management permission from `new_SAEHD_encoder.npy.tmp`, but saving just produced an error saying the .tmp file doesn't have permission, so I don't know what to do with the file.

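One workaround idea (hypothetical, not from this thread): since the rename fails because another process — often antivirus or a sync client — briefly holds the `.tmp` file on Windows, retrying the rename after a short delay usually gets through:
```python
import time
from pathlib import Path

def rename_with_retry(src: Path, dst: Path, attempts: int = 5, delay: float = 0.5) -> None:
    """Retry Path.replace() to ride out transient Windows file locks."""
    for attempt in range(attempts):
        try:
            src.replace(dst)  # atomic rename; overwrites dst if it exists
            return
        except PermissionError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```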
|
closed
|
2020-07-12T03:05:47Z
|
2020-07-12T05:39:00Z
|
https://github.com/iperov/DeepFaceLab/issues/826
|
[] |
THE-MATT-222
| 1
|
dynaconf/dynaconf
|
flask
| 993
|
[bug] Default value on empty string
|
## Problem
I have a nested structure whose value I need to set to a specific string when an empty string or `None` is provided.
Take the following for example:
```python
from dynaconf import Dynaconf, Validator
settings = Dynaconf(
settings_files=[
'config.toml',
'.secrets.toml'
],
merge_enabled=True, # Merge all found files into one configuration.
validators=[ # Custom validators.
Validator(
"files.output.kml",
default="output.kml",
apply_default_on_none=True,
),
],
environments=False, # Disable environments support.
apply_default_on_none=True # Apply default values when a value is None.
)
print(f"KML FILE: '{settings.files.output.kml}'")
```
With the following configuration file (saved as `config.toml`):
```toml
[files.output]
kml = ""
```
- When the `kml` key is **not present** in the config file, the default is applied to the setting as expected
- When the `kml` key is set to an **empty string**, the default is completely ignored, even though I passed `apply_default_on_none=True`, while I would expect it to print `output.kml`
It is somehow related to #973, since if that issue was solved I could simply put a `condition=lambda value: value is not None and value.strip() != ""` parameter to the `Validator` and then use the default value since a `ValidationError` would occur.
## What I expected
From the [documentation](https://www.dynaconf.com/validation/#default-values):
> Warning
>
> YAML reads empty keys as `None` and in that case defaults are not applied, if you want to change it set `apply_default_on_none=True` either globally to `Dynaconf` class or individually on a `Validator`.
Reading this, I expected the default value to kick in even on empty strings if I set `apply_default_on_none=True` on the `Validator` or on the `Dynaconf` class (I tried both but got the same result).
## Workaround
To work around this issue I had to check manually if the setting was still empty:
```python
from dynaconf import Dynaconf, Validator
settings = Dynaconf(
settings_files=[
'config.toml',
'.secrets.toml'
],
merge_enabled=True, # Merge all found files into one configuration.
validators=[ # Custom validators.
Validator(
"files.output.kml",
default="output.kml",
apply_default_on_none=True,
),
],
environments=False, # Disable environments support.
apply_default_on_none=True # Apply default values when a value is None.
)
# Setup default values for missing settings
if str(settings.files.output.kml.strip()) == "":
settings.files.output.kml = "output.kml"
print(f"KML FILE: '{settings.files.output.kml}'") # Correctly prints `KML FILE: 'output.kml'`
```
And the same for all the other keys I have to make sure exist.
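One more sketch that avoids the manual check by leaning on `Validator`'s `cast` hook to coerce falsy values (a workaround idea, not dynaconf's documented default mechanism, so it may need verifying):
```python
from dynaconf import Dynaconf, Validator

settings = Dynaconf(
    settings_files=["config.toml", ".secrets.toml"],
    environments=False,
    validators=[
        # cast maps "" and None to the fallback; default covers a missing key.
        Validator(
            "files.output.kml",
            default="output.kml",
            cast=lambda v: v or "output.kml",
        ),
    ],
)
```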
|
open
|
2023-09-04T17:18:12Z
|
2023-11-04T00:47:48Z
|
https://github.com/dynaconf/dynaconf/issues/993
|
[
"Docs"
] |
LukeSavefrogs
| 9
|
kizniche/Mycodo
|
automation
| 867
|
compatible with DFRobot sensor/controller or other Chinese made sensors
|
**Is your feature request related to a problem? Please describe.**
Atlas Scientific is too costly, even though it is very accurate and stable, so I prefer a cheaper option like DFRobot.
**Describe the solution you'd like**
Will Mycodo work with products from DFRobot or other Chinese brands? I am not sure whether they all have standardised output.
|
closed
|
2020-10-18T09:32:21Z
|
2020-11-17T03:11:01Z
|
https://github.com/kizniche/Mycodo/issues/867
|
[
"question"
] |
garudaonekh
| 9
|
aimhubio/aim
|
data-visualization
| 2,658
|
Show the number of metrics (and system metrics) tracked on the Run page
|
## 🚀 Feature
Add the number of tracked metrics on the Run metrics page.
### Motivation
It would be great to see how many metrics had been tracked.
A couple of use-cases:
- make sure the same number of metrics are shown as intended
- when metrics take time to load and lots are tracked, the number could help shed some light
<img width="1365" alt="Screenshot 2023-04-17 at 16 27 53" src="https://user-images.githubusercontent.com/3179216/232632082-36e9dfe0-266d-4edb-a49d-c1c03fc67fe1.png">
For instance Hyperparameters tab does a good job of showing the number of items.
<img width="555" alt="Screenshot 2023-04-17 at 16 26 09" src="https://user-images.githubusercontent.com/3179216/232632103-c8d4bf6d-cce9-438c-9338-bb949af17b19.png">
### Pitch
Add dimensions of metrics in the Run page.
|
open
|
2023-04-17T23:34:35Z
|
2023-04-17T23:34:35Z
|
https://github.com/aimhubio/aim/issues/2658
|
[
"type / enhancement"
] |
SGevorg
| 0
|
ipython/ipython
|
data-science
| 14,540
|
add support for capturing entire ipython interaction [enhancement]
|
It would be good if IPython can support running exported Jupyter python notebooks in batch mode in a way that better matches the Jupyter output. In particular, being able to see the expressions that are evaluated along with the output would be beneficial, even when the output is not explicit. For example, there could be a command-line option like -capture-session, so that the complete interaction of the REPL is captured in the output.
For example, the following code evaluates an expression and then re-outputs the result.
```
(2 + 2)
_
```
Ideally the session output when running it in batch mode would be like the following:
```
In [1]: (2 + 2)
Out [1]: 4
In [2]: _
Out [2]: 4
```
It seems that the closest IPython currently comes to this is when running the script from stdin:
```
In [1]: Out[1]: 4
In [2]: Out[2]: 4
```
Unfortunately, the input expression is not shown.
An additional complication with running scripts from stdin is that indentation issues can arise. See the attached file for a script that runs into an Indentation error due to an empty line in a function definition. I tried it with four combinations for stdin-vs-file and interactive-vs-non, hoping to find an approximate solution.
[interaction_quirk.py.txt](https://github.com/user-attachments/files/17358257/interaction_quirk.py.txt)
I'm not sure if modern interactive languages support this, but Lisp supports it via its "dribble" mechanism. After a call to dribble, both the input and output of the REPL are saved in the specified log. This is analogous to running the Unix script command.
The motivation for all this comes in the context of testing. With more development being done via Jupyter notebooks, it becomes harder to develop automated tests because the notebooks tend to be opaque and monolithic. If the notebook can be evaluated in batch mode, then automated tests can be written checking for expected output. For this to be effective, all output from the notebook should be included, not just output from explicit calls to print or write. In addition, the output should include the evaluated expressions to allow for more precise tests.
|
open
|
2024-10-14T02:18:47Z
|
2024-10-18T02:18:05Z
|
https://github.com/ipython/ipython/issues/14540
|
[] |
thomas-paul-ohara
| 2
|
samuelcolvin/watchfiles
|
asyncio
| 56
|
[FEATURE] add ‘closed’ as change type
|
I have a server for file uploads. With low latency I need to trigger some python code that reads the incoming files and...
I have an issue right now though. Sometimes the files are empty, and I think it's because the upload is not done yet. How can I determine when a file is done being written to?
I think inotify has this functionality, but I agree with you that it is nice to have it platform independent.
Do you have a proposal for how to handle this?
|
closed
|
2020-03-30T21:33:04Z
|
2020-05-22T11:44:58Z
|
https://github.com/samuelcolvin/watchfiles/issues/56
|
[] |
NixBiks
| 1
|
unionai-oss/pandera
|
pandas
| 1,059
|
Getting "TypeError: type of out argument not recognized: <class 'str'>" when using class function with Pandera decorator
|
Hi. I am having trouble getting Pandera to work with classes.
First I create schemas:
```
import pandera as pa
import yaml
from pandera import Column, Check
in_ = pa.DataFrameSchema(
{
"Name": Column(object, nullable=True),
"Height": Column(object, nullable=True),
})
with open("./in_.yml", "w") as file:
yaml.dump(in_, file)
out_ = pa.DataFrameSchema(
{
"Name": Column(object, nullable=True),
"Height": Column(object, nullable=True),
})
with open("./out_.yml", "w") as file:
yaml.dump(out_, file)
```
Next I create a test.py file with the class:
```
from pandera import check_io
import pandas as pd
class TransformClass():
with open("./in_.yml", "r") as file:
in_ = file.read()
with open("./out_.yml", "r") as file:
out_ = file.read()
@staticmethod
@check_io(df=in_, out=out_)
def func(df: pd.DataFrame) -> pd.DataFrame:
return df
```
Finally I import this class:
```
import numpy as np
import pandas as pd

from test import TransformClass
data = {'Name': [np.nan, 'Princi', 'Gaurav', 'Anuj'],
'Height': [5.1, 6.2, 5.1, 5.2],
'Qualification': ['Msc', 'MA', 'Msc', 'Msc']}
df = pd.DataFrame(data)
TransformClass.func(df)
```
I am getting:
```
File C:\Anaconda3\envs\py310\lib\site-packages\pandera\decorators.py:464, in check_io.<locals>._wrapper(fn, instance, args, kwargs)
462 out_schemas = []
463 else:
--> 464 raise TypeError(
465 f"type of out argument not recognized: {type(out)}"
466 )
468 wrapped_fn = fn
469 for input_getter, input_schema in inputs.items():
470 # pylint: disable=no-value-for-parameter
TypeError: type of out argument not recognized: <class 'str'>
```
Any help would be much appreciated.
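For what it's worth, the error message is literal: `check_io` received the raw YAML text (a `str`) read from the files, not schema objects. A sketch of the round-trip using pandera's own serializers (`DataFrameSchema.to_yaml`/`from_yaml`) instead of `yaml.dump`:
```python
import pandas as pd
import pandera as pa
from pandera import check_io

schema = pa.DataFrameSchema({
    "Name": pa.Column(object, nullable=True),
    "Height": pa.Column(object, nullable=True),
})
yaml_text = schema.to_yaml()                      # serialize to a YAML string
loaded = pa.DataFrameSchema.from_yaml(yaml_text)  # parse back into a schema object

@check_io(df=loaded, out=loaded)
def func(df: pd.DataFrame) -> pd.DataFrame:
    return df

print(func(pd.DataFrame({"Name": ["a", None], "Height": ["5.1", "6.2"]})))
```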
|
closed
|
2022-12-19T08:55:30Z
|
2022-12-19T19:55:28Z
|
https://github.com/unionai-oss/pandera/issues/1059
|
[
"question"
] |
al-yakubovich
| 1
|
pyeve/eve
|
flask
| 964
|
PyMongo 3.4.0 support
|
closed
|
2017-01-15T16:54:21Z
|
2017-01-15T16:58:25Z
|
https://github.com/pyeve/eve/issues/964
|
[
"enhancement"
] |
nicolaiarocci
| 0
|
gradio-app/gradio
|
python
| 10,850
|
Could not create share link. Please check your internet connection or our status page: https://status.gradio.app.
|
### Describe the bug
Hi, I am using the latest version of Gradio.
But I encounter this problem:

Do you know how I can solve this problem? Thank you very much!
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
gr.Interface(lambda x: x, "text", "text").launch(share=True)
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.22.0
gradio_client version: 1.8.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.9.0
audioop-lts is not installed.
fastapi: 0.115.11
ffmpy: 0.5.0
gradio-client==1.8.0 is not installed.
groovy: 0.1.2
httpx: 0.28.1
huggingface-hub: 0.29.3
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 1.24.4
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.11.1
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.46.1
tomlkit: 0.13.2
typer: 0.15.2
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.6.1
httpx: 0.28.1
huggingface-hub: 0.29.3
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
Blocking usage of gradio
|
closed
|
2025-03-21T04:00:17Z
|
2025-03-22T22:36:48Z
|
https://github.com/gradio-app/gradio/issues/10850
|
[
"bug"
] |
Allen-Zhou729
| 7
|
ultralytics/ultralytics
|
machine-learning
| 19,781
|
High CPU Usage with OpenVINO YOLOv8n on Integrated GPU – How Can I Reduce It?
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi everyone,
I'm running inference using an OpenVINO-converted YOLOv8n model on an integrated GPU (IGPU), but I'm noticing that the CPU usage stays around 90% while the GPU is only at about 50%. I’ve tried configuring various GPU-specific properties to reduce the CPU load, yet the high CPU usage persists.
```python
import collections
import time
import openvino as ov
import openvino.properties as properties
import openvino.properties.device as device
import openvino.properties.hint as hints
import openvino.properties.streams as streams
import openvino.properties.intel_auto as intel_auto
import cv2
from ultralytics import YOLO
import torch
def open_video_stream():
return cv2.VideoCapture(0)
model_path = r"\yolov8n_openvino_model\yolov8n.xml"
core = ov.Core()
# Read the quantized model
print("Loading OpenVINO model...")
ov_model = core.read_model(str(model_path))
# Reshape the input for GPU
ov_model.reshape({0: [1, 3, 640, 640]})
gpu_config = {
hints.inference_precision: "FP16", # Alternatively, use ov.Type.f16 if available in your API
hints.execution_mode: "PERFORMANCE",
"ENABLE_CPU_PINNING": "NO",
"NUM_STREAMS": "1",
"ENABLE_CPU_PINNING": "NO",
"COMPILATION_NUM_THREADS": "2",
"GPU_DISABLE_WINOGRAD_CONVOLUTION": "YES",
"GPU_QUEUE_THROTTLE": hints.Priority.LOW,
"GPU_HOST_TASK_PRIORITY": hints.Priority.LOW,
}
# Compile the model for GPU
print(f"Compiling model for {device}...")
compiled_model=core.compile_model(ov_model,"GPU",gpu_config)
det_model = YOLO("yolov8n.pt")
label_map = det_model.model.names # Extract class names
test_img_path = "coco_bike.jpg"
# Test inference
try:
test_results = det_model(test_img_path)
print(f"Test inference successful! Found {len(test_results[0].boxes)} objects")
except Exception as e:
print(f"Warning: Test inference failed: {e}")
print("Error details:", e)
print("Continuing anyway...")
def infer(*args):
result = compiled_model(args)
return torch.from_numpy(result[0])
det_model.predictor.inference = infer
det_model.predictor.model.pt = False # Indicate PyTorch model is not used
def run_object_detection():
print("Opening video stream...")
cap = open_video_stream()
if not cap.isOpened():
print("Error: Could not open RTSP stream.")
return
print("Starting object detection loop...")
processing_times = collections.deque()
while True:
ret, frame = cap.read()
if not ret:
print("Failed to get frame from stream. Retrying...")
# Try to reopen the stream if it's dropped
cap.release()
time.sleep(1) # Wait a bit before reconnecting
cap = open_video_stream()
continue
frame= cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_NV12)
# Optionally, resize frame for faster processing if it's too large
scale = 1280 / max(frame.shape)
if scale < 1:
frame = cv2.resize(frame, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
try:
# Run inference on the frame
results = det_model(frame, verbose=False)
if len(processing_times) > 200:
processing_times.popleft()
# Overlay inference time and FPS on the output frame
output_frame = results[0].plot()
cv2.imshow("annotated frame", output_frame)
except Exception as e:
print(f"Error during inference: {e}")
# Show the original frame if inference fails
cv2.putText(frame, "Inference Error", (20, 40),
cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2, cv2.LINE_AA)
cv2.imshow("annotated frame", frame)
if cv2.waitKey(1) == 27: # Exit if ESC key is pressed
break
print("Cleaning up...")
cap.release()
cv2.destroyAllWindows()
if __name__ == "__main__":
print("Starting application...")
run_object_detection()
```
### Additional
_No response_
|
open
|
2025-03-19T10:56:05Z
|
2025-03-20T01:34:16Z
|
https://github.com/ultralytics/ultralytics/issues/19781
|
[
"question",
"detect",
"exports"
] |
AlaaArboun
| 2
|
facebookresearch/fairseq
|
pytorch
| 4,754
|
Forced decoding for translation
|
Hello,
Is there a flag in fairseq-cli to specify a prefix token for forced decoding? The [fairseq-generate](https://fairseq.readthedocs.io/en/latest/command_line_tools.html#fairseq-generate) documentation shows a flag to indicate the prefix size (*prefix-size*), but I haven't found how to indicate what the token(s) are. Also, looking at [sequence_generator.py](https://github.com/facebookresearch/fairseq/blob/main/fairseq/sequence_generator.py), there is code for handling prefix tokens, but I haven't seen how to specify them in either the code or via the fairseq-generate cli.
Thanks
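From reading the code (worth verifying): with `--prefix-size N`, fairseq-generate takes the first N tokens of each *reference target* in the binarized dataset as the forced prefix, so the tokens are supplied through the data rather than a separate flag. When driving fairseq from Python, the tensor can be passed explicitly; a sketch that assumes already-loaded `models`, `task`, `args`, and `sample` objects:
```python
import torch

# prefix_tokens: (batch, prefix_len) of target-dictionary indices; the
# language token "<2en>" is a hypothetical example of a forced first token.
prefix_tokens = torch.tensor([[task.target_dictionary.index("<2en>")]])

generator = task.build_generator(models, args)
hypos = task.inference_step(generator, models, sample, prefix_tokens=prefix_tokens)
```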
|
open
|
2022-10-03T23:15:45Z
|
2022-10-03T23:15:45Z
|
https://github.com/facebookresearch/fairseq/issues/4754
|
[
"question",
"needs triage"
] |
Pogayo
| 0
|
nalepae/pandarallel
|
pandas
| 244
|
Some workers stuck while others finish 100%
|
## General
- **Operating System**:
Centos 7
- **Python version**:
3.8
- **Pandas version**:
2.0.1
- **Pandarallel version**:
1.6.4
## Acknowledgement
- [x] My issue is **NOT** present when using `pandas` alone (without `pandarallel`)
- [ ] If I am on **Windows**, I read the [Troubleshooting page](https://nalepae.github.io/pandarallel/troubleshooting/)
before writing a new bug report
## Bug description
<img width="809" alt="image" src="https://github.com/nalepae/pandarallel/assets/12313888/4e12c9ea-a95b-4b55-b136-39d890a71058">
I started a parallel_apply program with 80 workers to decode and clean a large amount of data (about 50GB). After nearly 8 minutes most of them reached 100%, but some got stuck, and after 20 minutes the progress bar was still frozen.
```
pandarallel.initialize(nb_workers=os.cpu_count(), progress_bar=True)
df["text"] = df["text"].parallel_apply(decode_clean)
```
### Observed behavior
The progress bar freezes and CPU usage is 0%.
### Expected behavior
Progress should be nearly linear; according to the progress bar, the program should have finished after around 10 minutes.
## Minimal but working code sample to ease bug fix for `pandarallel` team
_Write here the minimal code sample to ease bug fix for `pandarallel` team_
|
closed
|
2023-06-12T13:36:37Z
|
2023-06-28T08:41:51Z
|
https://github.com/nalepae/pandarallel/issues/244
|
[] |
SysuJayce
| 5
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 531
|
DistributedLossWrapper always requires labels
|
It shouldn't require labels if `indices_tuple` is provided.
|
closed
|
2022-09-29T13:42:08Z
|
2023-01-17T01:26:39Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/531
|
[
"bug"
] |
KevinMusgrave
| 1
|
tatsu-lab/stanford_alpaca
|
deep-learning
| 210
|
Does this code still work when fine-tune with encoder-decoder (BLOOMZ or mT0) ?
|
I'm worried this code won't run when using pre-trained BLOOMZ or mT0 [https://github.com/bigscience-workshop/xmtf].
Has anyone fine-tuned these?
|
open
|
2023-04-14T02:30:27Z
|
2023-04-14T02:30:27Z
|
https://github.com/tatsu-lab/stanford_alpaca/issues/210
|
[] |
nqchieutb01
| 0
|
iMerica/dj-rest-auth
|
rest-api
| 542
|
Get JWT secret used for encoding
|
How can I get the secret being used by the library for encoding JWT tokens?
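For context, dj-rest-auth delegates JWT handling to djangorestframework-simplejwt, whose `SIGNING_KEY` setting defaults to Django's `SECRET_KEY`. A sketch of reading it (assumes a configured Django settings module):
```python
from rest_framework_simplejwt.settings import api_settings

# SIGNING_KEY falls back to django.conf.settings.SECRET_KEY unless overridden
# in the SIMPLE_JWT settings dict.
print(api_settings.SIGNING_KEY)
```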
|
closed
|
2023-09-01T13:15:25Z
|
2023-09-01T13:19:25Z
|
https://github.com/iMerica/dj-rest-auth/issues/542
|
[] |
legalimpurity
| 0
|
horovod/horovod
|
tensorflow
| 3,707
|
Reducescatter: Support ncclAvg op for averaging
|
Equivalently to #3646 for Allreduce
|
open
|
2022-09-20T12:17:35Z
|
2022-09-20T12:17:35Z
|
https://github.com/horovod/horovod/issues/3707
|
[
"enhancement"
] |
maxhgerlach
| 0
|
mars-project/mars
|
numpy
| 2,645
|
[BUG] Groupby().agg() returned a DataFrame with index even as_index=False
|
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Groupby().agg() returned a DataFrame with an index even when `as_index=False` was passed.
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version
2. The version of Mars you use
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
```
In [10]: def g(x):
...: return (x == '1').sum()
...:
In [11]: df = md.DataFrame({'a': ['1', '2', '3'], 'b': ['a1', 'a2', 'a1']})
In [12]: df.groupby('b', as_index=False)['a'].agg((g,)).execute()
/Users/qinxuye/Workspace/mars/mars/deploy/oscar/session.py:1932: UserWarning:
Out[12]:
g
b
a1 1
a2 0
```
|
closed
|
2022-01-21T07:58:47Z
|
2022-01-21T09:26:23Z
|
https://github.com/mars-project/mars/issues/2645
|
[
"type: bug",
"reso: invalid",
"mod: dataframe"
] |
qinxuye
| 1
|
graphql-python/graphene-mongo
|
graphql
| 24
|
Types are unaware of parent class attributes defined on child model
|
First of all, thank you for writing this library. I've been wanting to try GraphQL out with my current project but didn't want to have to create an entire new backend application from scratch. I can reuse my existing models thanks to this library, way cool 👍 🥇
Now for my issue...
I have a parent/child relationship defined like this:
```
from mongoengine import Document, StringField
class Parent(Document):
bar = StringField()
class Child(Parent):
baz = StringField()
```
When I defined my schema and attempt to query against the `Child` model, it says `Unknown argument "bar" on field "child" of type "Query"`
My query:
```
{
child(bar:"a valid value") {
edges {
node {
bar
baz
}
}
}
}
```
```
import graphene
from graphene.relay import Node
from graphene_mongo import MongoengineConnectionField, MongoengineObjectType
from app.models import Child as ChildModel
class Child(MongoengineObjectType):
class Meta:
model = ChildModel
interfaces = (Node,)
class Query(graphene.ObjectType):
node = Node.Field()
child = MongoengineConnectionField(Child)
schema = graphene.Schema(query=Query, types=[Child])
```
I may just be misusing the library, or perhaps this is a feature that isn't implemented yet. If the feature hasn't been implemented yet I am up for taking a stab at it. Is there a way for my schema to infer the parent's attributes based on how I define them like the above example? Thank you again!
|
closed
|
2018-04-01T13:42:14Z
|
2018-04-02T13:59:45Z
|
https://github.com/graphql-python/graphene-mongo/issues/24
|
[] |
msholty-fd
| 1
|
iterative/dvc
|
machine-learning
| 10,064
|
dvc pull: failed to load directory when first failed s3 connection
|
# Bug Report
<!--
## Issue name
Issue names must follow the pattern `command: description` where the command is the dvc command that you are trying to run. The description should describe the consequence of the bug.
Example: `repro: doesn't detect input changes`
-->
## Description
<!--
A clear and concise description of what the bug is.
-->
The command `dvc pull` consistently fails with the error message "failed to load directory" when there was a previous occurrence of "failed to connect to s3". This issue persists even after fixing the s3 credentials.
### Reproduce
<!--
Step list of how to reproduce the bug
-->
#### Reset DVC at the initial step.
```bash
$ rm -rf .dvc
$ git checkout .
Updated 2 paths from the index
```
#### Move credentials to provoke failed s3 connection
```bash
$ mv ~/.aws/credentials{,.tmp}
$ dvc pull
Collecting |25.0 [00:00, 36.8entry/s]
ERROR: failed to connect to s3 (XXX/files/md5) - The config profile (YYY) could not be found
ERROR: failed to pull data from the cloud - 25 files failed to download
```
#### Restore credentials to resolve s3 connection
```bash
$ mv ~/.aws/credentials{.tmp,}
```
#### Reproduce the Bug
```bash
$ dvc pull
Collecting |0.00 [00:00, ?entry/s]
Fetching
ERROR: unexpected error - failed to load directory ('d6', '38d9367bc2b169fb89b59f19e2844f.dir'): [Errno 2] No such file or directory: '/ZZZ/.dvc/cache/files/md5/d6/38d9367bc2b169fb89b59f19e2844f.dir'
```
#### Workaround
```bash
$ rm -rf .dvc/tmp
$ dvc pull
Collecting |1.56k [00:07, 221entry/s]
Fetching
|Fetching from s3 63/130 [00:01<00:00, 78.26file/s]
```
<!--
Example:
1. dvc init
2. Copy dataset.zip to the directory
3. dvc add dataset.zip
4. dvc run -d dataset.zip -o model ./train.sh
5. modify dataset.zip
6. dvc repro
-->
### Expected
<!--
A clear and concise description of what you expect to happen.
-->
`dvc pull` should work without removing ` .dvc/tmp`!
### Environment information
<!--
This is required to ensure that we can reproduce the bug.
-->
**Output of `dvc doctor`:**
```console
DVC version: 3.28.0 (pip)
-------------------------
Platform: Python 3.10.9 on Linux-4.18.0-372.70.1.1.el8_6.x86_64-x86_64-with-glibc2.28
Subprojects:
dvc_data = 2.20.0
dvc_objects = 1.1.0
dvc_render = 0.5.3
dvc_task = 0.3.0
scmrepo = 1.4.0
Supports:
http (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
s3 (s3fs = 2023.6.0, boto3 = 1.26.76)
Config:
Global: /YYY/.config/dvc
System: /etc/xdg/dvc
Cache types: hardlink, symlink
Cache directory: lustre on XXX
Caches: local
Remotes: s3
Workspace directory: lustre on XXX
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/4372a7cb7af0fda33045046f65b86013
```
## Notes
Maybe related to #10030 ?
<!--
Please check https://github.com/iterative/dvc/wiki/Debugging-DVC on ways to gather more information regarding the issue.
If applicable, please also provide a `--verbose` output of the command, eg: `dvc add --verbose`.
If the issue is regarding the performance, please attach the profiling information and the benchmark comparisons.
-->
|
closed
|
2023-11-03T10:33:57Z
|
2023-12-15T13:36:31Z
|
https://github.com/iterative/dvc/issues/10064
|
[
"awaiting response"
] |
fguiotte
| 3
|
man-group/notebooker
|
jupyter
| 134
|
Add option to pass scheduled cron time to the notebook
|
Being able to read the scheduled cron time from the notebook would improve the use case of using Notebooker as a tool for generating periodic reports. That time might also need to be preserved if the same report is re-run.
|
open
|
2023-02-02T20:41:31Z
|
2023-10-11T15:34:27Z
|
https://github.com/man-group/notebooker/issues/134
|
[] |
marcinapostoluk
| 1
|
ivy-llc/ivy
|
numpy
| 28,764
|
Fix Frontend Failing Test: tensorflow - pooling_functions.torch.nn.functional.max_pool2d
|
closed
|
2024-06-15T20:44:58Z
|
2024-07-15T02:29:34Z
|
https://github.com/ivy-llc/ivy/issues/28764
|
[
"Sub Task"
] |
nicolasb0
| 0
|
|
waditu/tushare
|
pandas
| 1,526
|
The stock list API does not state the credit points required for the request
|
https://tushare.pro/document/2?doc_id=94
The stock list request returns a "no permission" error, but the documentation does not clearly state how many credit points are required.
|
open
|
2021-03-23T06:04:20Z
|
2021-03-23T06:04:20Z
|
https://github.com/waditu/tushare/issues/1526
|
[] |
mestarshine
| 0
|
microsoft/nni
|
pytorch
| 5,309
|
inputs is empty!
|
**Describe the bug**:
inputs is empty! As shown in the figure:

node-----> name: .aten::mul.146, type: func, op_type: aten::mul, sub_nodes: ['_aten::mul'], inputs: ['logvar'], outputs: ['809'], aux: None
node-----> name: .aten::exp.147, type: func, op_type: aten::exp, sub_nodes: ['_aten::exp'], inputs: ['809'], outputs: ['std'], aux: None
node-----> name: .aten::randn_like.148, type: func, op_type: aten::randn_like, sub_nodes: ['_aten::randn_like'], inputs: ['mu'], outputs: ['eps'], aux: None
node-----> name: .aten::mul.149, type: func, op_type: aten::mul, sub_nodes: ['_aten::mul'], inputs: ['eps', 'std'], outputs: ['817'], aux: None
node-----> name: .aten::add.150, type: func, op_type: aten::add, sub_nodes: ['_aten::add'], inputs: ['817', 'mu'], outputs: ['819'], aux: None
node-----> name: .aten::unsqueeze.151, type: func, op_type: aten::unsqueeze, sub_nodes: ['_aten::unsqueeze'], inputs: ['819'], outputs: ['821'], aux: None
node-----> name: .aten::unsqueeze.152, type: func, op_type: aten::unsqueeze, sub_nodes: ['_aten::unsqueeze'], inputs: ['821'], outputs: ['z.1'], aux: None
node-----> name: .aten::repeat.153, type: func, op_type: aten::repeat, sub_nodes: ['_aten::repeat', '_prim::ListConstruct'], inputs: ['z.1'], outputs: ['z'], aux: None
node-----> name: .aten::cat.154, type: func, op_type: aten::cat, sub_nodes: ['_aten::cat', '_prim::ListConstruct'], inputs: ['z', 'x_stereo'], outputs: ['input.9'], aux: {'out_shape': [2, 322, 276, 513], 'cat_dim': 1, 'in_order': ['.aten::repeat.153', '.aten::transpose.115'], 'in_shape': [[2, 320, 276, 513], [2, 2, 276, 513]]}
node-----> name: .aten::relu.155, type: func, op_type: aten::relu, sub_nodes: ['_aten::relu'], inputs: ['input.82'], outputs: ['input_tensor'], aux: None
node-----> name: .aten::size.156, type: func, op_type: aten::size, sub_nodes: ['_aten::size'], inputs: ['input_tensor'], outputs: ['1812'], aux: None
node-----> name: .aten::Int.157, type: func, op_type: aten::Int, sub_nodes: ['_aten::Int', '_prim::NumToTensor'], inputs: ['1812'], outputs: ['2374'], aux: None
node-----> name: .aten::size.158, type: func, op_type: aten::size, sub_nodes: ['_aten::size'], inputs: ['input_tensor'], outputs: ['1818'], aux: None
node-----> name: .aten::Int.159, type: func, op_type: aten::Int, sub_nodes: ['_aten::Int', '_prim::NumToTensor'], inputs: ['1818'], outputs: ['2125'], aux: None
node-----> name: .aten::Int.160, type: func, op_type: aten::Int, sub_nodes: ['_aten::Int', '_prim::NumToTensor'], inputs: ['1818'], outputs: ['2119'], aux: None
node-----> name: .aten::size.161, type: func, op_type: aten::size, sub_nodes: ['_aten::size'], inputs: ['input_tensor'], outputs: ['1821'], aux: None
node-----> name: .aten::Int.162, type: func, op_type: aten::Int, sub_nodes: ['_aten::Int', '_prim::NumToTensor'], inputs: ['1821'], outputs: ['2126'], aux: None
node-----> name: .aten::Int.163, type: func, op_type: aten::Int, sub_nodes: ['_aten::Int', '_prim::NumToTensor'], inputs: ['1821'], outputs: ['2120'], aux: None
node-----> name: .aten::slice.164, type: func, op_type: aten::slice, sub_nodes: ['_aten::slice'], inputs: ['input_tensor'], outputs: ['1827'], aux: None
node-----> name: .aten::slice.166, type: func, op_type: aten::slice, sub_nodes: ['_aten::slice'], inputs: ['1827'], outputs: ['1839'], aux: None
node-----> name: .aten::slice.167, type: func, op_type: aten::slice, sub_nodes: ['_aten::slice'], inputs: ['1839'], outputs: ['1844'], aux: None
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): remote
- Python version: 3.8.13
- PyTorch version: 1.8.0
- Cpu or cuda version: cuda111
**Reproduce the problem**
- Code|Example:
Based on where I've traced it, the mistake should be here:
```python
def feature_maps_to_wav(
    self,
    input_tensor: torch.Tensor,
    cos_in: torch.Tensor,
    sin_in: torch.Tensor,
    cos_c: torch.Tensor,
    sin_c: torch.Tensor,
    audio_length: int,
) -> torch.Tensor:
    r"""Convert feature maps to waveform.

    Outputs:
        waveform: (batch_size, output_channels, segment_samples)
    """
    batch_size, _, time_steps, freq_bins = input_tensor.shape

    # split the 6 magnitude channels
    l_mag = input_tensor[:, [0], :, :]
    r_mag = input_tensor[:, [1], :, :]
    c_mag = input_tensor[:, [2], :, :]
    lfe_mag = input_tensor[:, [3], :, :]
    ls_mag = input_tensor[:, [4], :, :]
    rs_mag = input_tensor[:, [5], :, :]

    lls_cos_in = cos_in[:, 0:1, :, :]
    rrs_cos_in = cos_in[:, 1:2, :, :]
    lls_sin_in = sin_in[:, 0:1, :, :]
    rrs_sin_in = sin_in[:, 1:2, :, :]

    real = torch.cat(
        (l_mag * lls_cos_in,
         r_mag * rrs_cos_in,
         c_mag * cos_c,
         lfe_mag * cos_c,
         ls_mag * lls_cos_in,
         rs_mag * rrs_cos_in),
        dim=1)
    imag = torch.cat(
        (l_mag * lls_sin_in,
         r_mag * rrs_sin_in,
         c_mag * sin_c,
         lfe_mag * sin_c,
         ls_mag * lls_sin_in,
         rs_mag * rrs_sin_in),
        dim=1)

    # NOTE: the next two lines overwrite real/imag above, and clfe_mag,
    # lls_mag, rrs_mag are undefined in the snippet as posted
    real = torch.cat((clfe_mag * cos_c, lls_mag * lls_cos_in, rrs_mag * rrs_cos_in), dim=1)
    imag = torch.cat((clfe_mag * sin_c, lls_mag * lls_sin_in, rrs_mag * rrs_sin_in), dim=1)

    real = real.reshape((-1, 1, time_steps, freq_bins))
    imag = imag.reshape((-1, 1, time_steps, freq_bins))

    # ISTFT, then reshape back to (batch_size, channels, samples)
    x = self.istft(real, imag, audio_length)
    waveform = x.reshape(batch_size, -1, audio_length)

    return waveform
```
- How to reproduce: I hope you can help me solve this problem, thanks!
https://github.com/microsoft/nni/issues/5309#tasklist-block-ed4e9d7e-b082-416c-97ab-bcff3aa3c51e
|
closed
|
2023-01-05T10:33:19Z
|
2023-01-06T02:19:42Z
|
https://github.com/microsoft/nni/issues/5309
|
[] |
Blakey-Gavin
| 0
|
aleju/imgaug
|
machine-learning
| 849
|
Adding BlendAlphaSimplexNoise into an augmentation sequence fails to convert keypoints
|
Imgaug 0.4.0
Python 3.10
`iaa.BlendAlphaSimplexNoise` seems to cause problems when converting keypoints.
I have created an sequence of augmentations:
```python
seq = iaa.Sequential([
iaa.Affine(rotate=(-25, 25)),
iaa.AllChannelsCLAHE(clip_limit=(1, 3), tile_grid_size_px=(10, 25)),
iaa.BlendAlphaSimplexNoise(iaa.Multiply(iap.Uniform(0.7, 1.3), per_channel=True), size_px_max=(2, 16), upscale_method="nearest")
# iaa.BlendAlphaFrequencyNoise(foreground=iaa.Multiply(iap.Choice([0.8, 1.2]), per_channel=True))
], random_order=False)
```
When I try to augment image and the corresponding keypoints with:
```python
image_aug, kps_aug = seq(image=image, keypoints=kps_oi)
```
I get the error:
```python
File ~/anaconda3/envs/dlc239-gui/lib/python3.10/site-packages/imgaug/augmenters/blend.py:757, in BlendAlphaMask._blend_coordinates(cls, cbaoi, cbaoi_fg, cbaoi_bg, mask_image, mode)
755 subgen = zip(coords, coords_fg, coords_bg)
756 for coord, coord_fg, coord_bg in subgen:
--> 757 x_int = int(np.round(coord[0]))
758 y_int = int(np.round(coord[1]))
759 if 0 <= y_int < h_img and 0 <= x_int < w_img:
ValueError: cannot convert float NaN to integer
```
My keypoints include some NaN values (as a side note).
If I remove `iaa.BlendAlphaSimplexNoise` specifically, there is no error. For example, if I use `iaa.BlendAlphaFrequencyNoise` instead, there is also no error.
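As a stopgap (a sketch, not a fix of the library itself), dropping the NaN keypoints before augmentation avoids the crash; the names below reuse `seq`, `image`, and `kps_oi` from the snippets above:
```python
import numpy as np
from imgaug.augmentables.kps import KeypointsOnImage

# keep only keypoints with finite coordinates before augmenting
valid_kps = [kp for kp in kps_oi.keypoints
             if np.isfinite(kp.x) and np.isfinite(kp.y)]
kps_clean = KeypointsOnImage(valid_kps, shape=kps_oi.shape)

image_aug, kps_aug = seq(image=image, keypoints=kps_clean)
```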
|
open
|
2024-05-03T12:39:05Z
|
2024-05-03T12:39:05Z
|
https://github.com/aleju/imgaug/issues/849
|
[] |
vonaviv
| 0
|
alteryx/featuretools
|
data-science
| 2,047
|
Investigate and resolve warnings related to pandas
|
- Our unit tests currently output these warnings.
- We should determine the cause of these warnings, and resolve them.
|
closed
|
2022-04-29T20:47:16Z
|
2022-05-13T21:24:26Z
|
https://github.com/alteryx/featuretools/issues/2047
|
[] |
gsheni
| 1
|
google-research/bert
|
nlp
| 372
|
Is predicting two to three masked words in the same sentence very difficult?
|
Predicting two to three masked words in the same sentence is also very difficult.
How can I get good accuracy?
If I pre-train the BERT model on my own dataset with **masked_lm_prob=0.25** (https://github.com/google-research/bert#pre-training-with-bert), what will happen?
Thanks.
|
open
|
2019-01-18T05:48:06Z
|
2019-02-11T07:10:39Z
|
https://github.com/google-research/bert/issues/372
|
[] |
MuruganR96
| 1
|
JaidedAI/EasyOCR
|
deep-learning
| 992
|
EasyOCR fails when the picture is very tall, e.g. a long WeChat screenshot made up of several smaller screenshots
|
open
|
2023-04-17T04:07:42Z
|
2023-04-17T04:07:42Z
|
https://github.com/JaidedAI/EasyOCR/issues/992
|
[] |
crazyn2
| 0
|
|
Yorko/mlcourse.ai
|
plotly
| 406
|
Topic 4. Part 4: Some modules are imported at the beginning, but are not used further in the text
|

It seems that _TfidfTransformer, TfidfVectorizer, LinearSVC_ were forgotten to be removed when preparing the final [article](https://mlcourse.ai/notebooks/blob/master/jupyter_english/topic04_linear_models/topic4_linear_models_part4_good_bad_logit_movie_reviews_XOR.ipynb). If they are left on purpose, it seems that it is worth adding a few words about them in the text.
|
closed
|
2018-11-05T20:57:44Z
|
2018-11-10T16:18:38Z
|
https://github.com/Yorko/mlcourse.ai/issues/406
|
[
"minor_fix"
] |
ptaiga
| 1
|
taverntesting/tavern
|
pytest
| 505
|
Question: Any experience with using mocking with Tavern?
|
I would like to get a head start on automating a new API by mocking requests/responses while using Tavern and pytest.
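For what it's worth, since Tavern drives `requests` under the hood, one possible (untested) sketch is to activate the `responses` library from an autouse pytest fixture so the stages hit mocked endpoints — the URL and payload below are placeholders:
```python
# conftest.py — hypothetical sketch, not an official Tavern feature
import pytest
import responses

@pytest.fixture(autouse=True)
def mocked_api():
    with responses.RequestsMock(assert_all_requests_are_fired=False) as rsps:
        rsps.add(responses.GET, "http://api.example.local/health",
                 json={"status": "ok"}, status=200)
        yield rsps
```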
|
closed
|
2020-01-06T20:18:37Z
|
2020-01-13T18:27:48Z
|
https://github.com/taverntesting/tavern/issues/505
|
[] |
pmneve
| 2
|
horovod/horovod
|
pytorch
| 4,110
|
[+[!𝐅𝐔𝐋𝐋 𝐕𝐈𝐃𝐄𝐎𝐒!]+]Sophie Rain Spiderman Video Original Video Link Sophie Rain Spiderman Video Viral On Social Media X Trending Now
|
20 seconds ago
L𝚎aked Video Sophie Rain Spiderman Original Video Viral Video L𝚎aked on X Twitter Telegram
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶](https://usnews-daily.com/free-watch/)
[🔴 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🌐==►► 𝖣𝗈𝗐𝗇𝗅𝗈𝖺𝖽 𝖭𝗈𝗐](https://usnews-daily.com/free-watch/?t)
<a href="https://usnews-daily.com/free-watch/?y" rel="nofollow" data-target="animated-image.originalLink"><img src="https://i.imgur.com/vN3eWE7.png"></a>
[-wATCH-]— Sophie Rain Spiderman Video Original Video Link Sophie Rain Spiderman Video Viral On Social Media X Trending Now
[-wATCH-]— Sophie Rain Spiderman ʟᴇᴀᴋᴇᴅ Video ᴠɪʀᴀʟ On Social Media ˣ ᵀʷⁱᵗᵗᵉʳ
Sophie Rain Spiderman Original Video video took the internet by storm and amazed viewers on various social media platforms. Sophie Rain Spiderman, a young and talented digital creator, recently became famous thanks to this interesting video.
L𝚎aked Video Sophie Rain Spiderman Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Original Video video oficial twitter
|
closed
|
2024-11-17T17:23:26Z
|
2024-11-20T12:23:49Z
|
https://github.com/horovod/horovod/issues/4110
|
[] |
ghost
| 1
|
dynaconf/dynaconf
|
fastapi
| 823
|
[bug] Validator cast happens before must_exist check
|
**Describe the bug**
The `Validator` `cast` function call happens before the `must_exist` check, which doesn't make sense, since you first want to ensure that the value has been provided and only then cast it into a different object.
**To Reproduce**
Steps to reproduce the behavior:
Having the following app code:
<details>
<summary> Code </summary>
**/path/src/app.py**
```python3
from dynaconf import Dynaconf, Validator
from pathlib import Path
settings = Dynaconf(
validators=[
Validator("java_bin", must_exist=True, cast=Path)
]
)
settings.validators.validate()
```
</details>
**Expected behavior**
The validator should have raised the exception related to the `must_exist` check first.
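A possible workaround until the ordering is fixed (an untested sketch): split the presence check and the cast into two validators so the `must_exist` check is registered first:
```python
from pathlib import Path
from dynaconf import Dynaconf, Validator

settings = Dynaconf(
    validators=[
        Validator("java_bin", must_exist=True),  # presence check first
        Validator("java_bin", cast=Path),        # cast only after it exists
    ]
)
settings.validators.validate()
```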
**Environment (please complete the following information):**
- OS: Ubuntu 20.04
- Dynaconf Version 3.1.11
**Additional context**
Would it be possible to document these additional `Validator` arguments:
- `cast`
- `condition`
in the [documentation](https://www.dynaconf.com/validation/)?
I almost passed on Dynaconf today; I only found them by luck.
Thanks for Dynaconf, I ❤️ it !
|
closed
|
2022-10-28T15:16:59Z
|
2023-03-02T13:29:40Z
|
https://github.com/dynaconf/dynaconf/issues/823
|
[
"bug"
] |
Wenzel
| 0
|
SciTools/cartopy
|
matplotlib
| 2,311
|
Chart Server dependency
|
The Google chart server was deprecated in 2012, and the notice that it would be turned down went out in 2019:
https://groups.google.com/g/google-chart-api/c/rZtHTyYgyXI
The servers can be gone at any moment at this point.
These cartopy tests still seem to rely on the chart server API:
https://github.com/SciTools/cartopy/blob/da6a8c521f614abea4d16e659b3c87ec80025a66/lib/cartopy/tests/test_img_tiles.py#L39
Could we remove the chart server dependencies? What is the best path forward? How deeply does cartopy depend on the chart server API?
|
open
|
2024-01-08T22:09:19Z
|
2024-01-09T00:18:47Z
|
https://github.com/SciTools/cartopy/issues/2311
|
[] |
rainwoodman
| 1
|
ageitgey/face_recognition
|
python
| 1,247
|
Different domains
|
Hi guys, if I want to use images coming from different domains (e.g., a webcam, a professional camera, or identification cards) with different color characteristics, what is the best way to normalize the images? Does face_encodings perform any normalization itself? Or would it be better to apply an early normalization step first?
|
open
|
2020-11-21T10:24:19Z
|
2020-11-21T10:24:19Z
|
https://github.com/ageitgey/face_recognition/issues/1247
|
[] |
MarioProjects
| 0
|
fastapi/fastapi
|
pydantic
| 12,055
|
Why can't the key of the returned value start with “_sa”?
|
### Privileged issue
- [X] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
```
import uvicorn
from fastapi import FastAPI
app = FastAPI()
@app.get("/")
def root():
return {"_sa": "Hello World", "status": "OK"}
if __name__ == '__main__':
uvicorn.run(app, host="0.0.0.0", port=8000)
```
The result of the above code is:
```
{
"status": "OK"
}
```
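The filtering comes from FastAPI's `jsonable_encoder`, which runs with `sqlalchemy_safe=True` by default and drops dict keys starting with `_sa` (so SQLAlchemy instance state is never serialized). A sketch of a bypass, if such keys are genuinely needed, is to skip the encoder by returning a response directly:
```python
from fastapi import FastAPI
from fastapi.responses import JSONResponse

app = FastAPI()

@app.get("/")
def root():
    # JSONResponse bypasses jsonable_encoder, so "_sa" keys survive
    return JSONResponse({"_sa": "Hello World", "status": "OK"})
```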
|
closed
|
2024-08-21T23:42:20Z
|
2024-08-22T13:54:53Z
|
https://github.com/fastapi/fastapi/issues/12055
|
[] |
leafotto
| 2
|
roboflow/supervision
|
pytorch
| 1,052
|
DetectionDataset.from_yolo bad conversion with autodistill_grounded_sam DetectionDataset object
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
The sv.DetectionDataset.from_yolo function behaves abnormally when processing DetectionDataset objects from autodistill_grounded_sam.
When I use sv.DetectionDataset.from_yolo on a dataset generated via `base_model.label` (base_model being GroundedSAM), I get a different number of detections than in the object returned by `base_model.label`, even though this is supposed to be a pure conversion.
Note that I ran the same test with a GroundingDINO base_model and did not encounter the problem.
The number of detections returned can be lower or higher than the original detections (900 at most in my experience, with a confidence of 0.00).
### Environment
Supervision = 0.19.0
### Minimal Reproducible Example
```py
from autodistill_grounded_sam import GroundedSAM
from autodistill.detection import CaptionOntology
from pathlib import Path
import supervision as sv
```
```py
base_model = GroundedSAM(
ontology=CaptionOntology(
{
"screen": "a computer screen",
}
),
box_threshold = 0.00
)
```
```py
# Put the cat image on your input directory
# Put your input directory path
input_dir = "/home/ggiret/Téléchargements/chat"
output_dir = "test/"
```
```py
results = base_model.label(
input_folder=input_dir,
extension=".png",
output_folder=output_dir,
record_confidence=True)
```
```py
# Put the correct image name if the name changed
len(results.annotations['images.png'].class_id)
```
> 900
```py
sv_dataset = sv.DetectionDataset.from_yolo(
images_directory_path=Path(output_dir).joinpath("images"),
annotations_directory_path=Path(output_dir).joinpath("annotations"),
data_yaml_path=Path(output_dir).joinpath("data.yaml"))
```
```py
# Put the correct image name if the name changed
len(sv_dataset.annotations['test/images/images.jpg'].class_id)
```
> 1100
### Additional

This is the image I used that produced the 1100 class_id result.
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
closed
|
2024-03-26T13:59:20Z
|
2024-04-02T12:40:16Z
|
https://github.com/roboflow/supervision/issues/1052
|
[
"bug"
] |
Youho99
| 7
|
ymcui/Chinese-BERT-wwm
|
nlp
| 132
|
Pre-training Wikipedia: Traditional/Simplified Chinese
|
Hello:
Thank you for providing the pre-trained models. May I ask whether the Chinese Wikipedia used for pre-training BERT-wwm was Simplified Chinese, Traditional Chinese, or both?
|
closed
|
2020-07-23T08:16:17Z
|
2020-07-23T11:12:37Z
|
https://github.com/ymcui/Chinese-BERT-wwm/issues/132
|
[] |
d223302
| 1
|
Evil0ctal/Douyin_TikTok_Download_API
|
api
| 161
|
[BUG] The Douyin API seems to have changed
|
The Douyin API seems to have changed.
|
closed
|
2023-02-27T09:27:49Z
|
2024-08-23T05:25:17Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/161
|
[
"BUG",
"Fixed"
] |
jw-star
| 29
|
strawberry-graphql/strawberry
|
graphql
| 3,119
|
FastAPI GraphQL: Unknown type 'Upload'
|
<!--- Provide a general summary of the changes you want in the title above. -->
I'm getting an error `Unknown type 'Upload'` when calling my file upload API. Have I missed something? Or is it a bug?
Most of this code is based off the [Strawberry file upload guide](https://strawberry.rocks/docs/guides/file-upload)
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
Here's what I've done so far:
Javascript:
```js
const query = `
mutation($files: [Upload!]!) {
uploadFiles(files: $files, projectId: ${projectId})
}`;
const formData = new FormData();
// We're only testing a single file for now
formData.append('map', JSON.stringify({ 0: ['variables.files.0'] }));
const filesVariable = [];
for (let i = 0; i < files.length; i++) {
filesVariable.push(null);
}
formData.append('operations', JSON.stringify({ query, variables: { 'files': filesVariable } }).replace('\n', ''));
files.forEach((file, index) => {
formData.append(index.toString(), file);
});
const response = await this.api.post(
'/obsidian-graphql',
formData
)
```
Python backend:
```py
import strawberry
from strawberry.fastapi import GraphQLRouter
from strawberry.exceptions import StrawberryGraphQLError
from strawberry.types import Info
from strawberry.file_uploads import Upload
# other imports are omitted
@strawberry.type
class Mutation:
@strawberry.mutation
async def upload_files(self, files: list[Upload], project_id: int) -> None:
# TODO: Add business logic
for file in files:
assert validate_file_type(file.filename)
return None
schema = strawberry.Schema(Query)
obsidian_router_ql = GraphQLRouter(schema, context_getter=get_context)
```
And the error:
```
2023-09-25 06:40:21,872:ERROR - Unknown type 'Upload'.
GraphQL request:2:27
1 |
2 | mutation($files: [Upload!]!) {
| ^
3 | uploadFiles(files: $files, projectId: 31)
```
It would be great to understand what's going wrong.
Sincerely,
Aiden.
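One detail in the snippet may explain this (it isn't confirmed in the thread): `Mutation` is never passed to `strawberry.Schema`, so the `Upload` scalar used by `upload_files` never enters the schema, which would produce exactly `Unknown type 'Upload'`. A minimal sketch of the registration:
```python
schema = strawberry.Schema(query=Query, mutation=Mutation)
```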
|
closed
|
2023-09-24T20:52:14Z
|
2025-03-20T15:56:23Z
|
https://github.com/strawberry-graphql/strawberry/issues/3119
|
[] |
SquarerFive
| 2
|
simple-login/app
|
flask
| 2,292
|
None of the SL domains are accepted on www.studystream.live
|
Please note that this is only for bug report.
For help on your account, please reach out to us at hi[at]simplelogin.io. Please make sure to check out [our FAQ](https://simplelogin.io/faq/) that contains frequently asked questions.
For feature request, you can use our [forum](https://github.com/simple-login/app/discussions/categories/feature-request).
For self-hosted question/issue, please ask in [self-hosted forum](https://github.com/simple-login/app/discussions/categories/self-hosting-question)
## Prerequisites
- [ ] I have searched open and closed issues to make sure that the bug has not yet been reported.
## Bug report
**Describe the bug**
A clear and concise description of what the bug is.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (If applicable):**
- OS: Linux, Mac, Windows
- Browser: Firefox, Chrome, Brave, Safari
- Version [e.g. 78]
**Additional context**
Add any other context about the problem here.
|
open
|
2024-10-26T11:50:45Z
|
2024-10-26T11:50:45Z
|
https://github.com/simple-login/app/issues/2292
|
[] |
homeostashish
| 0
|
whitphx/streamlit-webrtc
|
streamlit
| 1,254
|
Webcam Stream not showing up
|
First of all, thank you very much for your really great work and contribution!!!
I currently need help accessing some of your demos and running my own App on the Streamlit Cloud.
When accessing the demos and starting the webcam (after granting access) it looks like this :
<img width="743" alt="image" src="https://github.com/whitphx/streamlit-webrtc/assets/20643017/26e56afc-ed19-46d0-8f95-55d2a8b98cde">
or
<img width="758" alt="image" src="https://github.com/whitphx/streamlit-webrtc/assets/20643017/de98323d-2543-41e5-b3cf-5eb9512253eb">
I have written a short sample app:
```python
import streamlit as st
from streamlit_webrtc import webrtc_streamer, WebRtcMode
# Streamlit app
st.title("DEMO APP")
# Instantiate WebRTC (and show start button)
ctx = webrtc_streamer(
key="FaceIDAppDemo",
mode=WebRtcMode.SENDONLY,
media_stream_constraints={"video": True, "audio": False},
rtc_configuration={"iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}]},
video_receiver_size=1,
async_processing=True,
)
# Live Stream Display
image_loc = st.empty()
if ctx.video_receiver:
while True:
try:
frame = ctx.video_receiver.get_frame(timeout=1)
img = frame.to_ndarray(format="rgb24")
except:
continue
# Display Live Frame
image_loc.image(img)
```
, which is running fine locally. But on Streamlit Cloud, after I press start, it seems to get a connection to the webcam (the text turns to "Stop"), but after a few seconds the text turns back to "Start". In the logs, my app got stuck at "Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False."
Do you have any suggestions on what I could have done wrong here? Or where the error is?
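A common cause of this connect-then-drop behavior on hosted environments is that a STUN-only `rtc_configuration` cannot traverse some NATs; a sketch with a TURN relay added (the server URL and credentials below are placeholders, not a real service):
```python
rtc_configuration = {
    "iceServers": [
        {"urls": ["stun:stun.l.google.com:19302"]},
        {
            # placeholder TURN relay — substitute a real server and credentials
            "urls": ["turn:turn.example.com:3478"],
            "username": "user",
            "credential": "secret",
        },
    ]
}
```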
Thank you,
Cheers,
Martlgap
|
closed
|
2023-05-10T20:38:48Z
|
2024-05-05T04:59:15Z
|
https://github.com/whitphx/streamlit-webrtc/issues/1254
|
[] |
Martlgap
| 2
|
yunjey/pytorch-tutorial
|
pytorch
| 185
|
there was a problem in language model
|
Hello, when I try to run pytorch-tutorial/tutorials/02-intermediate/language_model/main.py I get some errors. Firstly, at `ids = corpus.get_data('data/train.txt', batch_size)` it reports that shape '[20, -1]' is invalid for an input of size 929589. Is there anything wrong with train.txt? Secondly, sample.txt is not contained in the 'data' directory. Hope you can fix it, thanks!
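On the first point, the reshape fails because 929,589 tokens is not evenly divisible by the batch size of 20 (20 × 46,479 = 929,580, remainder 9). A sketch of the usual fix is to truncate before the `view` call:
```python
import torch

def to_batches(ids: torch.Tensor, batch_size: int) -> torch.Tensor:
    num_columns = ids.numel() // batch_size
    ids = ids[: num_columns * batch_size]  # drop the 9 leftover tokens
    return ids.view(batch_size, -1)        # now the [20, -1] reshape succeeds
```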
|
open
|
2019-07-27T09:35:45Z
|
2019-12-17T22:47:25Z
|
https://github.com/yunjey/pytorch-tutorial/issues/185
|
[] |
TobeyLi
| 1
|
serengil/deepface
|
machine-learning
| 465
|
OS error occurs when running DeepFace.stream()
|
Hello, I am completely new to deepface, and I recently ran into the error shown below when running DeepFace.stream(). I tried reinstalling the deepface, keras, and h5py packages, but the problem still occurs. Any solutions for this?
This is what I ran:

This is the error,

|
closed
|
2022-04-26T22:45:21Z
|
2022-05-01T08:30:54Z
|
https://github.com/serengil/deepface/issues/465
|
[
"dependencies"
] |
thushaltk
| 2
|
biolab/orange3
|
numpy
| 6,089
|
Group by: change categorical default
|
Currently, the default aggregation method for categorical variables is "Concatenate", which is quite a strange default and not very useful in general for categorical data. I propose not having any aggregation by default (None) or, alternatively, setting it to Mode.
On that note, *Mode* is currently unavailable for categorical data, which is a shame. It would be fantastic to take the majority value for aggregation.
|
closed
|
2022-08-04T08:29:16Z
|
2022-08-04T09:08:02Z
|
https://github.com/biolab/orange3/issues/6089
|
[
"bug report"
] |
ajdapretnar
| 1
|
guohongze/adminset
|
django
| 96
|
Continuous delivery
|
For continuous delivery, does releasing code support version rollback?
|
open
|
2019-02-21T07:00:08Z
|
2019-02-22T03:49:30Z
|
https://github.com/guohongze/adminset/issues/96
|
[] |
frank0826
| 2
|
vllm-project/vllm
|
pytorch
| 15,056
|
[Bug]: [Minor] Forking happens after the creation of tokenizer
|
### Your current environment
When testing prefix caching on TPU, we got the following in the log:
```text
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
…
```
Command to reproduce:
```text
VLLM_USE_V1=1 python benchmark_prefix_caching.py --model meta-llama/Llama-3.1-8B-Instruct --dataset-path ~/data/ShareGPT_V3_unfiltered_cleaned_split.json --enable-prefix-caching --num-prompts 20 --repeat-count 5 --input-length-range 128:256 --gpu-memory-utilization 0.95 --max-model-len 2048
```
<details>
<summary>The output of `python collect_env.py`</summary>
```text
Collecting environment information...
PyTorch version: 2.7.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1015-gcp-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 180
On-line CPU(s) list: 0-179
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9B14
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 90
Socket(s): 2
Stepping: 1
BogoMIPS: 5199.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 5.6 MiB (180 instances)
L1i cache: 5.6 MiB (180 instances)
L2 cache: 180 MiB (180 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-89
NUMA node1 CPU(s): 90-179
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pyzmq==26.3.0
[pip3] torch==2.7.0
[pip3] torch-xla==2.7.0+git6c53a1e
[pip3] transformers==4.49.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pyzmq 26.3.0 pypi_0 pypi
[conda] torch 2.7.0 pypi_0 pypi
[conda] torch-xla 2.7.0+git6c53a1e pypi_0 pypi
[conda] transformers 4.49.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.7.4.dev452+g46f98893
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect
VLLM_XLA_CACHE_PATH=/user/ymu_google_com
LD_LIBRARY_PATH=/home/ymu_google_com/miniconda3/envs/vllm2/lib/python3.11/site-packages/cv2/../../lib64:
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
```
</details>
### 🐛 Describe the bug
When testing prefix caching on TPU, we got the following in the log:
```text
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
…
```
Command to reproduce:
```text
VLLM_USE_V1=1 python benchmark_prefix_caching.py --model meta-llama/Llama-3.1-8B-Instruct --dataset-path ~/data/ShareGPT_V3_unfiltered_cleaned_split.json --enable-prefix-caching --num-prompts 20 --repeat-count 5 --input-length-range 128:256 --gpu-memory-utilization 0.95 --max-model-len 2048
```
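As the warning itself suggests, a short-term mitigation (sketch) is to set the variable before any tokenizer is created or the process forks:
```python
import os

# must run before transformers/tokenizers are imported and used
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```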
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
closed
|
2025-03-18T18:08:20Z
|
2025-03-18T18:58:42Z
|
https://github.com/vllm-project/vllm/issues/15056
|
[
"bug"
] |
yarongmu-google
| 3
|
pennersr/django-allauth
|
django
| 3,391
|
LinkedIn with OpenID Connect
|
First off, thank you for all your work on `django-allauth`. It is an indispensable addition to the Django ecosystem and I have been very happy with it for the past few years whenever I've needed the functionality.
I created a new LinkedIn App and wanted to use `django-allauth` to integrate with it. Newly created apps seem to only have a `Sign In with LinkedIn using OpenID Connect` integration available for authentication purposes.
<img width="751" alt="image" src="https://github.com/pennersr/django-allauth/assets/317045/34fd8cd4-957c-4a3e-bb8b-12024bb9f6ce">
This seems to be a new API that is different than what is supported by the `linkedin` or `linkedin_oauth2` providers. I did also try to use the OpenID and OpenID Connect providers in `django-allauth`, but couldn't figure out how to make them work with LinkedIn's offering. More details about their OpenID Connect product: https://learn.microsoft.com/en-us/linkedin/consumer/integrations/self-serve/sign-in-with-linkedin-v2.
I ended up creating a new provider for `django-allauth` based on `linkedin_oauth2` that implements the correct API calls -- it all seems to work with my testing. I would be happy to polish up my code, add more tests, write docs, and create a PR to add this functionality if you would be interested.
Thanks again for all that you do! 🚀
|
closed
|
2023-08-25T23:53:11Z
|
2024-06-28T01:21:06Z
|
https://github.com/pennersr/django-allauth/issues/3391
|
[] |
adamghill
| 9
|
Neoteroi/BlackSheep
|
asyncio
| 341
|
Cryptic error message when a list is expected and an object is received
|
Consider the following example:
```python
from dataclasses import dataclass
from blacksheep import Application, pretty_json
app = Application()
@dataclass
class Access:
id: int
name: str
permissions: list[str]
@app.router.post("/")
def set_access(data: list[Access]):
# Just an example...
return pretty_json(data)
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, port=44555, lifespan="on")
```
The server endpoint expects a list of objects. If the client sends a dictionary, the server produces a cryptic error message.
```bash
curl -X POST http://127.0.0.1:44555 -H "Content-Type: application/json" -d '{"id": 1, "name": "foo", "permissions": []}'
Bad Request: invalid parameter in request payload, caused by type Access or one of its subproperties. Error: __main__.Access() argument after ** must be a mapping, not str
```
"Bad Request: invalid parameter in request payload, caused by type Access or one of its subproperties. Error: __main__.Access() argument after ** must be a mapping, not str".
This happens because the function `_get_default_converter_for_iterable` does not handle this case properly.
The improvement is to raise a clearer exception:
```python
def _get_default_converter_for_iterable(self, expected_type):
generic_type = self.get_type_for_generic_iterable(expected_type)
item_type = self.generic_iterable_annotation_item_type(expected_type)
if isinstance(item_type, ForwardRef): # pragma: no cover
from blacksheep.server.normalization import (
UnsupportedForwardRefInSignatureError,
)
raise UnsupportedForwardRefInSignatureError(expected_type)
item_converter = self._get_default_converter_single(item_type)
def list_converter(values):
if not isinstance(values, list):
raise BadRequest("Invalid input: expected a list of objects.")
return generic_type(item_converter(value) for value in values)
return list_converter
```
|
closed
|
2023-04-26T19:43:31Z
|
2023-04-28T05:50:19Z
|
https://github.com/Neoteroi/BlackSheep/issues/341
|
[] |
RobertoPrevato
| 1
|
ansible/ansible
|
python
| 84,843
|
ansible-config does not correctly validate all entries
|
### Summary
Mostly the dynamic 'galaxy servers'
### Issue Type
Bug Report
### Component Name
ansible-config
### Ansible Version
```console
$ ansible --version
all
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
any
```
### OS / Environment
all
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
[galaxy]
server_list=my_org_hub
[galaxy_server.my_org_hub]
# url missing
```
```yaml (paste below)
ansible-config validate
```
### Expected Results
error!
### Actual Results
```console
alls good!
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct
|
open
|
2025-03-17T19:15:38Z
|
2025-03-18T15:14:21Z
|
https://github.com/ansible/ansible/issues/84843
|
[
"bug"
] |
bcoca
| 1
|
HumanSignal/labelImg
|
deep-learning
| 666
|
Unable to open previously saved .xml file
|
- **OS: Mac
- **PyQt version: 5.15.1
- Python 3.8.6 Homebrew
Python crashes and this error appears when I "Open Dir > .xml file":
```
Traceback (most recent call last):
  File "labelimg.py", line 1367, in openFile
    self.loadFile(filename)
  File "labelimg.py", line 1065, in loadFile
    self.lineColor = QColor(*self.labelFile.lineColor)
AttributeError: 'LabelFile' object has no attribute 'lineColor'
zsh: abort python labelimg.py
```
|
open
|
2020-10-24T12:56:20Z
|
2022-05-30T20:07:07Z
|
https://github.com/HumanSignal/labelImg/issues/666
|
[] |
zoehako
| 3
|
plotly/dash
|
plotly
| 3,218
|
deselect all tabs in dcc tab component
|
I'm building an application that uses the dcc Tabs component. This component provides the option of not selecting anything as the initial value. I want to take advantage of this feature to display specific information, but once I interact with the tabs I have no way to go back to a "nothing selected" state.
The following demo application shows this exact behaviour: once I click somewhere, I cannot bring back the "nothing selected" content.
```python
import dash
from dash import dcc, html
# Initialize Dash app
app = dash.Dash(__name__)
app.layout = html.Div([
dcc.Tabs(
id="tabs-example",
value=None, # No tab selected by default
children=[
dcc.Tab(label="Tab Alpha", value="alpha"),
dcc.Tab(label="Tab Beta", value="beta"),
dcc.Tab(label="Tab Gamma", value="gamma"),
],
),
html.Div(id="tabs-content", style={"padding": "20px", "fontSize": "18px"})
])
@app.callback(
dash.Output("tabs-content", "children"),
dash.Output("tabs-content", "style"),
dash.Input("tabs-example", "value"),
)
def update_content(selected_tab):
content_styles = {"padding": "20px", "fontSize": "18px"}
if selected_tab == "alpha":
return html.P("Lorem ipsum dolor sit amet, consectetur adipiscing elit."), {**content_styles, "color": "red"}
elif selected_tab == "beta":
return html.P("Sed do eiusmod tempor incididunt ut labore et dolore magna aliqua."), {**content_styles, "color": "blue"}
elif selected_tab == "gamma":
return html.P("Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris."), {**content_styles, "color": "green"}
else:
return html.P("Nothing selected", style={"color": "black"}), content_styles
# Run app
if __name__ == "__main__":
app.run_server(debug=True)
```
Could you add a way to get back to this state, for example by clicking the selected tab again? A workaround sketch is below.
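In the meantime, a workaround sketch (the `html.Button` with id `deselect-btn` is an addition, not part of the demo above): a second control that writes `None` back into the tabs' `value`:
```python
# add to the layout: html.Button("Deselect", id="deselect-btn")
@app.callback(
    dash.Output("tabs-example", "value"),
    dash.Input("deselect-btn", "n_clicks"),
    prevent_initial_call=True,
)
def deselect_tabs(n_clicks):
    return None  # clears the selection, restoring the "Nothing selected" view
```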
|
open
|
2025-03-14T11:00:04Z
|
2025-03-17T18:20:23Z
|
https://github.com/plotly/dash/issues/3218
|
[
"feature",
"P3"
] |
12rambau
| 2
|
recommenders-team/recommenders
|
machine-learning
| 1,516
|
Run tests in the appropriate extra dependencies.
|
### Description
Our recommender package has several `extra` dependencies to address the different compute environments: `recommender[spark,gpu,example,dev]`. We should run our tests against the specific extra dependency so we can ensure users of the library would have minimum problems installing these extra dependencies.
### Expected behavior with the suggested feature
For example:
- A user who runs `pip install recommenders[gpu,examples]` should be able to run the recommenders utilities tested by `pytest -m "gpu and notebooks and not spark"`
### Other Comments
|
closed
|
2021-09-01T21:32:17Z
|
2021-09-16T13:11:30Z
|
https://github.com/recommenders-team/recommenders/issues/1516
|
[
"enhancement"
] |
laserprec
| 0
|
seleniumbase/SeleniumBase
|
web-scraping
| 2,255
|
Looks like Cloudflare found out about SeleniumBase UC Mode
|
The makers of the **Turnstile** have found out about **SeleniumBase UC Mode**:
<img width="480" alt="Screenshot 2023-11-08 at 5 47 30 PM" src="https://github.com/seleniumbase/SeleniumBase/assets/6788579/08fa67af-262e-48e4-8699-33e04c15ab54">
**To quote Dr. Emmett Brown from Back to the Future:**
> **"They found me. I don't how, but they found me."**

I guess that means they watched the **SeleniumBase UC Mode** video: https://www.youtube.com/watch?v=5dMFI3e85ig
--------
In other news, I'm working on more updates and demo pages for running tests.
Once the next release is shipped, I'll start going through the notification queue.
|
closed
|
2023-11-08T23:43:16Z
|
2023-11-15T02:40:06Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2255
|
[
"News / Announcements",
"UC Mode / CDP Mode",
"Fun"
] |
mdmintz
| 10
|
pydantic/pydantic
|
pydantic
| 11,361
|
Override composed Field constraints not working when using AfterValidator
|
### Initial Checks
- [x] I confirm that I'm using Pydantic V2
### Description
Hi!
First of all, thank you for all the time you put into building/maintaining this fantastic library!
I've noticed that when using shared annotated types with some sane default constraints/validation, later, when overriding the constraints, the new constraints don't have any effect.
For example:
```python
from typing import Annotated
from pydantic import Field, BaseModel, AfterValidator
String = Annotated[
str,
Field(min_length=5, max_length=10),
AfterValidator(lambda v: v),
]
class TestModel(BaseModel):
title: Annotated[String, Field(max_length=20)]
TestModel(title="a" * 20)
```
Generates this error:
```
pydantic_core._pydantic_core.ValidationError: 1 validation error for TestModel
title
String should have at most 10 characters [type=string_too_long, input_value='aaaaaaaaaaaaaaaaaaaa', input_type=str]
```
However, if I remove the `AfterValidator` it will work as expected.
I've nailed down that the behavior change was first introduced between version `2.1.1` -> `2.2.0` (it works as expected in `2.1.1`).
I can work around the problem by using the `f: <type> = Field(...)` form like this:
```python
class TestModel(BaseModel):
title: String = Field(max_length=20)
```
Is this the expected behavior, or is it a bug?
Best regards,
Simon
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.6
pydantic-core version: 2.27.2
pydantic-core build: profile=release pgo=false
install path: /home/simon/dev/lab/pydantic/.venv/lib/python3.12/site-packages/pydantic
python version: 3.12.4 (main, Jul 9 2024, 10:49:22) [GCC 14.1.1 20240522]
platform: Linux-6.6.72-1-lts-x86_64-with-glibc2.40
related packages: typing_extensions-4.12.2
commit: unknown
```
|
open
|
2025-01-30T10:38:29Z
|
2025-02-12T20:00:27Z
|
https://github.com/pydantic/pydantic/issues/11361
|
[
"change",
"bug V2",
"topic-annotations"
] |
simonwahlgren
| 7
|
fastapi/sqlmodel
|
fastapi
| 475
|
How to join tables across multiple schemas
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
Not applicable
```
### Description
Hi there, is it possible to join tables that are *within* the same database but in *different* schemas?
Lets say I have two schemas: `A` and `B`
For schema `A` I have full control and populate it with tables with SQLModel; e.g.
```
class Sample(SQLModel, table=True):
__table_args__ = {"schema": "A"}
id: Optional[int] = Field(default=None, primary_key=True)
key: int
```
For schema `B` I only have read rights. The table of interest named `Order` within schema `B` looks like this:
```
Order
id | key |
=============
1 | 435
... | ....
```
Now I would like to join my `Sample` table within schema `A` with the `Order` table within schema `B`.
From my understanding, I should implement the `Order` table as a Pydantic model, which I can then use in my SQL statement powered by SQLModel:
```
class Order(SQLModel):
__table_args__ = {"schema": "B"}
id: Optional[int] = Field(default=None, primary_key=True)
key: int
```
```
statement = select(Sample).join(Table, Sample.key == Order.key)
```
However, this seems not to work. Any help would be highly appreciated.
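For reference, a sketch that usually works (untested here): declare the read-only table with `table=True` so SQLModel actually maps it, and join on that mapped class — note the posted statement joins `Table`, which looks like a typo for `Order`. `Sample` is reused from the snippet above:
```python
from typing import Optional
from sqlmodel import Field, SQLModel, select

class Order(SQLModel, table=True):
    __table_args__ = {"schema": "B"}
    id: Optional[int] = Field(default=None, primary_key=True)
    key: int

# read access is enough to SELECT/JOIN against schema B
statement = select(Sample).join(Order, Sample.key == Order.key)
```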
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.0.7
### Python Version
3.8.1
### Additional Context
_No response_
|
closed
|
2022-10-21T11:28:47Z
|
2022-11-26T11:04:32Z
|
https://github.com/fastapi/sqlmodel/issues/475
|
[
"question",
"investigate"
] |
christianholland
| 3
|
s3rius/FastAPI-template
|
asyncio
| 40
|
Database is not initialized without migrations
|
If you choose to skip adding migrations, you'll face this issue.
We must add a function to the application's startup that initializes the database from the metadata; a sketch follows.
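A sketch of such a startup hook, assuming the template's async SQLAlchemy setup (the engine URL and `Base` below are placeholders):
```python
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()
engine = create_async_engine("sqlite+aiosqlite:///./app.db")

async def create_tables() -> None:
    # call this from the application's startup event when migrations are skipped
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)
```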
|
closed
|
2021-10-10T05:57:07Z
|
2021-10-13T10:03:45Z
|
https://github.com/s3rius/FastAPI-template/issues/40
|
[
"bug"
] |
s3rius
| 2
|
kubeflow/katib
|
scikit-learn
| 1,971
|
Katib-DB-Manager is not automatically creating katib database in external mysql DB
|
/kind bug
**What steps did you take and what happened:**
When we point katib-db-manager at an AWS RDS MySQL database, it does not automatically create the katib database the way the pipeline/metadata pods do.
**What did you expect to happen:**
katib-db-manager to automatically create the database in RDS MySQL (external DB)
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
**Environment:**
- Katib version (check the Katib controller image version): 0.14.0
- Kubernetes version: (`kubectl version`): 1.22
- OS (`uname -a`):
---
<!-- Don't delete this message to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍 We prioritize the issues with the most 👍
|
closed
|
2022-10-05T15:58:08Z
|
2023-09-14T00:17:37Z
|
https://github.com/kubeflow/katib/issues/1971
|
[
"kind/bug",
"lifecycle/stale"
] |
moorthy156
| 3
|
serengil/deepface
|
machine-learning
| 996
|
update illustration for detectors
|
We recently added yolo and yunet; add their outputs to the illustration.
|
closed
|
2024-02-01T11:51:09Z
|
2024-02-03T10:49:18Z
|
https://github.com/serengil/deepface/issues/996
|
[
"enhancement"
] |
serengil
| 1
|
dmlc/gluon-nlp
|
numpy
| 1,008
|
GPT2BPETokenizer produce strange symbol
|
## Description
I ran a code snippet from
https://gluon-nlp.mxnet.io/model_zoo/bert/index.html
and the GPT2BPETokenizer produces a strange symbol: Ġ
### Error Message
```
In [1]: import gluonnlp as nlp; import mxnet as mx;
   ...: model, vocab = nlp.model.get_model('roberta_12_768_12', dataset_name='openwebtext_ccnews_stories_books_cased', use_decoder=False);
   ...: tokenizer = nlp.data.GPT2BPETokenizer();
   ...: text = [vocab.bos_token] + tokenizer('Hello world!') + [vocab.eos_token];
   ...: seq_encoding = model(mx.nd.array([vocab[text]]))

In [2]: print(text)
['<s>', 'Hello', 'Ġworld', '!', '</s>']
```
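For context, `Ġ` is not corruption: GPT-2's byte-level BPE marks a leading space with that character, so `'Ġworld'` is just `' world'`. A round-trip sketch (assuming the matching `GPT2BPEDetokenizer` from the same module):
```python
import gluonnlp as nlp

tok = nlp.data.GPT2BPETokenizer()
detok = nlp.data.GPT2BPEDetokenizer()

tokens = tok('Hello world!')   # ['Hello', 'Ġworld', '!']
print(detok(tokens))           # 'Hello world!' — the space comes back
```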
|
closed
|
2019-11-14T09:38:41Z
|
2020-10-26T22:48:10Z
|
https://github.com/dmlc/gluon-nlp/issues/1008
|
[
"bug"
] |
hutao965
| 9
|
flairNLP/flair
|
nlp
| 3,543
|
[Bug]: Cannot load pre-trained models after fine-tuning (Transformers)
|
### Describe the bug
Hello,
I was trying to fine-tune an mT5 model (google/mT5 series) on a custom dataset that follows the text format given in your documentation for the column data loader. I have been trying to figure out what is happening, but I think there is some problem in the way the model is being saved/loaded. I am sharing my files with the changes made to them (they use the base template of [this example](https://github.com/flairNLP/flair/blob/master/examples/ner/run_ner.py)).
### To Reproduce
`run_ner.py` (I am trying to reproduce results from this repo: https://github.com/MLlab4CS/Astro-mT5/tree/main)
```python
import inspect
import json
import logging
import os
import sys
from dataclasses import dataclass, field
import torch
from transformers import HfArgumentParser
import flair
from flair import set_seed
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer
from flair.data import Corpus  # added: Corpus is referenced in main() but was not imported
from flair.datasets import ColumnCorpus
logger = logging.getLogger("flair")
logger.setLevel(level="INFO")
@dataclass
class ModelArguments:
model_name_or_path: str = field(
metadata={"help": "The model checkpoint for weights initialization."},
)
layers: str = field(default="-1", metadata={"help": "Layers to be fine-tuned."})
subtoken_pooling: str = field(
default="first",
metadata={"help": "Subtoken pooling strategy used for fine-tuned."},
)
hidden_size: int = field(default=256, metadata={"help": "Hidden size for NER model."})
use_crf: bool = field(default=False, metadata={"help": "Whether to use a CRF on-top or not."})
@dataclass
class TrainingArguments:
num_epochs: int = field(default=10, metadata={"help": "The number of training epochs."})
batch_size: int = field(default=8, metadata={"help": "Batch size used for training."})
mini_batch_chunk_size: int = field(
default=1,
metadata={"help": "If smaller than batch size, batches will be chunked."},
)
learning_rate: float = field(default=5e-05, metadata={"help": "Learning rate"})
seed: int = field(default=42, metadata={"help": "Seed used for reproducible fine-tuning results."})
device: str = field(default="cuda:0", metadata={"help": "CUDA device string."})
weight_decay: float = field(default=0.0, metadata={"help": "Weight decay for optimizer."})
embeddings_storage_mode: str = field(default="none", metadata={"help": "Defines embedding storage method."})
@dataclass
class FlertArguments:
context_size: int = field(default=0, metadata={"help": "Context size when using FLERT approach."})
respect_document_boundaries: bool = field(
default=False,
metadata={"help": "Whether to respect document boundaries or not when using FLERT."},
)
@dataclass
class DataArguments:
dataset_name: str = field(metadata={"help": "Flair NER dataset name."})
dataset_arguments: str = field(default="", metadata={"help": "Dataset arguments for Flair NER dataset."})
output_dir: str = field(
default="resources/taggers/ner",
metadata={"help": "Defines output directory for final fine-tuned model."},
)
def get_flair_corpus(data_args):
ner_task_mapping = {}
for name, obj in inspect.getmembers(flair.datasets.sequence_labeling):
if inspect.isclass(obj):
if name.startswith("NER") or name.startswith("CONLL") or name.startswith("WNUT"):
ner_task_mapping[name] = obj
dataset_args = {}
dataset_name = data_args.dataset_name
if data_args.dataset_arguments:
dataset_args = json.loads(data_args.dataset_arguments)
if dataset_name not in ner_task_mapping:
raise ValueError(f"Dataset name {dataset_name} is not a valid Flair datasets name!")
return ner_task_mapping[dataset_name](**dataset_args)
def main():
parser = HfArgumentParser((ModelArguments, TrainingArguments, FlertArguments, DataArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
(
model_args,
training_args,
flert_args,
data_args,
) = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
(
model_args,
training_args,
flert_args,
data_args,
) = parser.parse_args_into_dataclasses()
set_seed(training_args.seed)
flair.device = training_args.device
columns = {0: 'tokens', 1: 'ner'}
corpus: Corpus = ColumnCorpus('some_directory/astrobert_models/Model_3(mT5)', columns,
train_file='train-80.txt',  # the log below reads train-80.txt; the snippet was missing the dot
test_file='test-10.txt',
dev_file='val-10.txt'
)
logger.info(corpus)
tag_type: str = "ner"
tag_dictionary = corpus.make_label_dictionary(tag_type, add_unk=False)
logger.info(tag_dictionary)
embeddings = TransformerWordEmbeddings(
model=model_args.model_name_or_path,
layers=model_args.layers,
subtoken_pooling=model_args.subtoken_pooling,
fine_tune=True,
allow_long_sentences=True,
use_context=flert_args.context_size,
respect_document_boundaries=flert_args.respect_document_boundaries,
)
tagger = SequenceTagger(
hidden_size=model_args.hidden_size,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type,
use_crf=model_args.use_crf,
use_rnn=False,
allow_unk_predictions=True,
reproject_embeddings=True,
)
trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune(
data_args.output_dir,
learning_rate=training_args.learning_rate,
mini_batch_size=training_args.batch_size,
mini_batch_chunk_size=training_args.mini_batch_chunk_size,
max_epochs=training_args.num_epochs,
embeddings_storage_mode=training_args.embeddings_storage_mode,
weight_decay=training_args.weight_decay,
param_selection_mode=False,
use_final_model_for_eval=False,
save_final_model=False,
)
torch.save(model_args, os.path.join(data_args.output_dir, "model_args.bin"))
torch.save(training_args, os.path.join(data_args.output_dir, "training_args.bin"))
# finally, print model card for information
tagger.print_model_card()
if __name__ == "__main__":
main()
```
This uses the `google/mT5-large` model for fine-tuning, but I am using `google/mT5-base`, which has the same architecture with fewer parameters.
Also, this is using the `add-t5-encoder-support` branch for running the code.
### Expected behavior
Expected behaviour is that these parameters:
```py
param_selection_mode=False,
use_final_model_for_eval=False,
save_final_model=False,
```
should let me save only the best model and run the final test on it, but I am unable to do so.
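For reference, the failing step can be reproduced in isolation: `final_test` simply reloads `best-model.pt` and evaluates it, roughly like this sketch (the path follows the training command below; `corpus` is the one built in the script):
```py
from flair.models import SequenceTagger

# sketch: the same load that crashes inside trainer.final_test
tagger = SequenceTagger.load("content/mt5-large/best-model.pt")
result = tagger.evaluate(corpus.test, gold_label_type="ner")
print(result.detailed_results)
```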
### Logs and Stack traces
Command used to invoke the training (fine-tuning):
```sh
python3 run_ner.py --dataset_name NER_MASAKHANE --model_name_or_path google/mt5-base --layers -1 --subtoken_pooling first_last --hidden_size 256 --batch_size 4 --learning_rate 5e-05 --num_epochs 5 --use_crf True --output_dir ./content/mt5-large
```
Stack Trace with the training log:
```stacktrace
2024-09-02 22:12:56,024 Reading data from some_directory/astrobert_models/Model_3(mT5)
2024-09-02 22:12:56,024 Train: some_directory/astrobert_models/Model_3(mT5)/train-80.txt
2024-09-02 22:12:56,025 Dev: some_directory/astrobert_models/Model_3(mT5)/val-10.txt
2024-09-02 22:12:56,025 Test: some_directory/astrobert_models/Model_3(mT5)/test-10.txt
2024-09-02 22:13:02,297 Corpus: 2028 train + 226 dev + 251 test sentences
2024-09-02 22:13:02,298 Computing label dictionary. Progress:
2028it [00:00, 22607.38it/s]
2024-09-02 22:13:02,408 Dictionary created for label 'ner' with 31 values: Organization (seen 9269 times), Citation (seen 7050 times), Person (seen 4895 times), Grant (seen 4199 times), Wavelength (seen 3773 times), CelestialObject (seen 3035 times), Formula (seen 2860 times), Model (seen 2531 times), Telescope (seen 1929 times), Location (seen 1817 times), Software (seen 1154 times), Observatory (seen 1036 times), Survey (seen 1034 times), Instrument (seen 912 times), CelestialObjectRegion (seen 619 times), ComputingFacility (seen 496 times), Fellowship (seen 495 times), Dataset (seen 448 times), Collaboration (seen 370 times), EntityOfFutureInterest (seen 347 times)
2024-09-02 22:13:02,408 Dictionary with 31 tags: Organization, Citation, Person, Grant, Wavelength, CelestialObject, Formula, Model, Telescope, Location, Software, Observatory, Survey, Instrument, CelestialObjectRegion, ComputingFacility, Fellowship, Dataset, Collaboration, EntityOfFutureInterest, URL, Archive, Database, TextGarbage, Mission, CelestialRegion, Proposal, Identifier, Tag, ObservationalTechniques, Event
/home/bob2/.local/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py:560: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text.
warnings.warn(
/home/bob2/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
2024-09-02 22:13:08,487 SequenceTagger predicts: Dictionary with 126 tags: <unk>, O, S-Organization, B-Organization, E-Organization, I-Organization, S-Citation, B-Citation, E-Citation, I-Citation, S-Person, B-Person, E-Person, I-Person, S-Grant, B-Grant, E-Grant, I-Grant, S-Wavelength, B-Wavelength, E-Wavelength, I-Wavelength, S-CelestialObject, B-CelestialObject, E-CelestialObject, I-CelestialObject, S-Formula, B-Formula, E-Formula, I-Formula, S-Model, B-Model, E-Model, I-Model, S-Telescope, B-Telescope, E-Telescope, I-Telescope, S-Location, B-Location, E-Location, I-Location, S-Software, B-Software, E-Software, I-Software, S-Observatory, B-Observatory, E-Observatory, I-Observatory
2024-09-02 22:13:09,364 ----------------------------------------------------------------------------------------------------
2024-09-02 22:13:09,365 Model: "SequenceTagger(
(embeddings): TransformerWordEmbeddings(
(model): T5EncoderModel(
(shared): Embedding(250112, 768)
(encoder): T5Stack(
(embed_tokens): Embedding(250112, 768)
(block): ModuleList(
(0): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=768, out_features=768, bias=False)
(k): Linear(in_features=768, out_features=768, bias=False)
(v): Linear(in_features=768, out_features=768, bias=False)
(o): Linear(in_features=768, out_features=768, bias=False)
(relative_attention_bias): Embedding(32, 12)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=768, out_features=2048, bias=False)
(wi_1): Linear(in_features=768, out_features=2048, bias=False)
(wo): Linear(in_features=2048, out_features=768, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(1-11): 11 x T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=768, out_features=768, bias=False)
(k): Linear(in_features=768, out_features=768, bias=False)
(v): Linear(in_features=768, out_features=768, bias=False)
(o): Linear(in_features=768, out_features=768, bias=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear(in_features=768, out_features=2048, bias=False)
(wi_1): Linear(in_features=768, out_features=2048, bias=False)
(wo): Linear(in_features=2048, out_features=768, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(final_layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(word_dropout): WordDropout(p=0.05)
(locked_dropout): LockedDropout(p=0.5)
(embedding2nn): Linear(in_features=1536, out_features=1536, bias=True)
(linear): Linear(in_features=1536, out_features=128, bias=True)
(loss_function): ViterbiLoss()
(crf): CRF()
)"
2024-09-02 22:13:09,365 ----------------------------------------------------------------------------------------------------
2024-09-02 22:13:09,365 Corpus: "Corpus: 2028 train + 226 dev + 251 test sentences"
2024-09-02 22:13:09,365 ----------------------------------------------------------------------------------------------------
2024-09-02 22:13:09,365 Parameters:
2024-09-02 22:13:09,365 - learning_rate: "0.000050"
2024-09-02 22:13:09,365 - mini_batch_size: "4"
2024-09-02 22:13:09,365 - patience: "3"
2024-09-02 22:13:09,365 - anneal_factor: "0.5"
2024-09-02 22:13:09,365 - max_epochs: "5"
2024-09-02 22:13:09,365 - shuffle: "True"
2024-09-02 22:13:09,365 - train_with_dev: "False"
2024-09-02 22:13:09,365 - batch_growth_annealing: "False"
2024-09-02 22:13:09,365 ----------------------------------------------------------------------------------------------------
2024-09-02 22:13:09,365 Model training base path: "content/mt5-large"
2024-09-02 22:13:09,365 ----------------------------------------------------------------------------------------------------
2024-09-02 22:13:09,366 Device: cuda:0
2024-09-02 22:13:09,366 ----------------------------------------------------------------------------------------------------
2024-09-02 22:13:09,366 Embeddings storage mode: none
2024-09-02 22:13:09,366 ----------------------------------------------------------------------------------------------------
2024-09-02 22:14:22,599 epoch 1 - iter 50/507 - loss 5.21869016 - samples/sec: 2.73 - lr: 0.000010
2024-09-02 22:15:31,374 epoch 1 - iter 100/507 - loss 4.76969707 - samples/sec: 2.91 - lr: 0.000020
2024-09-02 22:16:44,454 epoch 1 - iter 150/507 - loss 3.84992501 - samples/sec: 2.74 - lr: 0.000030
2024-09-02 22:17:57,165 epoch 1 - iter 200/507 - loss 3.22765532 - samples/sec: 2.75 - lr: 0.000040
2024-09-02 22:19:07,797 epoch 1 - iter 250/507 - loss 2.81055829 - samples/sec: 2.83 - lr: 0.000049
2024-09-02 22:20:24,791 epoch 1 - iter 300/507 - loss 2.47280144 - samples/sec: 2.60 - lr: 0.000049
2024-09-02 22:21:34,641 epoch 1 - iter 350/507 - loss 2.25822920 - samples/sec: 2.86 - lr: 0.000048
2024-09-02 22:22:49,561 epoch 1 - iter 400/507 - loss 2.06685372 - samples/sec: 2.67 - lr: 0.000047
2024-09-02 22:24:04,744 epoch 1 - iter 450/507 - loss 1.91565943 - samples/sec: 2.66 - lr: 0.000046
2024-09-02 22:25:15,756 epoch 1 - iter 500/507 - loss 1.80107189 - samples/sec: 2.82 - lr: 0.000045
2024-09-02 22:25:23,133 ----------------------------------------------------------------------------------------------------
2024-09-02 22:25:23,133 EPOCH 1 done: loss 1.7909 - lr 0.000045
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 57/57 [00:37<00:00, 1.50it/s]
2024-09-02 22:26:01,071 Evaluating as a multi-label problem: False
2024-09-02 22:26:01,123 DEV : loss 0.40007856488227844 - f1-score (micro avg) 0.4167
2024-09-02 22:26:01,134 BAD EPOCHS (no improvement): 4
2024-09-02 22:26:01,134 saving best model
2024-09-02 22:26:02,344 ----------------------------------------------------------------------------------------------------
2024-09-02 22:27:14,119 epoch 2 - iter 50/507 - loss 0.66224097 - samples/sec: 2.79 - lr: 0.000043
2024-09-02 22:28:26,077 epoch 2 - iter 100/507 - loss 0.66289136 - samples/sec: 2.78 - lr: 0.000042
2024-09-02 22:29:43,508 epoch 2 - iter 150/507 - loss 0.66188128 - samples/sec: 2.58 - lr: 0.000041
2024-09-02 22:30:56,096 epoch 2 - iter 200/507 - loss 0.64561237 - samples/sec: 2.76 - lr: 0.000040
2024-09-02 22:32:07,025 epoch 2 - iter 250/507 - loss 0.63093977 - samples/sec: 2.82 - lr: 0.000039
2024-09-02 22:33:13,665 epoch 2 - iter 300/507 - loss 0.62267017 - samples/sec: 3.00 - lr: 0.000038
2024-09-02 22:34:27,071 epoch 2 - iter 350/507 - loss 0.61492844 - samples/sec: 2.72 - lr: 0.000037
2024-09-02 22:35:41,670 epoch 2 - iter 400/507 - loss 0.60867990 - samples/sec: 2.68 - lr: 0.000036
2024-09-02 22:36:53,006 epoch 2 - iter 450/507 - loss 0.60102799 - samples/sec: 2.80 - lr: 0.000035
2024-09-02 22:38:06,344 epoch 2 - iter 500/507 - loss 0.59238830 - samples/sec: 2.73 - lr: 0.000034
2024-09-02 22:38:15,044 ----------------------------------------------------------------------------------------------------
2024-09-02 22:38:15,045 EPOCH 2 done: loss 0.5919 - lr 0.000034
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 57/57 [01:26<00:00, 1.52s/it]
2024-09-02 22:39:41,854 Evaluating as a multi-label problem: False
2024-09-02 22:39:41,895 DEV : loss 0.258797824382782 - f1-score (micro avg) 0.6063
2024-09-02 22:39:41,907 BAD EPOCHS (no improvement): 4
2024-09-02 22:39:41,907 saving best model
2024-09-02 22:39:50,782 ----------------------------------------------------------------------------------------------------
2024-09-02 22:40:52,674 epoch 3 - iter 50/507 - loss 0.53751011 - samples/sec: 3.23 - lr: 0.000032
2024-09-02 22:42:07,553 epoch 3 - iter 100/507 - loss 0.51292905 - samples/sec: 2.67 - lr: 0.000031
2024-09-02 22:43:15,788 epoch 3 - iter 150/507 - loss 0.52074144 - samples/sec: 2.93 - lr: 0.000030
2024-09-02 22:44:29,978 epoch 3 - iter 200/507 - loss 0.50887246 - samples/sec: 2.70 - lr: 0.000029
2024-09-02 22:45:44,776 epoch 3 - iter 250/507 - loss 0.50465450 - samples/sec: 2.67 - lr: 0.000028
2024-09-02 22:46:53,595 epoch 3 - iter 300/507 - loss 0.49652591 - samples/sec: 2.91 - lr: 0.000027
2024-09-02 22:48:03,269 epoch 3 - iter 350/507 - loss 0.49103096 - samples/sec: 2.87 - lr: 0.000026
2024-09-02 22:49:22,787 epoch 3 - iter 400/507 - loss 0.48587132 - samples/sec: 2.52 - lr: 0.000025
2024-09-02 22:50:40,318 epoch 3 - iter 450/507 - loss 0.47988559 - samples/sec: 2.58 - lr: 0.000024
2024-09-02 22:51:53,871 epoch 3 - iter 500/507 - loss 0.47534172 - samples/sec: 2.72 - lr: 0.000022
2024-09-02 22:52:02,896 ----------------------------------------------------------------------------------------------------
2024-09-02 22:52:02,896 EPOCH 3 done: loss 0.4754 - lr 0.000022
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 57/57 [01:27<00:00, 1.53s/it]
2024-09-02 22:53:30,026 Evaluating as a multi-label problem: False
2024-09-02 22:53:30,067 DEV : loss 0.22028639912605286 - f1-score (micro avg) 0.6517
2024-09-02 22:53:30,079 BAD EPOCHS (no improvement): 4
2024-09-02 22:53:30,079 saving best model
2024-09-02 22:53:39,030 ----------------------------------------------------------------------------------------------------
2024-09-02 22:54:58,710 epoch 4 - iter 50/507 - loss 0.42972222 - samples/sec: 2.51 - lr: 0.000021
2024-09-02 22:56:09,934 epoch 4 - iter 100/507 - loss 0.42529253 - samples/sec: 2.81 - lr: 0.000020
2024-09-02 22:57:18,254 epoch 4 - iter 150/507 - loss 0.41949796 - samples/sec: 2.93 - lr: 0.000019
2024-09-02 22:58:35,158 epoch 4 - iter 200/507 - loss 0.41590241 - samples/sec: 2.60 - lr: 0.000018
2024-09-02 22:59:42,396 epoch 4 - iter 250/507 - loss 0.42134116 - samples/sec: 2.97 - lr: 0.000017
2024-09-02 23:00:51,994 epoch 4 - iter 300/507 - loss 0.42124508 - samples/sec: 2.87 - lr: 0.000016
2024-09-02 23:02:06,538 epoch 4 - iter 350/507 - loss 0.41991969 - samples/sec: 2.68 - lr: 0.000015
2024-09-02 23:03:16,007 epoch 4 - iter 400/507 - loss 0.41864415 - samples/sec: 2.88 - lr: 0.000014
2024-09-02 23:04:30,849 epoch 4 - iter 450/507 - loss 0.41877229 - samples/sec: 2.67 - lr: 0.000012
2024-09-02 23:05:43,238 epoch 4 - iter 500/507 - loss 0.41600581 - samples/sec: 2.76 - lr: 0.000011
2024-09-02 23:05:52,670 ----------------------------------------------------------------------------------------------------
2024-09-02 23:05:52,670 EPOCH 4 done: loss 0.4157 - lr 0.000011
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 57/57 [01:27<00:00, 1.53s/it]
2024-09-02 23:07:20,127 Evaluating as a multi-label problem: False
2024-09-02 23:07:20,169 DEV : loss 0.20156854391098022 - f1-score (micro avg) 0.6764
2024-09-02 23:07:20,181 BAD EPOCHS (no improvement): 4
2024-09-02 23:07:20,181 saving best model
2024-09-02 23:07:29,094 ----------------------------------------------------------------------------------------------------
2024-09-02 23:08:41,206 epoch 5 - iter 50/507 - loss 0.41014725 - samples/sec: 2.77 - lr: 0.000010
2024-09-02 23:09:55,703 epoch 5 - iter 100/507 - loss 0.40355902 - samples/sec: 2.68 - lr: 0.000009
2024-09-02 23:11:06,169 epoch 5 - iter 150/507 - loss 0.40052907 - samples/sec: 2.84 - lr: 0.000008
2024-09-02 23:12:16,356 epoch 5 - iter 200/507 - loss 0.40273058 - samples/sec: 2.85 - lr: 0.000007
2024-09-02 23:13:28,812 epoch 5 - iter 250/507 - loss 0.39995092 - samples/sec: 2.76 - lr: 0.000006
2024-09-02 23:14:41,129 epoch 5 - iter 300/507 - loss 0.39412877 - samples/sec: 2.77 - lr: 0.000005
2024-09-02 23:15:54,505 epoch 5 - iter 350/507 - loss 0.39045605 - samples/sec: 2.73 - lr: 0.000004
2024-09-02 23:17:07,290 epoch 5 - iter 400/507 - loss 0.39085101 - samples/sec: 2.75 - lr: 0.000002
2024-09-02 23:18:20,001 epoch 5 - iter 450/507 - loss 0.38970339 - samples/sec: 2.75 - lr: 0.000001
2024-09-02 23:19:30,506 epoch 5 - iter 500/507 - loss 0.38807320 - samples/sec: 2.84 - lr: 0.000000
2024-09-02 23:19:42,705 ----------------------------------------------------------------------------------------------------
2024-09-02 23:19:42,705 EPOCH 5 done: loss 0.3880 - lr 0.000000
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 57/57 [01:27<00:00, 1.53s/it]
2024-09-02 23:21:09,993 Evaluating as a multi-label problem: False
2024-09-02 23:21:10,034 DEV : loss 0.19652396440505981 - f1-score (micro avg) 0.6858
2024-09-02 23:21:10,046 BAD EPOCHS (no improvement): 4
2024-09-02 23:21:10,046 saving best model
2024-09-02 23:21:20,453 ----------------------------------------------------------------------------------------------------
2024-09-02 23:21:20,454 loading file content/mt5-large/best-model.pt
Traceback (most recent call last):
File "/media/bob2/d8c6a01c-a6c1-4ad3-a8d5-a740f2fa4a7a/home/bob2/_dhruv/astrobert_models/Model_3(mT5)/flair/run_ner.py", line 382, in <module>
main()
File "/media/bob2/d8c6a01c-a6c1-4ad3-a8d5-a740f2fa4a7a/home/bob2/_dhruv/astrobert_models/Model_3(mT5)/flair/run_ner.py", line 363, in main
trainer.fine_tune(
File "/media/bob2/d8c6a01c-a6c1-4ad3-a8d5-a740f2fa4a7a/home/bob2/_dhruv/astrobert_models/Model_3(mT5)/flair/flair/trainers/trainer.py", line 919, in fine_tune
return self.train(
File "/media/bob2/d8c6a01c-a6c1-4ad3-a8d5-a740f2fa4a7a/home/bob2/_dhruv/astrobert_models/Model_3(mT5)/flair/flair/trainers/trainer.py", line 836, in train
final_score = self.final_test(
File "/media/bob2/d8c6a01c-a6c1-4ad3-a8d5-a740f2fa4a7a/home/bob2/_dhruv/astrobert_models/Model_3(mT5)/flair/flair/trainers/trainer.py", line 949, in final_test
self.model.load_state_dict(self.model.load(base_path / "best-model.pt").state_dict())
File "/media/bob2/d8c6a01c-a6c1-4ad3-a8d5-a740f2fa4a7a/home/bob2/_dhruv/astrobert_models/Model_3(mT5)/flair/flair/nn/model.py", line 142, in load
state = torch.load(f, map_location="cpu")
File "/home/bob2/.local/lib/python3.10/site-packages/torch/serialization.py", line 1025, in load
return _load(opened_zipfile,
File "/home/bob2/.local/lib/python3.10/site-packages/torch/serialization.py", line 1446, in _load
result = unpickler.load()
File "/media/bob2/d8c6a01c-a6c1-4ad3-a8d5-a740f2fa4a7a/home/bob2/_dhruv/astrobert_models/Model_3(mT5)/flair/flair/embeddings/transformer.py", line 1004, in __setstate__
embedding = self.create_from_state(saved_config=config, **state)
File "/media/bob2/d8c6a01c-a6c1-4ad3-a8d5-a740f2fa4a7a/home/bob2/_dhruv/astrobert_models/Model_3(mT5)/flair/flair/embeddings/token.py", line 62, in create_from_state
return cls(**state)
File "/media/bob2/d8c6a01c-a6c1-4ad3-a8d5-a740f2fa4a7a/home/bob2/_dhruv/astrobert_models/Model_3(mT5)/flair/flair/embeddings/token.py", line 49, in __init__
TransformerEmbeddings.__init__(
File "/media/bob2/d8c6a01c-a6c1-4ad3-a8d5-a740f2fa4a7a/home/bob2/_dhruv/astrobert_models/Model_3(mT5)/flair/flair/embeddings/transformer.py", line 810, in __init__
self.tokenizer = self._tokenizer_from_bytes(tokenizer_data)
File "/media/bob2/d8c6a01c-a6c1-4ad3-a8d5-a740f2fa4a7a/home/bob2/_dhruv/astrobert_models/Model_3(mT5)/flair/flair/embeddings/transformer.py", line 335, in _tokenizer_from_bytes
return AutoTokenizer.from_pretrained(temp_dir, add_prefix_space=True)
File "/home/bob2/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 880, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/bob2/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2110, in from_pretrained
return cls._from_pretrained(
File "/home/bob2/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2336, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/bob2/.local/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 120, in __init__
super().__init__(
File "/home/bob2/.local/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 124, in __init__
slow_tokenizer = self.slow_tokenizer_class(*args, **kwargs)
File "/home/bob2/.local/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5.py", line 151, in __init__
self.sp_model.Load(vocab_file)
File "/home/bob2/.local/lib/python3.10/site-packages/sentencepiece/__init__.py", line 367, in Load
return self.LoadFromFile(model_file)
File "/home/bob2/.local/lib/python3.10/site-packages/sentencepiece/__init__.py", line 171, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
TypeError: not a string
```
### Screenshots
_No response_
### Additional Context
Please let me know if you need any more context or maybe a very small dataset to reproduce the results for this output. Thanks in advance for any assistance.
### Environment
#### Versions:
##### Flair
0.13.1
##### Pytorch
2.3.1+cu121
##### Transformers
4.41.2
#### GPU
True
|
closed
|
2024-09-03T04:42:49Z
|
2024-10-11T11:07:15Z
|
https://github.com/flairNLP/flair/issues/3543
|
[
"bug"
] |
DhruvSondhi
| 1
|
pydata/xarray
|
numpy
| 9,878
|
multiindex + cftimeindex broken on pandas main
|
### What happened?
This test fails on pandas main branch
https://github.com/pydata/xarray/blob/96e0ff7d70c605a1505ff89a2d62b5c4138b0305/xarray/tests/test_cftimeindex.py#L1191-L1195
I will xfail this test but it would be good to fix it
cc @spencerkclark
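A minimal sketch of the interim xfail, assuming the usual pytest conventions in the xarray test suite (the test name here is a placeholder, not the actual one at the linked lines):
```py
import pytest

@pytest.mark.xfail(reason="multiindex + cftimeindex broken on pandas main")
def test_multiindex_with_cftimeindex():
    ...
```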
|
open
|
2024-12-11T23:16:05Z
|
2024-12-13T19:32:03Z
|
https://github.com/pydata/xarray/issues/9878
|
[
"bug",
"topic-cftime"
] |
dcherian
| 1
|
ydataai/ydata-profiling
|
jupyter
| 733
|
Slack links in the README.md are no longer valid
|
**Describe the bug**
The "Slack" and "Join the Slack community" links on the README.md both return an error saying the link is no longer valid.
**To Reproduce**
Click the [Slack](https://join.slack.com/t/pandas-profiling/shared_invite/zt-l2iqwb92-9JpTEdFBijR2G798j2MpQw) link at the [beginning](https://github.com/pandas-profiling/pandas-profiling#pandas-profiling) or [Join the Slack community](https://join.slack.com/t/pandas-profiling/shared_invite/zt-hfy3iwp2-qEJSItye5QBZf8YGFMaMnQ) under the [Contributing](https://github.com/pandas-profiling/pandas-profiling#contributing) section on the rendered README.md

|
closed
|
2021-03-25T12:55:29Z
|
2021-03-27T19:23:16Z
|
https://github.com/ydataai/ydata-profiling/issues/733
|
[] |
owenlamont
| 1
|
lanpa/tensorboardX
|
numpy
| 258
|
symbolic for max_pool2d_with_indices returned None for the output 1 (indicating conversion for that particular output is not supported), but the network uses this output later
|
Hi,
I am getting this error while adding a graph. Following is the stack trace
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-43-d96358ad8344> in <module>()
21 use_last_checkpoint =train_params['use_last_checkpoint'])
22
---> 23 solver.train(train_loader, val_loader)
~/shayan/quickNat_pytorch/quickNat_pytorch/solver.py in train(self, train_loader, val_loader)
140 self.logWriter.update_cm_per_iter(output, y, phase)
141
--> 142 self.logWriter.graph(model, X, phase)
143 del X, y, w, output, loss
144 torch.cuda.empty_cache()
~/shayan/quickNat_pytorch/quickNat_pytorch/log_utils.py in graph(self, model, X, phase)
79
80 def graph(self, model, X, phase):
---> 81 self.writer[phase].add_graph(model, X)
82
83 def update_cm_per_iter(self, predictions, correct_labels, phase):
~/anaconda3/lib/python3.6/site-packages/tensorboardX/writer.py in add_graph(self, model, input_to_model, verbose, **kwargs)
518 print('add_graph() only supports PyTorch v0.2.')
519 return
--> 520 self.file_writer.add_graph(graph(model, input_to_model, verbose))
521 except AttributeError:
522 # Caffe2 models do not have the 'forward' method
~/anaconda3/lib/python3.6/site-packages/tensorboardX/pytorch_graph.py in graph(model, args, verbose)
94 return GraphDef(versions=VersionDef(producer=22))
95 if LooseVersion(torch.__version__) >= LooseVersion("0.4.1"):
---> 96 torch.onnx._optimize_trace(trace, torch._C._onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)
97 elif LooseVersion(torch.__version__) >= LooseVersion("0.4"):
98 torch.onnx._optimize_trace(trace, False)
~/anaconda3/lib/python3.6/site-packages/torch/onnx/__init__.py in _optimize_trace(trace, operator_export_type)
39 def _optimize_trace(trace, operator_export_type):
40 from torch.onnx import utils
---> 41 trace.set_graph(utils._optimize_graph(trace.graph(), operator_export_type))
42
43
~/anaconda3/lib/python3.6/site-packages/torch/onnx/utils.py in _optimize_graph(graph, operator_export_type)
105 torch._C._jit_pass_lint(graph)
106 if operator_export_type != OperatorExportTypes.RAW:
--> 107 graph = torch._C._jit_pass_onnx(graph, operator_export_type)
108 torch._C._jit_pass_lint(graph)
109 torch._C._jit_pass_onnx_peephole(graph)
```
|
open
|
2018-10-24T10:12:35Z
|
2019-07-04T18:55:51Z
|
https://github.com/lanpa/tensorboardX/issues/258
|
[
"add_graph",
"wait for response"
] |
shayansiddiqui
| 9
|
donnemartin/data-science-ipython-notebooks
|
machine-learning
| 71
|
Data science
|
closed
|
2020-06-06T12:05:25Z
|
2020-06-06T12:06:27Z
|
https://github.com/donnemartin/data-science-ipython-notebooks/issues/71
|
[] |
Amine-OMRI
| 0
|
|
aiortc/aiortc
|
asyncio
| 302
|
Allow same track to be sent N times
|
In `rtcpeerconnection.py`:
```py
def __assertTrackHasNoSender(self, track: MediaStreamTrack) -> None:
for sender in self.getSenders():
if sender.track == track:
raise InvalidAccessError("Track already has a sender")
```
May I know why this constraint? is it artificial? Nothing in the spec prevents a PC from sending the same track N times in different transceivers (may be with different encoding settings).
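For context, the usage I have in mind is roughly this sketch (the media source is a hypothetical example; encoding parameters omitted):
```py
from aiortc import RTCPeerConnection
from aiortc.contrib.media import MediaPlayer

pc = RTCPeerConnection()
player = MediaPlayer("video.mp4")  # hypothetical source

# the same track sent twice, via two independent transceivers
pc.addTransceiver(player.video, direction="sendonly")
pc.addTransceiver(player.video, direction="sendonly")
```
With the current assertion, the second call raises `InvalidAccessError` even though the spec allows it.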
|
closed
|
2020-02-25T11:30:25Z
|
2020-02-26T10:44:01Z
|
https://github.com/aiortc/aiortc/issues/302
|
[] |
ibc
| 5
|
prkumar/uplink
|
rest-api
| 119
|
Class-level decorators on Consumer classes do not apply to inherited methods
|
**Describe the bug**
For consumer classes that inherit consumer methods (i.e., methods decorated with `@uplink.get`, `@uplink.post`, etc.) from one or more parent classes, uplink decorators such as `@response_handler` or `@timeout` are not applied to those inherited methods when these decorators are used as class-level decorators. In other words, these decorators are strictly applied to consumer methods that are directly defined on the decorated consumer class.
**To Reproduce**
Consider the following consumer class:
```python
class GitHub(uplink.Consumer):
@uplink.get("/users/{username}")
def get_user(self, username):
"""Get a single user."""
```
Create a subclass of `GitHub` and decorate it with any uplink decorator that should propagate to consumer methods when used as a class decorator. For this example, I apply a `@response_handler` that should make any consumer method return the integer `1`, regardless of the actual response returned by the server:
```python
@response_handler(lambda resp: 1)
class GitHubSubclass(GitHub):
pass
```
Here’s a quick test that shows that the response handler is not applied to the inherited method (i.e., the assertion fails):
```python
client = GitHubSubclass(...)
assert client.get_user("prkumar") == 1
```
**Expected behavior**
Applying a decorator to a Consumer class should propagate to ALL consumer methods available to that class, including inherited consumer methods.
**Additional context**
Prior to v0.3.0, the actual behavior reflected the expected behavior detailed above. However, as part of #27, we unnecessarily began restricting the application of class-level decorators to only those consumer methods defined directly on the decorated consumer class. Hence, a fix for this bug should effectively revert the changes made in #27. Notably, this means that the fix should make changes to the function `uplink.helpers.get_api_definitions`.
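As a rough illustration of the direction a fix could take — this is a sketch only, and `_uplink_definition` is a hypothetical marker attribute, not uplink's real internal name:
```py
import inspect

def get_all_consumer_methods(consumer_cls):
    # Walk the full MRO so consumer methods inherited from parent classes
    # are picked up, not just those defined directly on the decorated class.
    members = {}
    for cls in reversed(inspect.getmro(consumer_cls)):
        for name, member in vars(cls).items():
            if hasattr(member, "_uplink_definition"):  # hypothetical marker
                members[name] = member
    return members.items()
```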
|
closed
|
2018-11-15T19:41:14Z
|
2019-01-22T18:52:47Z
|
https://github.com/prkumar/uplink/issues/119
|
[
"Bug",
"help wanted",
"good first issue"
] |
prkumar
| 1
|
ckan/ckan
|
api
| 8,647
|
`package_show` fallback to `name_or_id` does not work, requires `id`
|
## CKAN version
2.11
## Describe the bug
The `package_show` API is supposed to allow `name_or_id` as a fallback for `id`, but it doesn't work on CKAN 2.11, giving an error if `id` is not present.
### Steps to reproduce
- Start a CKAN 2.11 instance.
- Create a public dataset named "Test".
- Go to `/api/action/package_show?name_or_id=test`
### Expected behavior
The API should return a JSON description of the Test dataset.
### Additional details
```
6:04:20,563 INFO [ckan.views.api] Validation error (Action API): "{'message': 'Missing id, can not get Package object', '__type': 'Validation Error'}"
16:04:20,565 INFO [ckan.config.middleware.flask_app] 409 /api/action/package_show render time 0.024 seconds
127.0.0.1 - - [04/Feb/2025:16:04:20 +1000] "GET /api/action/package_show?name_or_id=testing-qoldev-1070 HTTP/1.0" 409 357 "-" "Amazon CloudFront"
```
`name_or_id` is configured as a fallback at https://github.com/ckan/ckan/blob/df8881ccacb555668207b93a77da6cc65b84bfe0/ckan/logic/action/get.py#L980
But the corresponding auth function requires `id` to be present, and gives an error otherwise: https://github.com/ckan/ckan/blob/df8881ccacb555668207b93a77da6cc65b84bfe0/ckan/logic/auth/get.py#L107 and https://github.com/ckan/ckan/blob/df8881ccacb555668207b93a77da6cc65b84bfe0/ckan/logic/auth/__init__.py#L53
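A sketch of the kind of fallback the auth function would need, mirroring what the action layer already does (hypothetical code, not CKAN's actual auth implementation):
```py
# hypothetical sketch of the auth-side fallback
def package_show(context, data_dict):
    data_dict = dict(data_dict)
    # fall back to name_or_id, as the action layer does
    data_dict["id"] = data_dict.get("id") or data_dict.get("name_or_id")
    package = get_package_object(context, data_dict)
    ...
```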
|
open
|
2025-02-04T06:14:05Z
|
2025-02-04T13:21:20Z
|
https://github.com/ckan/ckan/issues/8647
|
[] |
ThrawnCA
| 0
|
cvat-ai/cvat
|
computer-vision
| 8,297
|
Restoring task's backup which initially was created from file share fails
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
During development of #8287 the problem occurred:
1. Connect file share to CVAT
2. Create a task with files from share
3. Add some annotations
4. Backup the task
5. Try to restore it, the error occurs:

### Expected Behavior
Backup should be restored successfully
### Possible Solution
It seems backup of the task which is created from local files and file share have different folder structure. Maybe this is the problem.
Backup with task from `share` looks like this:

Backup with task from `local` files looks like:

### Context
_No response_
### Environment
- Git commit ff50b464ddaa85a2496da79ac87fa71455f01c92
- Env: local
- Full error log:
```
[2024-08-13 08:10:57,951] ERROR rq.worker: [Job import:task-02f399a3-612d-4bd8-b1fb-9ed57c83c2e9-backup]: exception raised while executing (cvat.apps.engine.utils.import_resource_with_clean_up_after)
Traceback (most recent call last):
File "/home/kirill/projects/cvat/.env/lib/python3.10/site-packages/rq/worker.py", line 1431, in perform_job
rv = job.perform()
File "/home/kirill/projects/cvat/.env/lib/python3.10/site-packages/rq/job.py", line 1280, in perform
self._result = self._execute()
File "/home/kirill/projects/cvat/.env/lib/python3.10/site-packages/rq/job.py", line 1317, in _execute
result = self.func(*self.args, **self.kwargs)
File "/home/kirill/projects/cvat/cvat/apps/engine/utils.py", line 289, in import_resource_with_clean_up_after
result = func(filename, *args, **kwargs)
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/kirill/projects/cvat/cvat/apps/engine/backup.py", line 751, in _import_task
db_task = task_importer.import_task()
File "/home/kirill/projects/cvat/cvat/apps/engine/backup.py", line 742, in import_task
self._import_task()
File "/home/kirill/projects/cvat/cvat/apps/engine/backup.py", line 690, in _import_task
_create_thread(self._db_task.pk, data.copy(), isBackupRestore=True)
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/kirill/projects/cvat/cvat/apps/engine/task.py", line 553, in _create_thread
manifest_file = _validate_manifest(
File "/home/kirill/projects/cvat/cvat/apps/engine/task.py", line 342, in _validate_manifest
if is_manifest(full_manifest_path):
File "/home/kirill/projects/cvat/utils/dataset_manifest/core.py", line 819, in is_manifest
return is_video_manifest(full_manifest_path) or \
File "/home/kirill/projects/cvat/utils/dataset_manifest/core.py", line 824, in is_video_manifest
return validator.validate()
File "/home/kirill/projects/cvat/utils/dataset_manifest/core.py", line 737, in validate
with open(self._manifest.path, 'r') as manifest:
FileNotFoundError: [Errno 2] No such file or directory: '/home/kirill/projects/cvat/share/manifest.jsonl'
```
|
open
|
2024-08-13T08:12:35Z
|
2024-11-07T06:48:15Z
|
https://github.com/cvat-ai/cvat/issues/8297
|
[
"bug"
] |
klakhov
| 2
|
OpenBB-finance/OpenBB
|
machine-learning
| 6,720
|
[🕹️] Create a Simple Sentiment Analysis for Stock Prices Notebook
|
# 📄 Task
Create a notebook that fetches sentiment data from financial news and correlates it with stock price movements.
---
### 📋 Requirements:
1. **Template**: Start by copying the [example template notebook](https://github.com/OpenBB-finance/OpenBB/blob/develop/examples/COMMUNITY_EXAMPLE_TEMPLATE.ipynb).
2. **Content**:
- Give your notebook a meaningful name.
- Fill in the details in the template, including the notebook title, description, your GitHub username, the notebook name in the Google Colab button, and any additional sections relevant to the task.
- Write code that uses OpenBB's features to fetch news sentiment and correlate it with stock price movements.
- If your notebook requires additional dependencies, please specify those.
3. **Testing**: Ensure that all cells in the notebook run successfully and produce the intended results.
4. **Documentation**: Comment your code and add markdown cells where necessary to provide explanations for the analysis.
5. **Output**: The final notebook should be added to the `examples` folder in this repository.
### 💡 Tips:
- Refer to the [OpenBB Documentation](https://docs.openbb.co/) for guidance on using OpenBB features.
### 📬 Submission:
- Follow the submission instructions [here](https://github.com/OpenBB-finance/OpenBB/tree/develop/oss.gg).
- Open a Pull Request (PR) to the `develop` branch.
- Include a brief description of your notebook and the analysis it performs in the PR body.
Happy hacking!
|
closed
|
2024-09-30T19:03:31Z
|
2024-11-02T07:41:49Z
|
https://github.com/OpenBB-finance/OpenBB/issues/6720
|
[
"🕹️ 300 points"
] |
piiq
| 54
|
JoeanAmier/TikTokDownloader
|
api
| 76
|
How to batch-fetch like and comment counts for all of a creator's videos
|
At the moment they can only be collected manually, one video at a time.
|
open
|
2023-10-26T07:30:33Z
|
2023-11-13T15:11:29Z
|
https://github.com/JoeanAmier/TikTokDownloader/issues/76
|
[] |
myrainbowandsky
| 1
|
serengil/deepface
|
machine-learning
| 824
|
analyze() detector_backends 'yolov8' and 'dlib' errors
|
Hey! I was just trying to test out all of the available backend detectors for the analyze function. I have managed to get them all to run except for yolov8 and dlib. Here are the errors:
shape_predictor_5_face_landmarks.dat.bz2 is going to be downloaded
Detector: dlib Error: 'content-type'
-and-
Detector: yolov8 Error: invalid detector_backend passed - yolov8
Suggestions?
For the yolov8 detector, I passed the string copied directly from the DeepFace module, which lists it as an option, so I am curious whether it is just a typo — maybe it should have no "v8", or an "n" at the end, or it is not actually valid and was simply missed in the documentation. I tried installing ultralytics, but that did not solve the issue.
For dlib, I pip-installed the package, then uninstalled and reinstalled it, pinning the version from the optional requirements file (it was the same version anyway). Searching suggests it may be a firewall issue, but I am not sure where to find the weights to download them myself, nor how to resolve such a firewall issue.
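For reproduction, this is roughly the loop I am running (a sketch; `face.jpg` is a placeholder image path):
```py
from deepface import DeepFace

backends = ["opencv", "ssd", "mtcnn", "retinaface", "dlib", "yolov8"]

for backend in backends:
    try:
        # actions trimmed to one to keep the test fast
        result = DeepFace.analyze(img_path="face.jpg", actions=["emotion"],
                                  detector_backend=backend)
        print("Detector:", backend, "OK")
    except Exception as exc:
        print("Detector:", backend, "Error:", exc)
```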
|
closed
|
2023-08-15T20:30:57Z
|
2023-08-18T15:32:52Z
|
https://github.com/serengil/deepface/issues/824
|
[
"question"
] |
Hipples
| 2
|
AutoGPTQ/AutoGPTQ
|
nlp
| 378
|
Quantization calibration data is missing an EOS token at the end
|
In the example script, the section below does not seem to append an EOS token to the end of each sample — should an EOS token be added here?
https://github.com/PanQiWei/AutoGPTQ/blob/e4b2493733d69a6e60e22cebc64b619be39feb0e/examples/quantization/quant_with_alpaca.py#L30-L40
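For illustration, the change I have in mind is roughly this (a sketch; `text` and `max_len` stand in for the script's own variables):
```py
# sketch: append EOS to each tokenized calibration sample
ids = tokenizer(text, truncation=True, max_length=max_len - 1).input_ids
ids = ids + [tokenizer.eos_token_id]
attention_mask = [1] * len(ids)
```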
|
open
|
2023-10-25T09:19:03Z
|
2023-10-26T16:46:16Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/378
|
[
"chinese"
] |
sakura-umi
| 0
|
chatopera/Synonyms
|
nlp
| 93
|
Throughput is very low — can it be optimized?
|
When using this library to look up synonyms, a single CPU core returns synonyms for only about 50 distinct words per second. For NLP tasks this is far too slow and becomes the bottleneck of data loading — could it be optimized?
|
closed
|
2019-07-28T16:32:41Z
|
2020-10-01T11:36:21Z
|
https://github.com/chatopera/Synonyms/issues/93
|
[] |
braveryCHR
| 1
|
alteryx/featuretools
|
data-science
| 2,284
|
Add primitive for 2 digit Postal Code Prefix (US-only)
|
- As a user of Featuretools, I would like to do feature engineering for Postal Codes in USA.
- I would like to extract the 2 digit prefix:

|
closed
|
2022-09-12T14:38:50Z
|
2022-11-29T20:08:15Z
|
https://github.com/alteryx/featuretools/issues/2284
|
[] |
gsheni
| 0
|
deepspeedai/DeepSpeed
|
machine-learning
| 5,579
|
[BUG] fp6 can't load qwen1.5-34b-chat
|
**Describe the bug**
NotImplementedError: Cannot copy out of meta tensor; no data!
```python
import mii

model_path = 'Qwen1.5-32B-Chat-hf'
pipe = mii.pipeline(model_path, quantization_mode='wf6af16')
response = pipe(["DeepSpeed is", "Seattle is"], max_new_tokens=128)
print(response)
```
**System info (please complete the following information):**
- OS: Ubuntu 2004]
- GPU A100
- Python 3.11
**Stack trace**


thanks for your help!
|
open
|
2024-05-29T07:32:04Z
|
2024-05-29T07:32:42Z
|
https://github.com/deepspeedai/DeepSpeed/issues/5579
|
[
"bug",
"inference"
] |
pointerhacker
| 0
|
PeterL1n/RobustVideoMatting
|
computer-vision
| 78
|
[Advice] Training in a Low RAM System
|
I am re-training this code on a system with 64GB of RAM. Do you have any recommendation for reducing the memory utilization? I've already reduced T to 4, but there is still a lot of swap usage, which is bottlenecking my training process.
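For reference, the generic PyTorch knobs that usually dominate host-RAM usage are worker count and prefetching — each DataLoader worker holds its own copy of the dataset object plus prefetched batches. A sketch (variable names assumed, not RVM's actual config):
```py
from torch.utils.data import DataLoader

loader = DataLoader(
    dataset,            # your training dataset
    batch_size=1,
    num_workers=2,      # fewer workers -> less duplicated host memory
    prefetch_factor=2,  # fewer prefetched batches held per worker
    pin_memory=False,   # pinned host buffers cost extra RAM
)
```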
|
closed
|
2021-10-12T17:52:34Z
|
2021-10-14T14:58:30Z
|
https://github.com/PeterL1n/RobustVideoMatting/issues/78
|
[] |
SamHSlva
| 2
|
aleju/imgaug
|
deep-learning
| 705
|
ValueError with Color Temperature Augmenter
|
I am using 0.4.0 installed from conda-forge and receive a ValueError in the last step of transform_kelvins_to_rgb_multipliers() in color.py when attempting to augment a batch of images. If I reshape and tile the "interpolation_factors" array, the augmenter works as expected. Is this a bug, or am I using the augmenter incorrectly?
|
open
|
2020-07-29T20:06:07Z
|
2020-09-18T03:39:50Z
|
https://github.com/aleju/imgaug/issues/705
|
[] |
wjacobward
| 1
|
davidsandberg/facenet
|
tensorflow
| 527
|
how to set optional parameters for slim.batch_norm
|
Here is my batch_norm_params dict, which is then fed into normalizer_params.

However, when I print tf.trainable_variables, the BN variables include only mean, variance, and beta — gamma is missing.

How can I change the default settings — for example, to add gamma, or to keep only the mean and variance?
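For reference, `slim.batch_norm` exposes `center` (beta) and `scale` (gamma) flags, and gamma is disabled by default (`scale=False`). A sketch of enabling it through the params dict, assuming the usual arg_scope/normalizer_params wiring:
```py
batch_norm_params = {
    'decay': 0.995,
    'epsilon': 0.001,
    'center': True,   # learn beta (the default)
    'scale': True,    # learn gamma (off by default, hence it was missing)
    # set center=False and scale=False to keep only mean and variance
    'updates_collections': None,
}
```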
|
open
|
2017-11-14T06:57:37Z
|
2017-11-14T06:57:37Z
|
https://github.com/davidsandberg/facenet/issues/527
|
[] |
patienceFromZhou
| 0
|
jpadilla/django-rest-framework-jwt
|
django
| 334
|
Authenticate against custom user model
|
I see PR (Feature/allow custom user identifier and custom user lookup field #211).
Is there currently a way to authenticate a token against a custom user model?
If so, please point me to the docs.
Thank you,
Michaela
|
open
|
2017-05-15T18:55:11Z
|
2017-08-27T09:59:16Z
|
https://github.com/jpadilla/django-rest-framework-jwt/issues/334
|
[] |
michaelaelise
| 2
|
Lightning-AI/pytorch-lightning
|
machine-learning
| 20,094
|
Please allow automatic optimization for multiple optimizers again.
|
### Description & Motivation
I'm suggesting allowing the old behavior to work again, while still giving users the option to use the new behavior by setting `self.automatic_optimization=False`. The old API was well designed and allowed extremely simple implementations, especially in situations where the training step is the same for each optimizer being used (i.e., no optimizer_idx if-statement).
### Pitch
The original purpose of PyTorch Lightning was to simplify and eliminate the boilerplate in the PyTorch training loop. But the new behavior is **much more complicated than even using base PyTorch**, since it requires extra bloat like `self.automatic_optimization=False`, `self.toggle_optimizer()`, `self.untoggle_optimizer()`, and `self.optimizers()`, plus custom rewrites of well-known base PyTorch APIs like `self.manual_backward()`, in addition to reintroducing the boilerplate that PyTorch Lightning was made to remove.
As a matter of fact **in the simplest case it adds 12 additional lines of bloat...**
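For concreteness, the manual-optimization boilerplate being criticized looks roughly like this in current Lightning (a sketch; the two loss computations are placeholders):
```py
import torch
import lightning as L

class TwoOptimizerModule(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # now required for multiple optimizers

    def configure_optimizers(self):
        opt_a = torch.optim.Adam(self.parameters(), lr=1e-3)
        opt_b = torch.optim.Adam(self.parameters(), lr=1e-3)
        return opt_a, opt_b

    def training_step(self, batch, batch_idx):
        opt_a, opt_b = self.optimizers()

        self.toggle_optimizer(opt_a)
        loss_a = self.compute_loss_a(batch)  # placeholder
        opt_a.zero_grad()
        self.manual_backward(loss_a)
        opt_a.step()
        self.untoggle_optimizer(opt_a)

        self.toggle_optimizer(opt_b)
        loss_b = self.compute_loss_b(batch)  # placeholder
        opt_b.zero_grad()
        self.manual_backward(loss_b)
        opt_b.step()
        self.untoggle_optimizer(opt_b)
```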
### Alternatives
_No response_
### Additional context
Could you at least consider collecting user feedback before you remove useful features like this in the future?
cc @borda
|
open
|
2024-07-16T02:49:09Z
|
2024-07-19T16:45:13Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20094
|
[
"feature",
"discussion"
] |
profPlum
| 2
|
deezer/spleeter
|
tensorflow
| 273
|
[Bug] Failed to load the native TensorFlow runtime.
|
## Description
Every time I try to make a split, it always says "Failed to load the native TensorFlow runtime". Can someone help me with this?
## Step to reproduce
1. Ran the `separate` command
2. Pressed enter
3. Got `Failed to load the native TensorFlow` error
## Output
```
PS C:\Users\Purple Flippy\Music> spleeter separate -i 'song.wav' -p spleeter:4stems -o splits
Traceback (most recent call last):
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Purple Flippy\AppData\Local\Programs\Python\Python37\Scripts\spleeter.exe\__main__.py", line 9, in <module>
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
main(sys.argv)
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\site-packages\spleeter\__main__.py", line 36, in main
enable_logging()
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\site-packages\spleeter\utils\logging.py", line 60, in enable_logging
tf_logger = get_tensorflow_logger()
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\site-packages\spleeter\utils\logging.py", line 27, in get_tensorflow_logger
from tensorflow.compat.v1 import logging
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\site-packages\tensorflow\__init__.py", line 28, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "c:\users\purple flippy\appdata\local\programs\python\python37\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
```
## Environment
| | |
| ----------------- | ------------------------------- |
| OS | Windows 10 |
| Installation type | Powershell |
| RAM available | 16GB |
| Hardware spec | GPU / CPU / etc ... |
## Additional context
|
closed
|
2020-02-15T23:21:02Z
|
2020-04-05T12:09:00Z
|
https://github.com/deezer/spleeter/issues/273
|
[
"bug",
"invalid"
] |
ghost
| 5
|
FlareSolverr/FlareSolverr
|
api
| 274
|
[yggtorrent] Exception (yggtorrent): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Cloudflare Error: Cloudflare has blocked this request. Probably your IP is banned for this site, check in your web browser.: Parse error (Test)
|
### Environment
* **FlareSolverr version**: 2.1.0
* **Operating system**: Debian 10
* **Are you using Docker**: yes
* **Are you using a proxy or VPN?** no
* **Are you using Captcha Solver:** no
### Description
Hello, FlareSolverr suddenly stopped working for one of my indexers. Can you please help me?
### Error Messages
FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Cloudflare Error: Cloudflare has blocked this request. Probably your IP is banned for this site, check in your web browser.
### Screenshots

|
closed
|
2021-12-30T23:57:05Z
|
2022-01-09T14:07:23Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/274
|
[] |
MozkaGit
| 2
|
Evil0ctal/Douyin_TikTok_Download_API
|
fastapi
| 62
|
API Test
|
Image gallery link:
https://www.tiktok.com/@pertcghy/video/7113056553556561195
|
closed
|
2022-08-08T23:44:10Z
|
2022-08-09T01:20:59Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/62
|
[] |
Evil0ctal
| 0
|
d2l-ai/d2l-en
|
computer-vision
| 2,056
|
multi-head Attention code has a big problem.
|
I only checked the pytorch version.
```python
class MultiHeadAttention(nn.Module):
    """Multi-head attention.

    Defined in :numref:`sec_multihead-attention`"""
    def __init__(self, key_size, query_size, value_size, num_hiddens,
                 num_heads, dropout, bias=False, **kwargs):
        super(MultiHeadAttention, self).__init__(**kwargs)
        self.num_heads = num_heads
        self.attention = d2l.DotProductAttention(dropout)
        self.W_q = nn.Linear(query_size, num_hiddens, bias=bias)  # should not be `query_size`
        self.W_k = nn.Linear(key_size, num_hiddens, bias=bias)    # should not be `key_size`
        self.W_v = nn.Linear(value_size, num_hiddens, bias=bias)  # should not be `value_size`
        self.W_o = nn.Linear(num_hiddens, num_hiddens, bias=bias)

    def forward(self, queries, keys, values, valid_lens):
        # Shape of `queries`, `keys`, or `values`:
        # (`batch_size`, no. of queries or key-value pairs, `num_hiddens`)
        # Shape of `valid_lens`:
        # (`batch_size`,) or (`batch_size`, no. of queries)
        # After transposing, shape of output `queries`, `keys`, or `values`:
        # (`batch_size` * `num_heads`, no. of queries or key-value pairs,
        # `num_hiddens` / `num_heads`)
        queries = transpose_qkv(self.W_q(queries), self.num_heads)  # here, the last dim of queries is num_hiddens!
        keys = transpose_qkv(self.W_k(keys), self.num_heads)
        values = transpose_qkv(self.W_v(values), self.num_heads)

        if valid_lens is not None:
            # On axis 0, copy the first item (scalar or vector) for
            # `num_heads` times, then copy the next item, and so on
            valid_lens = torch.repeat_interleave(
                valid_lens, repeats=self.num_heads, dim=0)

        # Shape of `output`: (`batch_size` * `num_heads`, no. of queries,
        # `num_hiddens` / `num_heads`)
        output = self.attention(queries, keys, values, valid_lens)

        # Shape of `output_concat`:
        # (`batch_size`, no. of queries, `num_hiddens`)
        output_concat = transpose_output(output, self.num_heads)
        return self.W_o(output_concat)
```
When training, if you change `num_hiddens` from 32 to 64, you will get "RuntimeError: mat1 dim 1 must match mat2 dim 0".
After debugging, I found that in the MultiHeadAttention block's `forward` function, the shape of the input X is
(`batch_size`, no. of queries or key-value pairs, `num_hiddens`),
i.e. `num_hiddens` is the last dimension.
But `self.W_q = nn.Linear(query_size, num_hiddens, bias=bias)` means the input dimension of W_q is `query_size`!
So with this code you always have to make `num_hiddens = query_size` for it to run, which is obviously wrong.
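A quick way to see the mismatch (a sketch, following the d2l test snippet's conventions):
```python
import torch
from d2l import torch as d2l

# query/key/value size 32 but num_hiddens 64: W_q expects inputs of size 32,
# yet the pipeline feeds tensors whose last dim is num_hiddens (64)
attention = d2l.MultiHeadAttention(key_size=32, query_size=32, value_size=32,
                                   num_hiddens=64, num_heads=4, dropout=0.5)
X = torch.ones((2, 4, 64))
attention(X, X, X, valid_lens=None)  # RuntimeError: mat1 dim 1 must match mat2 dim 0
```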
#######################################################
My suggestion is to change `self.W_q = nn.Linear(query_size, num_hiddens, bias=bias)` to `self.W_q = nn.Linear(num_hiddens, num_hiddens, bias=bias)`, but there may be another solution.
If my understanding is wrong, please correct me.
d2l is wonderful, for sure.
P.S. Building one large single-head attention and then bending it into multi-head is not elegant; it would be much better if you could find another solution.
|
open
|
2022-03-03T18:25:48Z
|
2022-04-19T12:05:00Z
|
https://github.com/d2l-ai/d2l-en/issues/2056
|
[] |
Y-H-Joe
| 2
|
thewhiteh4t/pwnedOrNot
|
api
| 44
|
Emails and password
|
closed
|
2020-07-07T12:29:31Z
|
2020-07-07T12:31:37Z
|
https://github.com/thewhiteh4t/pwnedOrNot/issues/44
|
[
"invalid"
] |
33DarkStar
| 0
|
|
airtai/faststream
|
asyncio
| 1,311
|
Implement explicit methods annotations in Kafka and confluent brokers (Refer RabbitBroker)
|
closed
|
2024-03-18T07:04:15Z
|
2024-04-15T06:03:38Z
|
https://github.com/airtai/faststream/issues/1311
|
[] |
davorrunje
| 1
|
|
tflearn/tflearn
|
data-science
| 484
|
Multiple outputs with weighted loss?
|
The overall objective would be something like `loss = loss_output1 + weight2 * loss_output2`.
Is there any way to implement this under the current framework?
Thanks,
Lisheng
|
open
|
2016-11-27T22:48:15Z
|
2016-12-29T18:38:03Z
|
https://github.com/tflearn/tflearn/issues/484
|
[] |
fufrank5
| 3
|
encode/databases
|
sqlalchemy
| 404
|
url parse error
|
Hi, my url is like this
```
mysql://jydb:G2W9iPwpAqF4R#202@rm-wz9s90lao15s6j4v2ro.mysql.rds.aliyuncs.com:3306/jydb
```
The database password contains `#`, but the parser splits the URL into two pieces at that character. What can I do in this situation, other than changing the password? Thanks.
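One workaround is to percent-encode the password when building the URL, since `#` starts the fragment part of a URL — a sketch:
```py
from urllib.parse import quote

password = quote("G2W9iPwpAqF4R#202", safe="")  # -> 'G2W9iPwpAqF4R%23202'
url = f"mysql://jydb:{password}@rm-wz9s90lao15s6j4v2ro.mysql.rds.aliyuncs.com:3306/jydb"
```
This assumes the URL parser decodes percent-encoded credentials before handing them to the driver.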
|
closed
|
2021-10-08T10:14:15Z
|
2021-10-12T09:11:06Z
|
https://github.com/encode/databases/issues/404
|
[
"question"
] |
szj2ys
| 2
|
gunthercox/ChatterBot
|
machine-learning
| 1,419
|
Time consumption for training is very high(reading from excel,55000 conversations)
|
I'm training on 55,000 question/answer pairs read from an Excel file with two columns, and the time required for training is very high. Is there a way to reduce the training time?
```python
alexa_bot = ChatBot("alexa_bot",
                    trainer='chatterbot.trainers.ListTrainer',
                    storage_adapter='chatterbot.storage.MongoDatabaseAdapter',
                    database='Alexa_db')
# test_bot.storage.drop()
alexa_bot.set_trainer(ListTrainer)
alexa_data = pd.read_excel("Alexa_train_data.xlsx", encoding='latin1')
for index, row in alexa_data.iterrows():
    alexa_bot.train([row["Question"], row['Answer']])
```
|
closed
|
2018-09-21T13:28:46Z
|
2019-08-07T23:45:17Z
|
https://github.com/gunthercox/ChatterBot/issues/1419
|
[
"answered"
] |
tghv
| 2
|
piskvorky/gensim
|
data-science
| 2,985
|
Bug report of gensim official webpage
|
https://radimrehurek.com/gensim/auto_examples/core/run_similarity_queries.html#sphx-glr-auto-examples-core-run-similarity-queries-py
On this page, in the second-to-last code block, the original code is:
```python
sims = sorted(enumerate(sims), key=lambda item: -item[1])
for i, s in enumerate(sims):
    print(s, documents[i])
```
However, I think the code should be:
```python
sims = sorted(enumerate(sims), key=lambda item: -item[1])
for i, s in enumerate(sims):
    print(s, documents[s[0]])
```
|
closed
|
2020-10-19T06:31:07Z
|
2020-10-19T08:09:17Z
|
https://github.com/piskvorky/gensim/issues/2985
|
[] |
yftadyz
| 1
|
explosion/spaCy
|
machine-learning
| 12,072
|
How to get confidence score for each entity for custom NER model?
|
* spaCy Version Used: 3.1
How to get confidence score for each entity for custom NER model?
|
closed
|
2023-01-09T06:27:20Z
|
2023-01-10T12:12:13Z
|
https://github.com/explosion/spaCy/issues/12072
|
[
"usage",
"feat / ner"
] |
koyelseba
| 0
|