| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
raphaelvallat/pingouin
|
pandas
| 428
|
Mann-Whitney U (mwu): the computation of rank-biserial correlation (RBC) is problematic
|
Hi there,
I found that the computation of rank-biserial correlation (RBC) is problematic. This is related to https://github.com/raphaelvallat/pingouin/issues/417 and https://github.com/raphaelvallat/pingouin/pull/424.
According to [the cited paper](https://journals.sagepub.com/doi/pdf/10.2466/11.IT.3.1), there are three ways to compute RBC. It seems you adopted the third method, from Hans Wendt (1972): r = 1 - (2U)/(n1 * n2). According to the paper, U is the smaller of U1 and U2:
> Finding the test statistic U requires two steps. First, compute the number of favorable and unfavorable pairs; or what is the same thing, compute U1 and U2, as defined in Equations 1 and 2. Second, select the smaller of the two numbers; this smaller number is the test statistic U.
According to [SciPy](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mannwhitneyu.html),
> the Mann-Whitney U statistic corresponding with sample x; If U1 is the statistic corresponding with sample x, then the statistic corresponding with sample y is U2 = x.shape[axis] * y.shape[axis] - U1.
It seems that the returned `U` is not the smaller of `U1` and `U2`, which can result in an RBC value that is negative (according to the paper, it should always be positive). My experiments also demonstrate this.
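For illustration, here is a minimal sketch (not pingouin's actual code; the sample arrays are made up) of how taking U = min(U1, U2) keeps the RBC non-negative:
```python
# Minimal sketch, assuming SciPy's convention that `statistic` is U1 for x.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
x, y = rng.random(20), rng.random(25) + 0.3
n1, n2 = len(x), len(y)

u1 = mannwhitneyu(x, y).statistic  # U statistic for sample x
u2 = n1 * n2 - u1                  # complementary statistic for sample y
u = min(u1, u2)                    # the paper's test statistic U

rbc = 1 - (2 * u) / (n1 * n2)      # non-negative when U = min(U1, U2)
print(rbc)
```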
Thanks!
|
closed
|
2024-07-16T00:05:26Z
|
2025-03-14T15:06:41Z
|
https://github.com/raphaelvallat/pingouin/issues/428
|
[
"invalid :triangular_flag_on_post:"
] |
mmpeng9
| 1
|
WZMIAOMIAO/deep-learning-for-image-processing
|
pytorch
| 535
|
How can Mask R-CNN be used for object detection only?
|
Hello, how can I use the Mask R-CNN code with my own training set (which I already ran successfully with your Faster R-CNN code) to do object detection only, without image segmentation?
|
closed
|
2022-04-25T03:03:15Z
|
2022-04-27T07:14:21Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/535
|
[] |
XylonXu01
| 4
|
litestar-org/litestar
|
pydantic
| 3,340
|
Bug: cannot start server with `PydanticPlugin` if using `pydantic == 1.x.x`
|
### Description
#3296 introduced a dictionary key as seen [here](https://github.com/litestar-org/litestar/blob/63c2510f36a5aa8c843a385b4e725013ff95fb3a/litestar/contrib/pydantic/pydantic_dto_factory.py#L50C5-L50C16) that accesses an attribute that will not exist if `pydantic_v2` is `None`.
This makes server startup impossible.
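A hypothetical sketch of the failure mode and a guard that would avoid it (the `overrides` name is illustrative, not litestar's actual code):
```python
# Sketch: the v2 import falls back to a sentinel when pydantic 2 is absent,
# so a module-level lookup of a v2-only attribute raises AttributeError.
try:
    import pydantic as pydantic_v2  # only meaningful when pydantic >= 2 is installed
except ImportError:
    pydantic_v2 = None

overrides = {}  # illustrative name
if pydantic_v2 is not None and hasattr(pydantic_v2, "JsonValue"):
    overrides[pydantic_v2.JsonValue] = object  # JsonValue exists only in pydantic v2
```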
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
```bash
Traceback (most recent call last):
File "/usr/local/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/python3.11/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/developer/.local/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 78, in subprocess_started
target(sockets=sockets)
File "/home/developer/.local/lib/python3.11/site-packages/uvicorn/server.py", line 65, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
File "/home/developer/.local/lib/python3.11/site-packages/uvicorn/server.py", line 69, in serve
await self._serve(sockets)
File "/home/developer/.local/lib/python3.11/site-packages/uvicorn/server.py", line 76, in _serve
config.load()
File "/home/developer/.local/lib/python3.11/site-packages/uvicorn/config.py", line 433, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/developer/.local/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/app/backend/src/main.py", line 4, in <module>
from litestar.contrib.pydantic import PydanticPlugin
File "/home/developer/.local/lib/python3.11/site-packages/litestar/contrib/pydantic/__init__.py", line 8, in <module>
from .pydantic_dto_factory import PydanticDTO
File "/home/developer/.local/lib/python3.11/site-packages/litestar/contrib/pydantic/pydantic_dto_factory.py", line 50, in <module>
pydantic_v2.JsonValue: Any,
^^^^^^^^^^^^^^^^^^^^^
AttributeError: '_EmptyEnum' object has no attribute 'JsonValue'
```
### Litestar Version
2.8.0
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
|
closed
|
2024-04-08T10:25:29Z
|
2025-03-20T15:54:33Z
|
https://github.com/litestar-org/litestar/issues/3340
|
[
"Bug :bug:"
] |
LonelyVikingMichael
| 2
|
gto76/python-cheatsheet
|
python
| 190
|
Python
|
closed
|
2025-01-21T06:50:53Z
|
2025-01-21T14:12:55Z
|
https://github.com/gto76/python-cheatsheet/issues/190
|
[] |
zaintariiq
| 0
|
|
wagtail/wagtail
|
django
| 12,466
|
Documentation - Update page titles to align better to writing style guide
|
Our writing style guide advises we should align with the Google developer documentation style guide. However, some page titles are still using an inconsistent style.
* https://docs.wagtail.org/en/latest/contributing/documentation_guidelines.html#writing-style-guide
* https://developers.google.com/style/headings
### Pertinent section of the Wagtail docs
There are a few pages that have inconsistent main titles that do not adhere to the usage of sentence case.
> Use sentence case for headings and titles.
These are minor changes but help us present a consistent tone in our documentation (at least in the TOC - Table of Contents).
### Details
These are the ones I have found; there could be others.
| URL | Change to make |
|-|-|
| https://docs.wagtail.org/en/latest/advanced_topics/images/feature_detection.html | `Feature detection` (lower case d) |
| https://docs.wagtail.org/en/latest/advanced_topics/api/v2/configuration.html | `Wagtail API v2 configuration guide` (lower case c & g) |
| https://docs.wagtail.org/en/latest/advanced_topics/api/v2/usage.html | `Wagtail API v2 usage guide` (lower case u & g) |
We do have the page [Managing the Reference Index](https://docs.wagtail.org/en/latest/advanced_topics/reference_index.html); we either need to change the title to lower case 'reference index' or update the content to consistently use the proper noun 'Reference Index'.
### Working on this
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
|
closed
|
2024-10-25T01:16:20Z
|
2024-10-27T05:27:05Z
|
https://github.com/wagtail/wagtail/issues/12466
|
[
"Documentation",
"good first issue"
] |
lb-
| 2
|
MagicStack/asyncpg
|
asyncio
| 765
|
execute_script function
|
Add an `execute_script` function like the `executescript` method in the sqlite3 connector.
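For what it's worth, a minimal sketch of such a helper, assuming asyncpg's documented behavior that `Connection.execute()` can run multiple semicolon-separated statements when called without arguments (the DSN and SQL are made up):
```python
import asyncio
import asyncpg

async def execute_script(conn: asyncpg.Connection, script: str) -> None:
    # Argument-free execute() uses the simple query protocol, which
    # accepts a multi-statement SQL script in one call.
    await conn.execute(script)

async def main() -> None:
    conn = await asyncpg.connect("postgresql://localhost/test")  # assumed DSN
    await execute_script(conn, "CREATE TABLE t(a int); INSERT INTO t VALUES (1);")
    await conn.close()

asyncio.run(main())
```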
|
closed
|
2021-06-02T16:37:00Z
|
2021-06-02T17:06:04Z
|
https://github.com/MagicStack/asyncpg/issues/765
|
[] |
Olivier-Berg
| 2
|
danimtb/dasshio
|
dash
| 115
|
README.md regarding Quick Setup
|
Would it be possible to modify the README.md file regarding the setup, to make it simpler for new and non-professional users? It would make installation much easier for new users, I think.
_
> To install this add-on, please, follow Home Assistant documentation on how to [Install Third-party Add-ons](https://home-assistant.io/hassio/installing_third_party_addons/)
> The repository URL to use for copy and paste is: "please provide URL of dashhio add-on Repo here"
_
instead of only:
> _To install this add-on, please, follow Home Assistant documentation on how to [Install Third-party Add-ons](https://home-assistant.io/hassio/installing_third_party_addons/)_
|
open
|
2024-03-29T15:56:13Z
|
2024-03-29T15:56:13Z
|
https://github.com/danimtb/dasshio/issues/115
|
[] |
gh47110815
| 0
|
pywinauto/pywinauto
|
automation
| 1,244
|
Seems to have found a bug in treeview
|
I really like this Python library; it provides better determinism than libraries based on image manipulation. But I seem to have found a bug in the win32 backend's treeview handling: when calling click(where='check') on a _treeview_element, the checkbox that gets clicked belongs to another item above this one, not to the item itself. When I print the client_rect() of both, their coordinates differ, yet before the click() call, text() prints exactly the item I want, which confuses me. How do I troubleshoot this problem further?
My OS is Win10, the Python version is 3.8, and pywinauto is the latest.
The following is my code:
```python
for root in treeview.roots():
    if root.text().strip() == '茶':
        root.click(where='check')
        print('clicked')
        print(root.is_checked())
        break
```
|
closed
|
2022-09-21T10:30:24Z
|
2022-09-23T23:45:25Z
|
https://github.com/pywinauto/pywinauto/issues/1244
|
[] |
bluebad
| 1
|
encode/databases
|
sqlalchemy
| 429
|
broken installation instructions for non-default db drivers
|
Via readme.md:
```
You can also use other database drivers supported by databases:
$ pip install databases[postgresql+aiopg]
$ pip install databases[mysql+asyncmy]
```
Is there any python/pip combination for which this works? I swear I've seen the same plus notation somewhere before, but I'm pretty sure it doesn't work:
```
docker run --rm python:3.9 bash -exc "pip install -U pip==21.3.1; pip install 'databases[postgresql+aiopg]'"
+ pip install -U pip==21.3.1
Collecting pip==21.3.1
Downloading pip-21.3.1-py3-none-any.whl (1.7 MB)
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 21.2.4
Uninstalling pip-21.2.4:
Successfully uninstalled pip-21.2.4
Successfully installed pip-21.3.1
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
+ pip install 'databases[postgresql+aiopg]'
ERROR: Exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/pip/_vendor/packaging/requirements.py", line 102, in __init__
req = REQUIREMENT.parseString(requirement_string)
File "/usr/local/lib/python3.9/site-packages/pip/_vendor/pyparsing.py", line 1955, in parseString
raise exc
File "/usr/local/lib/python3.9/site-packages/pip/_vendor/pyparsing.py", line 3814, in parseImpl
raise ParseException(instring, loc, self.errmsg, self)
pip._vendor.pyparsing.ParseException: Expected stringEnd, found '[' (at char 11), (line:1, col:12)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/pip/_internal/cli/base_command.py", line 164, in exc_logging_wrapper
status = run_func(*args)
File "/usr/local/lib/python3.9/site-packages/pip/_internal/cli/req_command.py", line 205, in wrapper
return func(self, options, args)
File "/usr/local/lib/python3.9/site-packages/pip/_internal/commands/install.py", line 305, in run
reqs = self.get_requirements(args, options, finder, session)
File "/usr/local/lib/python3.9/site-packages/pip/_internal/cli/req_command.py", line 380, in get_requirements
req_to_add = install_req_from_line(
File "/usr/local/lib/python3.9/site-packages/pip/_internal/req/constructors.py", line 366, in install_req_from_line
parts = parse_req_from_line(name, line_source)
File "/usr/local/lib/python3.9/site-packages/pip/_internal/req/constructors.py", line 306, in parse_req_from_line
extras = convert_extras(extras_as_string)
File "/usr/local/lib/python3.9/site-packages/pip/_internal/req/constructors.py", line 58, in convert_extras
return get_requirement("placeholder" + extras.lower()).extras
File "/usr/local/lib/python3.9/site-packages/pip/_internal/utils/packaging.py", line 84, in get_requirement
return Requirement(req_string)
File "/usr/local/lib/python3.9/site-packages/pip/_vendor/packaging/requirements.py", line 104, in __init__
raise InvalidRequirement(
pip._vendor.packaging.requirements.InvalidRequirement: Parse error at "'[postgre'": Expected stringEnd
```
|
closed
|
2021-11-26T00:14:56Z
|
2022-01-18T10:08:08Z
|
https://github.com/encode/databases/issues/429
|
[
"bug"
] |
chrono
| 1
|
ultralytics/yolov5
|
pytorch
| 13,387
|
Negative weights when using Macbook M1 with MPS
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Detection
### Bug
Hi,
Indeed I am using the latest code from git (main branch).
I re-ran the training using this:
/cam-analyzer/pytorch_metal_env/bin/python ../yolov5/train.py --img 416 --batch 16 --epochs 100 --data ../yolo_cfg/data.yaml --weights yolov5m.pt --device mps
I re-checked my "train" and "val" folders. They look ok and the files look fine. I created the labels using YoloLabel app (open source).
I ran the detection using this command:
/cam-analyzer/pytorch_metal_env/bin/python detect.py --save-txt --save-conf --weights /Users/user/Documents/dev/yolo_model/cam_analyzer/weights/best.pt --source /Users/user/Documents/dev/yolo_images --conf 0.1 --project /Users/columbo/Documents/dev/yolo_results --name detections --device mps
I am using an M1 Max with 64GB of memory and have enabled GPU (MPS) support.
### Environment
- Yolo 5
- Python 3.13.0
### Minimal Reproducible Example
`/cam-analyzer/pytorch_metal_env/bin/python detect.py --save-txt --save-conf --weights /Users/user/Documents/dev/yolo_model/cam_analyzer/weights/best.pt --source /Users/user/Documents/dev/yolo_images --conf 0.1 --project /Users/columbo/Documents/dev/yolo_results --name detections --device mps`
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
open
|
2024-10-26T22:48:07Z
|
2024-11-11T16:14:24Z
|
https://github.com/ultralytics/yolov5/issues/13387
|
[
"bug"
] |
guybashan
| 4
|
python-visualization/folium
|
data-visualization
| 1,199
|
Strange behaviour using multiple tilelayers and control layer
|
#### Please add a code sample or a nbviewer link, copy-pastable if possible
```python
# Executed within a Jupyter notebook cell
import folium
mymap = folium.Map(location=(39.472997, -6.370755), zoom_start=10,
tiles='OpenStreetMap', attr='Javi')
another_tile = folium.TileLayer('cartodbdark_matter', attr='Javi')
another_tile.layer_name = "darker"
another_tile.add_to(mymap)
folium.map.LayerControl().add_to(mymap)
# If one leaves the next line commented out, the map renders as expected
# If one uncomments it, the map will not have any tile layer active by default
# mymap.render()
mymap
# If one saves to html, the file will have no default active tile layer
# mypath = '~'
# mymap.save(mypath)
```
#### Problem description
Hi, I found this issue while I was working on my maps in a Jupyter notebook and then saving the result as HTML.
The behavior I think is wrong (or weird) is that after executing the map's render function, when I have multiple tile layers and I try to show the map again in HTML or within a Jupyter cell (I think both make use of render), all the layers are unchecked in the layer control, so I have to click on one of them.
So I share two screenshots:
https://drive.google.com/open?id=1qbtwMI2ZlwzcYlk0LEqvMvaB_MtUzFHa
https://drive.google.com/open?id=1V-T079qvUOeeol9IuZ4u9chAwWcSLqWr
#### Expected Output
So, the expected behavior is as on the first call to render, i.e. one layer is checked by default in the layer control.
#### Output of ``folium.__version__``
Working with:
Python 3.6.7
appdirs==1.4.3
attrs==19.1.0
backcall==0.1.0
bleach==3.1.0
boto==2.49.0
boto3==1.9.205
botocore==1.12.205
branca==0.3.1
cached-property==1.5.1
cachetools==3.1.0
certifi==2019.3.9
chardet==3.0.4
Click==7.0
click-plugins==1.1.1
cligj==0.5.0
colour==0.1.5
cycler==0.10.0
decorator==4.4.0
defusedxml==0.5.0
dill==0.2.9
docutils==0.14
entrypoints==0.3
Fiona==1.8.6
folium==0.10.0
geopandas==0.4.1
google-auth==1.6.3
google-auth-oauthlib==0.3.0
googleads==18.1.0
googlemaps==3.0.2
gspread==3.1.0
gspread-dataframe==3.0.2
holoviews==1.12.1
httplib2==0.13.0
idna==2.8
ipykernel==5.1.0
ipython==7.4.0
ipython-genutils==0.2.0
ipywidgets==7.4.2
isodate==0.6.0
jedi==0.13.3
Jinja2==2.10.1
jmespath==0.9.4
joblib==0.13.2
jsonschema==3.0.1
jupyter==1.0.0
jupyter-client==5.2.4
jupyter-console==6.0.0
jupyter-core==4.4.0
jupyterlab==0.35.4
jupyterlab-server==0.2.0
kiwisolver==1.0.1
lxml==4.3.3
MarkupSafe==1.1.1
matplotlib==3.0.3
mistune==0.8.4
multiprocess==0.70.7
munch==2.3.2
mysql==0.0.2
mysqlclient==1.4.2.post1
nbconvert==5.4.1
nbformat==4.4.0
notebook==5.7.8
numpy==1.16.2
oauth2client==4.1.3
oauthlib==3.0.1
pandas==0.24.2
pandocfilters==1.4.2
param==1.9.0
parso==0.4.0
patsy==0.5.1
pexpect==4.7.0
pickleshare==0.7.5
polyline==1.3.2
prometheus-client==0.6.0
prompt-toolkit==2.0.9
psycopg2-binary==2.8.2
ptyprocess==0.6.0
pyasn1==0.4.5
pyasn1-modules==0.2.4
pycodestyle==2.5.0
Pygments==2.3.1
PyMySQL==0.9.3
pyparsing==2.4.0
pyproj==2.1.3
pyrsistent==0.14.11
python-dateutil==2.8.0
python-Levenshtein==0.12.0
pytz==2019.1
pyviz-comms==0.7.2
PyYAML==5.1
pyzmq==18.0.1
qtconsole==4.4.3
requests==2.21.0
requests-oauthlib==1.2.0
requests-toolbelt==0.9.1
rsa==4.0
Rtree==0.8.3
s3transfer==0.2.1
scikit-learn==0.21.2
scipy==1.2.1
seaborn==0.9.0
Send2Trash==1.5.0
Shapely==1.6.4.post2
six==1.12.0
SQLAlchemy==1.3.3
statsmodels==0.9.0
suds-jurko==0.6
terminado==0.8.2
testpath==0.4.2
tornado==6.0.2
tqdm==4.31.1
traitlets==4.3.2
uk-postcode-utils==1.0
urllib3==1.24.1
wcwidth==0.1.7
webencodings==0.5.1
widgetsnbextension==3.4.2
xlrd==1.2.0
xlwt==1.3.0
xmltodict==0.12.0
zeep==3.3.
|
closed
|
2019-08-16T08:59:22Z
|
2022-11-30T16:01:40Z
|
https://github.com/python-visualization/folium/issues/1199
|
[
"bug"
] |
jmaralc
| 7
|
aleju/imgaug
|
machine-learning
| 179
|
Update the plotting function to show images?
|
Hi, great repository and great work.
However, when using it, I found that it relies on `scipy` to show images.
And according to the `scipy` documentation for [scipy.misc.imshow](https://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.imshow.html), `scipy`'s `imshow()` is deprecated.
```
DeprecationWarning: `imshow` is deprecated!
`imshow` is deprecated in SciPy 1.0.0, and will be removed in 1.2.0.
Use ``matplotlib.pyplot.imshow`` instead.
after removing the cwd from sys.path.
```
Also, we need to install `Pillow` to use this function.
```
Simple showing of an image through an external viewer.
This function is only available if Python Imaging Library (PIL) is installed.
Uses the image viewer specified by the environment variable SCIPY_PIL_IMAGE_VIEWER, or if that is not defined then see, to view a temporary file generated from array data.
```
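A minimal sketch of the suggested replacement, using matplotlib.pyplot.imshow on a dummy array instead of the deprecated scipy helper:
```python
import matplotlib.pyplot as plt
import numpy as np

image = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)  # dummy RGB image
plt.imshow(image)
plt.axis('off')
plt.show()
```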
|
closed
|
2018-09-10T19:06:59Z
|
2018-10-28T20:46:17Z
|
https://github.com/aleju/imgaug/issues/179
|
[] |
iphyer
| 9
|
ydataai/ydata-profiling
|
pandas
| 1,579
|
Report is too large for any browser to render
|
### Current Behaviour
After running the profiler on ~41k records with maybe 100 or so fields, it produces an HTML report so large that no browser can render it. There was no error, all the data is there, and all the SVGs are produced. It's just that the report tries to render as a _single page_. Tested with Chrome, Firefox, Brave, Edge, and a couple of others; they all crash because they're trying to render something like 80 MB of HTML and SVG. I would recommend breaking up the single-page report into many documents that can be linked together, more like a website, or some other technique besides shoving 80 MB into a single page. It doesn't work.
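As a stopgap, a minimal sketch of the library's minimal mode, which disables the most expensive sections and shrinks the HTML considerably (file names are assumptions):
```python
import pandas as pd
from ydata_profiling import ProfileReport

df = pd.read_csv("data.csv")               # assumed input
profile = ProfileReport(df, minimal=True)  # trimmed-down report
profile.to_file("report.html")
```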
### Expected Behaviour
the report is viewable
### Data Description
41k records X 100 fields
### Code that reproduces the bug
_No response_
### pandas-profiling version
4.7.0
### Dependencies
```Text
pandas==2.2.1
```
### OS
macos and linux
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
|
closed
|
2024-05-01T14:30:09Z
|
2024-05-06T17:29:48Z
|
https://github.com/ydataai/ydata-profiling/issues/1579
|
[
"information requested ❔"
] |
kstech-roadie
| 1
|
pydata/pandas-datareader
|
pandas
| 918
|
Alpha Vantage Daily Close Prices Not Consistent
|
I'm using this package for an algobot to download daily close prices after the market closes and perform calculations to determine whether to buy or sell. My bot runs in the cloud every day at 5PM EST (to ensure Alpha Vantage has the latest daily OHLCV data). When I download the OHLCV data for one stock from 5 years ago to today, it returns prices that are all accurate, but other stocks don't return the latest day, only the day before.
In this picture, I'm downloading the OHLCV data for WMG, which shows the latest day as the current day (07-12-2021):
(screenshot omitted)
In this picture, I'm downloading the OHLCV data for NVTS, which shows the latest day as yesterday (06-12-2021):
(screenshot omitted)
|
open
|
2021-12-07T22:55:59Z
|
2021-12-07T23:09:56Z
|
https://github.com/pydata/pandas-datareader/issues/918
|
[] |
jy13249
| 0
|
521xueweihan/HelloGitHub
|
python
| 2,909
|
[Open-source self-recommendation] WenYan (文颜) - write on multiple platforms with one-click formatting and beautification
|
- Project repository: https://github.com/caol64/wenyan
- Project description: WenYan (文颜) is a fully automatic article formatting and beautification tool designed to simplify content publishing. It quickly converts Markdown articles into layouts suited to platforms such as WeChat Official Accounts, Toutiao, and Zhihu, removing the tedious adjustments caused by platform differences.
- Product homepage: https://yuzhi.tech/wenyan
- Product documentation: https://yuzhi.tech/docs/wenyan
- App Store: https://apps.apple.com/cn/app/%E6%96%87%E9%A2%9C/id6670157335?mt=12&itsct=apps_box_badge&itscg=30200
(screenshots omitted)
|
open
|
2025-02-28T00:12:31Z
|
2025-02-28T00:12:31Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2909
|
[] |
caol64
| 0
|
vaexio/vaex
|
data-science
| 1,890
|
[FEATURE-REQUEST] Extract out multi-level cache into a package
|
**Description**
It looks like the [multi-level cache](https://github.com/vaexio/vaex/blob/069dc5f305925ba96011477b30fcef440236789e/packages/vaex-core/vaex/cache.py#L110) could be useful for other projects as well. Any plan to make a package of its own?
**Additional context**
[akernel](https://github.com/davidbrochart/akernel) could have an execution mode where cell execution would be cached. It could use such a multi-level cache.
|
closed
|
2022-02-08T20:45:45Z
|
2022-02-09T14:54:12Z
|
https://github.com/vaexio/vaex/issues/1890
|
[] |
davidbrochart
| 2
|
joouha/euporie
|
jupyter
| 123
|
Menu/mouse not working in either iTerm2 or Terminal on macOS
|
I can't access the menus with either the keyboard or the mouse on macOS, in either iTerm2 or the Terminal app.
|
closed
|
2024-12-01T22:39:22Z
|
2025-01-29T13:00:31Z
|
https://github.com/joouha/euporie/issues/123
|
[] |
maxandersen
| 9
|
joeyespo/grip
|
flask
| 277
|
Unauthorized for url: https://api.github.com/markdown/raw
|
```bash
aemonge ~/u/d/cells-hybrid-solution master grip pub-sub.propursal.md --export --user aemonge --pass '*******************'
Exporting to pub-sub.propursal.html
[2018-07-11 11:37:07,038] ERROR in app: Exception on / [GET]
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/usr/lib/python3.6/site-packages/flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/lib/python3.6/site-packages/flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/lib/python3.6/site-packages/flask/_compat.py", line 33, in reraise
raise value
File "/usr/lib/python3.6/site-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/lib/python3.6/site-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/lib/python3.6/site-packages/grip/app.py", line 174, in _render_page
content = self.renderer.render(text, self.auth)
File "/usr/lib/python3.6/site-packages/grip/renderers.py", line 78, in render
r.raise_for_status()
File "/usr/lib/python3.6/site-packages/requests/models.py", line 939, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://api.github.com/markdown/raw
```
*versions*
```bash
aemonge ~/u/d/cells-hybrid-solution master node --version ; npm --version ; grip --version
v8.9.4
6.0.1
Grip 4.4.0
```
|
open
|
2018-07-11T09:39:24Z
|
2019-04-18T21:49:57Z
|
https://github.com/joeyespo/grip/issues/277
|
[] |
aemonge
| 1
|
OpenInterpreter/open-interpreter
|
python
| 847
|
Cannot stop it from prompting as it goes through a plan
|
### Describe the bug
At each step in a multi-step process, it wants to execute the code. The only choices are y or n. It would be great to add a (d)efer option so it will just queue up all the code and run it all at once at the end. Or some other way of getting it to stop asking me and just write out all the code first.
### Reproduce
install brew
install python with brew
install pipx with python
install openinterpreter with pipx
run openinterpreter with the ollama/llama2:13b model via Ollama
ask this:
```
> write some pulumi python code to create a resource group in azure with a blob storage container with an expiration policy of 60 days for all objects. prompt me only once run at the very end
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Welcome, Shawn! I'm here to help you with your task. Based on your request, I understand that you want to create a resource group in Azure with a blob storage container and set an expiration policy of 60 days
for all objects using Pulumi Python code.
Here's the complete plan to achieve this goal in one go:
1 First, we'll create a new resource group in Azure using the azure.resource_groups module. We'll use the create method to create a new resource group with a unique name.
import azure.resource_groups as rg
# Create a new resource group
rg.ResourceGroup(name='my-resource-group').create()
Would you like to run this code? (y/n)
```
### Expected behavior
it should not try to execute the code at each step.
### Screenshots
_No response_
### Open Interpreter version
0.1.17
### Python version
3.11.6
### Operating System name and version
MacOS Sonoma 14.2
### Additional context
_No response_
|
closed
|
2023-12-21T23:27:08Z
|
2024-03-19T18:52:08Z
|
https://github.com/OpenInterpreter/open-interpreter/issues/847
|
[
"Enhancement"
] |
tolidano-humane
| 1
|
FlareSolverr/FlareSolverr
|
api
| 703
|
FlareSolverr v2.2.4 & v2.2.10 Windows x64 Error at "Testing web browser installation..."
|
### Have you checked our README?
- [X] I have checked the README
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 2.2.4 and 2.2.10
- Last working FlareSolverr version: 2.2.4
- Operating system: Windows x64
- Are you using Docker: no
- FlareSolverr User-Agent (see log traces or / endpoint):
- Are you using a VPN: not for FlareSolverr. (Wireguard for local connections and VPN for torrents in tunnel mode, disabling both Wireguard and VPN shows same error message)
- Are you using a Proxy: no
- Are you using Captcha Solver: no
- If using captcha solver, which one: n/a
- URL to test this issue: n/a
```
### Description
1. Start flaresolverr.exe as admin or as a standard user
2. Wait for start-up initialisation (Firefox Nightly runs in the background in headless mode, as seen in Task Manager)
3. Initialisation does not complete, ending with the error message below:
### Logged Error Messages
```text
Standard user exe run error log:
`2023-02-16T16:09:37+00:00 INFO FlareSolverr v2.2.10
2023-02-16T16:09:37+00:00 INFO Testing web browser installation...
2023-02-16T16:10:19+00:00 ERROR TimeoutError: Navigation timeout of 40000 ms exceeded
at C:\snapshot\FlareSolverr\node_modules\puppeteer\lib\cjs\puppeteer\common\LifecycleWatcher.js:108:111`
Admin user exe run error log:
`2023-02-16T16:15:00+00:00 INFO FlareSolverr v2.2.10
2023-02-16T16:15:00+00:00 INFO Testing web browser installation...
2023-02-16T16:15:02+00:00 ERROR ProtocolError: Protocol error (Target.setDiscoverTargets): can't access property "documentTitle", this.browsingContext.currentWindowGlobal is null get title@chrome://remote/content/cdp/targets/TabTarget.sys.mjs:109:5
_getTargetInfo@chrome://remote/content/cdp/domains/parent/Target.sys.mjs:185:7
_onTargetCreated@chrome://remote/content/cdp/domains/parent/Target.sys.mjs:194:29
setDiscoverTargets@chrome://remote/content/cdp/domains/parent/Target.sys.mjs:85:12
execute@chrome://remote/content/cdp/domains/DomainCache.sys.mjs:95:25
execute@chrome://remote/content/cdp/sessions/Session.sys.mjs:59:25
onPacket@chrome://remote/content/cdp/CDPConnection.sys.mjs:244:36
onMessage@chrome://remote/content/server/WebSocketTransport.sys.mjs:83:18
handleEvent@chrome://remote/content/server/WebSocketTransport.sys.mjs:65:14
at C:\snapshot\FlareSolverr\node_modules\puppeteer\lib\cjs\puppeteer\common\Connection.js:75:24
at new Promise (<anonymous>)
at Connection.send (C:\snapshot\FlareSolverr\node_modules\puppeteer\lib\cjs\puppeteer\common\Connection.js:71:16)
at Function.create (C:\snapshot\FlareSolverr\node_modules\puppeteer\lib\cjs\puppeteer\common\Browser.js:121:26)
at FirefoxLauncher.launch (C:\snapshot\FlareSolverr\node_modules\puppeteer\lib\cjs\puppeteer\node\Launcher.js:276:50)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async create (C:\snapshot\FlareSolverr\dist\services\sessions.js:105:19)
at async testWebBrowserInstallation (C:\snapshot\FlareSolverr\dist\services\sessions.js:75:21) {
originalMessage: `can't access property "documentTitle", this.browsingContext.currentWindowGlobal is null`
}`
```
### Screenshots
_No response_
|
closed
|
2023-02-16T16:29:37Z
|
2023-02-21T12:54:39Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/703
|
[] |
sunnyd24
| 13
|
modin-project/modin
|
data-science
| 6,753
|
Preserve dtypes cache on `df[existing_col] = scalar`
|
It currently loses all dtypes on `.__setitem__()`
```python
import modin.pandas as pd
df = pd.DataFrame({"a": [1, 2, 3], "b": [3, 4, 5]})
df["b"] = 10
# known dtypes: {};
# cols with unknown dtypes: ['a', 'b'];
print(df._query_compiler._modin_frame._dtypes)
```
|
closed
|
2023-11-17T16:09:34Z
|
2023-11-21T11:57:07Z
|
https://github.com/modin-project/modin/issues/6753
|
[
"Performance 🚀",
"P2"
] |
dchigarev
| 0
|
ultralytics/yolov5
|
pytorch
| 12,841
|
How can I convert segmentation JSON data labeled with labelme to YOLO format?
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
How can I convert segmentation JSON data labeled with labelme to YOLO format?
Can you share some detailed code?
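For reference, a hedged sketch (not official Ultralytics code) of converting one labelme JSON file to YOLO segmentation format, where each line is a class id followed by normalized x/y pairs; the label-to-id mapping is an assumption:
```python
import json

CLASS_IDS = {"cat": 0, "dog": 1}  # assumed label-to-id mapping

def labelme_to_yolo(json_path: str, txt_path: str) -> None:
    with open(json_path) as f:
        data = json.load(f)
    w, h = data["imageWidth"], data["imageHeight"]
    lines = []
    for shape in data["shapes"]:
        cls = CLASS_IDS[shape["label"]]
        coords = []
        for x, y in shape["points"]:
            coords += [x / w, y / h]  # normalize polygon points to [0, 1]
        lines.append(" ".join([str(cls)] + [f"{c:.6f}" for c in coords]))
    with open(txt_path, "w") as f:
        f.write("\n".join(lines))
```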
### Additional
_No response_
|
closed
|
2024-03-23T02:26:10Z
|
2024-05-05T00:23:01Z
|
https://github.com/ultralytics/yolov5/issues/12841
|
[
"question",
"Stale"
] |
fewshotstudy
| 4
|
microsoft/qlib
|
machine-learning
| 1,819
|
Will you consider improving and enhancing the functionality and examples of Reinforcement Learning?
|
Will you consider improving and enhancing the functionality and examples of Reinforcement Learning?
The current example runs slowly and has not been updated for a long time.
(screenshot omitted)
|
open
|
2024-07-01T12:53:23Z
|
2024-09-28T07:09:33Z
|
https://github.com/microsoft/qlib/issues/1819
|
[
"question"
] |
ghyzx
| 1
|
pydantic/pydantic
|
pydantic
| 10,923
|
"le" and "ge" date values for date fields although they should float
|
### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
The "le" and "ge" properties of the Field class are annotated as "float".
However, it is possible to define a date field and assign a date instance for "le" or "ge" (and it is working fine).
Is this an implementation invariant or just a documentation failure?
It appears that "le" and "ge" are accepted in general values of the same type as the field value and standard comparison is applied.
### Example Code
```Python
from pydantic import BaseModel, Field
from datetime import date

class Foo(BaseModel):
    d: date = Field(..., ge=date(2024, 1, 1), le=date(2024, 12, 31))

f = Foo(d=date(2025, 1, 1))
```
### Python, Pydantic & OS Version
```Text
(backend) ➜ backend git:(main) ✗ python -c "import pydantic.version; print(pydantic.version.version_info())"
pydantic version: 2.7.1
pydantic-core version: 2.18.2
pydantic-core build: profile=release pgo=true
install path: /home/ajung/src/eon/GEP/backend/.venv/lib/python3.12/site-packages/pydantic
python version: 3.12.7 (main, Oct 8 2024, 00:20:25) [Clang 18.1.8 ]
platform: Linux-6.8.0-48-generic-x86_64-with-glibc2.39
related packages: pydantic-settings-2.2.1 fastapi-0.111.0 mypy-1.13.0 pyright-1.1.389 typing_extensions-4.12.2
commit: unknown
```
|
closed
|
2024-11-21T12:41:46Z
|
2024-11-21T13:07:03Z
|
https://github.com/pydantic/pydantic/issues/10923
|
[
"question"
] |
zopyx
| 1
|
litestar-org/litestar
|
asyncio
| 3,505
|
Examples not shown when using DTO
|
### Reported by
[Alc](https://discord.com/users/314787529100361748) in Discord: [#is Post request using OPENAPI body example working](https://discord.com/channels/919193495116337154/1240662108979597342/1240851853210554370)
### Description
When DTOs are used the example set in `Body` does not show up.
### MCVE
```py
from dataclasses import dataclass
from typing import Annotated

from litestar import Litestar, post
from litestar.dto import DataclassDTO
from litestar.openapi.spec import Example
from litestar.params import Body


@dataclass
class Item:
    id: int
    name: str


body = Body(
    title="Create item",
    description="Create a new item.",
    examples=[
        Example(
            summary="Post is Ok",
            value={
                "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
                "name": "Swatch",
            },
        )
    ],
)


@post()
async def create_item(data: Annotated[Item, body]) -> Item:
    return data


@post("dto", dto=DataclassDTO[Item])
async def create_item_with_dto(data: Annotated[Item, body]) -> Item:
    return data


app = Litestar(route_handlers=[create_item, create_item_with_dto])
```
### Logs
no logs of interest
### Litestar Version
main
|
closed
|
2024-05-17T02:25:31Z
|
2025-03-20T15:54:43Z
|
https://github.com/litestar-org/litestar/issues/3505
|
[
"Bug :bug:"
] |
byte-bot-app[bot]
| 2
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 971
|
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
|
Not sure how this can be solved. It happened during installation via `pip install -r requirements.txt`.
I've installed
- build tools: https://visualstudio.microsoft.com/visual-cpp-build-tools/
- Visual C++ https://docs.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170
What did I miss?
Env:
OS: Win 11, CLI: git-bash, Python: 3.9 (venv)
build tools install window:

|
closed
|
2022-01-09T00:27:03Z
|
2023-01-21T10:04:20Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/971
|
[] |
andkirby
| 7
|
alpacahq/alpaca-trade-api-python
|
rest-api
| 44
|
401 Client Error: Unauthorized for url: https://api.polygon.io
|
I'm going through the tutorial, and I'm getting an exception:
```
DEBUG:urllib3.connectionpool:https://api.polygon.io:443 "GET /v1/historic/agg/day/ATVI?from=2015-8-25&to=2018-12-6&apiKey=abcd HTTP/1.1" 401 None
WARNING:__main__:ATVI generated an exception: 401 Client Error: Unauthorized for url: https://api.polygon.io/v1/historic/agg/day/ATVI?from=2015-8-25&to=2018-12-6&apiKey=abcd
```
Previously I was getting an exception:
`alpaca_trade_api.rest.APIError: access key verification failed : request is unauthorized (generate APCA-API-KEY-ID and APCA-API-ACCESS-SECRET-KEY from dashboard and include in HTTP header) (Code = 40110000) (Code = 40110000)`
but I regenerated the key, and now the above exception is being returned.
I create the REST API client like this:
`api = tradeapi.REST(key, skey, base_url='https://paper-api.alpaca.markets')`
|
closed
|
2018-12-07T15:09:54Z
|
2021-08-22T15:03:03Z
|
https://github.com/alpacahq/alpaca-trade-api-python/issues/44
|
[] |
SaviourSelf
| 3
|
hankcs/HanLP
|
nlp
| 859
|
How to use different CustomDictionary instances with HanLP.segment()
|
<!--
The notes and version number are required; issues without them will not get a reply. To get a quick reply, please fill in the template carefully. Thank you.
-->
## Notes
Please confirm the following:
* I have carefully read the following documents and did not find an answer:
- [Home page documentation](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and did not find an answer either.
* I understand that the open-source community is a voluntary community of enthusiasts and assumes no responsibility or obligation. I will be polite and thank everyone who helps me.
* [x] I put an x between the brackets to confirm the items above.
## Version
<!-- For release versions, give the jar file name without the extension; for the GitHub repository version, specify the master or portable branch -->
The current latest version is: 1.6.4
The version I am using is: hanlp-1.6.4.jar, hanlp-1.6.4-sources.jar, hanlp.properties, data edition
<!-- The above is required; feel free to elaborate below -->
## My question
Suppose there are two user-defined dictionaries, CustomDictionary1.txt and CustomDictionary2.txt. How can I write a segmentation function on top of HanLP.segment() that controls the segmentation result through a parameter?
```
For example:
Segment segment = HanLP.newSegment();
Segment segment1 = HanLP.newSegment();
HanLP.Config.Normalization = true;
segment.enableOffset(true);
segment1.enableOffset(true);
int availProcessors = Runtime.getRuntime().availableProcessors();
segment.enableMultithreading(availProcessors);
segment1.enableMultithreading(availProcessors);
Combine segment and segment1, where the only difference between them is the CustomDictionary:
one uses
CustomDictionary.insert("雄安概念产业链", "n 1");
the other uses
CustomDictionary.insert("雄安", "n 1");
CustomDictionary.insert("概念", "n 1");
CustomDictionary.insert("产业链", "n 1");
But how can a CustomDictionary be passed as a parameter into different segments? Or how can different CustomDictionary instances be created so that different segments depend on different CustomDictionary objects?
```
<!-- Please describe the problem in detail; the more detail, the more likely it gets solved -->
## Reproducing the problem
<!-- What did you do to trigger the problem? For example, did you modify the code? The dictionaries or the models? -->
### Steps
1. First...
2. Then...
3. Next...
### Trigger code
```
public void testIssue1234() throws Exception
{
CustomDictionary.add("用户词语");
System.out.println(StandardTokenizer.segment("触发问题的句子"));
}
```
### Expected output
```
雄安概念产业链 = CombineHanLP(parameter = true, normalization, offset, multithreading)
雄安/概念产/业链 = CombineHanLP(parameter = false, Normalization, offset, multithreading)
```
<!-- What correct result do you expect? -->
```
Expected output
```
### Actual output
<!-- What did HanLP actually output? What was the effect? Where is it wrong? -->
```
Actual output
```
## Other information
<!-- Anything that might be useful, including screenshots, logs, configuration files, related issues, etc. -->
|
closed
|
2018-06-11T08:44:30Z
|
2018-06-24T06:41:25Z
|
https://github.com/hankcs/HanLP/issues/859
|
[
"question"
] |
wenfeixiang1991
| 1
|
xzkostyan/clickhouse-sqlalchemy
|
sqlalchemy
| 330
|
Table metadata fails to reflect if is_deleted column is set along with version in ReplacingMergeTree engine
|
**Describe the bug**
For tables defined with a version column and is_deleted:
ENGINE = ReplacingMergeTree(version_col, is_deleted)
metadata reflection fails with the following:
metadata.reflect(bind=engine, schema=db_name)
*** sqlalchemy.exc.ConstraintColumnNotFoundError: Can't create TableCol on table 'myThirdReplacingMT': no column named 'version_col, is_deleted' is present.
The issue lies in insufficient parsing logic for the ReplacingMergeTree([ver [, is_deleted]]) clause, which assumes only a version column is set.
**To Reproduce**
```
-- with ver and is_deleted
CREATE OR REPLACE TABLE myThirdReplacingMT
(
key Int64,
someCol String,
eventTime DateTime,
is_deleted UInt8
)
ENGINE = ReplacingMergeTree(eventTime, is_deleted)
ORDER BY key
SETTINGS allow_experimental_replacing_merge_with_cleanup = 1;
```
....
metadata.reflect(bind=engine, schema=url)
**Fix**
```
--- a/clickhouse_sqlalchemy/engines/mergetree.py
+++ b/clickhouse_sqlalchemy/engines/mergetree.py
@@ -218,6 +218,8 @@ class ReplacingMergeTree(MergeTree):
def reflect(cls, table, engine_full, **kwargs):
engine = parse_columns(engine_full, delimeter=' ')[0]
version_col = engine[len(cls.__name__):].strip('()') or None
+ if version_col is not None:
+ version_col = version_col.split(',')[0]
return cls(
version=version_col,
```
**Versions**
- Version of package with the problem: clickhouse-sqlalchemy==0.3.2
- Python version: 3.12.3
|
open
|
2024-08-19T22:28:47Z
|
2025-01-07T14:34:01Z
|
https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/330
|
[] |
kxd8163
| 1
|
slackapi/python-slack-sdk
|
asyncio
| 867
|
Closing Modal after submission on AWS Lambda
|
Hey all, I'm having trouble closing a modal after the user submits the view. My code runs in AWS Lambda, and I have already tried several iterations of returning 200s with empty bodies, as per the documentation, but nothing seems to close the modal.
Am I missing something?
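For context, a hedged sketch of a Lambda handler acknowledging a `view_submission` (assumes an API Gateway proxy integration with a non-base64 body); Slack closes the submitted view on a plain 200 within 3 seconds, and a `response_action` of `"clear"` closes the whole modal stack:
```python
import json
from urllib.parse import parse_qs

def lambda_handler(event, context):
    # Slack sends interactivity payloads as form-encoded 'payload=<json>'
    payload = json.loads(parse_qs(event["body"])["payload"][0])
    if payload.get("type") == "view_submission":
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"response_action": "clear"}),  # close all views
        }
    return {"statusCode": 200, "body": ""}
```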

|
closed
|
2020-10-30T17:16:20Z
|
2020-11-02T01:24:25Z
|
https://github.com/slackapi/python-slack-sdk/issues/867
|
[
"question"
] |
tinoargentino
| 4
|
ultralytics/ultralytics
|
machine-learning
| 19,096
|
Comparison of the inference code for YOLOv8 and YOLO11
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Does the inference code change between YOLOv8 and YOLO11?
We just need to know whether the inference code stays the same for YOLOv8 and YOLO11.
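For what it's worth, a minimal sketch showing that with the `ultralytics` package the inference call looks the same for both families; only the weights file name changes (the image URL is an example):
```python
from ultralytics import YOLO

model_v8 = YOLO("yolov8n.pt")  # YOLOv8 weights
model_11 = YOLO("yolo11n.pt")  # YOLO11 weights

# Identical inference API for both models
results = model_11("https://ultralytics.com/images/bus.jpg")
```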
### Additional
_No response_
|
open
|
2025-02-06T08:59:47Z
|
2025-02-06T10:17:23Z
|
https://github.com/ultralytics/ultralytics/issues/19096
|
[
"question"
] |
akhilsajeevan11
| 5
|
GibbsConsulting/django-plotly-dash
|
plotly
| 196
|
Add 'load new apps' admin action
|
Add an action in the django admin screen to create ORM model instances for stateless apps that are not already present.
|
closed
|
2019-11-16T16:20:19Z
|
2019-11-16T19:55:26Z
|
https://github.com/GibbsConsulting/django-plotly-dash/issues/196
|
[
"enhancement",
"good first issue"
] |
GibbsConsulting
| 1
|
noirbizarre/flask-restplus
|
flask
| 310
|
Making swagger docs function behind reverse proxy
|
I'm having some issues with trying to get the Swagger docs to work behind a reverse proxy. I have a microservice infrastructure, and am using NGINX to proxy to each service based on a location block. Here's a simplified service:
```
from flask import Flask
from flask_cors import CORS
from flask_restplus import Api, Resource
from werkzeug.contrib.fixers import ProxyFix
application = Flask(__name__)
application.wsgi_app = ProxyFix(application.wsgi_app)
api = Api(
application,
title='Test Microservice',
description="Test Microservice",
default='test-service'
)
CORS(application)
@api.route('/ping')
class Ping(Resource):
def get(self):
return 'pong', 200, {'Content-Type': 'text/plain; charset=utf-8'}
if __name__ == "__main__":
application.run(host='localhost', port=9090, debug=True, threaded=True)
```
This works fine in my IDE, but when going through NGINX, I end up with broken links. Here's my NGINX config:
```
upstream test-service {
server localhost:9090;
}
server {
listen 8282 default_server;
server_name localhost;
location /test-service/ {
proxy_pass http://test-service/;
proxy_redirect off;
proxy_set_header Host $host:$server_port/test-service;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
```
With the HTTP headers in place, retrieving the `swagger.json` file works, but I still have 2 issues:
1. The JS/CSS that Swagger provides is still requested from the root URL (i.e., http://localhost:8282/swaggerui/bower/swagger-ui/dist/css/reset.css). I've worked around this by actually hosting the JS/CSS on my NGINX box, so it truly is available at that URL. That's more of a bandaid, though; I'd prefer it to actually honor the location block of NGINX.
2. Even with my UI working, the `Try It Now` functionality uses the root URL without the added location, for instance `http://localhost:8282/ping` instead of `http://localhost:8282/test-service/ping`. Is there a way, via simple HTTP headers/configuration, to have it use the reverse proxy location block (`/test-service/`)? See the sketch below.
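One common approach, sketched here continuing the service snippet above and assuming a newer werkzeug (which moved ProxyFix to `werkzeug.middleware.proxy_fix`), is to have NGINX send `X-Forwarded-Prefix` and let ProxyFix apply it:
```python
# Sketch: honor X-Forwarded-Prefix so Flask builds URLs under /test-service/.
# NGINX side (assumption): proxy_set_header X-Forwarded-Prefix /test-service;
from werkzeug.middleware.proxy_fix import ProxyFix

application.wsgi_app = ProxyFix(
    application.wsgi_app,
    x_for=1,     # trust X-Forwarded-For
    x_proto=1,   # trust X-Forwarded-Proto
    x_prefix=1,  # trust X-Forwarded-Prefix
)
```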
|
closed
|
2017-07-27T18:49:51Z
|
2019-03-27T09:35:00Z
|
https://github.com/noirbizarre/flask-restplus/issues/310
|
[
"duplicate",
"documentation"
] |
atoy3731
| 3
|
facebookresearch/fairseq
|
pytorch
| 5,043
|
About hubert finetuning
|
## 🚀 Feature Request
<!-- A clear and concise description of the feature proposal -->
At present, hubert finetuning has only a 10h config. If I want to finetune a 10min or 1h dataset, what should I change?
### Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
### Pitch
<!-- A clear and concise description of what you want to happen. -->
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
|
open
|
2023-03-23T23:38:15Z
|
2024-08-23T06:45:46Z
|
https://github.com/facebookresearch/fairseq/issues/5043
|
[
"enhancement",
"help wanted",
"needs triage"
] |
LYPinASR
| 1
|
PeterL1n/RobustVideoMatting
|
computer-vision
| 186
|
Fixed random seeds during training, but reproduction results are still unstable
|
I tried to reproduce results with RVM's training code, but even after adding a seed-fixing code segment at the start of the training code, deviations still appear during training.
The added code segment is shown below:
(screenshot omitted)
Has the author run into a similar problem, or do you have experience solving this kind of issue?
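For comparison, a hedged sketch of the usual PyTorch seed-fixing (not the code from the screenshot); note that full reproducibility also depends on deterministic algorithms and data-loader ordering:
```python
import os
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    torch.backends.cudnn.deterministic = True  # may slow training down
    torch.backends.cudnn.benchmark = False
```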
|
open
|
2022-07-13T10:48:50Z
|
2022-12-09T11:14:32Z
|
https://github.com/PeterL1n/RobustVideoMatting/issues/186
|
[] |
YihanHu-2022
| 5
|
Python3WebSpider/ProxyPool
|
flask
| 200
|
Proxy
|
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environments (please complete the following information):**
- OS: [e.g. macOS 10.15.2]
- Python [e.g. Python 3.6]
- Browser [e.g. Chrome 67 ]
**Additional context**
Add any other context about the problem here.
|
closed
|
2023-09-17T04:35:26Z
|
2024-07-09T13:34:36Z
|
https://github.com/Python3WebSpider/ProxyPool/issues/200
|
[] |
Luisvelix
| 0
|
kizniche/Mycodo
|
automation
| 822
|
fresh install - some errors in log - #### Generating widget HTML files
|
### Versions:
- Mycodo Version: [e.g. 8.7.1]
- Raspberry Pi Version: [e.g. 4B+]
- 3rd party OS
### Reproducibility
Fresh install on a new OS (not Raspbian, but I guess it does not matter).
The database is initialized later on (if I remember correctly), but the setup tries to access the misc table upfront; see the attached log:
[installlog.txt](https://github.com/kizniche/Mycodo/files/5114548/installlog.txt)
Again, everything seems to be working, so I think it is a "cosmetic" issue...
BR, and keep up the work, I really like it ;-)
|
closed
|
2020-08-23T17:55:27Z
|
2020-08-23T19:22:28Z
|
https://github.com/kizniche/Mycodo/issues/822
|
[] |
Mark0Mi
| 2
|
huggingface/datasets
|
pandas
| 6,868
|
datasets.BuilderConfig does not work.
|
### Describe the bug
I custom a BuilderConfig and GeneratorBasedBuilder.
Here is the code for BuilderConfig
```
class UIEConfig(datasets.BuilderConfig):
    def __init__(
        self,
        *args,
        data_dir=None,
        instruction_file=None,
        instruction_strategy=None,
        task_config_dir=None,
        num_examples=None,
        max_num_instances_per_task=None,
        max_num_instances_per_eval_task=None,
        over_sampling=None,
        **kwargs
    ):
        super().__init__(*args, **kwargs)
        self.data_dir = data_dir
        self.num_examples = num_examples
        self.over_sampling = over_sampling
        self.instructions = self._parse_instruction(instruction_file)
        self.task_configs = self._parse_task_config(task_config_dir)
        self.instruction_strategy = instruction_strategy
        self.max_num_instances_per_task = max_num_instances_per_task
        self.max_num_instances_per_eval_task = max_num_instances_per_eval_task
```
Besides, here is the code for the GeneratorBasedBuilder:
```
class UIEInstructions(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("2.0.0")
    BUILDER_CONFIG_CLASS = UIEConfig
    BUILDER_CONFIGS = [
        UIEConfig(name="default", description="Default config for NaturalInstructions")
    ]
    DEFAULT_CONFIG_NAME = "default"
```
Here is the load_dataset call:
```
raw_datasets = load_dataset(
    os.path.join(CURRENT_DIR, "uie_dataset.py"),
    data_dir=data_args.data_dir,
    task_config_dir=data_args.task_config_dir,
    instruction_file=data_args.instruction_file,
    instruction_strategy=data_args.instruction_strategy,
    cache_dir=data_cache_dir,  # for debug, change dataset size, otherwise open it
    max_num_instances_per_task=data_args.max_num_instances_per_task,
    max_num_instances_per_eval_task=data_args.max_num_instances_per_eval_task,
    num_examples=data_args.num_examples,
    over_sampling=data_args.over_sampling
)
```
Finally, I get this error:
```
BuilderConfig UIEConfig(name='default', version=0.0.0, data_dir=None, data_files=None, description='Default config for NaturalInstructions') doesn't have a 'task_config_dir' key.
```
I debugged the code, and it seems the parameters I added are not taken into account.
### Steps to reproduce the bug
https://github.com/BeyonderXX/InstructUIE/blob/master/src/uie_dataset.py
### Expected behavior
```
BuilderConfig UIEConfig(name='default', version=0.0.0, data_dir=None, data_files=None, description='Default config for NaturalInstructions') doesn't have a 'task_config_dir' key.
```
### Environment info
torch 2.3.0+cu118
transformers 4.40.1
python 3.8
|
closed
|
2024-05-05T08:08:55Z
|
2024-05-05T12:15:02Z
|
https://github.com/huggingface/datasets/issues/6868
|
[] |
jdm4pku
| 1
|
ScrapeGraphAI/Scrapegraph-ai
|
machine-learning
| 860
|
ImportError: Could not import transformers python package.
|
**Describe the bug**
When running a local LLM in Ollama (Llama3.2) with `SmartScraperMultiGraph`, it keeps throwing this error:
**ImportError: Could not import transformers python package.** This is needed in order to calculate get_token_ids. Please install it with pip install transformers. I have installed transformers manually with pip in the environment but still encounter this issue.
This is related to this unresolved but closed issue: https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/719.
|
closed
|
2025-01-02T16:38:50Z
|
2025-01-06T13:25:17Z
|
https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/860
|
[] |
Qunlexie
| 6
|
deezer/spleeter
|
deep-learning
| 925
|
[Discussion] Will CUDA 12 be supported?
|
I want to be able to use the same environment with whisperx and gradio, but when installing in the cuda12 environment, the following error message is displayed:
```
tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
```
Also, some package versions are not compatible. Does anyone know how to run spleeter with CUDA 12?
I tried to install the three packages together under Python 3.8, but there is still an error.
My environment:
OS: ubuntu 22.04
python: 3.10.7
cuda: 12.1
cudnn: 8
|
open
|
2025-01-23T01:44:34Z
|
2025-01-23T01:50:10Z
|
https://github.com/deezer/spleeter/issues/925
|
[
"question"
] |
aef5748
| 0
|
strawberry-graphql/strawberry
|
fastapi
| 2,896
|
Lazy annotations don't work with `List[]`
|
The problem occurs when using new-style lazy annotations (with `from __future__ import annotations`) wrapped into a `List[]`:
```python
# c.py
@strawberry.type
class C:
    id: strawberry.ID

# d.py
from __future__ import annotations

from typing import TYPE_CHECKING, List
from typing_extensions import Annotated

import strawberry

if TYPE_CHECKING:
    from tests.c import C

@strawberry.type
class D:
    id: strawberry.ID

    @strawberry.field
    async def c_list(self) -> List[Annotated[C, strawberry.lazy("tests.c")]]:
        ...
```
Error:
```
E strawberry.exceptions.unresolved_field_type.UnresolvedFieldTypeError:
Could not resolve the type of 'c_list'. Check that the class is accessible from the global module scope.
strawberry/schema/schema_converter.py:321: UnresolvedFieldTypeError
```
However, if you add another field, which is just `Annotated[C, ...]`, everything is fine:
```python
@strawberry.type
class D:
    id: strawberry.ID

    @strawberry.field
    async def c(self) -> Annotated[C, strawberry.lazy("tests.c")]:
        ...

    @strawberry.field
    async def c_list(self) -> List[Annotated[C, strawberry.lazy("tests.c")]]:
        ...
```
Also, it works as expected with the old-school annotations, `List[Annotated["C", ...]]`. I added all the necessary test cases in #2895.
Related PR with the feature:
- #2744
## System Information
- Operating system: macOS 13.4.1 (doesn't actually matter)
- Strawberry version (if applicable): 0.190.0
|
open
|
2023-06-27T13:28:28Z
|
2025-03-20T15:56:15Z
|
https://github.com/strawberry-graphql/strawberry/issues/2896
|
[
"bug"
] |
ddoroshev
| 4
|
alecxe/scrapy-fake-useragent
|
web-scraping
| 13
|
SSL Certificate expired
|
This library now errors whenever it's being used because the SSL certificate on the herokuapp is expired.
Traceback below:
```
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/fake_useragent/utils.py", line 45, in get
return urlopen(request, timeout=settings.HTTP_TIMEOUT).read()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 526, in open
response = self._open(req, data)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 544, in _open
'_open', req)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1361, in https_open
context=self._context, check_hostname=self._check_hostname)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 1320, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)>
```
|
closed
|
2017-02-10T18:26:29Z
|
2017-02-17T04:04:38Z
|
https://github.com/alecxe/scrapy-fake-useragent/issues/13
|
[] |
Slater-Victoroff
| 1
|
desec-io/desec-stack
|
rest-api
| 510
|
api: consider allowing certain non-alphanumeric names in CNAME targets
|
pdns does allow some non-alphanumeric characters in CNAME targets:
```
c =='-' || c == '_' || c=='*' || c=='.' || c=='/' || c=='@' || c==' ' || c=='\\' || c==':'
```
([source](https://github.com/PowerDNS/pdns/blob/a44a8d6617e3062846e6092a3fd836f11be757a7/pdns/dnsname.cc#L485:))
Our [dnspython preprocessing escapes some characters](https://github.com/rthalley/dnspython/blob/bee23ec15fdde8f0303b0a3699669599c5abf8cb/dns/name.py#L227) (e.g. `@` --> `\@`) which the pdns auth API does not accept. (They only accept escaping for `.` and `\`, and `\DDD` format below 0x20 and above 0x7F.) Related: https://github.com/PowerDNS/pdns/issues/9870
I'm not sure how to resolve this, but we should at least fix the 500 response that this currently triggers, maybe by rejecting the characters that dnspython would escape or by unescaping them.* Both approaches would need some corner-case code of which I'm not sure where it should best live.
@nils-wisiol What do you think?
*: Insisting that users encode them using `\DDD` format does not help, as dnspython parses and canonicalizes that, of course. Btw, I checked, and we do accept e.g. `\013` in CNAME targets.
|
open
|
2021-01-14T15:46:55Z
|
2021-01-15T11:38:42Z
|
https://github.com/desec-io/desec-stack/issues/510
|
[
"enhancement",
"api",
"prio: low",
"more info needed"
] |
peterthomassen
| 2
|
ivy-llc/ivy
|
pytorch
| 28,737
|
Fix Frontend Failing Test: numpy - operators.jax.lax.min
|
Working on it
|
open
|
2024-04-15T05:20:04Z
|
2024-04-15T05:20:04Z
|
https://github.com/ivy-llc/ivy/issues/28737
|
[
"Sub Task"
] |
tanmaymunjal
| 0
|
jupyterlab/jupyter-ai
|
jupyter
| 1,209
|
The installation command pip install jupyter-ai[all] fails in a zsh environment
|
## Description
This is a documentation bugfix/request.
The installation step `pip install jupyter-ai[all]` fails in a zsh environment because zsh uses square brackets for pattern matching.
The square brackets need to be escaped, or the entire package name quoted:
`pip install 'jupyter-ai[all]'` works correctly.
Note that zsh is the default shell for Mac users since Catalina https://support.apple.com/en-ca/102360
## Reproduce
1. Follow the standard jupyter-ai installation instructions in a zsh environment (the default Mac environment)
## Expected behavior
Following the installation instructions should work on Macs running the default shell.
The installation instructions could be updated to a quoted version or separate installation steps for Mac users could be added.
## Context
- Operating System and version: Mac Catalina or later
- Browser and version: n/a
- JupyterLab version: n/a
<details><summary>Troubleshoot Output</summary>
<pre>
n/a
</pre>
</details>
<details><summary>Command Line Output</summary>
<pre>
n/a
</pre>
</details>
<details><summary>Browser Output</summary>
<pre>
n/a
</pre>
</details>
|
closed
|
2025-01-20T06:21:08Z
|
2025-01-21T22:00:15Z
|
https://github.com/jupyterlab/jupyter-ai/issues/1209
|
[
"bug",
"documentation"
] |
michaele4321
| 1
|
2noise/ChatTTS
|
python
| 871
|
Running the web page via the demo: audio has noise in the first few seconds on the dev branch, but not on the main branch
|
[audio (1).zip](https://github.com/user-attachments/files/18355284/audio.1.zip)
Audio file generated on the web page from the dev branch.
|
open
|
2025-01-09T02:29:44Z
|
2025-01-19T12:31:59Z
|
https://github.com/2noise/ChatTTS/issues/871
|
[
"help wanted",
"algorithm"
] |
louyongjiu
| 0
|
xlwings/xlwings
|
automation
| 1,673
|
can I remove 'xlwings' ribbon tab label in the ribbon? is there tcid for it?
|
#### OS (e.g. Windows 10 or macOS Sierra)
Windows 10
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
0.24.3 Office 365 Winpython64 395
#### Describe your issue (incl. Traceback!)
Can I use tcid in DisabledCmdBarItemsList registry to hide the tab? And what is it?
Or any command in python or xlwings conf file to toggle the show or hide behavior?
Or any other workaround? Thank you.
|
closed
|
2021-07-23T01:59:33Z
|
2021-07-23T06:14:51Z
|
https://github.com/xlwings/xlwings/issues/1673
|
[] |
kissson
| 1
|
jazzband/django-oauth-toolkit
|
django
| 1,040
|
Is silent renew supported?
|
Silent renew has become more common than refresh tokens for SPAs, as far as I can tell. Does Django OAuth Toolkit support silent renew?
|
closed
|
2021-12-15T17:46:48Z
|
2022-05-03T17:59:53Z
|
https://github.com/jazzband/django-oauth-toolkit/issues/1040
|
[
"question"
] |
dopry
| 4
|
wger-project/wger
|
django
| 1,164
|
Create languages and licences when synchronising exercises
|
Sync the languages and licenses as well when downloading exercises; otherwise, if there are new entries, the download would fail. See e.g. https://github.com/wger-project/docker/issues/36
`wger/exercises/management/commands/sync-exercises.py`
|
closed
|
2022-10-25T14:18:25Z
|
2023-04-08T18:10:26Z
|
https://github.com/wger-project/wger/issues/1164
|
[] |
rolandgeider
| 1
|
ultralytics/ultralytics
|
python
| 19,413
|
First epoch's val mAP very low when fine-tuning
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I'm fine-tuning YOLO11n object detection on my custom dataset. The first epoch's validation mAP is very low (0.154); is that normal? I'm not using the COCO classes, only my own custom classes.
### Additional
```yaml
task: detect
mode: train
model: yolo11n.pt
data: /Users/sina/train_yolo/dataset.yaml
epochs: 100
time: null
patience: 100
batch: 8
imgsz: 640
save: true
save_period: -1
cache: false
device: cpu
workers: 1
project: null
name: train21
exist_ok: false
pretrained: true
optimizer: auto
verbose: true
seed: 0
deterministic: true
single_cls: false
rect: false
cos_lr: false
close_mosaic: 10
resume: false
amp: true
fraction: 1.0
profile: false
freeze: 10
multi_scale: false
overlap_mask: true
mask_ratio: 4
dropout: 0.0
val: true
split: val
save_json: false
save_hybrid: false
conf: null
iou: 0.7
max_det: 300
half: false
dnn: false
plots: true
source: null
vid_stride: 1
stream_buffer: false
visualize: false
augment: false
agnostic_nms: false
classes: null
retina_masks: false
embed: null
show: false
save_frames: false
save_txt: false
save_conf: false
save_crop: false
show_labels: true
show_conf: true
show_boxes: true
line_width: null
format: torchscript
keras: false
optimize: false
int8: false
dynamic: false
simplify: true
opset: null
workspace: null
nms: false
lr0: 0.01
lrf: 0.01
momentum: 0.937
weight_decay: 0.0005
warmup_epochs: 3.0
warmup_momentum: 0.8
warmup_bias_lr: 0.0
box: 7.5
cls: 0.5
dfl: 1.5
pose: 12.0
kobj: 1.0
nbs: 64
hsv_h: 0.015
hsv_s: 0.7
hsv_v: 0.4
degrees: 0.0
translate: 0.1
scale: 0.5
shear: 0.0
perspective: 0.0
flipud: 0.0
fliplr: 0.5
bgr: 0.0
mosaic: 0
mixup: 0.0
copy_paste: 0.0
copy_paste_mode: flip
auto_augment: randaugment
erasing: 0.4
crop_fraction: 1.0
cfg: null
tracker: botsort.yaml
save_dir: /Users/sina/runs/detect/train21
```
|
closed
|
2025-02-25T04:01:55Z
|
2025-02-25T12:38:47Z
|
https://github.com/ultralytics/ultralytics/issues/19413
|
[
"question",
"detect"
] |
nisrinaam29
| 2
|
davidsandberg/facenet
|
tensorflow
| 1,147
|
Why can the embeddings be split into anchor, positive, and negative?
|
I can't understand the principle behind splitting the embeddings into anchor, positive, and negative.
I know the embeddings come from the network, but I want to understand the structure of the dataset that makes this split possible.
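To illustrate my current understanding, a minimal numpy sketch (an assumption on my part, not taken from the repo): if each batch is stacked as consecutive (anchor, positive, negative) rows, the split is just a reshape:
```python
import numpy as np

embedding_size = 128
# 4 triplets, rows ordered a0, p0, n0, a1, p1, n1, ...
batch = np.random.randn(3 * 4, embedding_size)

triplets = batch.reshape(-1, 3, embedding_size)
anchor, positive, negative = triplets[:, 0], triplets[:, 1], triplets[:, 2]
```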
Thanks.
|
open
|
2020-03-31T08:35:28Z
|
2022-08-05T01:57:01Z
|
https://github.com/davidsandberg/facenet/issues/1147
|
[] |
JasonChenhx
| 1
|
holoviz/panel
|
jupyter
| 7,292
|
"Debugging in VS Code" Documentation insufficient?
|
#### ALL software version info
```plaintext
panel==1.5.0
VS Code Version: 1.93.1 (Universal)
```
#### Description of expected behavior and the observed behavior
When adding a VS Code debugging configuration as suggested in the documentation, I expect to see Panel site variables in the debugging pane (even without setting a breakpoint). Unfortunately, the debugging configurations don't work for me... the debugging pane remains empty (see screenshot below).
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import panel as pn
def example(a):
    b = a + 3
    return b
pn.interact(example, a=2).servable()
```
Top configuration from the [VS Code documentation page](https://panel.holoviz.org/how_to/editor/vscode_configure.html#debugging), bottom configuration from the issue opened by @hoxbro: https://github.com/holoviz/panel/issues/2833
```json
{
"name": "panel serve",
"type": "debugpy",
"request": "launch",
"program": "-m",
"args": [
"panel",
"serve",
"${relativeFile}",
"--index",
"${fileBasenameNoExtension}",
"--show"
],
"console": "integratedTerminal",
"justMyCode": true
},
{
"name": "Python: Panel",
"type": "python",
"request": "launch",
"module": "panel",
"args": [
"serve",
"${file}",
],
}
```
#### Stack traceback and/or browser JavaScript console output
N/A
#### Screenshots or screencasts of the bug in action
<img width="1728" alt="Screenshot 2024-09-18 at 07 03 10" src="https://github.com/user-attachments/assets/b019ddaa-1e4f-4e16-a727-1363b5f7f853">
- [x] I may be interested in making a pull request to address this
|
closed
|
2024-09-18T05:16:34Z
|
2024-09-18T07:25:30Z
|
https://github.com/holoviz/panel/issues/7292
|
[] |
michaelweinold
| 3
|
2noise/ChatTTS
|
python
| 141
|
Why can't the voice timbre be kept fixed?
|
```
import torch
from pathlib import Path


def generate_speaker_tensor(mean: float = 0.0, std: float = 15.247) -> torch.Tensor:
    return torch.normal(mean, std, size=(768,))


def generate_speaker_tensor_a() -> torch.Tensor:
    std, mean = torch.load(f'{Path(__file__).resolve().parent}/models/asset/spk_stat.pt').chunk(2)
    rand_spk = torch.randn(768) * std + mean
    return rand_spk
```
I use generate_speaker_tensor to generate speaker1 and generate_speaker_tensor_a to generate speaker2, then save both to disk.
At inference time, I load speaker1 and speaker2 from disk, set rand_spk in params_infer_code accordingly, and generate several audio clips:
```
params_infer_code = {
    'spk_emb': rand_spk,  # add sampled speaker
    'temperature': .3,  # using custom temperature
    'top_P': 0.7,  # top P decode
    'top_K': 20,  # top K decode
}
```
Why does speaker2 keep a relatively stable timbre, while speaker1 does not and sounds different every time?
|
closed
|
2024-05-31T13:49:41Z
|
2024-08-06T04:01:44Z
|
https://github.com/2noise/ChatTTS/issues/141
|
[
"stale"
] |
craii
| 4
|
FactoryBoy/factory_boy
|
sqlalchemy
| 679
|
Support for async?
|
#### The problem
Coming from Django, where we used Factory Boy a lot, to a new async stack that fully supports GraphQL with subscriptions (which are really cool; uvicorn + Starlette + Ariadne), we also switched to an async ORM (not really an ORM) named [GINO](https://github.com/fantix/gino). It is based on SQLAlchemy Core and works pretty robustly. However, I am struggling to adapt Factory Boy to use GINO models.
#### Proposed solution
At first glance I thought I needed to implement the `_create()` method in my factory, but the problem is that the `create()` method of a GINO model is a coroutine and can't be called from synchronous code. I tried to experiment with `asyncio._get_running_loop()`, but I am really new to async stuff and my attempt failed.
#### Extra notes
I am using pytest with pytest-asyncio plugin to run tests with async code which works pretty well including working with DB. For that I have this in my conftest.py:
```
@pytest.fixture(scope="session")
def event_loop():
    """
    This is to make the asyncio event loop shared for the whole test session, otherwise
    it will be recreated for each test which will prevent using the test_db fixture.
    """
    loop = asyncio.get_event_loop()
    yield loop
    loop.close()


@pytest.fixture(autouse=True, scope="session")
async def test_db(request):
    """
    Here is some DB preparation code like (re)creating DB itself, making sure we have all
    necessary rights etc.
    """
    await db.gino.create_all()  # this is to bind the GINO engine to DB
    yield  # passing context back to the tests
    await db.pop_bind().close()  # unbinding engine and performing other teardown later
```
I really miss Factory Boy and hope there is an easy solution to start my factories again.
I also created an issue for GINO here https://github.com/fantix/gino/issues/608, but decided to open one here too, as I think Factory Boy has developed a much wider community and I have better chances that someone has run into the same problem. Thanks all!
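In case it helps the discussion, a minimal sketch of what I imagine a workaround could look like (untested, assuming GINO's `Model.create()` coroutine): let `_create()` return the coroutine unawaited and `await` the factory call inside the async test:
```python
import factory


class AsyncFactory(factory.Factory):
    """Factory whose instantiation returns a coroutine to be awaited by the caller."""

    class Meta:
        abstract = True

    @classmethod
    def _create(cls, model_class, *args, **kwargs):
        # GINO's create() is a coroutine; return it unawaited and let the
        # async test drive it:  user = await UserFactory()
        return model_class.create(*args, **kwargs)
```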
|
open
|
2019-12-07T17:08:56Z
|
2025-01-14T11:32:58Z
|
https://github.com/FactoryBoy/factory_boy/issues/679
|
[] |
remarkov
| 17
|
recommenders-team/recommenders
|
data-science
| 1,997
|
[FEATURE] Add new LF references
|
### Description
See comments in #1995
|
closed
|
2023-09-22T17:47:23Z
|
2023-10-07T14:45:20Z
|
https://github.com/recommenders-team/recommenders/issues/1997
|
[
"enhancement"
] |
miguelgfierro
| 0
|
microsoft/unilm
|
nlp
| 1,098
|
Inference results of fine-tuned LayoutLM model differ depending on word/input id and box order
|
Hi, I have fine-tuned LayoutLM (v1) on my own invoice data. The model, after 4 epochs, reaches a pretty good performance.
<img width="1018" alt="Screenshot 2023-05-23 at 15 14 57" src="https://github.com/microsoft/unilm/assets/8261902/f3d78342-b3ed-4cbe-90cd-27eabc57877d">
When using it for inference, though, I get different outputs depending on the order of `input_ids` and `bbox` tensors in the encoding. The difference I observe mostly depends on the order of `other` words versus words with any label except `other`. There are three different orderings I have tested:
1. first all words/boxes semantic interesting boxes (i.e. boxes are expected to be classified with any label except `other`), then all `other` boxes
2. random order of labeled `other` versus non-`other` words/boxes
3. words/boxes ordered by the box position, top left to bottom right
When I run the inference, the model yields the following predictions (I did only visualize the boxes that have non-other labels):
1. First non-other boxes/words, then other boxes/words
<img width="500" alt="ner_output_1" src="https://github.com/microsoft/unilm/assets/8261902/19a5d233-df63-419a-99a9-6df481f75ac2">
2. Random order
<img width="500" alt="ner_output_2" src="https://github.com/microsoft/unilm/assets/8261902/eefc22c8-e74c-4490-b6df-5fdd03ff6aac">
3. Top left to bottom right order
<img width="500" alt="ner_output_3" src="https://github.com/microsoft/unilm/assets/8261902/938f1e73-69ac-48b3-96b5-aa07a2e21f0d">
Case 1 matches the ground truth the most. However, the difference in results between the cases is not what I expected: I expect the same results for all these cases, i.e. that the result is independent of how the words/boxes in the encoding for inference are ordered.
If the word/box order is relevant, what is the correct order for training and for inference?
Do you think it is beneficial for getting order-independent inference results to shuffle the word order for each training sample?
If useful, I can provide the encoding and fine-tuned model.
|
open
|
2023-05-23T13:44:42Z
|
2023-06-06T09:55:06Z
|
https://github.com/microsoft/unilm/issues/1098
|
[] |
rahelbeloch
| 1
|
flasgger/flasgger
|
api
| 355
|
Markdown Sanitizer Displays Raw HTML Tags
|
I'm using flasgger along with a flask_restful Resource to create my apidocs, like so
```python
# thing_resource.py
import flask_restful
class Thing(flask_restful.Resource):
    def put(self):
        """
        Create a thing

        Here's a list:

        * Bullet 1
        """
        pass
```
The bullet list renders fine, but the text 'Create a thing' will literally render as
`<p>Create a thing</p>`
---
For completion, here's my setup (flask app setup omitted)
```python
# run_api.py
import flasgger
# Swagger setup
app.json_encoder = flasgger.LazyJSONEncoder
template = dict(swaggeUiPrefix=flasgger.LazyString(lambda: flask.request.environ.get('HTTP_X_SCRIPT_NAME', '')))
swagger = flasgger.Swagger(app, template=template, sanitizer=flasgger.MK_SANITIZER)
```
|
open
|
2020-01-21T10:09:57Z
|
2022-03-03T12:01:54Z
|
https://github.com/flasgger/flasgger/issues/355
|
[] |
Anti-Distinctlyminty
| 4
|
electricitymaps/electricitymaps-contrib
|
data-visualization
| 7,321
|
Import export scale range
|
Currently, the actual import and export amounts are hardly visible for countries like Germany, because the scale range used is the same as the maximum capacity of an individual production type.
Would it be useful to instead use the maximum capacity of each country's individual interconnectors as the scale range?
|
open
|
2024-10-14T18:34:37Z
|
2025-01-09T14:47:12Z
|
https://github.com/electricitymaps/electricitymaps-contrib/issues/7321
|
[
"frontend 🎨"
] |
DutchPower24
| 3
|
noirbizarre/flask-restplus
|
api
| 518
|
OpenAPI 3.0.x Support
|
I'm somewhat new to swagger, but if I understand it correctly, flask_restplus currently outputs the swagger.json in the old "Swagger 2.0" format. Are there any plans to also support the OpenAPI 3.0.x specification?
Background: We are having some trouble with the way swagger handles authorizations in the cookie, which has apparently been fixed in OpenAPI 3.0.0 (according to [this issue](https://github.com/swagger-api/swagger-ui/issues/461) on the swagger-ui repo).
|
open
|
2018-08-31T09:19:53Z
|
2019-12-14T20:05:52Z
|
https://github.com/noirbizarre/flask-restplus/issues/518
|
[] |
mmanhertz
| 9
|
pytorch/pytorch
|
machine-learning
| 149,158
|
[torch.export] ExportedProgram.module() does not support torch.Size as input
|
Not sure if this is an expected behavior, so file an issue to understand it. The repro is below:
```python
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, theta, size):
return torch.nn.functional.affine_grid(theta, size, align_corners=None)
model = Model()
theta = torch.ones((1, 2, 3))
size = torch.Size((1,3,24,24))
ep = torch.export.export(model, (theta, size,), strict=False)
args, kwargs = ep.example_inputs
# Fail with TreeSpec error
ep.module()(*args)
# Pass
from torch.utils._pytree import tree_map_only
args = tree_map_only(
torch.Size,
lambda x: torch.Tensor(x),
args
)
ep.module()(*args)
```
It looks like the GraphModule requires its inputs to be torch.Tensor; it does not follow the input signature of the original torch.nn.Module.
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
|
closed
|
2025-03-13T21:54:46Z
|
2025-03-20T16:23:17Z
|
https://github.com/pytorch/pytorch/issues/149158
|
[
"oncall: pt2",
"oncall: export"
] |
titaiwangms
| 3
|
plotly/dash-table
|
dash
| 644
|
Accessibility issues
|
Some great issues & suggestions raised in https://community.plot.ly/t/solved-datatables-and-accessibility/31085
|
open
|
2019-11-15T10:54:02Z
|
2022-12-07T10:59:07Z
|
https://github.com/plotly/dash-table/issues/644
|
[] |
chriddyp
| 2
|
pytorch/pytorch
|
python
| 149,406
|
Recompiles due to Python float objects
|
### 🐛 Describe the bug
```python
import os
os.environ["TORCH_LOGS"] = "recompiles_verbose"
import torch
x = torch.randn((10, 10), device="cuda", requires_grad=False)
@torch.compile(dynamic=True)
def model(x, y):
return x * y
y = model(x, 1.5)
y2 = model(x, 2.5)
```
### Error logs
No error is raised; just the recompile log:
```
V0318 23:04:20.093000 2002586 site-packages/torch/_dynamo/guards.py:2811] [6/1] [__recompiles_verbose] Recompiling function model in /tmp/ipykernel_2002586/874691697.py:9
V0318 23:04:20.093000 2002586 site-packages/torch/_dynamo/guards.py:2811] [6/1] [__recompiles_verbose] triggered by the following guard failure(s):
V0318 23:04:20.093000 2002586 site-packages/torch/_dynamo/guards.py:2811] [6/1] [__recompiles_verbose] guard 0 failures:
V0318 23:04:20.093000 2002586 site-packages/torch/_dynamo/guards.py:2811] [6/1] [__recompiles_verbose] - 6/0: L['y'] == 1.5
```
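For reference, a common way to avoid the specialization (my assumption, not part of the original report) is to pass the scalar as a 0-d tensor, so dynamo treats it as a traced input instead of baking the Python float into the graph:
```python
# hypothetical variant of the repro above
y = model(x, torch.tensor(1.5, device="cuda"))
y2 = model(x, torch.tensor(2.5, device="cuda"))  # no recompile: the scalar is now a tensor input
```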
### Versions
torch 2.5.1
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
|
closed
|
2025-03-18T15:15:32Z
|
2025-03-22T05:59:02Z
|
https://github.com/pytorch/pytorch/issues/149406
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: dynamo"
] |
efsotr
| 4
|
numba/numba
|
numpy
| 9,810
|
None
|
No issue
|
closed
|
2024-11-26T10:13:59Z
|
2024-11-26T11:16:45Z
|
https://github.com/numba/numba/issues/9810
|
[
"abandoned"
] |
ghost
| 0
|
thomaxxl/safrs
|
rest-api
| 84
|
expose_existing starts but no swagger.json found
|
Hi,
I was able to get the expose_existing.py script to generate a models.py file from an existing MSSQL DB, but http://localhost:5000/api shows the error 'Not found: api/swagger.json'.
What am I doing wrong?
Also: when I step through with a debugger, the static_folder that the swagger_ui blueprint points to is the dist/ folder of flask_swagger_ui.
thanks,
|
closed
|
2020-12-10T22:02:49Z
|
2021-01-10T08:51:20Z
|
https://github.com/thomaxxl/safrs/issues/84
|
[] |
jmsy00
| 1
|
axnsan12/drf-yasg
|
rest-api
| 289
|
swagger_fake_view => TypeError: get_queryset() missing 1 required positional argument: 'self'
|
Hi,
After updating to 1.12.x, I used swagger_fake_view for all my routes.
For many routes (not all of them), I get the following trace:
```
view's type.get_parsers raised exception during schema generation; use `getattr(self, 'swagger_fake_view', False)` to detect and short-circuit this
Traceback (most recent call last):
File "/home/amoki/.local/share/virtualenvs/python-api-0OvACxs8/lib/python3.7/site-packages/drf_yasg/inspectors/base.py", line 28, in call_view_method
res = getattr(view, method_name)()
TypeError: get_queryset() missing 1 required positional argument: 'self'
```
After investigating, it comes from:
https://github.com/axnsan12/drf-yasg/blob/master/src/drf_yasg/generators.py#L492
where `get_queryset` is called on the class rather than on an instance, so there is no `self` parameter.
Therefore, I can't check `self.swagger_fake_view`, because there is no `self` :/
|
closed
|
2019-01-14T14:57:33Z
|
2019-01-14T15:32:52Z
|
https://github.com/axnsan12/drf-yasg/issues/289
|
[] |
Amoki
| 2
|
scikit-learn-contrib/metric-learn
|
scikit-learn
| 286
|
ITML accuracy worse than guessing
|
I'm using the [Omniglot](https://github.com/brendenlake/omniglot) dataset, which has 20 samples of 964 classes of images, each which is 1 x 105 x 105.
I'm embedding these samples down to 512 dimensions. So the final dataset has shape `(964*20, 512)`.
To implement a 5-way one-shot task, for each one of the 964 classes, I create a support set of five images; the first one is another sample of the same class and the other four are from different classes. The tuples therefore consist of the query image with one of the support set images.
This is what it looks like below (I'm using preprocessor with indices):
```
# For each index, make a 5-way 1-shot task with four other randomly selected classes
indices = []
n_classes, n_examples, dim = train_embeddings.reshape(-1, 20, 512).shape
for i in range(n_classes):
    ex1 = rng.randint(n_examples)
    ex2 = rng.randint(n_examples)
    indices.append([i * 20 + ex1, i * 20 + ex2])  # First pair is from the same class
    # Remaining four pairs are from different classes
    for j in range(4):
        random_class = rng.randint(n_classes)
        while random_class == i:
            random_class = rng.randint(n_classes)
        ex3 = rng.randint(n_examples)
        indices.append([i * 20 + ex1, random_class * 20 + ex3])

labels = [1, -1, -1, -1, -1] * n_classes
indices, labels = shuffle(indices, labels)

itml = ITML(preprocessor=train_embeddings)
itml.fit(indices, labels)
```
At test time, instead of using `predict()`, I use the `score_pairs()` function and return the index with the highest score. If it corresponds to the same index in the gold labels, I count that as a correctly classified task:
```
pair_scores = model.score_pairs(encoded_pairs)
if np.argmax(pair_scores) == np.argmax(targets):
    return 1
```
With all this in mind, I'm getting a 5-way accuracy of 0%. 3-way comes out to 12%, so it's not completely aberrant, but still far worse than random guessing. Am I using this algorithm correctly?
|
closed
|
2020-04-25T02:31:30Z
|
2020-05-29T06:02:10Z
|
https://github.com/scikit-learn-contrib/metric-learn/issues/286
|
[] |
drjosephliu
| 2
|
graphql-python/graphene
|
graphql
| 1,127
|
☂️ Graphene v3
|
This issue is to track v3 of Graphene which will contain some breaking changes.
## Breaking changes
* Upgrade graphql-core to v3, which brings feature parity with GraphQL.js v14.6.0
* Drop support for Python v2
* Schema type changes
* "Backends" have been removed
* Switch arguments from `type` to `type_` since `type` is a builtin: https://github.com/graphql-python/graphene/pull/738
* Upgrade to graphql-core v3.1, which corresponds to GraphQL.js v15 (will be released soon).
* Change enum behaviour: https://github.com/graphql-python/graphene/pull/1153
* Remove `to_const` function
## TODO
* [x] Write up some proper release notes: https://github.com/graphql-python/graphene/wiki/v3-release-notes
* [x] Merge https://github.com/graphql-python/graphene/pull/1111
* [x] Switch arguments from `type` to `type_` since `type` is a builtin: https://github.com/graphql-python/graphene/pull/738
* [x] Remove `to_const` function: #1212
* [x] Change enum behaviour: https://github.com/graphql-python/graphene/pull/1153
* [x] Set minimum graphql-core version to v3.1 https://github.com/graphql-python/graphene/pull/1215
* [x] Upgrade [graphene-sqlalchemy](https://github.com/graphql-python/graphene-sqlalchemy) to v3 [WIP](https://github.com/graphql-python/graphene-sqlalchemy/pull/306)
* [x] Upgrade [graphene-mongo](https://github.com/graphql-python/graphene-mongo) to v3 [WIP](https://github.com/graphql-python/graphene-mongo/pull/155)
|
closed
|
2020-01-29T11:09:09Z
|
2022-05-18T08:17:17Z
|
https://github.com/graphql-python/graphene/issues/1127
|
[] |
jkimbo
| 82
|
microsoft/RD-Agent
|
automation
| 549
|
Invalid Official WeChat Group Chat QR Code
|
Hey there, happy new year! It seems the QR code for the group chat has expired; could you renew it?
Thanks!
|
closed
|
2025-02-04T15:51:50Z
|
2025-02-09T06:03:06Z
|
https://github.com/microsoft/RD-Agent/issues/549
|
[
"question"
] |
LCyson
| 1
|
MagicStack/asyncpg
|
asyncio
| 841
|
[Bug] Wrong number of columns Error
|
* **asyncpg version**: 0.23
* **PostgreSQL version**: 13
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: Yandex Cloud
* **Python version**: 3.7
* **Platform**: Ubuntu 18.04
* **Do you use pgbouncer?**: yes
* **Did you install asyncpg with pip?**: no
* **If you built asyncpg locally, which version of Cython did you use?**: no
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: don't know
<!-- Enter your issue details below this comment. -->
We have a Python web server which sends queries to the PostgreSQL cluster. After adding a column to the table via a migration, without disconnecting, I encounter errors with some queries to that table. An example query is below:
`WITH row_info as ( SELECT id FROM UNNEST($1::scheme.affected_table[]) ) DELETE FROM scheme.affected_table AS entities USING row_info WHERE entities.id = row_info.id;`
To $1 I pass the list of dictionaries like `[{'id': 1, 'field1': 'value1', ...}]`.
Before the migration, or from new connections, the query works perfectly. But for queries from the web server I get "wrong number of columns: 11, expected 12". Rebooting the web server resolves the problem.
I think the problem may be related to prepared statements or the encoding of the Mapping.
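For what it's worth, a minimal sketch of a mitigation consistent with the prepared-statement theory above (an assumption on my side): disable asyncpg's statement cache, which is also the recommended setting when running behind pgbouncer:
```python
import asyncpg


async def get_pool(dsn: str) -> asyncpg.Pool:
    # statement_cache_size=0 disables the per-connection prepared-statement
    # cache, so column metadata is re-read for each query; this avoids stale
    # cached statements after a schema migration.
    return await asyncpg.create_pool(dsn, statement_cache_size=0)
```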
|
open
|
2021-10-29T09:10:36Z
|
2021-11-16T18:16:43Z
|
https://github.com/MagicStack/asyncpg/issues/841
|
[] |
serjflint
| 5
|
coqui-ai/TTS
|
pytorch
| 3,099
|
KeyError: 'xtts_v1'
|
Hey, when I run the following Python API example, I encounter `KeyError: 'xtts_v1'`:
```
import torch
from TTS.api import TTS
# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"
# List available 🐸TTS models
print(TTS().list_models())
# Init TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1").to(device)
# Run TTS
# ❗ Since this model is multi-lingual voice cloning model, we must set the target speaker_wav and language
# Text to speech list of amplitude values as output
wav = tts.tts(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en")
# Text to speech to a file
tts.tts_to_file(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
```
### The Error
```
No API token found for 🐸Coqui Studio voices - https://coqui.ai
Visit 🔗https://app.coqui.ai/account to get one.
Set it as an environment variable `export COQUI_STUDIO_TOKEN=<token>`
['tts_models/multilingual/multi-dataset/your_tts', 'tts_models/bg/cv/vits', 'tts_models/cs/cv/vits', 'tts_models/da/cv/vits', 'tts_models/et/cv/vits', 'tts_models/ga/cv/vits', 'tts_models/en/ek1/tacotron2', 'tts_models/en/ljspeech/tacotron2-DDC', 'tts_models/en/ljspeech/tacotron2-DDC_ph', 'tts_models/en/ljspeech/glow-tts', 'tts_models/en/ljspeech/speedy-speech', 'tts_models/en/ljspeech/tacotron2-DCA', 'tts_models/en/ljspeech/vits', 'tts_models/en/ljspeech/vits--neon', 'tts_models/en/ljspeech/fast_pitch', 'tts_models/en/ljspeech/overflow', 'tts_models/en/ljspeech/neural_hmm', 'tts_models/en/vctk/vits', 'tts_models/en/vctk/fast_pitch', 'tts_models/en/sam/tacotron-DDC', 'tts_models/en/blizzard2013/capacitron-t2-c50', 'tts_models/en/blizzard2013/capacitron-t2-c150_v2', 'tts_models/en/multi-dataset/tortoise-v2', 'tts_models/en/jenny/jenny', 'tts_models/es/mai/tacotron2-DDC', 'tts_models/es/css10/vits', 'tts_models/fr/mai/tacotron2-DDC', 'tts_models/fr/css10/vits', 'tts_models/uk/mai/glow-tts', 'tts_models/uk/mai/vits', 'tts_models/zh-CN/baker/tacotron2-DDC-GST', 'tts_models/nl/mai/tacotron2-DDC', 'tts_models/nl/css10/vits', 'tts_models/de/thorsten/tacotron2-DCA', 'tts_models/de/thorsten/vits', 'tts_models/de/thorsten/tacotron2-DDC', 'tts_models/de/css10/vits-neon', 'tts_models/ja/kokoro/tacotron2-DDC', 'tts_models/tr/common-voice/glow-tts', 'tts_models/it/mai_female/glow-tts', 'tts_models/it/mai_female/vits', 'tts_models/it/mai_male/glow-tts', 'tts_models/it/mai_male/vits', 'tts_models/ewe/openbible/vits', 'tts_models/hau/openbible/vits', 'tts_models/lin/openbible/vits', 'tts_models/tw_akuapem/openbible/vits', 'tts_models/tw_asante/openbible/vits', 'tts_models/yor/openbible/vits', 'tts_models/hu/css10/vits', 'tts_models/el/cv/vits', 'tts_models/fi/css10/vits', 'tts_models/hr/cv/vits', 'tts_models/lt/cv/vits', 'tts_models/lv/cv/vits', 'tts_models/mt/cv/vits', 'tts_models/pl/mai_female/vits', 'tts_models/pt/cv/vits', 'tts_models/ro/cv/vits', 'tts_models/sk/cv/vits', 'tts_models/sl/cv/vits', 'tts_models/sv/cv/vits', 'tts_models/ca/custom/vits', 'tts_models/fa/custom/glow-tts', 'tts_models/bn/custom/vits-male', 'tts_models/bn/custom/vits-female']
Traceback (most recent call last):
File "d:/liveManga/work.py", line 11, in <module>
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1").to(device)
File "C:\Users\smart\AppData\Local\Programs\Python\Python37\lib\site-packages\TTS\api.py", line 289, in __init__
self.load_tts_model_by_name(model_name, gpu)
File "C:\Users\smart\AppData\Local\Programs\Python\Python37\lib\site-packages\TTS\api.py", line 386, in load_tts_model_by_name
model_name
File "C:\Users\smart\AppData\Local\Programs\Python\Python37\lib\site-packages\TTS\api.py", line 348, in download_model_by_name
model_path, config_path, model_item = self.manager.download_model(model_name)
File "C:\Users\smart\AppData\Local\Programs\Python\Python37\lib\site-packages\TTS\utils\manage.py", line 287, in download_model
model_item, model_full_name, model = self._set_model_item(model_name)
File "C:\Users\smart\AppData\Local\Programs\Python\Python37\lib\site-packages\TTS\utils\manage.py", line 269, in _set_model_item
model_item = self.models_dict[model_type][lang][dataset][model]
KeyError: 'xtts_v1'
```
### Environment
```shell
TTS version 0.14.3
python 3.7
cuda 12
windows 11
```
|
closed
|
2023-10-22T02:52:05Z
|
2023-10-22T19:16:14Z
|
https://github.com/coqui-ai/TTS/issues/3099
|
[] |
a-3isa
| 1
|
bregman-arie/devops-exercises
|
python
| 1
|
Add questions on Terraform
|
closed
|
2019-10-04T15:46:41Z
|
2019-10-21T10:18:05Z
|
https://github.com/bregman-arie/devops-exercises/issues/1
|
[] |
bregman-arie
| 1
|
|
grillazz/fastapi-sqlalchemy-asyncpg
|
pydantic
| 2
|
add model relations in async environment
|
closed
|
2021-06-13T17:31:55Z
|
2023-10-25T05:26:57Z
|
https://github.com/grillazz/fastapi-sqlalchemy-asyncpg/issues/2
|
[
"enhancement"
] |
grillazz
| 1
|
|
MaartenGr/BERTopic
|
nlp
| 1,806
|
Best practices on saving results
|
Hello,
I'm new to topic modeling, and I'm looking for some help on best practices for storing the results, including topics and the relationship between topics and documents, in a relational database (like MySQL or Postgres). Additionally, I'm interested in how to integrate this with the process of merging models whenever new data becomes available.
What are the recommended approaches for achieving these goals? Specifically, I'd like to understand how to assign unique identifiers to topics for easy referencing in the database.
|
closed
|
2024-02-13T09:53:29Z
|
2024-02-22T19:20:54Z
|
https://github.com/MaartenGr/BERTopic/issues/1806
|
[] |
jhgeluk
| 4
|
sinaptik-ai/pandas-ai
|
data-science
| 1,158
|
Agent not working as expected
|
### System Info
Python version: 3.11
PandasAI: 2.0.40
OS: macOS Ventura 13.6.4
### 🐛 Describe the bug
I'm using the following code from the official [docs](https://docs.pandas-ai.com/en/latest/getting-started/#clarification-questions) on using the Agent. I've modified it slightly, changing the "deals_opened" value for France from 70 to 180 and tweaking the queries:
**Input:**
```
import os
import pandas as pd
from dotenv import load_dotenv
from pandasai import Agent
from pandasai.llm import OpenAI
def get_env(key: str):
    load_dotenv()
    return os.getenv(key)


sales_by_country = pd.DataFrame({
    "country": ["United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia",
                "Japan", "China"],
    "sales": [5000, 3200, 2900, 4100, 2300, 2100, 2500, 2600, 4500, 7000],
    "deals_opened": [142, 80, 180, 90, 60, 50, 40, 30, 110, 120],
    "deals_closed": [120, 70, 60, 80, 50, 40, 30, 20, 100, 110]
})
# By default, unless you choose a different LLM, it will use BambooLLM.
# You can get your free API key signing up at https://pandabi.ai (you can also configure it in your .env file)
openai = OpenAI(api_token=get_env('OPENAI_API_KEY'))
agent = Agent(sales_by_country, config={"llm": openai})
print(agent.chat('Which are the top 5 countries by sales?'))
print(agent.chat('And which country has the most deals?'))
```
**Expected Output:**
```
The top 5 countries by sales are: China, United States, Japan, Germany, United Kingdom
The country with the most deals is United States.
```
**Actual Output:**
```
The top 5 countries by sales are: China, United States, Japan, Germany, United Kingdom
The country with the most deals is France.
```
From what I understand about the Agent, additional queries after the first should be treated as follow-up queries when the LLM finds them relevant. What is happening instead is that it defaults back to the original data and runs the query against that, **not** against the subset of data returned by the first query.
|
closed
|
2024-05-15T19:58:30Z
|
2024-11-18T16:04:15Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/1158
|
[] |
fletchsims
| 3
|
roboflow/supervision
|
pytorch
| 1,091
|
How to add a class filter in the YOLOv8 tracker and zone counting
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
How can I add a class filter to the YOLOv8 tracker and zone counting? Here is my code:
```
import cv2
from ultralytics import YOLO
import time
import numpy as np
import supervision as sv

# Define the detection zone polygon in normalized coordinates
ZONE_POLYGON = np.array([
    [0, 0],
    [0.5, 0],
    [0.5, 1],
    [0, 1]
])

# Load the YOLOv8 model
model = YOLO('app/yolov8n.pt')

# Open the video file or webcam
cap = cv2.VideoCapture("app/traffics=.mp4")

# Initialize the counter for detections within the zone
zone_detections_count = 0

# Get video frame resolution
ret, frame = cap.read()
if not ret:
    print("Failed to get frame from video source")
    cap.release()
    exit()
frame_height, frame_width = frame.shape[:2]
cap.set(cv2.CAP_PROP_POS_FRAMES, 0)

# Adjust the zone polygon to the video resolution
zone_polygon = (ZONE_POLYGON * np.array([frame_width, frame_height])).astype(int)
zone = sv.PolygonZone(polygon=zone_polygon, frame_resolution_wh=(frame_width, frame_height))
zone_annotator = sv.PolygonZoneAnnotator(
    zone=zone,
    color=sv.Color.red(),
    thickness=2,
    text_thickness=4,
    text_scale=2
)

# Initialize FPS calculation
frame_count = 0
start_time = time.time()

# Create a named window and set it to be resizable, then make it full screen
cv2.namedWindow("YOLOv8 Tracking with Zone", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("YOLOv8 Tracking with Zone", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

# Loop through the video frames
while cap.isOpened():
    success, frame = cap.read()
    if success:
        results = model.track(frame, persist=True)
        detections = sv.Detections.from_yolov8(results[0])
        detections = detections[detections.class_id == 0]
        zone_triggered_detections = zone.trigger(detections=detections)
        zone_detections_count += len(zone_triggered_detections)
        annotated_frame = results[0].plot()
        # cv2.putText(annotated_frame, f"Zone Detections: {zone_detections_count}", (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
        frame_count += 1
        elapsed_time = time.time() - start_time
        if elapsed_time > 1:
            fps = frame_count / elapsed_time
            cv2.putText(annotated_frame, f"FPS: {fps:.2f}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
            frame_count = 0
            start_time = time.time()
        annotated_frame = zone_annotator.annotate(scene=annotated_frame)
        cv2.imshow("YOLOv8 Tracking with Zone", annotated_frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        break

cap.release()
cv2.destroyAllWindows()
```
I'm getting the following error:
```
C:\Users\RizwanMLE\anaconda3\envs\yolov8\lib\site-packages\supervision\detection\core.py:175: FutureWarning: In the future `np.bool` will be defined as the corresponding NumPy scalar.
  if isinstance(index, np.ndarray) and index.dtype == np.bool:
Traceback (most recent call last):
  File "c:/Users/RizwanMLE/Desktop/freelancing projects/cv/waleed_fyp/app/test2.py", line 157, in <module>
    detections = detections[detections.class_id == 0]
  File "C:\Users\RizwanMLE\anaconda3\envs\yolov8\lib\site-packages\supervision\detection\core.py", line 175, in __getitem__
    if isinstance(index, np.ndarray) and index.dtype == np.bool:
  File "C:\Users\RizwanMLE\anaconda3\envs\yolov8\lib\site-packages\numpy\__init__.py", line 305, in __getattr__
    raise AttributeError(__former_attrs__[attr])
AttributeError: module 'numpy' has no attribute 'bool'.
`np.bool` was a deprecated alias for the builtin `bool`. To avoid this error in existing code, use `bool` by itself. Doing this will not modify any behavior
and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
```
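As a side note, a minimal stopgap consistent with the deprecation message above (an assumption, not verified against this supervision version; upgrading supervision or pinning `numpy<1.24` is the cleaner fix): restore the removed alias before importing supervision:
```python
import numpy as np

# NumPy >= 1.24 removed the deprecated np.bool alias that this version of
# supervision still references; restore it so Detections.__getitem__ works.
if not hasattr(np, "bool"):
    np.bool = bool
```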
### Additional
_No response_
|
closed
|
2024-04-04T09:39:38Z
|
2024-04-08T10:26:50Z
|
https://github.com/roboflow/supervision/issues/1091
|
[
"question"
] |
Rizwanali324
| 10
|
coqui-ai/TTS
|
pytorch
| 3,177
|
[Bug] Loading XTTS via Xtts.load_checkpoint()
|
### Describe the bug
When loading the model using `Xtts.load_checkpoint`, an exception is raised: `Error(s) in loading state_dict for Xtts`, reporting missing GPT embedding weight keys and a size mismatch on the mel embedding. I even tried providing a directory containing the base (v2) model checkpoints and got the same result.
### To Reproduce
```
import os
import torch
import torchaudio
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts
print("Loading model...")
config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", use_deepspeed=True)
model.cuda()
print("Computing speaker latents...")
gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(audio_path=["reference.wav"])
print("Inference...")
out = model.inference(
"It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
"en",
gpt_cond_latent,
speaker_embedding,
temperature=0.7, # Add custom parameters here
)
torchaudio.save("xtts.wav", torch.tensor(out["wav"]).unsqueeze(0), 24000)
```
### Expected behavior
Load the checkpoint and run inference without exception.
### Logs
```shell
11-08 22:13:53 [__main__ ] ERROR - Error(s) in loading state_dict for Xtts:
Missing key(s) in state_dict: "gpt.gpt.wte.weight", "gpt.prompt_embedding.weight", "gpt.prompt_pos_embedding.emb.weight", "gpt.gpt_inference.transformer.h.0.ln_1.weight", "gpt.gpt_inference.transformer.h.0.ln_1.bias", "gpt.gpt_inference.transformer.h.0.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.0.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.0.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.0.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.0.ln_2.weight", "gpt.gpt_inference.transformer.h.0.ln_2.bias", "gpt.gpt_inference.transformer.h.0.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.0.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.0.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.0.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.1.ln_1.weight", "gpt.gpt_inference.transformer.h.1.ln_1.bias", "gpt.gpt_inference.transformer.h.1.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.1.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.1.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.1.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.1.ln_2.weight", "gpt.gpt_inference.transformer.h.1.ln_2.bias", "gpt.gpt_inference.transformer.h.1.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.1.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.1.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.1.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.2.ln_1.weight", "gpt.gpt_inference.transformer.h.2.ln_1.bias", "gpt.gpt_inference.transformer.h.2.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.2.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.2.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.2.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.2.ln_2.weight", "gpt.gpt_inference.transformer.h.2.ln_2.bias", "gpt.gpt_inference.transformer.h.2.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.2.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.2.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.2.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.3.ln_1.weight", "gpt.gpt_inference.transformer.h.3.ln_1.bias", "gpt.gpt_inference.transformer.h.3.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.3.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.3.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.3.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.3.ln_2.weight", "gpt.gpt_inference.transformer.h.3.ln_2.bias", "gpt.gpt_inference.transformer.h.3.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.3.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.3.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.3.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.4.ln_1.weight", "gpt.gpt_inference.transformer.h.4.ln_1.bias", "gpt.gpt_inference.transformer.h.4.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.4.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.4.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.4.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.4.ln_2.weight", "gpt.gpt_inference.transformer.h.4.ln_2.bias", "gpt.gpt_inference.transformer.h.4.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.4.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.4.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.4.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.5.ln_1.weight", "gpt.gpt_inference.transformer.h.5.ln_1.bias", "gpt.gpt_inference.transformer.h.5.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.5.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.5.attn.c_proj.weight", 
"gpt.gpt_inference.transformer.h.5.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.5.ln_2.weight", "gpt.gpt_inference.transformer.h.5.ln_2.bias", "gpt.gpt_inference.transformer.h.5.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.5.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.5.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.5.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.6.ln_1.weight", "gpt.gpt_inference.transformer.h.6.ln_1.bias", "gpt.gpt_inference.transformer.h.6.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.6.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.6.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.6.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.6.ln_2.weight", "gpt.gpt_inference.transformer.h.6.ln_2.bias", "gpt.gpt_inference.transformer.h.6.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.6.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.6.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.6.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.7.ln_1.weight", "gpt.gpt_inference.transformer.h.7.ln_1.bias", "gpt.gpt_inference.transformer.h.7.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.7.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.7.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.7.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.7.ln_2.weight", "gpt.gpt_inference.transformer.h.7.ln_2.bias", "gpt.gpt_inference.transformer.h.7.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.7.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.7.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.7.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.8.ln_1.weight", "gpt.gpt_inference.transformer.h.8.ln_1.bias", "gpt.gpt_inference.transformer.h.8.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.8.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.8.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.8.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.8.ln_2.weight", "gpt.gpt_inference.transformer.h.8.ln_2.bias", "gpt.gpt_inference.transformer.h.8.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.8.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.8.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.8.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.9.ln_1.weight", "gpt.gpt_inference.transformer.h.9.ln_1.bias", "gpt.gpt_inference.transformer.h.9.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.9.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.9.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.9.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.9.ln_2.weight", "gpt.gpt_inference.transformer.h.9.ln_2.bias", "gpt.gpt_inference.transformer.h.9.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.9.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.9.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.9.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.10.ln_1.weight", "gpt.gpt_inference.transformer.h.10.ln_1.bias", "gpt.gpt_inference.transformer.h.10.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.10.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.10.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.10.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.10.ln_2.weight", "gpt.gpt_inference.transformer.h.10.ln_2.bias", "gpt.gpt_inference.transformer.h.10.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.10.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.10.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.10.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.11.ln_1.weight", 
"gpt.gpt_inference.transformer.h.11.ln_1.bias", "gpt.gpt_inference.transformer.h.11.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.11.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.11.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.11.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.11.ln_2.weight", "gpt.gpt_inference.transformer.h.11.ln_2.bias", "gpt.gpt_inference.transformer.h.11.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.11.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.11.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.11.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.12.ln_1.weight", "gpt.gpt_inference.transformer.h.12.ln_1.bias", "gpt.gpt_inference.transformer.h.12.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.12.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.12.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.12.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.12.ln_2.weight", "gpt.gpt_inference.transformer.h.12.ln_2.bias", "gpt.gpt_inference.transformer.h.12.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.12.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.12.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.12.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.13.ln_1.weight", "gpt.gpt_inference.transformer.h.13.ln_1.bias", "gpt.gpt_inference.transformer.h.13.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.13.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.13.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.13.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.13.ln_2.weight", "gpt.gpt_inference.transformer.h.13.ln_2.bias", "gpt.gpt_inference.transformer.h.13.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.13.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.13.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.13.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.14.ln_1.weight", "gpt.gpt_inference.transformer.h.14.ln_1.bias", "gpt.gpt_inference.transformer.h.14.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.14.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.14.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.14.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.14.ln_2.weight", "gpt.gpt_inference.transformer.h.14.ln_2.bias", "gpt.gpt_inference.transformer.h.14.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.14.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.14.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.14.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.15.ln_1.weight", "gpt.gpt_inference.transformer.h.15.ln_1.bias", "gpt.gpt_inference.transformer.h.15.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.15.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.15.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.15.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.15.ln_2.weight", "gpt.gpt_inference.transformer.h.15.ln_2.bias", "gpt.gpt_inference.transformer.h.15.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.15.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.15.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.15.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.16.ln_1.weight", "gpt.gpt_inference.transformer.h.16.ln_1.bias", "gpt.gpt_inference.transformer.h.16.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.16.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.16.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.16.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.16.ln_2.weight", "gpt.gpt_inference.transformer.h.16.ln_2.bias", 
"gpt.gpt_inference.transformer.h.16.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.16.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.16.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.16.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.17.ln_1.weight", "gpt.gpt_inference.transformer.h.17.ln_1.bias", "gpt.gpt_inference.transformer.h.17.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.17.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.17.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.17.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.17.ln_2.weight", "gpt.gpt_inference.transformer.h.17.ln_2.bias", "gpt.gpt_inference.transformer.h.17.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.17.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.17.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.17.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.18.ln_1.weight", "gpt.gpt_inference.transformer.h.18.ln_1.bias", "gpt.gpt_inference.transformer.h.18.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.18.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.18.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.18.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.18.ln_2.weight", "gpt.gpt_inference.transformer.h.18.ln_2.bias", "gpt.gpt_inference.transformer.h.18.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.18.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.18.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.18.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.19.ln_1.weight", "gpt.gpt_inference.transformer.h.19.ln_1.bias", "gpt.gpt_inference.transformer.h.19.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.19.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.19.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.19.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.19.ln_2.weight", "gpt.gpt_inference.transformer.h.19.ln_2.bias", "gpt.gpt_inference.transformer.h.19.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.19.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.19.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.19.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.20.ln_1.weight", "gpt.gpt_inference.transformer.h.20.ln_1.bias", "gpt.gpt_inference.transformer.h.20.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.20.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.20.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.20.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.20.ln_2.weight", "gpt.gpt_inference.transformer.h.20.ln_2.bias", "gpt.gpt_inference.transformer.h.20.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.20.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.20.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.20.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.21.ln_1.weight", "gpt.gpt_inference.transformer.h.21.ln_1.bias", "gpt.gpt_inference.transformer.h.21.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.21.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.21.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.21.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.21.ln_2.weight", "gpt.gpt_inference.transformer.h.21.ln_2.bias", "gpt.gpt_inference.transformer.h.21.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.21.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.21.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.21.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.22.ln_1.weight", "gpt.gpt_inference.transformer.h.22.ln_1.bias", "gpt.gpt_inference.transformer.h.22.attn.c_attn.weight", 
"gpt.gpt_inference.transformer.h.22.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.22.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.22.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.22.ln_2.weight", "gpt.gpt_inference.transformer.h.22.ln_2.bias", "gpt.gpt_inference.transformer.h.22.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.22.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.22.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.22.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.23.ln_1.weight", "gpt.gpt_inference.transformer.h.23.ln_1.bias", "gpt.gpt_inference.transformer.h.23.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.23.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.23.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.23.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.23.ln_2.weight", "gpt.gpt_inference.transformer.h.23.ln_2.bias", "gpt.gpt_inference.transformer.h.23.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.23.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.23.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.23.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.24.ln_1.weight", "gpt.gpt_inference.transformer.h.24.ln_1.bias", "gpt.gpt_inference.transformer.h.24.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.24.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.24.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.24.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.24.ln_2.weight", "gpt.gpt_inference.transformer.h.24.ln_2.bias", "gpt.gpt_inference.transformer.h.24.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.24.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.24.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.24.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.25.ln_1.weight", "gpt.gpt_inference.transformer.h.25.ln_1.bias", "gpt.gpt_inference.transformer.h.25.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.25.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.25.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.25.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.25.ln_2.weight", "gpt.gpt_inference.transformer.h.25.ln_2.bias", "gpt.gpt_inference.transformer.h.25.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.25.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.25.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.25.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.26.ln_1.weight", "gpt.gpt_inference.transformer.h.26.ln_1.bias", "gpt.gpt_inference.transformer.h.26.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.26.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.26.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.26.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.26.ln_2.weight", "gpt.gpt_inference.transformer.h.26.ln_2.bias", "gpt.gpt_inference.transformer.h.26.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.26.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.26.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.26.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.27.ln_1.weight", "gpt.gpt_inference.transformer.h.27.ln_1.bias", "gpt.gpt_inference.transformer.h.27.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.27.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.27.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.27.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.27.ln_2.weight", "gpt.gpt_inference.transformer.h.27.ln_2.bias", "gpt.gpt_inference.transformer.h.27.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.27.mlp.c_fc.bias", 
"gpt.gpt_inference.transformer.h.27.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.27.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.28.ln_1.weight", "gpt.gpt_inference.transformer.h.28.ln_1.bias", "gpt.gpt_inference.transformer.h.28.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.28.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.28.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.28.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.28.ln_2.weight", "gpt.gpt_inference.transformer.h.28.ln_2.bias", "gpt.gpt_inference.transformer.h.28.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.28.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.28.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.28.mlp.c_proj.bias", "gpt.gpt_inference.transformer.h.29.ln_1.weight", "gpt.gpt_inference.transformer.h.29.ln_1.bias", "gpt.gpt_inference.transformer.h.29.attn.c_attn.weight", "gpt.gpt_inference.transformer.h.29.attn.c_attn.bias", "gpt.gpt_inference.transformer.h.29.attn.c_proj.weight", "gpt.gpt_inference.transformer.h.29.attn.c_proj.bias", "gpt.gpt_inference.transformer.h.29.ln_2.weight", "gpt.gpt_inference.transformer.h.29.ln_2.bias", "gpt.gpt_inference.transformer.h.29.mlp.c_fc.weight", "gpt.gpt_inference.transformer.h.29.mlp.c_fc.bias", "gpt.gpt_inference.transformer.h.29.mlp.c_proj.weight", "gpt.gpt_inference.transformer.h.29.mlp.c_proj.bias", "gpt.gpt_inference.transformer.ln_f.weight", "gpt.gpt_inference.transformer.ln_f.bias", "gpt.gpt_inference.transformer.wte.weight", "gpt.gpt_inference.pos_embedding.emb.weight", "gpt.gpt_inference.embeddings.weight", "gpt.gpt_inference.final_norm.weight", "gpt.gpt_inference.final_norm.bias", "gpt.gpt_inference.lm_head.0.weight", "gpt.gpt_inference.lm_head.0.bias", "gpt.gpt_inference.lm_head.1.weight", "gpt.gpt_inference.lm_head.1.bias".
Unexpected key(s) in state_dict: "gpt.conditioning_perceiver.latents", "gpt.conditioning_perceiver.layers.0.0.to_q.weight", "gpt.conditioning_perceiver.layers.0.0.to_kv.weight", "gpt.conditioning_perceiver.layers.0.0.to_out.weight", "gpt.conditioning_perceiver.layers.0.1.0.weight", "gpt.conditioning_perceiver.layers.0.1.0.bias", "gpt.conditioning_perceiver.layers.0.1.2.weight", "gpt.conditioning_perceiver.layers.0.1.2.bias", "gpt.conditioning_perceiver.layers.1.0.to_q.weight", "gpt.conditioning_perceiver.layers.1.0.to_kv.weight", "gpt.conditioning_perceiver.layers.1.0.to_out.weight", "gpt.conditioning_perceiver.layers.1.1.0.weight", "gpt.conditioning_perceiver.layers.1.1.0.bias", "gpt.conditioning_perceiver.layers.1.1.2.weight", "gpt.conditioning_perceiver.layers.1.1.2.bias", "gpt.conditioning_perceiver.norm.gamma".
size mismatch for gpt.mel_embedding.weight: copying a param with shape torch.Size([1026, 1024]) from checkpoint, the shape in current model is torch.Size([8194, 1024]).
size mismatch for gpt.mel_head.weight: copying a param with shape torch.Size([1026, 1024]) from checkpoint, the shape in current model is torch.Size([8194, 1024]).
size mismatch for gpt.mel_head.bias: copying a param with shape torch.Size([1026]) from checkpoint, the shape in current model is torch.Size([8194]).
```
### Environment
```shell
{
"CUDA": {
"GPU": ["NVIDIA A100-SXM4-80GB"],
"available": true,
"version": "11.8"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.0+cu118",
"TTS": "0.20.1",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.9.18",
"version": "#183-Ubuntu SMP Mon Oct 2 11:28:33 UTC 2023"
}
}
```
### Additional context
_No response_
|
closed
|
2023-11-09T03:28:30Z
|
2024-06-25T12:46:25Z
|
https://github.com/coqui-ai/TTS/issues/3177
|
[
"bug"
] |
caffeinetoomuch
| 12
|
plotly/plotly.py
|
plotly
| 4,834
|
Explore dropping Pandas requirement for Plotly Express
|
In https://github.com/plotly/plotly.py/pull/4790, nearly all Pandas functionality is removed from Plotly.py, _except_ some of our trendline functions.
Narwhals does not yet support `rolling`, `expanding` nor `ewm`, and until that support is added, we need to use Pandas to calculate trendlines.
Those functions _are_ on the Narwhals roadmap, and we should have a quick follow-up in Plotly.py to implement those changes after the first set of Narwhals changes are made and the good folks at Narwhals release that update 🙂 .
See: https://github.com/plotly/plotly.py/pull/4790/files/6676061152527160b44dbf67ecf6b89e46eef3b2#diff-e667a84629ee8d2879cf22a88c8ff80bcfec1c35be69ab9f50be5938c7089ddaR114-R129
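A minimal illustration (not from the issue) of the Pandas calls that keep the dependency alive; the data here is made up:
```python
import pandas as pd

s = pd.Series([1.0, 2.0, 4.0, 7.0])
print(s.rolling(window=2).mean())   # no Narwhals equivalent yet
print(s.expanding().mean())         # likewise
print(s.ewm(halflife=2.0).mean())   # likewise
```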
|
open
|
2024-10-25T14:47:42Z
|
2024-11-16T20:43:29Z
|
https://github.com/plotly/plotly.py/issues/4834
|
[
"feature",
"P2"
] |
ndrezn
| 2
|
healthchecks/healthchecks
|
django
| 155
|
Title/note/alias for SMS numbers
|
Add an optional "Title" (or maybe call it "Label", "Note", "Recipient Name", ...) field in the "Add SMS Integration" page.
Currently, if the account has multiple SMS integrations for different phone numbers, it is hard to tell them apart.
|
closed
|
2018-02-15T15:56:54Z
|
2018-03-13T14:32:09Z
|
https://github.com/healthchecks/healthchecks/issues/155
|
[] |
cuu508
| 0
|
sinaptik-ai/pandas-ai
|
pandas
| 737
|
Support OpenAI API v1
|
### 🚀 The feature
Add support for the newest version of the API.
### Motivation, pitch
The old `openai.ChatCompletion` has been replaced by `openai.OpenAI.chat.completions` (accessed through a client instance) and `openai.AzureOpenAI.chat.completions` for Azure.
**PR coming soon!**
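A hedged sketch (not part of the original issue) of the call-site change this migration implies; the model name is a placeholder:
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```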
|
closed
|
2023-11-07T22:52:41Z
|
2023-11-15T10:06:15Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/737
|
[
"bug",
"good first issue"
] |
mspronesti
| 4
|
OpenInterpreter/open-interpreter
|
python
| 1,338
|
Fix tip for API Keys
|
### Describe the bug
The tip to save the API key for future use by exporting it to bash/zsh configs shows a redundant newline character "\n".
This can confuse non-programmers when they copy-paste the command directly.
Exact response on terminal:
```
Tip: To save this key for later, run one of the following and then restart your
terminal.
MacOS: echo '\nexport OPENAI_API_KEY=your_api_key' >> ~/.zshrc
Linux: echo '\nexport OPENAI_API_KEY=your_api_key' >> ~/.bashrc
Windows: setx OPENAI_API_KEY your_api_key
```
### Reproduce
Run interpreter without an OpenAI key set in the environment
### Expected behavior
```
Tip: To save this key for later, run one of the following and then restart your
terminal.
MacOS: echo 'export OPENAI_API_KEY=your_api_key' >> ~/.zshrc
Linux: echo 'export OPENAI_API_KEY=your_api_key' >> ~/.bashrc
Windows: setx OPENAI_API_KEY your_api_key
```
### Screenshots
_No response_
### Open Interpreter version
0.3.4
### Python version
3.10.12
### Operating System name and version
Ubuntu 22.04
### Additional context
_No response_
|
open
|
2024-07-14T18:27:16Z
|
2024-08-20T11:56:14Z
|
https://github.com/OpenInterpreter/open-interpreter/issues/1338
|
[] |
JSee98
| 1
|
marshmallow-code/apispec
|
rest-api
| 293
|
Bottle plugin: DeprecationWarning: Switch to Plugin API v2 and access the Route object directly.
|
I get this warning in the tests:
> DeprecationWarning: Switch to Plugin API v2 and access the Route object directly.
It comes from Bottle. See https://github.com/bottlepy/bottle/issues/770 for instance.
|
closed
|
2018-09-21T09:57:49Z
|
2018-10-04T11:34:04Z
|
https://github.com/marshmallow-code/apispec/issues/293
|
[
"help wanted"
] |
lafrech
| 0
|
hbldh/bleak
|
asyncio
| 1,223
|
client.disconnect() hangs on Win 10
|
* bleak version: bleak==0.19.5 bleak-winrt==1.2.0
* Python version: Python 3.10.7
* Operating System: Win 10
* BlueZ version (`bluetoothctl -v`) in case of Linux:
OS: Windows-10-10.0.19044-SP0
Platform:: uname_result(system='Windows', node='Ryzen', release='10', version='10.0.19044', machine='AMD64')
Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)]
### Description
I encountered a problem in my app when closing a connection, so I reproduced it with the small test program below. When the program tries to close a BLE connection, it hangs (I had to abort with Ctrl-C in the run that was used for the log and packet capture below). I'm not sure if it's a bleak bug or a problem in my Python program or BLE device. The device is similar to the sample ESP32 device marked '2' that I emailed about last week.
### What I Did
### Test program:
```
import asyncio
import platform
import signal
import sys
import time

from bleak import BleakClient, BleakScanner
# Adapt to your actual device.
device_address = "0C:8B:95:F2:B4:36"
signal.signal(signal.SIGINT, lambda number, frame: sys.exit())
print(f"OS: {platform.platform()}", flush=True)
print(f"Platform:: {platform.uname()}", flush=True)
print(f"Python {sys.version}", flush=True)
async def test():
#global client
print(f"Trying to connect to {device_address}", flush=True)
device = await BleakScanner.find_device_by_address(device_address, timeout=10.0)
assert device
client = BleakClient(device)
assert client
await client.connect(timeout=10.0)
assert client.is_connected
print(f"Connected", flush=True)
print(f"Waiting 5 secs...", flush=True)
time.sleep(5.0)
print(f"Closing...", flush=True)
assert client.is_connected
await client.disconnect()
print(f"Test done", flush=True)
event_loop = asyncio.new_event_loop()
event_loop.run_until_complete(test())
```
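An aside (not part of the original report): `time.sleep(5.0)` inside the coroutine blocks the event loop, which can starve backend callbacks; the non-blocking form would be:
```python
await asyncio.sleep(5.0)  # yields control so WinRT callbacks can run
```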
### Logs
```
/c/projects/ble_stepper_motor_analyzer/repo/app/test$ python connection_close.py
OS: Windows-10-10.0.19044-SP0
Platform:: uname_result(system='Windows', node='Ryzen', release='10', version='10.0.19044', machine='AMD64')
Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)]
Trying to connect to 0C:8B:95:F2:B4:36
2023-02-16 12:57:24,597 bleak.backends.winrt.scanner DEBUG: Received 0C:8B:95:F2:B4:36: STP-0C8B95F2B436.
2023-02-16 12:57:24,598 bleak.backends.winrt.scanner DEBUG: skipping callback, waiting for scan response
2023-02-16 12:57:24,600 bleak.backends.winrt.scanner DEBUG: Received 0C:8B:95:F2:B4:36: .
2023-02-16 12:57:24,603 bleak.backends.winrt.scanner DEBUG: 1 devices found. Watcher status: 3.
2023-02-16 12:57:24,620 bleak.backends.winrt.client DEBUG: Connecting to BLE device @ 0C:8B:95:F2:B4:36
2023-02-16 12:57:24,657 bleak.backends.winrt.client DEBUG: getting services (service_cache_mode=None, cache_mode=None)...
2023-02-16 12:57:24,816 bleak.backends.winrt.client DEBUG: session_status_changed_event_handler: id: <_bleak_winrt_Windows_Devices_Bluetooth.BluetoothDeviceId object at 0x0000019AB567A510>, error: BluetoothError.SUCCESS, status: GattSessionStatus.ACTIVE
2023-02-16 12:57:24,891 bleak.backends.winrt.client DEBUG: max_pdu_size_changed_handler: 247
2023-02-16 12:57:25,708 bleak.backends.winrt.client DEBUG: 0C:8B:95:F2:B4:36: services changed
2023-02-16 12:57:25,709 bleak.backends.winrt.client DEBUG: 0C:8B:95:F2:B4:36: restarting get services due to services changed event
2023-02-16 12:57:25,764 bleak.backends.winrt.client DEBUG: 0C:8B:95:F2:B4:36: services changed
2023-02-16 12:57:25,778 bleak.backends.winrt.client DEBUG: getting services (service_cache_mode=<BluetoothCacheMode.CACHED: 0>, cache_mode=None)...
2023-02-16 12:57:25,778 bleak.backends.winrt.client DEBUG: 0C:8B:95:F2:B4:36: restarting get services due to services changed event
2023-02-16 12:57:25,808 bleak.backends.winrt.client DEBUG: 0C:8B:95:F2:B4:36: services changed
2023-02-16 12:57:25,861 bleak.backends.winrt.client DEBUG: getting services (service_cache_mode=<BluetoothCacheMode.CACHED: 0>, cache_mode=None)...
2023-02-16 12:57:25,861 bleak.backends.winrt.client DEBUG: 0C:8B:95:F2:B4:36: restarting get services due to services changed event
2023-02-16 12:57:25,911 bleak.backends.winrt.client DEBUG: getting services (service_cache_mode=<BluetoothCacheMode.CACHED: 0>, cache_mode=None)...
Connected
Waiting 5 secs...
Closing...
2023-02-16 12:57:31,427 bleak.backends.winrt.client DEBUG: Disconnecting from BLE device...
/c/projects/ble_stepper_motor_analyzer/repo/app/test$
```
Wireshark file:
[bleak_disconnec_hangs.zip](https://github.com/hbldh/bleak/files/10760751/bleak_disconnec_hangs.zip)
|
closed
|
2023-02-16T21:05:41Z
|
2024-09-30T13:50:48Z
|
https://github.com/hbldh/bleak/issues/1223
|
[
"bug",
"Backend: WinRT"
] |
zapta
| 9
|
huggingface/datasets
|
pytorch
| 7,217
|
ds.map(f, num_proc=10) is slower than df.apply
|
### Describe the bug
pandas columns: song_id, song_name
```python
ds = Dataset.from_pandas(df)

def has_cover(song_name):
    if song_name is None or pd.isna(song_name):
        return False
    return 'cover' in song_name.lower()

df['has_cover'] = df.song_name.progress_apply(has_cover)
ds = ds.map(lambda x: {'has_cover': has_cover(x['song_name'])}, num_proc=10)
```
time cost:
1. df.apply: 100%|██████████| 12500592/12500592 [00:13<00:00, 959825.47it/s]
2. ds.map: Map (num_proc=10): 31% 3899028/12500592 [00:28<00:38, 222532.89 examples/s]
### Steps to reproduce the bug
pandas columns: song_id, song_name
```python
ds = Dataset.from_pandas(df)

def has_cover(song_name):
    if song_name is None or pd.isna(song_name):
        return False
    return 'cover' in song_name.lower()

df['has_cover'] = df.song_name.progress_apply(has_cover)
ds = ds.map(lambda x: {'has_cover': has_cover(x['song_name'])}, num_proc=10)
```
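A hedged aside (not in the original report): `ds.map` is usually benchmarked with `batched=True`, which amortizes per-example overhead; a sketch of that variant:
```python
def has_cover_batch(batch):
    # operate on a whole column slice per call instead of one row at a time
    # (missing strings arrive as None in Arrow-backed batches)
    return {"has_cover": [
        name is not None and "cover" in name.lower()
        for name in batch["song_name"]
    ]}

ds = ds.map(has_cover_batch, batched=True, batch_size=10_000)
```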
### Expected behavior
ds.map is ~num_proc faster than df.apply
### Environment info
pandas: 2.2.2
datasets: 2.19.1
|
open
|
2024-10-11T11:04:05Z
|
2025-02-28T21:21:01Z
|
https://github.com/huggingface/datasets/issues/7217
|
[] |
lanlanlanlanlanlan365
| 3
|
nltk/nltk
|
nlp
| 3,338
|
nltk.find('tokenizer/punkt_tab') fails with version 3.9.1
|
I'm running version 3.9.1, but `nltk.find('tokenizer/punkt_tab', paths=nltk.data.path)` is still not working for me.
Here is my Python output where you can see `nltk.download('punkt_tab')` tells me that punkt_tab is already up-to-date, yet the `find` method still errors out.
```
Python 3.9.18 (main, Oct 15 2024, 22:21:30)
[Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import nltk
>>> nltk.__version__
'3.9.1'
>>> nltk.download('punkt_tab')
[nltk_data] Downloading package punkt_tab to
[nltk_data] /Users/username/nltk_data...
[nltk_data] Package punkt_tab is already up-to-date!
True
>>> nltk.find('tokenizer/punkt_tab', paths=nltk.data.path)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/username/.pyenv/versions/virtual-3.9.18/lib/python3.9/site-packages/nltk/data.py", line 579, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource punkt_tab not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt_tab')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizer/punkt_tab
Searched in:
- '/Users/username/nltk_data'
- '/Users/username/.pyenv/versions/nlpHw3-3.9.18/nltk_data'
- '/Users/username/.pyenv/versions/nlpHw3-3.9.18/share/nltk_data'
- '/Users/username/.pyenv/versions/nlpHw3-3.9.18/lib/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
**********************************************************************
```
If I `ls` the NLTK path then I can see that punkt_tab is clearly there. My terminal output:
```bash
>> ls /Users/username/nltk_data/tokenizers/punkt_tab
czech english french italian portuguese slovene turkish
danish estonian german norwegian README spanish
dutch finnish greek polish russian swedish
>> ls /Users/username/nltk_data/tokenizers/punkt_tab/english
abbrev_types.txt collocations.tab ortho_context.tab sent_starters.txt
```
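An aside (not in the original report): note the lookup above uses `tokenizer/punkt_tab` while the data lives under `tokenizers/`; assuming that is the culprit, the plural path should resolve:
```python
import nltk

nltk.find("tokenizers/punkt_tab")  # plural "tokenizers" matches the on-disk layout
```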
|
closed
|
2024-11-07T19:17:20Z
|
2024-11-08T20:20:04Z
|
https://github.com/nltk/nltk/issues/3338
|
[] |
clashofphish
| 2
|
ray-project/ray
|
machine-learning
| 50,799
|
[Data] support passing `pyarrow.dataset.Expression`s to `Dataset.filter`'s `expr`
|
### Description
Currently, one may only pass a string representation of an `Expression` to `Dataset.filter(expr=...)`, but it would be nice to allow passing either a string *or* an `Expression`.
### Use case
I want to programmatically generate a filter expression:
```python
from functools import reduce
from operator import or_

import pyarrow.compute as pc  # pc.field(...) builds an Expression (pa.field(...) builds a schema Field)

cols = ["a", "b"]
# filter rows where any of `cols` is null
expr = ~reduce(or_, [pc.field(col).is_null(nan_is_null=True) for col in cols])
dataset.filter(expr=expr)
```
and I'd rather do this with the native pyarrow Expression API instead of string manipulation.
If I try `dataset.filter(expr=str(expr))` it raises `ValueError: Invalid syntax in the expression`.
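A hedged aside (not in the original request): the same `Expression` already filters a plain pyarrow dataset directly, which is the ergonomics being asked for on the Ray side; the path here is a placeholder:
```python
import pyarrow.compute as pc
import pyarrow.dataset as pads

pa_ds = pads.dataset("data.parquet")  # placeholder path
expr = ~(pc.field("a").is_null(nan_is_null=True) | pc.field("b").is_null(nan_is_null=True))
table = pa_ds.to_table(filter=expr)
```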
|
open
|
2025-02-21T19:04:41Z
|
2025-02-21T19:06:26Z
|
https://github.com/ray-project/ray/issues/50799
|
[
"enhancement",
"P1"
] |
schmidt-ai
| 0
|
PokeAPI/pokeapi
|
graphql
| 924
|
Graphql cannot provide nested pokemon species
|
It seems like the GraphQL API is unable to provide pokemon species data when it is nested under `pokemon -> pokemonspecy`.
This query returns an empty array as `pokemon_v2_pokemonspecies`:
```gql
query samplePokeAPIquery {
pokemon_v2_pokemon(where: {name: {_eq: "charizard"}}) {
name
pokemon_v2_pokemonspecy {
pokemon_v2_pokemonspecies {
name
}
}
}
}
```
This query returns a valid response (but requires a separate query).
```gql
query samplePokeAPIquery {
pokemon_v2_pokemonspecies(where: {id: {_eq: 64}}) {
base_happiness
capture_rate
id
}
}
```
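A hedged aside (not in the original report): species columns can usually be selected directly on `pokemon_v2_pokemonspecy`, avoiding the invalid nesting; a sketch against the public endpoint (URL assumed):
```python
import requests

query = """
query {
  pokemon_v2_pokemon(where: {name: {_eq: "charizard"}}) {
    name
    pokemon_v2_pokemonspecy { base_happiness capture_rate }
  }
}
"""
resp = requests.post("https://beta.pokeapi.co/graphql/v1beta", json={"query": query})
print(resp.json())
```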
|
open
|
2023-09-21T06:40:09Z
|
2023-09-21T06:40:09Z
|
https://github.com/PokeAPI/pokeapi/issues/924
|
[] |
oriel-beck
| 0
|
QuivrHQ/quivr
|
api
| 3,316
|
[Feature]: Update the documentation, especially on "Composite brain connection"
|
### The Feature
The documentation needs an update... I am searching for how to create composite brain connections.
### Motivation, pitch
The documentation needs an update... I am searching for how to create composite brain connections.
### Twitter / LinkedIn details
_No response_
|
closed
|
2024-10-03T17:33:00Z
|
2025-01-13T04:07:35Z
|
https://github.com/QuivrHQ/quivr/issues/3316
|
[
"enhancement",
"Stale",
"area: docs"
] |
ChatIBC
| 4
|
waditu/tushare
|
pandas
| 1,585
|
The stock list stock_basic is missing stocks newly listed on the same day
|
SSE/SZSE stocks - basic data - stock list stock_basic: a request on Sep 17 returned no information for 688697 N纽威 (纽威数控, Neway CNC). Neway CNC was listed on Sep 17.
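A hedged sketch (not in the original report) of the call in question; the token is a placeholder:
```python
import tushare as ts

pro = ts.pro_api("YOUR_TOKEN")  # placeholder token
df = pro.stock_basic(exchange="", list_status="L", fields="ts_code,name,list_date")
print(df[df["ts_code"] == "688697.SH"])  # empty on listing day per the report
```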
|
closed
|
2021-09-18T06:03:59Z
|
2021-10-10T08:42:52Z
|
https://github.com/waditu/tushare/issues/1585
|
[] |
shuijingkuanggong
| 1
|
twopirllc/pandas-ta
|
pandas
| 724
|
Supertrend Value Mismatch - proposal to add both implementations
|
**Which version are you running?**
0.3.14b0
**Is your feature request related to a problem? Please describe.**
Supertrend calculation in pandas-ta is different from most stock/crypto trading websites (like Binance and Zerodha) and platforms, even though many of them are using TradingView's charting library.
On investigating further into this mismatch and checking the source code of various Python and Pinescript implementations of supertrend, it seems like TradingView js charting library's supertrend uses SMA(true_range, length) instead of ATR.
**Describe the solution you'd like**
I have added a custom indicator named supertrend_classic.
After changing `matr = multiplier * atr(high, low, close, length)` to `matr = multiplier * sma(true_range(high, low, close), length)`, the results are matching perfectly with the aforementioned platforms' charts.
I'm making this issue to help others with similar problems and also propose that supertrend_classic be included in the main library.
https://www.tradingview.com/script/r6dAP7yi/
Lines 10-11 there illustrate both approaches.
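A hedged sketch (not the library's actual patch) of the two smoothing choices described above; `df`, `length`, and `multiplier` are assumed to be defined:
```python
import pandas_ta as ta

tr = ta.true_range(df["high"], df["low"], df["close"])
# pandas-ta default: RMA-smoothed ATR
matr_default = multiplier * ta.atr(df["high"], df["low"], df["close"], length=length)
# "classic"/TradingView-matching variant: SMA of the true range
matr_classic = multiplier * ta.sma(tr, length=length)
```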
|
open
|
2023-10-06T10:21:34Z
|
2025-02-14T02:20:49Z
|
https://github.com/twopirllc/pandas-ta/issues/724
|
[
"enhancement"
] |
MLpranav
| 8
|
Miserlou/Zappa
|
flask
| 1,686
|
zappa update --zip local.zip causes boto ConnectionClosedError with lambda
|
## Context
`zappa update --zip local.zip` causes a ConnectionClosedError.
```
$ zappa package development --output foo.zip
Calling package for stage development..
Downloading and installing dependencies..
- sqlite==python36: Using precompiled lambda package
Packaging project as zip.
Package created: foo.zip (21.2MiB)
$ zappa update development --zip foo.zip
Calling update for stage development..
Updating Lambda function code..
Oh no! An error occurred! :(
==============
Traceback (most recent call last):
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/urllib3/connectionpool.py", line 354, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/kyle/.pyenv/versions/3.6.5/lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/awsrequest.py", line 125, in _send_request
method, url, body, headers, *args, **kwargs)
File "/home/kyle/.pyenv/versions/3.6.5/lib/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/kyle/.pyenv/versions/3.6.5/lib/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/awsrequest.py", line 152, in _send_output
self.send(msg)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/awsrequest.py", line 236, in send
return super(AWSConnection, self).send(str)
File "/home/kyle/.pyenv/versions/3.6.5/lib/python3.6/http/client.py", line 986, in send
self.sock.sendall(data)
File "/home/kyle/.pyenv/versions/3.6.5/lib/python3.6/ssl.py", line 972, in sendall
v = self.send(byte_view[count:])
File "/home/kyle/.pyenv/versions/3.6.5/lib/python3.6/ssl.py", line 941, in send
return self._sslobj.write(data)
File "/home/kyle/.pyenv/versions/3.6.5/lib/python3.6/ssl.py", line 642, in write
return self._sslobj.write(data)
socket.timeout: The write operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/httpsession.py", line 258, in send
decode_content=False,
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/urllib3/util/retry.py", line 343, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/urllib3/packages/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/urllib3/connectionpool.py", line 354, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/kyle/.pyenv/versions/3.6.5/lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/awsrequest.py", line 125, in _send_request
method, url, body, headers, *args, **kwargs)
File "/home/kyle/.pyenv/versions/3.6.5/lib/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/kyle/.pyenv/versions/3.6.5/lib/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/awsrequest.py", line 152, in _send_output
self.send(msg)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/awsrequest.py", line 236, in send
return super(AWSConnection, self).send(str)
File "/home/kyle/.pyenv/versions/3.6.5/lib/python3.6/http/client.py", line 986, in send
self.sock.sendall(data)
File "/home/kyle/.pyenv/versions/3.6.5/lib/python3.6/ssl.py", line 972, in sendall
v = self.send(byte_view[count:])
File "/home/kyle/.pyenv/versions/3.6.5/lib/python3.6/ssl.py", line 941, in send
return self._sslobj.write(data)
File "/home/kyle/.pyenv/versions/3.6.5/lib/python3.6/ssl.py", line 642, in write
return self._sslobj.write(data)
urllib3.exceptions.ProtocolError: ('Connection aborted.', timeout('The write operation timed out',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/zappa/cli.py", line 2710, in handle
sys.exit(cli.handle())
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/zappa/cli.py", line 508, in handle
self.dispatch_command(self.command, stage)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/zappa/cli.py", line 555, in dispatch_command
self.update(self.vargs['zip'], self.vargs['no_upload'])
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/zappa/cli.py", line 933, in update
num_revisions=self.num_retained_versions
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/zappa/core.py", line 1104, in update_lambda_function
response = self.lambda_client.update_function_code(**kwargs)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/client.py", line 320, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/client.py", line 610, in _make_api_call
operation_model, request_dict)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/endpoint.py", line 136, in _send_request
success_response, exception):
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/endpoint.py", line 210, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/retryhandler.py", line 251, in __call__
caught_exception)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/retryhandler.py", line 277, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/retryhandler.py", line 317, in __call__
caught_exception)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/retryhandler.py", line 223, in __call__
attempt_number, caught_exception)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/endpoint.py", line 179, in _get_response
http_response = self._send(request)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/endpoint.py", line 223, in _send
return self.http_session.send(request)
File "/home/kyle/.local/share/virtualenvs/asin-Dofq7uIK/lib/python3.6/site-packages/botocore/httpsession.py", line 289, in send
endpoint_url=request.url
botocore.exceptions.ConnectionClosedError: Connection was closed before we received a valid response from endpoint URL: "https://lambda.us-east-1.amazonaws.com/2015-03-31/functions/xxxxxxxx/code".
```
Note that `zappa update development` works as expected:
```
Calling update for stage development..
Downloading and installing dependencies..
- sqlite==python36: Using precompiled lambda package
Packaging project as zip.
Uploading xxxxxxxx-1540763761.zip (21.2MiB)..
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 22.2M/22.2M [00:07<00:00, 2.91MB/s]
Updating Lambda function code..
Updating Lambda function configuration..
Uploading xxxxxxx-template-1540763777.json (1.6KiB)..
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.63K/1.63K [00:00<00:00, 12.5KB/s]
Deploying API Gateway..
Scheduling..
Unscheduled xxxxxxxx-zappa-keep-warm-handler.keep_warm_callback.
Scheduled xxxxx-zappa-keep-warm-handler.keep_warm_callback with expression rate(4 minutes)!
Your updated Zappa deployment is live!: https://xxxxxxxx
```
## Expected Behavior
zappa update --zip should upload the zip to the lambda
## Actual Behavior
zappa update --zip crashes
## Your Environment
* Zappa version used: 0.47
* Operating System and Python version: ubuntu 16.04 python 3.6.5
|
open
|
2018-10-28T22:26:47Z
|
2020-05-23T19:34:01Z
|
https://github.com/Miserlou/Zappa/issues/1686
|
[] |
kylegibson
| 6
|
itamarst/eliot
|
numpy
| 41
|
Switch to tox setup that also does sphinx, pyflakes, pyflakes3
|
closed
|
2014-04-15T20:44:58Z
|
2018-09-22T20:59:12Z
|
https://github.com/itamarst/eliot/issues/41
|
[] |
itamarst
| 0
|
|
zappa/Zappa
|
flask
| 1,333
|
S3 bucket is not reused with `slim_handler: True`
|
When Zappa is using the `slim_handler`, it does not reuse the temporary S3 bucket that it creates, even when the zappa `update` command is used. This results in the creation of many, many temporary buckets, until the quota is reached.
## Context
This is similar to #1643, except that here the user is encountering problems with temporary buckets that are not needed for the Zappa operation.
## Expected Behavior
The created bucket should be reused. Right now, the name is not reused.
## Actual Behavior
New buckets keep getting created.
## Possible Fix
After a bucket is created, update that `zappa_settings.json` file to note the bucket name.
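A hedged sketch of one possible shape of this fix (not Zappa's actual code), persisting the generated name under Zappa's real `s3_bucket` settings key:
```python
import json

def remember_bucket(settings_path, stage, bucket_name):
    """Write the generated bucket name back so later runs reuse it."""
    with open(settings_path) as f:
        settings = json.load(f)
    settings[stage]["s3_bucket"] = bucket_name
    with open(settings_path, "w") as f:
        json.dump(settings, f, indent=4)
```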
## Your Environment
* Zappa version used: 0.58.0
* Operating System and Python version: MacOS
|
closed
|
2024-05-03T12:03:24Z
|
2024-08-11T12:54:00Z
|
https://github.com/zappa/Zappa/issues/1333
|
[
"no-activity",
"auto-closed"
] |
simsong
| 2
|
fastapi/fastapi
|
api
| 13,316
|
Callable object as dependency with body params is not parsing parameters inside `__call__` properly
|
When a callable object is used as a dependency, there is a problem with parameters inside the `__call__` method of the object's class. With the following setup, you end up with query parameters instead of body parameters:
```python
class SomeModel(BaseModel):
arg1: str
class SomeDependency:
def __call__(
self,
some_model: Annotated[SomeModel, Body(..., description="Some model")],
) -> dict:
print(some_model.arg1)
@app.post("/hello")
async def hello(data: Annotated[dict, Depends(SomeDependency())]):
return data
```
Console:
```
pydantic.errors.PydanticUserError: `TypeAdapter[typing.Annotated[ForwardRef("Annotated[SomeModel, Body(..., description='Some model')]"), Query(PydanticUndefined)]]` is not fully defined; you should define `typing.Annotated[ForwardRef("Annotated[SomeModel, Body(..., description='Some model')]"), Query(PydanticUndefined)]` and all referenced types, then call `.rebuild()` on the instance.
```
`Query(PydanticUndefined)` makes no sense, there are no query parameters. But when this function (in `fastapi/dependencies/utils.py`) gets updated:
```python
def get_typed_signature(call: Callable[..., Any]) -> inspect.Signature:
signature = inspect.signature(call)
globalns = getattr(call, "__globals__", {})
...
```
into this:
```python
def get_typed_signature(call: Callable[..., Any]) -> inspect.Signature:
signature = inspect.signature(call)
if isinstance(call, type) or isinstance(call, types.FunctionType):
globalns = getattr(call, "__globals__", {})
else:
globalns = getattr(call.__call__, "__globals__", {})
...
```
it works fine. Probably not the best solution; it's just to point out the problem.
_Originally posted by @SobikXexe in https://github.com/fastapi/fastapi/discussions/13286#discussioncomment-12043516_
|
closed
|
2025-02-03T16:44:42Z
|
2025-02-04T08:59:57Z
|
https://github.com/fastapi/fastapi/issues/13316
|
[] |
SobikXexe
| 0
|
piskvorky/gensim
|
nlp
| 3,025
|
Custom Keyword inclusion
|
#### Problem description
We need the generated summary to contain specific keywords from the input text.
#### Steps/code/corpus to reproduce
We need the summarize function to accept keywords as an input parameter.
output = summarize(text, word_count=9, custom_keywords=keywords)
**For example,**
```
from gensim.summarization.summarizer import summarize
keywords = ['apple', 'red', 'yellow']
text = "Apple is red. Grape is black. Banana is yellow"
output = summarize(text, word_count=9, custom_keywords=keywords)
print(output)
```
#### Output
**Apple** is **red.**
Banana is **yellow**
As in the above example, we need a parameter to pass custom keywords, and those keywords must be present in the summarized text (i.e., the sentences containing the keywords should appear in the output).
|
closed
|
2021-01-12T07:33:17Z
|
2021-01-12T10:28:36Z
|
https://github.com/piskvorky/gensim/issues/3025
|
[] |
Vignesh9395
| 1
|
python-visualization/folium
|
data-visualization
| 1,807
|
LayerControl is not draggable
|
**Describe the bug**
The LayerControl isn't draggable even when it's turned on.
**To Reproduce**
```
m = folium.Map()
# add layer control
layer_control = folium.LayerControl(collapsed=False, position='topleft', draggable=True)
layer_control.add_to(m)
```
**Environment (please complete the following information):**
- Browser chrome
- Jupyter Notebook and html
- Python version (3.10.12)
- folium version (0.13.0)
- branca version (0.6.0)
|
closed
|
2023-09-19T14:50:06Z
|
2025-03-16T11:23:43Z
|
https://github.com/python-visualization/folium/issues/1807
|
[
"documentation"
] |
zxdawn
| 2
|
Esri/arcgis-python-api
|
jupyter
| 1,308
|
sdf to_featurelayer
|
**Describe the bug**
Currently we need to convert int64 to str. That is not an issue in itself, but the lack of a warning or error causes new users to make this mistake.
**To Reproduce**
Steps to reproduce the behavior:

The gis module publishes int64 columns, but if there are ints over 32 bits the insert fails.
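A hedged sketch (not the library's behavior) of the manual workaround implied above; names are placeholders:
```python
import numpy as np

for col in sdf.columns:
    if sdf[col].dtype == np.int64:          # 64-bit ints can overflow 32-bit fields
        sdf[col] = sdf[col].astype(str)
layer = sdf.spatial.to_featurelayer("my_layer", gis=gis)  # placeholder title
```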
**Expected behavior**
It should warn and try to make str conversion.
**Platform (please complete the following information):**
- windows 10
- chrome
- Python API Version 1.9.1
|
closed
|
2022-07-22T17:50:54Z
|
2022-07-25T12:50:04Z
|
https://github.com/Esri/arcgis-python-api/issues/1308
|
[
"bug"
] |
hildermesmedeiros
| 3
|
pytest-dev/pytest-django
|
pytest
| 898
|
TypeError: isinstance() arg 2 must be a type or tuple of types
|
```
_________________________________________________ ERROR at setup of TestCardCase.test_card_instance __________________________________________________
request = <SubRequest '_django_db_marker' for <Function test_card_instance>>
@pytest.fixture(autouse=True)
def _django_db_marker(request):
"""Implement the django_db marker, internal to pytest-django.
This will dynamically request the ``db``, ``transactional_db`` or
``django_db_reset_sequences`` fixtures as required by the django_db marker.
"""
marker = request.node.get_closest_marker("django_db")
if marker:
transaction, reset_sequences = validate_django_db(marker)
if reset_sequences:
request.getfixturevalue("django_db_reset_sequences")
elif transaction:
request.getfixturevalue("transactional_db")
else:
> request.getfixturevalue("db")
../env/lib/python3.8/site-packages/pytest_django/plugin.py:436:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../env/lib/python3.8/site-packages/pytest_django/fixtures.py:105: in django_db_setup
db_cfg = setup_databases(
../env/lib/python3.8/site-packages/django/test/utils.py:170: in setup_databases
connection.creation.create_test_db(
../env/lib/python3.8/site-packages/django/db/backends/base/creation.py:72: in create_test_db
call_command(
../env/lib/python3.8/site-packages/django/core/management/__init__.py:168: in call_command
return command.execute(*args, **defaults)
../env/lib/python3.8/site-packages/django/core/management/base.py:371: in execute
output = self.handle(*args, **options)
../env/lib/python3.8/site-packages/django/core/management/base.py:85: in wrapped
res = handle_func(*args, **kwargs)
../env/lib/python3.8/site-packages/django/core/management/commands/migrate.py:243: in handle
post_migrate_state = executor.migrate(
../env/lib/python3.8/site-packages/django/db/migrations/executor.py:117: in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
../env/lib/python3.8/site-packages/django/db/migrations/executor.py:147: in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
../env/lib/python3.8/site-packages/django/db/migrations/executor.py:227: in apply_migration
state = migration.apply(state, schema_editor)
../env/lib/python3.8/site-packages/django/db/migrations/migration.py:124: in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
../env/lib/python3.8/site-packages/django/db/migrations/operations/fields.py:104: in database_forwards
schema_editor.add_field(
../env/lib/python3.8/site-packages/django/db/backends/sqlite3/schema.py:328: in add_field
self._remake_table(model, create_field=field)
../env/lib/python3.8/site-packages/django/db/backends/sqlite3/schema.py:189: in _remake_table
self.effective_default(create_field)
../env/lib/python3.8/site-packages/django/db/backends/base/schema.py:303: in effective_default
return field.get_db_prep_save(self._effective_default(field), self.connection)
../env/lib/python3.8/site-packages/django/db/backends/base/schema.py:282: in _effective_default
default = field.get_default()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <django.db.models.fields.related.ForeignKey: vendor>
def get_default(self):
"""Return the to_field if the default value is an object."""
field_default = super().get_default()
> if isinstance(field_default, self.remote_field.model):
E TypeError: isinstance() arg 2 must be a type or tuple of types
```
This happens for every Model instantiation, and if I remove the db decorator then another error pops up related to its absence. Is there a way to fix this without forking Django? I see issues with this on SO going back to 2013.
|
closed
|
2020-12-24T20:03:50Z
|
2020-12-26T17:21:31Z
|
https://github.com/pytest-dev/pytest-django/issues/898
|
[] |
Denis-Step
| 2
|
PokemonGoF/PokemonGo-Bot
|
automation
| 5,508
|
Telegram Task does not react, I think i broke it with wrong commands
|
### Actual Behavior
In my Telegram app: just the normal output of my subscriptions. The output I expected from typing, for example, '/info' did not appear.
Terminal output window:
```
Traceback (most recent call last):
  File "/home/crepes/pgo/PokemonGo-Bot/local/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 115, in wait
    listener.cb(fileno)
  File "/home/crepes/pgo/PokemonGo-Bot/local/lib/python2.7/site-packages/eventlet/green/thread.py", line 41, in __thread_body
    func(*args, **kwargs)
  File "/home/crepes/pgo/PokemonGo-Bot/pokemongo_bot/event_handlers/telegram_handler.py", line 382, in run
    self.display_events(update)
  File "/home/crepes/pgo/PokemonGo-Bot/pokemongo_bot/event_handlers/telegram_handler.py", line 265, in display_events
    events.remove('vanish_log')
ValueError: list.remove(x): x not in list
Removing descriptor: 9
```
But the Bot is still running...
(Restarting of PokemonGo-bot does not help. (I opened a new terminal as well, because I thought there might be some cache.))
### Your FULL config.json (remove your username, password, gmapkey and any other private info)
```
{
"type": "TelegramTask",
"config": {
"enabled": true,
"master": MY ID,
"// old syntax, still supported: alert_catch": ["all"],
"// new syntax:": {},
"alert_catch": {
"all": {"operator": "or", "cp": 999, "iv": 0.98},
"Dratini": {"operator": "or", "cp": 10, "iv": 0.01},
"Dragonair": {"operator": "or", "cp": 10, "iv": 0.001},
"Dragonite": {"operator": "or", "cp": 10, "iv": 0.01},
"Snorlax": {"operator": "or", "cp": 10, "iv": 0.01},
"Lapras": {"operator": "or", "cp": 10, "iv": 0.01}
}
}
}
```
### Steps to Reproduce
- sending to my telegram bot while my PokemonGo-Bot was offline: /info
- starting my PokemonGo-Bot in terminal
- typing '/events pokemon' in the telegram app (I tried to filter the output of '/events')
- typing '/events egg' in the telegram app
### possible solution
maybe clearing some cache?
### Other Information
OS: ubuntu 14.04
Branch: master
Git Commit: fef76945022210f4663c091b55750c57684026ec
Python Version: Python 2.7.6
EDIT: Rebooting the computer and deleting the 'PokemonGo-Bot' folder and reinstalling does not fix the issue. So I guess I need to create a new Telegram bot. But I am going to wait a little; maybe there is a better solution.
|
closed
|
2016-09-17T10:43:32Z
|
2016-09-19T03:44:22Z
|
https://github.com/PokemonGoF/PokemonGo-Bot/issues/5508
|
[] |
crepes11
| 3
|
thewhiteh4t/pwnedOrNot
|
api
| 20
|
Error: could not collect tokens | 403 Client Error
|
Hi, I was just trying to use this tool as described in the demo video, using 12345@gmail.com to test for data breaches, and I am getting the following error:
[>] Created by : thewhiteh4t
[>] Version : 1.1.7
[+] Checking for updates...
[+] Script is up-to-date...
[+] Bypassing Cloudflare Restriction...
ERROR:root:'https://haveibeenpwned.com/api/v2/breachedaccount/test@example.com' returned an error. Could not collect tokens.
Traceback (most recent call last):
File "pwnedornot.py", line 273, in <module>
main()
File "pwnedornot.py", line 64, in main
cookies, user_agent = cfscrape.get_tokens('https://haveibeenpwned.com/api/v2/breachedaccount/test@example.com', user_agent='pwnedornot')
File "/usr/local/lib/python3.7/dist-packages/cfscrape/__init__.py", line 182, in get_tokens
resp.raise_for_status()
File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://haveibeenpwned.com/api/v2/breachedaccount/test@example.com
root@kali:~/Downloads/breach/pwnedOrNot#
The platform I am using this on is as follows:
root@kali:~/Downloads/breach/pwnedOrNot# uname -r
4.19.0-kali3-amd64
root@kali:~/Downloads/breach/pwnedOrNot# uname -v
#1 SMP Debian 4.19.20-1kali1 (2019-02-14)

|
closed
|
2019-04-21T12:00:40Z
|
2019-06-27T20:52:07Z
|
https://github.com/thewhiteh4t/pwnedOrNot/issues/20
|
[] |
stack00
| 48
|
robotframework/robotframework
|
automation
| 4,995
|
Empty variable name in VAR crashes execution
|
Following example:
```
*** Test Cases ***
Test
VAR
...
```
will stop robot execution and throw:
```
options = ['scope', 'separator'] if name.value[0] == '$' else ['scope']
E IndexError: string index out of range
```
instead of an invalid variable name error (as happens when the variable name is on the same line, i.e. just ``VAR``).
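A hedged sketch (not the actual patch) of a defensive form of the quoted line that avoids indexing an empty name:
```python
options = ['scope', 'separator'] if name.value[:1] == '$' else ['scope']
```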
|
closed
|
2023-12-30T16:22:08Z
|
2024-01-02T23:48:53Z
|
https://github.com/robotframework/robotframework/issues/4995
|
[
"priority: high",
"task"
] |
bhirsz
| 1
|