repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
|---|---|---|---|---|---|---|---|---|---|---|---|
vitalik/django-ninja
|
pydantic
| 1,272
|
How to create an alias for a field created via annotation
|
I have an ORM query that creates annotations. While `hour` makes sense within the context of the query, as a property in the API response I'd like to call the field `time`. In the `WaterLevelSchema` I can specify `hour: datetime`, which works, but if I only specify `time: datetime = Field(..., alias='hour')` I get an error:
```
response.historic_water_levels_m3.8.hour
Field required [type=missing, input_value=<DjangoGetter: WaterLevel...o.ZoneInfo(key='UTC')))>, input_type=DjangoGetter]
For further information visit https://errors.pydantic.dev/2.8/v/missing
```
How can I create an alias for a field that was created via annotation in a Django ORM query? When I specify both `hour` and `time` in the `WaterLevelSchema`, there is no error, but then I have both properties in the API response, which I do not want. Do I have my thinking backwards? How can this be done?
```python
class WaterLevelSchema(Schema):
    hour: datetime  # --> works
    time: datetime = Field(..., alias='hour')  # --> raises an error unless hour is specified as above


class DashboardSchema(Schema):
    ...
    historic_water_levels_m3: list[WaterLevelSchema]


@router.get('/dashboard/{location_id}', response=DashboardSchema)
def dashboard(request, location_id):
    latest_water_levels = (
        ExecutionLog.objects
        .filter(forecast_id__in=latest_forecasts_ids)
        .annotate(day=TruncDay('created_at'), hour=TruncHour('created_at'))  # Truncate timestamp to day and hour
        .annotate(avg_water_level_m3=Avg('water_level_m3'))  # Calculate average water level per hour per day
        .values('day', 'hour')  # Group by day and hour
    )
    return DashboardSchema(
        ...
        historic_water_levels_m3=latest_water_levels
    )
```
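A possible direction, sketched under the assumption that django-ninja's documented `resolve_<field>` hooks work with `.values()` dicts: expose only `time` on the schema and resolve it from the annotated `hour` key.
```python
# a minimal sketch, not a confirmed fix: django-ninja Schemas support
# resolve_<fieldname> static methods; here `time` is resolved from the
# `hour` annotation produced by the query
class WaterLevelSchema(Schema):
    time: datetime

    @staticmethod
    def resolve_time(obj):
        return obj['hour']  # obj is the .values() dict for each row
```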
|
open
|
2024-08-21T10:11:51Z
|
2024-08-22T07:31:39Z
|
https://github.com/vitalik/django-ninja/issues/1272
|
[] |
tobi-or-not
| 2
|
stitchfix/hamilton
|
numpy
| 123
|
explore extracting columns with validations
|
**Is your feature request related to a problem? Please describe.**
When we extract columns, it would be very handy to be able to run checks against those columns. [pandera](https://pandera.readthedocs.io/en/stable/) is a great, lightweight tool for validating dtypes, nullability, uniqueness, and any arbitrary `Check` callable.
**Describe the solution you'd like**
Ideally this would be a decorator that works similarly to `extract_columns`: it would ingest a `DataFrame`, return the same dataframe, and expand the nodes to have a dataframe validation node. This could be specific to pandera, or could be made more general, so something like
```python
import pandas as pd
from pandera import DataFrameSchema, Column, Check

@validate_columns({
    "user_id": Column(str, unique=True),
    "age": Column(int, Check.in_range(18, 150)),
    "shirt_size": Column(float, Check.greater_than(10), description="arm length in inches"),
    "favorite_apparel": Column(str, Check.isin(["pants", "shirts", "hats"])),
})
def users(input_file: str) -> pd.DataFrame:
    return pd.read_csv(input_file)
```
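For reference, a minimal standalone pandera check of the kind the proposed decorator would wrap (the dataframe here is made up):
```python
# a minimal standalone pandera validation of the kind the proposed
# decorator would wrap; the dataframe is illustrative
import pandas as pd
from pandera import DataFrameSchema, Column, Check

schema = DataFrameSchema({
    "user_id": Column(str, unique=True),
    "age": Column(int, Check.in_range(18, 150)),
})
df = pd.DataFrame({"user_id": ["a", "b"], "age": [25, 42]})
schema.validate(df)  # raises SchemaError on failure, returns df on success
```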
or more generically
```python
import abc
from typing import Any, Dict

import pandas as pd

class Schema(abc.ABC):
    @abc.abstractmethod
    def validate(self, df):
        pass

class SimpleColumnChecker(Schema):
    def __init__(self, columns: Dict[str, Any]):
        self.columns = columns

    def validate(self, df):
        for column, col_schema in self.columns.items():
            assert column in df.columns
            if col_schema.get("unique"):
                assert df[column].shape[0] == df[column].drop_duplicates().shape[0]
            if col_schema.get("min"):
                assert df[column].min() > col_schema.get("min")
            if col_schema.get("max"):
                assert df[column].max() < col_schema.get("max")
            if col_schema.get("isin"):
                assert set(df[column]) <= set(col_schema.get("isin"))

@validate_columns({
    "user_id": {"unique": True},
    "age": {"min": 18, "max": 150},
    "shirt_size": {"min": 10},
    "favorite_apparel": {"isin": ["pants", "shirts", "hats"]},
})
def users(input_file: str) -> pd.DataFrame:
    return pd.read_csv(input_file)
```
**Describe alternatives you've considered**
Certainly you can have a splitting node where you validate the data yourself, but I think this is a common enough pattern (or it really should be, and made a first-class citizen of any dataframe manipulation) that it would benefit from being easy to plug directly into a node.
|
closed
|
2022-04-29T17:18:10Z
|
2022-07-15T05:23:20Z
|
https://github.com/stitchfix/hamilton/issues/123
|
[] |
chrisaddy
| 3
|
healthchecks/healthchecks
|
django
| 88
|
Add "copy to clipboard" function to example code snippets
|
For the example code snippets (bash, python, ruby, etc.), it would be great to have a "copy" button next to each.
The snippets are in several places on the site:
- the welcome page for not-logged-in users
- documentation
- My Checks → gear icon → Usage Examples
The project is already using clipboard.js for ping addresses.
|
closed
|
2016-09-28T10:31:14Z
|
2016-10-01T14:57:58Z
|
https://github.com/healthchecks/healthchecks/issues/88
|
[] |
cuu508
| 5
|
ranaroussi/yfinance
|
pandas
| 2,299
|
TypeError('super(type, obj): ...') on yf.download() with 0.2.54
|
**Mod edit: only reply if `yf.__version__` prints latest version. Ignore PIP Conda etc**
---
The following code worked for over a year, but since 2/18 it fails with the following errors. Why is this?
I uninstalled and reinstalled yfinance. I have version 0.2.54.
ERROR:
```
1 Failed download:
['MSFT']: TypeError('super(type, obj): obj must be an instance or subtype of type')
[*********************100%***********************] 1 of 1 completed
1 Failed download:
['IBM']: TypeError('super(type, obj): obj must be an instance or subtype of type')
Wed Feb 19 22:46:22 2025
```
CODE:
```python
import yfinance as yf
import pandas as pd
import datetime
import time

symbols = ["MSFT", "IBM"]
start1 = "2024-08-01"
end1 = "2025-02-17"  # Day after last trading day
print(time.ctime())

# Loop through the symbols and get the historical quotes
for symbol in symbols:
    # Get the data from Yahoo Finance
    # print(yf.Ticker("IBM").info)
    data = yf.download(tickers=symbol, start=start1, end=end1, interval="1d")
    data.to_csv(f"{symbol}.csv", mode="w")
    # Print the symbol and the file name to screen
    print(f"Yahoo historical quotes for {symbol} saved as {symbol}.csv")
```
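Per the mod edit at the top, it's worth confirming that the running interpreter actually imports the latest version, since pip and conda environments can disagree:
```python
# print the version the running interpreter actually imports
import yfinance as yf
print(yf.__version__)
```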
|
closed
|
2025-02-20T03:50:00Z
|
2025-02-22T16:45:44Z
|
https://github.com/ranaroussi/yfinance/issues/2299
|
[] |
lenovo520
| 8
|
sloria/TextBlob
|
nlp
| 408
|
How is the sentiment scoring derived?
|
How have the scores been assigned in https://github.com/sloria/TextBlob/blob/dev/textblob/en/en-sentiment.xml?
For example, how have polarity, subjectivity, intensity or confidence been derived in:
`<word form="afloat" cornetto_synset_id="n_a-533320" wordnet_id="a-00076921" pos="JJ" sense="borne on the water" polarity="0.0" subjectivity="0.1" intensity="1.0" confidence="0.8" />`
Has someone labelled these manually? Or have they been learnt (via a machine learning model)?
|
open
|
2022-05-06T16:50:50Z
|
2022-05-06T16:51:18Z
|
https://github.com/sloria/TextBlob/issues/408
|
[] |
geobetts
| 0
|
tradingstrategy-ai/web3-ethereum-defi
|
pytest
| 131
|
Most things do not work?
|
Hi,
I've tried working with this library, but it seems to me like most things do not work. I'll just provide some extra information:
I am working on Windows 10
Python 3.10.6
Foremost, installing this lib is very difficult on Windows. It is using ethash, which is the problem. To install it you have to do some manual funky stuff found here: [unable to install pyethash · Issue #131 · ethereum/ethash (github.com)](https://github.com/ethereum/ethash/issues/131), for whoever encounters the same.
Then I tried to do this tutorial:
https://web3-ethereum-defi.readthedocs.io/tutorials/uniswap-v3-price-analysis.html
Just scraping the data already fails:
```python
state = JSONFileScanState("/tmp/uniswap-v3-price-scan.json")
fetch_events_to_csv(json_rpc_url, state, start_block=start_block, end_block=end_block)
```
The `fetch_events_to_csv` call fails from the start because it needs the files
`uniswap-v3-swap.csv`
`uniswap-v3-poolcreated.csv`
`uniswap-v3-burn.csv`
`uniswap-v3-mint.csv`
which I think should be created automatically if they don't exist. After creating them manually I still had difficulties: the program was not finding them in the /tmp folder, which is the default folder according to the API.
It then also failed because it could not find "uniswap-v3-price-scan.json". Initially it at least detected there was no state; afterwards it failed without state. Then I had to look into the code to see what is supposed to be in this file, because I can't get the code to run without this state file.
Then I found out that uniswap-v3-price-scan.json just contains the block number...
After doing that, I finally get to scrape data, which gets me the following error:
```
File "C:\Python310\lib\site-packages\eth_defi\uniswap_v3\events.py", line 435, in fetch_events_to_csv
    raise RuntimeError(f"Could not decode {log_result}") from e
RuntimeError: Could not decode {'address': '0x1f98431c8ad98523631ae4a59f267346ea31f984', 'blockHash': '0xe8228e3e736a42c7357d2ce6882a1662c588ce608897dd53c3053bcbefb4309a', 'blockNumber': 12369739, 'data': '0x000000000000000000000000000000000000000000000000000000000000003c0000000000000000000000001d42064fc4beb5f8aaf85f4617ae8b3b5b8bd801', 'logIndex': '0x18', 'removed': False, 'topics': ['0x783cca1c0412dd0d695e784568c96da2e9c22ff989357a2e8b1d9b2b4e6b7118', '0x0000000000000000000000001f9840a85d5af5bf1d1762f925bdaddc4201f984', '0x000000000000000000000000c02aaa39b223fe8d0a0e5c4f27ead9083c756cc2', '0x0000000000000000000000000000000000000000000000000000000000000bb8'], 'transactionHash': '0x37d8f4b1b371fde9e4b1942588d16a1cbf424b7c66e731ec915aca785ca2efcf', 'transactionIndex': '0x21', 'context': <eth_defi.uniswap_v3.events.TokenCache object at 0x000001DBE157D9F0>, 'event': <class 'web3._utils.datatypes.PoolCreated'>, 'chunk_id': 12369721, 'timestamp': 1620157956}
```
|
closed
|
2023-06-18T17:31:05Z
|
2023-07-13T10:39:28Z
|
https://github.com/tradingstrategy-ai/web3-ethereum-defi/issues/131
|
[] |
LouisVA
| 1
|
coqui-ai/TTS
|
deep-learning
| 2,778
|
[Bug] css10 formatter doesn't set the root_path attribute
|
### Describe the bug
css10 formatter doesn't set the root_path attribute
https://github.com/coqui-ai/TTS/blob/b5cd6441328fd9569f370d51a2449d09d546d335/TTS/tts/datasets/formatters.py#L244C95-L244C95
### To Reproduce
python -m trainer.distribute --script path/train_vits.py
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
TTS=0.15.6
```
### Additional context
_No response_
|
closed
|
2023-07-17T10:11:46Z
|
2023-09-09T01:03:37Z
|
https://github.com/coqui-ai/TTS/issues/2778
|
[
"bug",
"wontfix"
] |
whozwhat
| 1
|
axnsan12/drf-yasg
|
django
| 628
|
Circular references for swagger_serializer_method
|
Is there a way to create circular references when generating an OpenAPI schema using rest_framework.serializers?
To build a tree structure, the Serializer needs to reference itself.
```python
class ItemSerializer(Serializer):
    item = serializers.SerializerMethodField()

    class Meta:
        swagger_schema_fields = item_schema

    @swagger_serializer_method(serializer_or_field=ItemSerializer)
    def get_item(self, item):
        return ItemSerializer(item).data

    def to_representation(self, item):
        return {
            "items": [self.get_item(item) for item in item.items]
        }


class TreeSerializer(Serializer):
    item = serializers.SerializerMethodField()

    class Meta:
        swagger_schema_fields = tree_schema

    @swagger_serializer_method(serializer_or_field=ItemSerializer)
    def get_item(self, data):
        return ItemSerializer(data, markdown_to_html=self.markdown_to_html).data

    def to_representation(self, data):
        return {
            "items": [self.get_item(item) for item in data]
        }
```
This code is incorrect; is there an existing way to correctly create a tree structure for swagger?
|
open
|
2020-08-20T14:54:00Z
|
2025-03-07T12:13:19Z
|
https://github.com/axnsan12/drf-yasg/issues/628
|
[
"triage"
] |
vchepurko
| 1
|
pydantic/logfire
|
fastapi
| 62
|
OpenAI SDK traces will fail when `.with_raw_response` is used
|
### Description
I have this code:
```python
chat_completion_response = await openai_client.chat.completions.with_raw_response.create(
messages=query_messages, # type: ignore
# Azure OpenAI takes the deployment name as the model name
model=self.chatgpt_deployment if self.chatgpt_deployment else self.chatgpt_model,
temperature=0.0, # Minimize creativity for search query generation
max_tokens=100, # Setting too low risks malformed JSON, setting too high may affect performance
n=1,
tools=tools,
tool_choice="auto",
)
self.meter_ratelimit_remaining_tokens.set(
int(chat_completion_response.headers.get("x-ratelimit-remaining-tokens", 0))
)
self.meter_ratelimit_remaining_requests.set(
int(chat_completion_response.headers.get("x-ratelimit-remaining-requests", 0))
)
chat_completion = chat_completion_response.parse()
```
That causes a crash when instrumentation is enabled using logfire:
```default
ERROR:root:Exception while generating response stream: 'LegacyAPIResponse' object has no attribute 'choices'
Traceback (most recent call last):
File "/Users/anthonyshaw/projects/azure-search-openai-demo/app/backend/app.py", line 181, in format_as_ndjson
async for event in r:
File "/Users/anthonyshaw/projects/azure-search-openai-demo/app/backend/approaches/chatapproach.py", line 152, in run_with_streaming
extra_info, chat_coroutine = await self.run_until_final_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/anthonyshaw/projects/azure-search-openai-demo/app/backend/approaches/chatreadretrieveread.py", line 140, in run_until_final_call
chat_completion_response = await self.openai_client.chat.completions.with_raw_response.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/anthonyshaw/projects/azure-search-openai-demo/.venv/lib/python3.11/site-packages/openai/_legacy_response.py", line 349, in wrapped
return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/anthonyshaw/projects/azure-search-openai-demo/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/openai/shared/chat_wrappers.py", line 128, in achat_wrapper
response = await wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/anthonyshaw/projects/azure-search-openai-demo/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1334, in create
return await self._post(
^^^^^^^^^^^^^^^^^
File "/Users/anthonyshaw/projects/azure-search-openai-demo/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1743, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/anthonyshaw/projects/azure-search-openai-demo/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1446, in request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "/Users/anthonyshaw/projects/azure-search-openai-demo/.venv/lib/python3.11/site-packages/logfire/_internal/integrations/openai.py", line 154, in instrumented_openai_request
on_response(response, span)
File "/Users/anthonyshaw/projects/azure-search-openai-demo/.venv/lib/python3.11/site-packages/logfire/_internal/integrations/openai.py", line 214, in on_chat_response
'message': response.choices[0].message,
^^^^^^^^^^^^^^^^
AttributeError: 'LegacyAPIResponse' object has no attribute 'choices'
```
This looks to be because the instrumentation wraps the `.create()` function but assumes that the response is _always_ the pydantic model. When you call the OpenAI SDK with `.with_raw_response` you get a `LegacyAPIResponse` object instead, and you need to call `.parse()` on it.
I'm using `.with_raw_response` because I convert the rate-limiting headers into OpenTelemetry Metrics for the OTLP meters API.
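A minimal sketch of the direction a fix could take in the response hook, using the `LegacyAPIResponse` type visible in the traceback above; this is illustrative, not the actual logfire patch:
```python
# hypothetical guard for the on_response hook; LegacyAPIResponse comes from
# the module shown in the traceback (openai._legacy_response)
from openai._legacy_response import LegacyAPIResponse

def on_chat_response(response, span):
    if isinstance(response, LegacyAPIResponse):
        # unwrap the raw response into the pydantic model before reading .choices
        response = response.parse()
    span.set_attribute('message', str(response.choices[0].message))
```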
### Python, Logfire & OS Versions, related packages
```TOML
logfire="0.28.0"
platform="macOS-13.6.6-x86_64-i386-64bit"
python="3.13.0a0 (heads/main:8ac2085b80, Sep 26 2023, 19:39:32) [Clang 14.0.3 (clang-1403.0.22.14.1)]"
[related_packages]
requests="2.31.0"
protobuf="4.25.3"
rich="13.7.1"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.45b0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"
```
|
closed
|
2024-05-01T07:00:53Z
|
2024-05-03T12:46:59Z
|
https://github.com/pydantic/logfire/issues/62
|
[
"bug",
"OpenAI"
] |
tonybaloney
| 4
|
Guovin/iptv-api
|
api
| 747
|
[Bug]: IPv6 speed testing still has problems
|
### Don't skip these steps
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
- [X] I have checked through the search that there are no similar issues that already exist
- [X] I will not submit any issues that are not related to this project
### Occurrence environment
- [ ] Workflow
- [ ] GUI
- [X] Docker
- [ ] Command line
### Bug description
I have pulled the latest 1.5.7 Docker image. My network definitely supports IPv6 and my player plays IPv6 sources fine, but in sort.log all IPv6 sources show no speed, while IPv4 sources are speed-tested normally. Partial log below:
sort.log
```
Name: CCTV1, URL: http://36.99.206.9:9901/tsfile/live/0001_1.m3u8$订阅源, Date: None, Delay: 974 ms, Speed: 0.06 M/s, Resolution: None
Name: CCTV1, URL: http://124.228.161.8:9901/tsfile/live/0017_1.m3u8$订阅源, Date: None, Delay: 27 ms, Speed: 0.13 M/s, Resolution: None
Name: CCTV1, URL: http://[2409:8087:5e00:24::1e]:6060/200000001898/460000089800010011/1.m3u8$订阅源, Date: None, Delay: 7066 ms, Speed: 0.00 M/s, Resolution: None
Name: CCTV1, URL: http://[2409:8087:5e00:24::1e]:6060/200000001898/460000089800010011/index.m3u8$订阅源, Date: None, Delay: 7066 ms, Speed: 0.00 M/s, Resolution: None
Name: CCTV1, URL: http://121.19.134.222:808/tsfile/live/0001_1.m3u8$订阅源, Date: None, Delay: 51 ms, Speed: 0.15 M/s, Resolution: None
Name: CCTV1, URL: http://play.kankanlive.com/live/1661761962676984.m3u8$订阅源, Date: None, Delay: 6 ms, Speed: 0.04 M/s, Resolution: None
Name: CCTV1, URL: http://[2409:8087:5e00:24::1e]:6060/000000001000/5000000010000030810/1.m3u8$订阅源, Date: None, Delay: 7066 ms, Speed: 0.00 M/s, Resolution: None
Name: CCTV1, URL: http://[2409:8087:5e00:24::1e]:6060/000000001000/5000000004000002226/1.m3u8$订阅源, Date: None, Delay: 7066 ms, Speed: 0.00 M/s, Resolution: None
Name: CCTV1, URL: http://61.136.172.236:9901/tsfile/live/0001_1.m3u8$订阅源, Date: None, Delay: 31 ms, Speed: 0.21 M/s, Resolution: None
Name: CCTV1, URL: http://[2409:8087:1a01:df::7005]/ottrrs.hl.chinamobile.com/PLTV/88888888/224/3221226559/index.m3u8$订阅源, Date: None, Delay: 140 ms, Speed: 0.00 M/s, Resolution: None
```
The sources in the final result.txt output are exclusively IPv6, so the speed-test mechanism seems to have no effect.
Partial result.txt data:
```
CCTV1,http://[2409:8087:5e00:24::10]:6060/200000001898/460000089800010144/
CCTV1,http://[2409:8087:5e00:24::11]:6060/200000001898/460000089800010144/
CCTV1,http://[2409:8087:5e08:24::17]:6610/000000001000/6000000001000029752/1.m3u8?channel-id=wasusyt&Contentid=6000000001000029752&livemode=1&stbId=3
CCTV1,http://[2409:8087:5e08:24::17]:6610/000000001000/1000000005000265001/1.m3u8?channel-id=ystenlive&Contentid=1000000005000265001&livemode=1&stbId=3
CCTV1,http://[2409:8087:1a0a:df::4031]:80/wh7f454c46tw705163907_345094122/ottrrs.hl.chinamobile.com/PLTV/88888888/224/3221226016/index.m3u8?icpid=88888888&RTS=1713967534&from=4&hms_devid=1143&online=1713967534&vqe=3
CCTV1,http://[2409:8087:1a0a:df::4031]:80/wh7f454c46tw705163907_345094122/ottrrs.hl.chinamobile.com/PLTV/88888888/224/3221226016/index.m3u8
CCTV1,http://[2409:8087:1a0a:df::4038]/ottrrs.hl.chinamobile.com/TVOD/88888888/224/3221226559/index.m3u8
CCTV1,http://[2409:8087:1a01:df::7005]/ottrrs.hl.chinamobile.com/TVOD/88888888/224/3221226559/index.m3u8
CCTV1,http://[2409:8087:1a01:df::7005]/ottrrs.hl.chinamobile.com/PLTV/88888888/224/3221226559/index.m3u8
CCTV1,http://[2409:8087:1a01:df::7005]/ottrrs.hl.chinamobile.com/yinhe/88888888/224/3221226559/index.m3u8
CCTV1,http://[2409:8087:1a0a:df::4031]/ottrrs.hl.chinamobile.com/PLTV/88888888/224/3221226016/index.m3u8
CCTV1,http://[2409:8087:1a01:df::4077]/ottrrs.hl.chinamobile.com/PLTV/88888888/224/3221226016/index.m3u8?
CCTV1,http://[2409:8087:1a01:df::7005]/ottrrs.hl.chinamobile.com/PLTV/88888888/224/3221226016/index.m3u8
CCTV1,http://[2409:8087:1a01:df::4077]/ottrrs.hl.chinamobile.com/PLTV/88888888/224/3221226016/1.m3u8?icpid=88888888&from=1&hms_devid=1012&vqe=3
CCTV1,http://[2409:8087:1a01:df::4077]/ottrrs.hl.chinamobile.com/PLTV/88888888/8/3221226016/index.m3u8
CCTV2,http://[2409:8087:5e00:24::11]:6060/200000001898/460000089800010211/
```
config.ini configuration:
```ini
[Settings]
open_driver = False
open_empty_category = False
open_filter_resolution = True
open_filter_speed = True
open_hotel = False
open_hotel_foodie = False
open_hotel_fofa = False
open_keep_all = False
open_m3u_result = True
open_multicast = False
open_multicast_foodie = False
open_multicast_fofa = False
open_online_search = False
open_proxy = False
open_request = False
open_service = False
open_sort = True
open_subscribe = True
open_update = True
open_update_time = False
open_url_info = False
open_use_cache = True
open_use_old_result = True
app_port = 8000
final_file = output/result.txt
hotel_num = 10
hotel_page_num = 1
hotel_region_list = 全部
ipv4_num = 5
ipv6_num = 10
ipv6_support = False
ipv_type = 全部
ipv_type_prefer = IPv6
min_resolution = 1920x1080
min_speed = 0.5
multicast_num = 10
multicast_page_num = 1
multicast_region_list = 全部
online_search_num = 0
online_search_page_num = 1
origin_type_prefer =
recent_days = 30
request_timeout = 10
sort_timeout = 10
source_file = config/demo.txt
subscribe_num = 10
urls_limit = 15
```
### Error log
_No response_
|
closed
|
2024-12-26T03:55:04Z
|
2024-12-26T06:04:37Z
|
https://github.com/Guovin/iptv-api/issues/747
|
[
"bug",
"duplicate"
] |
liulei120
| 1
|
koxudaxi/datamodel-code-generator
|
pydantic
| 2,358
|
feature: adding dataclasses_json
|
**Is your feature request related to a problem? Please describe.**
The data returned from the dataclass is not in the output casing we need for business. We don't want code smells, so our dataclasses use snake_case, but the output should be in camelCase.
**Describe the solution you'd like**
Add the dataclasses_json library to this solution as a parameter in the creation of the model.
**Describe alternatives you've considered**
Adding this manually after creation of the model (a lot of work :'( )
**Additional context**
https://pypi.org/project/dataclasses-json/
`@dataclass_json(letter_case=LetterCase.CAMEL)`
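For reference, a minimal sketch of what a generated model could look like with dataclasses_json (the model name is made up):
```python
# a minimal sketch of the desired generated output, using dataclasses_json;
# LetterCase.CAMEL makes to_dict()/to_json() emit camelCase keys
from dataclasses import dataclass
from dataclasses_json import dataclass_json, LetterCase

@dataclass_json(letter_case=LetterCase.CAMEL)
@dataclass
class UserProfile:  # hypothetical model name
    first_name: str
    last_name: str

print(UserProfile("Ada", "Lovelace").to_json())
# {"firstName": "Ada", "lastName": "Lovelace"}
```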
|
open
|
2025-03-24T14:29:29Z
|
2025-03-24T14:29:29Z
|
https://github.com/koxudaxi/datamodel-code-generator/issues/2358
|
[] |
puttemanss
| 0
|
eriklindernoren/ML-From-Scratch
|
machine-learning
| 30
|
issues when we set up Pooling layer in CNN
|
```python
clf.add(Conv2D(n_filters=16, filter_shape=(3,3), input_shape=(1,8,8), padding='same'))
clf.add(Activation('relu'))
clf.add(MaxPooling2D(pool_shape=(2, 2), stride=2))
clf.add(BatchNormalization())
clf.add(Dropout(0.25))
clf.add(Conv2D(n_filters=32, filter_shape=(3,3), padding='same'))
clf.add(Activation('relu'))
clf.add(Dropout(0.25))
clf.add(BatchNormalization())
clf.add(Flatten())
clf.add(Dense(256))
clf.add(Activation('relu'))
clf.add(Dropout(0.4))
clf.add(BatchNormalization())
clf.add(Dense(10))
clf.add(Activation('softmax'))
```
When we add one more pooling layer to the CNN, a bug appears:
```
Traceback (most recent call last):
  File "convolutional_neural_network.py", line 85, in <module>
    main()
  File "convolutional_neural_network.py", line 71, in main
    train_err, val_err = clf.fit(X_train, y_train, n_epochs=50, batch_size=256)
  File "/home/deng106/ML-From-Scratch/mlfromscratch/deep_learning/neural_network.py", line 79, in fit
    loss, _ = self.train_on_batch(X_batch, y_batch)
  File "/home/deng106/ML-From-Scratch/mlfromscratch/deep_learning/neural_network.py", line 63, in train_on_batch
    y_pred = self._forward_pass(X)
  File "/home/deng106/ML-From-Scratch/mlfromscratch/deep_learning/neural_network.py", line 94, in _forward_pass
    layer_output = layer.forward_pass(layer_output, training)
  File "/home/deng106/ML-From-Scratch/mlfromscratch/deep_learning/layers.py", line 380, in forward_pass
    X_col = image_to_column(X, self.pool_shape, self.stride, self.padding)
  File "/home/deng106/ML-From-Scratch/mlfromscratch/deep_learning/layers.py", line 696, in image_to_column
    pad_h, pad_w = determine_padding(filter_shape, output_shape)
TypeError: 'NoneType' object is not iterable
```
It turns out `output_shape=0` appears for `determine_padding(filter_shape, output_shape="same")`.
Waiting for your answer. I appreciate it.
|
closed
|
2017-11-15T16:02:05Z
|
2018-02-07T23:06:27Z
|
https://github.com/eriklindernoren/ML-From-Scratch/issues/30
|
[] |
WayneDW
| 1
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 3,212
|
Failure on export of reports with large number of attachments
|
I'm trying to export a report submission that has about 160 attachments (about 2.2MB each). The zip creation starts, but fails to download at about 100-150MB.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Reports
2. Open a report with a lot of attachments (mine is 160 attachments of about 2.2MB each)
3. Click on export
4. The zip creation starts but at about 150-200MB the browser fails to download.
5. Nothing shows on globaleaks.log
6. In access.log, if I download a single file I get this
```
POST /api/token
PUT /api/token/<file-hash>
GET /api/rfile/<hash>?token=<file-hash>
```
But when I try to export the whole report, access.log only shows the first two requests, and the zip file download fails.
**Expected behavior**
Download the complete report, or throw an error in the logs.
As a workaround, let the user select (checkboxes) a number of files (10, 20, etc.), instead of having to download one by one.
**Screenshots**
Attached two screenshots of the beginning of the download (Unknown remaining time)

and the abnormal termination (Failed)

**Desktop (please complete the following information):**
- OS: Windows 10
- Browser: chrome 99.0.4844.74 (64 bits), firefox 98.0.1 (64-bits), also in private mode
**Dedicated Server (only globaleaks installed)**
- Ubuntu 20.04.3 LTS (virtualized on VMWare)
- 1x Intel Xeon CPU D-1521
- 2 GB RAM
- 7.4GB of free space in filesystem
|
closed
|
2022-03-24T14:05:57Z
|
2022-03-25T18:07:06Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3212
|
[
"T: Bug",
"C: Backend"
] |
informatic-oaib
| 3
|
pallets/flask
|
flask
| 5,545
|
Pyright type errors: `src/flask/blueprints.py`
|
Pyright reports type errors for `src/flask/blueprints.py`:
```
flask/src/flask/blueprints.py
flask/src/flask/blueprints.py:126:20 - error: Expression of type "BufferedReader" is incompatible with return type "IO[AnyStr@open_resource]"
"BufferedReader" is incompatible with "IO[AnyStr@open_resource]"
Type parameter "AnyStr@IO" is invariant, but "bytes" is not the same as "AnyStr@open_resource" (reportReturnType)
```
Command which was run:
```shell
.venv/bin/pyright --pythonpath .venv/bin/python3 --project pyproject.toml
```
Environment:
- Python version: `3.12`
- Flask version: `3.1.0`
|
closed
|
2024-08-06T23:17:47Z
|
2024-08-23T00:06:50Z
|
https://github.com/pallets/flask/issues/5545
|
[] |
brendon-codes
| 0
|
zihangdai/xlnet
|
tensorflow
| 97
|
Understanding rel_shift function
|
Could you please elaborate more on your implementation of rel_shift? I found that it was different from that in transformer-xl. Thanks!
|
open
|
2019-07-02T00:57:09Z
|
2020-04-24T09:44:42Z
|
https://github.com/zihangdai/xlnet/issues/97
|
[] |
JinhaoLei
| 2
|
JaidedAI/EasyOCR
|
machine-learning
| 348
|
Can this identify the verification code
|
Can this identify verification codes (CAPTCHA images)?
|
closed
|
2021-01-11T09:39:03Z
|
2021-07-02T08:56:27Z
|
https://github.com/JaidedAI/EasyOCR/issues/348
|
[] |
Esword618
| 2
|
encode/uvicorn
|
asyncio
| 1,474
|
TypeError: An asyncio.Future, a coroutine or an awaitable is required
|
uvicorn: 0.17.5
uvloop: 0.16.0
```
TypeError: An asyncio.Future, a coroutine or an awaitable is required
File "uvicorn/protocols/websockets/websockets_impl.py", line 184, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
File "uvicorn/middleware/proxy_headers.py", line 75, in __call__
return await self.app(scope, receive, send)
File "channels/routing.py", line 71, in __call__
return await application(scope, receive, send)
File "relevvo/middleware.py", line 66, in __call__
return await super().__call__(scope, receive, send)
File "channels/middleware.py", line 26, in __call__
return await self.inner(scope, receive, send)
File "channels/sessions.py", line 47, in __call__
return await self.inner(dict(scope, cookies=cookies), receive, send)
File "channels/sessions.py", line 263, in __call__
return await self.inner(wrapper.scope, receive, wrapper.send)
File "channels/auth.py", line 185, in __call__
return await super().__call__(scope, receive, send)
File "channels/middleware.py", line 26, in __call__
return await self.inner(scope, receive, send)
File "channels/routing.py", line 150, in __call__
return await application(
File "channels/consumer.py", line 94, in app
return await consumer(scope, receive, send)
File "channels/consumer.py", line 58, in __call__
await await_many_dispatch(
File "channels/utils.py", line 58, in await_many_dispatch
await task
File "channels/utils.py", line 50, in await_many_dispatch
result = task.result()
File "uvicorn/protocols/websockets/websockets_impl.py", line 285, in asgi_receive
data = await self.recv()
File "websockets/legacy/protocol.py", line 535, in recv
await asyncio.wait(
File "asyncio/tasks.py", line 424, in wait
fs = {ensure_future(f, loop=loop) for f in set(fs)}
File "asyncio/tasks.py", line 424, in <setcomp>
fs = {ensure_future(f, loop=loop) for f in set(fs)}
File "asyncio/tasks.py", line 684, in ensure_future
raise TypeError('An asyncio.Future, a coroutine or an awaitable is '
```
I'm getting this error frequently on Sentry.
|
closed
|
2022-05-05T13:33:16Z
|
2022-05-09T08:22:55Z
|
https://github.com/encode/uvicorn/issues/1474
|
[
"need confirmation"
] |
moneyrelevvo
| 5
|
jina-ai/serve
|
machine-learning
| 5,584
|
Refactor: use K8sDeploymentConfig inside Deployment to generate kubernetes yamls
|
After Deployment is exposed to serve Executors, we want to use it to generate kubernetes yaml as well.
This means the Deployment should use K8sDeploymentConfig to implement `to_kubernetes_yaml`.
Potentially, we also want to refactor Flow.to_kubernetes_yaml to use the new method, and maybe do the same for Executor (or deprecate Executor.to_kubernetes_yaml).
|
closed
|
2023-01-09T13:57:20Z
|
2023-01-24T18:02:35Z
|
https://github.com/jina-ai/serve/issues/5584
|
[] |
alaeddine-13
| 0
|
autokey/autokey
|
automation
| 927
|
Insert of date is often not right
|
### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Bug
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [ ] autokey-gtk
- [X] autokey-qt
- [ ] beta
- [ ] bug
- [ ] critical
- [ ] development
- [ ] documentation
- [ ] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [X] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
_No response_
### Which Linux distribution did you use?
Linux Mint 21.2
### Which AutoKey GUI did you use?
Both
### Which AutoKey version did you use?
0.95.10 and Qt 5.15.3
### How did you install AutoKey?
With Linux Mint 21.2 Programm Installer
### Can you briefly describe the issue?
I use AutoKey to paste the current date, but often the date is not correct, or a part of the shortcut (in my case "dat#") is not deleted.
The script is:
```python
from datetime import datetime
keyboard.send_keys(datetime.now().strftime('%d.%m.%Y'))
```
Maybe you can improve the performance here, or whatever it takes, so that it works in the future.
### Can the issue be reproduced?
None
### What are the steps to reproduce the issue?
Try out the script under Linux Mint 21.2 and past e.g. in thunderbird contacts info field some text and the date as described above.
### What should have happened?
That the date is pasted correctly.
### What actually happened?
The shortcut text sometimes is not deleted, or the date is pasted incorrectly.
The error occurs in 2 of 3 attempts.
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_
|
closed
|
2023-12-11T06:35:20Z
|
2023-12-16T14:48:15Z
|
https://github.com/autokey/autokey/issues/927
|
[
"duplicate",
"user support"
] |
stefan-franz
| 10
|
ResidentMario/missingno
|
data-visualization
| 171
|
Distance metric in dendrogram
|
Hi,
In the dendrogram function [link to scipy](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html#scipy.cluster.hierarchy.linkage) the default metric function to calculate distance between pairs of points is `Euclidean`. Since we are dealing with nullity binary values, won't it be more convenient to use a *similarity metric* such as `Hamming` distance (i.e. proportion of disagrees), `Jaccard index`, etc?
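For illustration, a minimal sketch of what that could look like by calling scipy directly (the dataframe is made up; Hamming distance on the 0/1 nullity matrix):
```python
# a minimal sketch: cluster columns by nullity pattern using Hamming
# distance instead of the Euclidean default; the dataframe is illustrative
import pandas as pd
from scipy.cluster.hierarchy import dendrogram, linkage

df = pd.DataFrame({"a": [1, None, 3], "b": [None, None, 3], "c": [1, 2, 3]})
nullity = df.isnull().astype(int).T.values  # one row of 0/1 flags per column
Z = linkage(nullity, method="average", metric="hamming")
dendrogram(Z, labels=df.columns.tolist())
```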
Thanks!
|
open
|
2024-04-19T05:54:01Z
|
2024-04-19T06:11:50Z
|
https://github.com/ResidentMario/missingno/issues/171
|
[] |
hadilou
| 0
|
OpenInterpreter/open-interpreter
|
python
| 1,309
|
Continuous Instruction Refusal
|
### Describe the bug
OI works fine for a while, then it begins to disregard file-location directions.
While writing this, another error occurred: the raw OpenAI hidden-parameter chunks now appear to render in the terminal after submitting a prompt.

### Reproduce
Typing `--version` into the prompt terminal yields:
```
I'm sorry for any misunderstandings, but the OpenAI interpreter (as I am) does not have a version and does not get installed on the client machine. I am a service provided by OpenAI, powered by GPT-3 AI model. I execute code and interact with you in this chat interface. I don't have a software version in a traditional sense. However, if you are referring to the version of your browser, operating system, or any other software installed on your machine, I would be happy to help you find that information. Could you please clarify?
```
### Expected behavior
I expect it to look through the correct directory. Even after explicit instruction not to look through the `/node_modules/` folder, it continues to search through it. This is a continuing issue when working with web app directories.
### Screenshots

### Open Interpreter version
0.2.6
### Python version
3.10.2
### Operating System name and version
Ubuntu 22.04
### Additional context
_No response_
|
open
|
2024-06-18T14:48:18Z
|
2024-07-02T04:50:14Z
|
https://github.com/OpenInterpreter/open-interpreter/issues/1309
|
[] |
accessor-io
| 4
|
miguelgrinberg/Flask-SocketIO
|
flask
| 893
|
How do you run Flask-SocketIO as Larger Application (package format)
|
My flask application is set up as a package. Since "flask run" support has been removed from flask-socketio, I have not been able to run the application without it freezing. Is there any way to provide some documentation on how to run flask-socketio as a package?
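For reference, a minimal sketch of the pattern the Flask-SocketIO docs recommend, `socketio.run()`, under an application-factory/package layout; the package name and factory here are hypothetical:
```python
# run.py — a minimal sketch; "myapp" and create_app() are hypothetical names
# for a package-style app using the application-factory pattern
from myapp import create_app, socketio  # socketio = SocketIO() created in the package

app = create_app()

if __name__ == "__main__":
    socketio.run(app, host="127.0.0.1", port=5000)
```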
|
closed
|
2019-02-01T22:08:53Z
|
2019-05-19T07:36:33Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/893
|
[
"question"
] |
Dracio
| 1
|
python-restx/flask-restx
|
api
| 478
|
fields.Nested() does not respect 'skip_none=True'
|
I have a model with fields.Nested() of another model. In the marshal_with() I have also set `skip_none=True`.
When that value is missing, the marshaling gives me a dictionary where all the values are None, instead of omitting the dictionary completely or even just giving me an empty dictionary.
This is causing me issues because my returned data differs depending on the instance, so sometimes my return will have some fields and sometimes others. But marshalling forces all the fields to exist. And without `marshal_with()` I cannot get automatic documentation based on the models. I also send the dictionary values as **kwargs elsewhere, and them being filled with None (instead of not existing) is crashing the function.
If anyone has an idea for a workaround, please let me know.
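A minimal sketch of one possible workaround using flask-restx's public `marshal()` helper (the model names are made up): with `allow_null=True` on the nested field, a missing value marshals to None, and the outer `skip_none=True` then drops the key entirely.
```python
# a minimal sketch; inner/outer are illustrative models
from flask_restx import fields, marshal

inner = {"value": fields.String}
outer = {"nested": fields.Nested(inner, allow_null=True, skip_none=True)}

# the nested value is missing: with allow_null=True it marshals to None,
# and the outer skip_none=True then drops the key entirely
print(marshal({"nested": None}, outer, skip_none=True))  # -> {}
```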
|
closed
|
2022-09-25T01:11:07Z
|
2022-09-25T14:24:57Z
|
https://github.com/python-restx/flask-restx/issues/478
|
[
"bug"
] |
db0
| 1
|
plotly/dash
|
data-visualization
| 2,551
|
[BUG] Flask 2.2.3 dependency has HIGH security vulnerability (fixed in 2.2.5)
|
Issue #2538 pinned the upper bound of the Flask dependency to 2.2.3. However Flask 2.2.3 is affected by a HIGH security vulnerability that is fixed in Flask 2.2.5. See https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-30861
Debian 11, Python 3.11 (from Python official 3.11 Docker image)
```
# pip install dash
Collecting dash
Downloading dash-2.10.1-py3-none-any.whl (10.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.3/10.3 MB 14.1 MB/s eta 0:00:00
Collecting Flask<=2.2.3,>=1.0.4 (from dash)
Downloading Flask-2.2.3-py3-none-any.whl (101 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 101.8/101.8 kB 17.0 MB/s eta 0:00:00
```
```
dash 2.10.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
Dash installs a vulnerable version of Flask and dependency scans flag the vulnerability.
**Expected behavior**
No known-and-fixed security vulnerabilities introduced. Perhaps pin to 2.2.* instead of the specific 2.2.3 version, since exact pins will keep tripping over future security issues.
|
closed
|
2023-05-30T23:56:29Z
|
2023-05-31T15:42:33Z
|
https://github.com/plotly/dash/issues/2551
|
[] |
eweidner
| 0
|
itamarst/eliot
|
numpy
| 445
|
Force timestamp does not work anymore with eliot 1.11.0
|
Using eliot 1.10.0, I have some features in my code that set the timestamp of log messages, written like this:
```python
expected = datetime.datetime(2005, 10, 2, 8, 30, 48, 70500)
logger.debug("Message", timestamp=expected.timestamp())
```
The timestamp is honored using eliot `1.10.0`, but with `1.11.0` it is now ignored.
Any idea?
|
closed
|
2020-01-07T10:41:00Z
|
2020-01-15T13:12:56Z
|
https://github.com/itamarst/eliot/issues/445
|
[] |
lebouquetin
| 9
|
rthalley/dnspython
|
asyncio
| 310
|
dnspython 1.15.0 is raising no exception with resolver.query()
|
Hey,
I have really strange behaviour on my system with dnspython 1.15.0.
Any idea?
```
In [1]: from dns import resolver
In [2]: res = resolver.Resolver()
In [3]: res.nameservers = ['8.8.8.8']
In [4]: res.nameservers
Out[4]: ['8.8.8.8']
In [5]: res.query('8y439t4hg893tgyh89453')
Out[5]: <dns.resolver.Answer at 0x7f057b445240>
In [6]: answers = res.query('y8h3h89hg94h90845g')
In [7]: for rdata in answers:
...: print(rdata)
...:
37.120.168.88
```
shouldn't I get an exception when I do `answers = res.query('<garbage>')`? And why the hell do I get an IP address back that forwards to my personal website?
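One thing worth ruling out (a guess, not confirmed for this setup): the resolver applies the search domains from /etc/resolv.conf to unqualified names, so the garbage label may actually resolve as `<garbage>.<your-domain>` against a wildcard record. A minimal check:
```python
# check whether resolv.conf search domains explain the stray answer:
# a trailing dot makes the name absolute, so no suffix is appended
from dns import resolver

res = resolver.Resolver()
res.nameservers = ['8.8.8.8']
res.search = []                        # drop any local search domains
res.query('8y439t4hg893tgyh89453.')    # absolute name; should raise NXDOMAIN
```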
|
closed
|
2018-05-20T23:28:34Z
|
2018-06-08T19:02:03Z
|
https://github.com/rthalley/dnspython/issues/310
|
[] |
shibumi
| 6
|
JaidedAI/EasyOCR
|
pytorch
| 590
|
Memory usage never drops on api
|
Hello! First of all thanks for this wonderful work! EasyOCR has been a great tool for me!
Well.. I'm building an API to read text from images, but I'm running into an issue: I need to save resources when I'm not using the model directly. I found that the easyocr model stays in memory even if I delete it manually using `del`.

I want to find a way to free up memory as soon as the prediction is done. Because this is an API, I cannot just close the application and start it again. Any ideas?
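For what it's worth, a minimal sketch of the usual attempts to release resources after a prediction (standard Python/PyTorch calls, not an EasyOCR-specific API):
```python
# a minimal sketch; frees Python references and PyTorch's CUDA cache.
# Note: this does not guarantee the process RSS shrinks on CPU.
import gc

import easyocr
import torch

reader = easyocr.Reader(["en"])
result = reader.readtext("image.png")  # illustrative input path

del reader
gc.collect()
torch.cuda.empty_cache()  # releases cached GPU memory back to the driver
```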
|
closed
|
2021-11-09T18:09:46Z
|
2022-08-07T05:01:21Z
|
https://github.com/JaidedAI/EasyOCR/issues/590
|
[] |
igormcsouza
| 2
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 849
|
Could you provide quantized models (e.g. int4, int8) that beginners can use directly?
|
### Mandatory pre-submission checks
- [X] I confirm I am using the latest code from the repository (git pull); some problems have already been solved and fixed.
- [X] Since the dependencies are updated frequently, I confirm I followed the steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki).
- [X] I have read the [FAQ](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the issues, and found no similar problem or solution.
- [X] Third-party tool problems (e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat)) should also be looked up in the corresponding project.
- [X] Model correctness check: I verified the model against [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); with a wrong model, results and normal operation cannot be guaranteed.
### Issue type
Model quantization and deployment
### Base model
Alpaca-Plus-13B
### Operating system
Windows
### Detailed description
As the title says: many beginners do not know how to quantize a model and would like to get a quantized model they can use directly.
### Dependencies (required for code problems)
Win11 + pytorch 2.0.1
### Logs or screenshots
_No response_
|
closed
|
2023-10-02T14:19:47Z
|
2023-10-12T22:02:25Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/849
|
[
"stale"
] |
msh01
| 2
|
vitalik/django-ninja
|
rest-api
| 1,390
|
[BUG] 404 handler override not working as expected
|
**Describe the bug**
I have a simple demo Django-Ninja app working thanks to a helpful onboarding documentation on the related docs.
All of the endpoints in my simple demo work as expected.
However, the following code in my urls.py file does not produce the expected result based on the docs:
```python
from django.http import Http404

@api.exception_handler(Http404)
def page_not_found(request, exc):
    return api.create_response(
        request,
        {"message": "Please retry later"},
        status=404,
    )
```
I expect my handler to pick up 404s when the server is running, but I get the default:
Not Found
The requested resource was not found on this server.
I did a lot of research beforehand and tried a variety of overrides, but none worked, so I am documenting the method from the d-n docs:
https://django-ninja.dev/guides/errors/
My code is here:
https://github.com/ErikPohl444/django_ninja_playground
Working control case:
http://127.0.0.1:8000/api/hello_html?name=you
Not working experimental case:
http://127.0.0.1:8000/api/this_endpoint_is_undefined
I've done similar captures of the 404 in Flask and in Django, but I'm hitting a wall here.
It is either:
a) something is buggy with d-n [unlikely]
b) something can be added to the documentation for newbies like me [potentially]
c) my code is buggy [in which case I might want to make a PR with doc changes -- see b -- for folks like me :) ]
**Versions (please complete the following information):**
- Python version: 3.12.0
- Django version: Django==5.1.4
- Django-Ninja version: django-ninja==1.3.0
- Pydantic version: pydantic==2.10.4
|
open
|
2025-01-12T00:50:36Z
|
2025-02-24T20:05:57Z
|
https://github.com/vitalik/django-ninja/issues/1390
|
[] |
ErikPohl444
| 5
|
ydataai/ydata-profiling
|
pandas
| 1,013
|
Pandas profiling too slow to generate html report
|
### Current Behaviour
The profile report's .to_html() function takes around 78 sec for a (10000, 295)-shaped data frame. Also, I am already using minimal mode.
### Expected Behaviour
The .to_json() call takes only 2-3 sec; I would expect a maximum of 10 sec for .to_html(). How can I optimize it further?
### Data Description
Pandas dataframe with data shape (10000, 295)
### Code that reproduces the bug
```Python
import pandas as pd
from pandas_profiling import ProfileReport
import matplotlib.pyplot

def generate_html_report():
    df = pd.read_pickle('sample.pkl')
    profile = ProfileReport(df, minimal=True)
    matplotlib.pyplot.switch_backend('Agg')
    text = profile.to_html()
```
### pandas-profiling version
v3.2.0
### Dependencies
```Text
pandas
```
### OS
macOS 16
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
|
open
|
2022-07-22T22:53:07Z
|
2022-10-05T16:09:58Z
|
https://github.com/ydataai/ydata-profiling/issues/1013
|
[
"information requested ❔"
] |
adwaitas28
| 1
|
miguelgrinberg/microblog
|
flask
| 48
|
8.3 Error when running 'flask db migrate'
|
I got the following error when running `flask db migrate -m "followers"`:
`NameError: name 'followers' is not defined`
I removed the changes to the User class, then ran the migration and it worked fine, so it just seems to be an error in the order of operations.
BTW, thanks for the great update to the tutorial!
|
closed
|
2017-12-18T18:41:54Z
|
2019-07-03T06:59:08Z
|
https://github.com/miguelgrinberg/microblog/issues/48
|
[
"question"
] |
sheadscott
| 6
|
ymcui/Chinese-BERT-wwm
|
tensorflow
| 41
|
Insufficient GPU memory
|
May I ask: if the machine has only 32GB of GPU memory or less, and batch_size therefore cannot be set very high, is there a compromise solution? Thanks.
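Not from this repository, but the usual compromise when memory caps the batch size is gradient accumulation: run several small micro-batches and step the optimizer once. A minimal illustrative sketch (PyTorch used for brevity; all names are made up):
```python
# a minimal sketch of gradient accumulation: several small micro-batches
# share one optimizer step, emulating a larger batch in limited memory
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()
accumulation_steps = 4  # effective batch size = micro_batch * 4

optimizer.zero_grad()
for step in range(8):
    x = torch.randn(8, 10)             # micro-batch of 8
    y = torch.randint(0, 2, (8,))
    loss = loss_fn(model(x), y) / accumulation_steps
    loss.backward()                    # gradients accumulate across steps
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```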
|
closed
|
2019-09-12T01:25:25Z
|
2019-10-21T10:59:48Z
|
https://github.com/ymcui/Chinese-BERT-wwm/issues/41
|
[] |
sunyilgdx
| 2
|
PaddlePaddle/ERNIE
|
nlp
| 754
|
Can ERNIE-doc support inputs longer than 512 tokens?
|
Can ERNIE-doc support input sentences longer than 512 tokens? Is this currently implemented?
|
closed
|
2021-10-11T08:22:57Z
|
2022-01-09T01:31:42Z
|
https://github.com/PaddlePaddle/ERNIE/issues/754
|
[
"wontfix"
] |
geolvr
| 2
|
JaidedAI/EasyOCR
|
deep-learning
| 776
|
Is there any ONNX model deployment example?
|
closed
|
2022-07-05T16:07:03Z
|
2022-07-11T02:22:24Z
|
https://github.com/JaidedAI/EasyOCR/issues/776
|
[] |
yuanyan3060
| 1
|
scikit-optimize/scikit-optimize
|
scikit-learn
| 1,171
|
`np.int` was a deprecated alias for the builtin `int`.
|
My version is **R1.10.1**. If my numpy version is greater than or equal to 1.24.0, an error is output like this when skopt.space.Integer is used:
**AttributeError: module 'numpy' has no attribute 'int'.
`np.int` was a deprecated alias for the builtin `int`. To avoid this error in existing code, use `int` by itself.**
This is how to fix it:
https://stackoverflow.com/questions/74844262/how-can-i-solve-error-module-numpy-has-no-attribute-float-in-python
I hope the next version will fix it.
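A minimal sketch of the monkey-patch route from the linked answer (restore the removed alias before importing skopt; a stopgap, not a real fix):
```python
# stopgap: numpy>=1.24 removed the np.int alias that older skopt code uses
import numpy as np
np.int = int  # re-create the alias before skopt touches it

from skopt.space import Integer
dim = Integer(1, 10)  # no AttributeError now
```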
|
open
|
2023-06-01T09:31:52Z
|
2024-02-14T16:27:09Z
|
https://github.com/scikit-optimize/scikit-optimize/issues/1171
|
[] |
a3678911
| 17
|
biolab/orange3
|
data-visualization
| 6,643
|
Bulk .ows reader for monitoring our assets
|
**What's your use case?**
I would like to analyse my orange workflows (.ows), and for that, what would be better than using Orange Data Mining?
**What's your proposed solution?**
A tool where you can send folders or addresses (like \\srv_simon\odm\workflows\prod*2023*.ows and \\srv_simon\odm\workflows\) and choose whether to scan subfolders. Two outputs:
- workflow level (one row per workflow): workflow name and address of the workflow, some file metadata (creation, etc.)
- node level: workflow name and address of the workflow, the tools used, etc., and the annotations.
**Are there any alternative solutions?**
Using something other than Orange, but that means no cool inception.
|
closed
|
2023-11-19T09:30:23Z
|
2023-11-26T18:13:43Z
|
https://github.com/biolab/orange3/issues/6643
|
[] |
simonaubertbd
| 1
|
rgerum/pylustrator
|
matplotlib
| 13
|
allow to change font size of legend and legend title
|
closed
|
2020-01-28T12:51:41Z
|
2020-02-03T09:56:05Z
|
https://github.com/rgerum/pylustrator/issues/13
|
[] |
rgerum
| 0
|
d2l-ai/d2l-en
|
pytorch
| 2,595
|
The content is outdated
|
I find the book has very good content for the topics it covers, but it stopped at GANs. Many not-very-new topics like YOLO and Diffusion were never discussed. I've seen some open issues mention this several years ago, but it seems no content has been added. Will the book continue to be updated, or is it archived?
|
open
|
2024-03-31T03:33:11Z
|
2024-12-15T15:41:30Z
|
https://github.com/d2l-ai/d2l-en/issues/2595
|
[] |
hiepdang-ml
| 1
|
gradio-app/gradio
|
machine-learning
| 10,471
|
Things to deprecate for `gradio==6.0` and `gradio_client==2.0`
|
Starting a list now:
Gradio 6.0
- [ ] `type="tuples"` for `gr.Chatbot` / `gr.ChatInterface`
- [ ] `hf_token` from `load`
- [ ] Consider removing `ruff` as a core dependency and doing extras-install for custom components
- [ ] `ImageEditor` crop_size
- [ ] `DataFrame` row_count and col_count format should be better specified
- [ ] Make `allow_tags=True` the default in `gr.Chatbot`, see https://github.com/gradio-app/gradio/pull/10743
Client 2.0
- [ ] Client.deploy_discord
- [ ] Client.file()
|
open
|
2025-01-30T22:16:14Z
|
2025-03-07T21:37:56Z
|
https://github.com/gradio-app/gradio/issues/10471
|
[
"refactor",
"tracking"
] |
abidlabs
| 0
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 287
|
Selectively Contrastive Triplet loss
|
Hi Kevin,
Would you mind adding my Selectively Contrastive Triplet loss (published in ECCV 2020)? The major idea of this paper is to overcome the local minima during triplet optimization. This loss works especially well on high intra-variance datasets such as Hotel50K. I provide my implementation in this repo:
https://github.com/littleredxh/HardNegative
You can find the loss function in the "_code/Loss.py" file.
I think this loss function can be easily adapted into your framework.
|
open
|
2021-03-04T08:19:55Z
|
2023-08-30T14:00:50Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/287
|
[
"new algorithm request"
] |
littleredxh
| 3
|
streamlit/streamlit
|
deep-learning
| 9,977
|
Add a min_selections parameter to st.multiselect
|
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
I wanted to do a st.multiselect where users could select up to (and only) 2. I checked the [API Reference](https://docs.streamlit.io/develop/api-reference/widgets/st.multiselect) from docs.streamlit.io but I haven't found such thing.
### Why?
I am doing a case study project using streamlit. I wanted to use st.multiselect, but I need to enforce a minimum number of selections, or my exploratory graph won't work. There's no such argument as `min_selections` in st.multiselect, I'm pretty sure.
### How?
I want a min_selections argument for st.multiselect
It should be easy, something like:
```python
if len(selections) < min_selections:
    raise ValueError("The number of selections is smaller than the minimum allowed selections")
```
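In the meantime, a user-side workaround sketch built from the existing API (`max_selections` plus `st.stop()`); nothing here is a proposed new argument:
```python
# enforce "exactly 2" by capping with max_selections and gating on a minimum
import streamlit as st

options = ["A", "B", "C", "D"]
selections = st.multiselect("Pick exactly 2", options, max_selections=2)

if len(selections) < 2:
    st.warning("Please select 2 options to render the graph.")
    st.stop()  # halt the script until enough options are selected
```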
Thanks!
### Additional Context
_No response_
|
open
|
2024-12-09T04:17:42Z
|
2024-12-20T14:46:44Z
|
https://github.com/streamlit/streamlit/issues/9977
|
[
"type:enhancement",
"feature:st.multiselect"
] |
Unknownuserfrommars
| 3
|
sqlalchemy/alembic
|
sqlalchemy
| 920
|
NameError Importing local packages in Alembic Operations 1.7.2
|
**Describe the bug**
Importing local packages creates NameError in version 1.7.2 was not the case in 1.6.5
**Expected behavior**
Ability to use locally installed packages in alembic revisions
**To Reproduce**
Local Package "fun_stuff" installed via -e
The details on this file come from [replaceable objects cookbook](https://alembic.sqlalchemy.org/en/latest/cookbook.html#replaceable-objects)
### fun_stuff/alembic_utils.py
```py
from typing import Any, Optional

from alembic.operations import Operations, MigrateOperation


class ReplaceableObject:
    def __init__(self, name: str, sqltext: str) -> None:
        self.name = name
        self.sqltext = sqltext


class ReversibleOp(MigrateOperation):
    """This is the base of our “replaceable” operation, which includes not just a base operation for emitting
    CREATE and DROP instructions on a ReplaceableObject, it also assumes a certain model of “reversibility” which
    makes use of references to other migration files in order to refer to the “previous” version of an object.
    https://alembic.sqlalchemy.org/en/latest/cookbook.html#replaceable-objects
    """

    def __init__(self, target: ReplaceableObject) -> None:
        self.target = target

    @classmethod
    def invoke_for_target(cls, operations: Operations, target: ReplaceableObject):
        op = cls(target)
        return operations.invoke(op)

    def reverse(self):
        raise NotImplementedError

    @classmethod
    def _get_object_from_version(cls, operations: Operations, ident: str) -> Any:
        version, objectname = ident.split(".")
        module = operations.get_context().script.get_revision(version).module
        return getattr(module, objectname)

    @classmethod
    def replace(
        cls,
        operations: Operations,
        target: ReplaceableObject,
        replaces: Optional[str] = None,
        replace_with: Optional[str] = None,
    ) -> None:
        if replaces is None and replace_with is None:
            raise TypeError("replaces or replace_with is required")
        old_obj = cls._get_object_from_version(
            operations,
            replaces if replaces is not None else replace_with,
        )
        drop_old = cls(old_obj).reverse()
        create_new = cls(target)
        operations.invoke(drop_old)
        operations.invoke(create_new)


"""To create usable operations from this base, we will build a series of stub classes and use
[Operations.register_operation()](https://alembic.sqlalchemy.org/en/latest/ops.html#alembic.operations.Operations.register_operation)
to make them part of the op.* namespace"""


@Operations.register_operation("create_view", "invoke_for_target")
@Operations.register_operation("replace_view", "replace")
class CreateViewOp(ReversibleOp):
    def reverse(self):
        return DropViewOp(self.target)


@Operations.register_operation("drop_view", "invoke_for_target")
class DropViewOp(ReversibleOp):
    def reverse(self):
        return CreateViewOp(self.target)


@Operations.implementation_for(CreateViewOp)
def create_view(operations: Operations, operation: ReversibleOp) -> None:
    operations.execute(f"CREATE VIEW {operation.target.name} AS {operation.target.sqltext}")


@Operations.implementation_for(DropViewOp)
def drop_view(operations: Operations, operation: ReversibleOp) -> None:
    operations.execute(f"DROP VIEW {operation.target.name}")
```
### 24627210e3e6_swap_old_with_new.py
```py
"""adjust view
Revision ID: 24627210e3e6
Revises: bcf089c643e3
Create Date: 2021-07-16 17:03:05.673405
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy import orm
from fun_stuff import alembic_utils
# revision identifiers, used by Alembic.
revision = '24627210e3e6'
down_revision = 'bcf089c643e3'
branch_labels = None
depends_on = None
new_view = alembic_utils.ReplaceableObject(
name="E_CrazyFunView",
sqltext="""SELECT * FROM WowTable""",
)
old_view = alembic_utils.ReplaceableObject(
name="E_CrazyFunView",
sqltext="""SELECT * FROM NotWowTable""",
)
def upgrade():
op.drop_view(old_view)
op.create_view(new_view)
def downgrade():
op.drop_view(new_view)
op.create_view(old_view)
```
The error below occurs when running
`alembic history`
**Error**
```
Traceback (most recent call last):
  File "/usr/local/bin/alembic", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/site-packages/alembic/config.py", line 588, in main
    CommandLine(prog=prog).main(argv=argv)
  File "/usr/local/lib/python3.8/site-packages/alembic/config.py", line 582, in main
    self.run_cmd(cfg, options)
  File "/usr/local/lib/python3.8/site-packages/alembic/config.py", line 559, in run_cmd
    fn(
  File "/usr/local/lib/python3.8/site-packages/alembic/command.py", line 461, in history
    _display_history(config, script, base, head)
  File "/usr/local/lib/python3.8/site-packages/alembic/command.py", line 429, in _display_history
    for sc in script.walk_revisions(
  File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 277, in walk_revisions
    for rev in self.revision_map.iterate_revisions(
  File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 793, in iterate_revisions
    revisions, heads = fn(
  File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 1393, in _collect_upgrade_revisions
    targets: Collection["Revision"] = self._parse_upgrade_target(
  File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 1193, in _parse_upgrade_target
    return self.get_revisions(target)
  File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 527, in get_revisions
    resolved_id, branch_label = self._resolve_revision_number(
  File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 747, in _resolve_revision_number
    self._revision_map
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 1113, in __get__
    obj.__dict__[self.__name__] = result = self.fget(obj)
  File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 189, in _revision_map
    for revision in self._generator():
  File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 136, in _load_revisions
    script = Script._from_filename(self, vers, file_)
  File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 999, in _from_filename
    module = util.load_python_file(dir_, filename)
  File "/usr/local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 92, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 108, in load_module_py
    spec.loader.exec_module(module)  # type: ignore
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/src/employees/alembic/versions/24627210e3e6_swap_old_with_new.py", line 13, in <module>
    from fun_stuff import alembic_utils
  File "/src/fun_stuff/alembic_utils.py", line 67, in <module>
    class CreateViewOp(ReversibleOp):
  File "/usr/local/lib/python3.8/site-packages/alembic/operations/base.py", line 163, in register
    exec(func_text, globals_, lcl)
  File "<string>", line 1, in <module>
NameError: name 'fun_stuff' is not defined
```
**Versions.**
- OS: [Python3.8-buster image](https://hub.docker.com/layers/python/library/python/3.8-buster/images/sha256-ec2b332b19977579455b219f0500abc2848533daa507c2602a421adcf79a85a2?context=explore)
- Python: 3.8.11
- Alembic: 1.7.2
- SQLAlchemy: 1.4.23
- Database: MSSQL
**Additional context**
This is not a problem in alembic 1.6.5. I love the project!
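Given that, a stopgap until the regression is fixed is pinning the last known-good release:
```
pip install "alembic==1.6.5"
```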
|
closed
|
2021-09-15T18:56:17Z
|
2024-04-19T22:06:07Z
|
https://github.com/sqlalchemy/alembic/issues/920
|
[
"migration environment",
"op directives",
"regression"
] |
daniel-butler
| 10
|
litestar-org/polyfactory
|
pydantic
| 138
|
Enhancement: Add support for "mimesis" as an alternative to faker
|
Built-in support for [`mimesis`](https://github.com/lk-geimfari/mimesis/stargazers) would be nice.
The main motivation is that many of the faker methods, such as `get_faker().latitude()`, aren't typed and result in `Any` types :(
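For comparison, mimesis ships type annotations on its providers, so values come back concretely typed; a tiny sketch (assuming a current mimesis release, where e.g. `Person.full_name()` is annotated as `-> str`):
```python
from mimesis import Person

person = Person()
name = person.full_name()  # annotated to return str, so type checkers see str rather than Any
```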
|
closed
|
2023-02-02T12:17:38Z
|
2023-04-03T15:17:57Z
|
https://github.com/litestar-org/polyfactory/issues/138
|
[
"enhancement",
"help wanted",
"good first issue"
] |
michaeloliverx
| 1
|
jupyter-widgets-contrib/ipycanvas
|
jupyter
| 95
|
Binder build does not work with Jupyter lab
|
If I follow the Binder link and then manually switch the URL to Lab
`https://notebooks.gesis.org/binder/jupyter/user/martinrenou-ipycanvas-w7kg97yf/lab`
the JupyterLab user interface starts and prompts for a rebuild. After the rebuild and restart, the canvas does not work. I see this in the image demo:
<img width="715" alt="Screen Shot 2020-05-11 at 6 13 59 AM" src="https://user-images.githubusercontent.com/13857252/81551013-1c04d680-934f-11ea-8c72-39fd448c374f.png">
And in the JavaScript console I see this:
```
Error: "Object 'jupyter.widget' not found in registry"
loadObject default.js:1538
loadObject default.js:1517
_handleCommOpen default.js:994
_handleMessage default.js:1101
```
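In case it helps triage: that `Object 'jupyter.widget' not found in registry` error usually means the Lab build is missing the widget manager frontend. A possible fix on the Binder image (assuming JupyterLab 1/2, where labextensions are installed via npm) would be:
```
jupyter labextension install @jupyter-widgets/jupyterlab-manager ipycanvas
```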
|
open
|
2020-05-11T10:20:01Z
|
2020-05-11T10:30:29Z
|
https://github.com/jupyter-widgets-contrib/ipycanvas/issues/95
|
[
"enhancement"
] |
AaronWatters
| 2
|
widgetti/solara
|
fastapi
| 264
|
Getting bounds of visible area of an ipyleaflet.Map.element
|
Hi,
Is it possible to get the bounds of a map whenever they change, similar to the on_zoom or on_center functionality below?
```python
bound, set_bound = solara.use_state(None)
zoom, set_zoom = solara.use_state(10)

ipyleaflet.Map.element(
    ...,
    zoom=zoom,
    on_zoom=set_zoom,
    on_bound_change=set_bound,
)
```
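If it matters for the API shape: ipyleaflet's `Map` exposes a read-only `bounds` trait that the frontend updates, so an untested assumption is that a generated `on_bounds` callback would be the natural fit:
```python
bound, set_bound = solara.use_state(None)

ipyleaflet.Map.element(
    ...,
    on_bounds=set_bound,  # assumes solara/reacton generates on_<trait> for the `bounds` trait
)
```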
|
closed
|
2023-08-31T13:29:47Z
|
2023-09-03T16:08:44Z
|
https://github.com/widgetti/solara/issues/264
|
[] |
hkayabilisim
| 4
|
piskvorky/gensim
|
nlp
| 3,401
|
pip installation of gensim fails using python 3.11.0 on Mac OS X
|
#### Problem description
gensim 4.2.0 (4.1.2 and 3.8.3 were also tested) fails to install using `pip` under Python 3.11.0 on Mac OS X (other platforms not tested).
#### Steps/code/corpus to reproduce
1. I created a new empty Python environment using `pyenv` and python 3.11.0:
```
$ pyenv virtualenv 3.11.0 gensim-test
$ pyenv local gensim-test
```
2. I ran `pip install gensim`, output was:
```
Collecting gensim
Using cached gensim-4.2.0.tar.gz (23.2 MB)
Preparing metadata (setup.py) ... done
Collecting numpy>=1.17.0
Using cached numpy-1.23.4-cp311-cp311-macosx_10_9_x86_64.whl (18.1 MB)
Collecting scipy>=0.18.1
Using cached scipy-1.9.3-cp311-cp311-macosx_10_9_x86_64.whl (34.2 MB)
Collecting smart_open>=1.8.1
Using cached smart_open-6.2.0-py3-none-any.whl (58 kB)
Installing collected packages: smart_open, numpy, scipy, gensim
DEPRECATION: gensim is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for gensim ... error
error: subprocess-exited-with-error
× Running setup.py install for gensim did not run successfully.
│ exit code: 1
╰─> [540 lines of output]
running install
/Users/dsc/.pyenv/versions/3.11.0/envs/gensim-test/lib/python3.11/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build/lib.macosx-12.6-x86_64-cpython-311
creating build/lib.macosx-12.6-x86_64-cpython-311/gensim
copying gensim/interfaces.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim
copying gensim/downloader.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim
copying gensim/matutils.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim
copying gensim/__init__.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim
copying gensim/utils.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim
copying gensim/nosy.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim
creating build/lib.macosx-12.6-x86_64-cpython-311/gensim/similarities
copying gensim/similarities/docsim.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/similarities
copying gensim/similarities/__init__.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/similarities
copying gensim/similarities/nmslib.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/similarities
copying gensim/similarities/levenshtein.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/similarities
copying gensim/similarities/termsim.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/similarities
copying gensim/similarities/annoy.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/similarities
creating build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_text_analysis.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_glove2word2vec.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_parsing.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_tmdiff.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_utils.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_matutils.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_word2vec.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_segmentation.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_lsimodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_coherencemodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_miislita.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_phrases.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_dtm.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_tfidfmodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_poincare.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_rpmodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_corpora_hashdictionary.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_ensemblelda.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/simspeed2.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_aggregation.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/__init__.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_lee.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_big.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_translation_matrix.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/basetmtests.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_ldaseqmodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_datatype.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_similarity_metrics.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_ldamodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_nmf.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_lda_callback.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/utils.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_keyedvectors.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_corpora_dictionary.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/svd_error.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_logentropy_model.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_doc2vec.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_similarities.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_direct_confirmation.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_api.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_atmodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_corpora.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_indirect_confirmation.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_probability_estimation.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_fasttext.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_scripts.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_normmodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_hdpmodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/test_sharded_corpus.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
copying gensim/test/simspeed.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test
creating build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/poincare.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/basemodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/phrases.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/ensemblelda.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/_fasttext_bin.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/word2vec.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/normmodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/translation_matrix.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/__init__.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/lsimodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/rpmodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/logentropy_model.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/keyedvectors.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/lda_worker.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/fasttext.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/tfidfmodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/doc2vec.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/callbacks.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/hdpmodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/ldamulticore.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/ldaseqmodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/lsi_dispatcher.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/lda_dispatcher.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/ldamodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/nmf.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/atmodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/lsi_worker.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/coherencemodel.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
creating build/lib.macosx-12.6-x86_64-cpython-311/gensim/scripts
copying gensim/scripts/make_wikicorpus.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/scripts
copying gensim/scripts/word2vec2tensor.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/scripts
copying gensim/scripts/package_info.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/scripts
copying gensim/scripts/benchmark.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/scripts
copying gensim/scripts/segment_wiki.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/scripts
copying gensim/scripts/__init__.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/scripts
copying gensim/scripts/make_wiki_online.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/scripts
copying gensim/scripts/make_wiki.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/scripts
copying gensim/scripts/make_wiki_online_nodebug.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/scripts
copying gensim/scripts/glove2word2vec.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/scripts
copying gensim/scripts/word2vec_standalone.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/scripts
creating build/lib.macosx-12.6-x86_64-cpython-311/gensim/parsing
copying gensim/parsing/__init__.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/parsing
copying gensim/parsing/preprocessing.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/parsing
copying gensim/parsing/porter.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/parsing
creating build/lib.macosx-12.6-x86_64-cpython-311/gensim/topic_coherence
copying gensim/topic_coherence/direct_confirmation_measure.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/topic_coherence
copying gensim/topic_coherence/aggregation.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/topic_coherence
copying gensim/topic_coherence/text_analysis.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/topic_coherence
copying gensim/topic_coherence/probability_estimation.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/topic_coherence
copying gensim/topic_coherence/__init__.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/topic_coherence
copying gensim/topic_coherence/indirect_confirmation_measure.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/topic_coherence
copying gensim/topic_coherence/segmentation.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/topic_coherence
creating build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/sharded_corpus.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/textcorpus.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/wikicorpus.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/csvcorpus.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/opinosiscorpus.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/__init__.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/malletcorpus.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/mmcorpus.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/ucicorpus.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/indexedcorpus.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/lowcorpus.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/bleicorpus.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/dictionary.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/svmlightcorpus.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/hashdictionary.py -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
running egg_info
writing gensim.egg-info/PKG-INFO
writing dependency_links to gensim.egg-info/dependency_links.txt
writing requirements to gensim.egg-info/requires.txt
writing top-level names to gensim.egg-info/top_level.txt
reading manifest file 'gensim.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'COPYING.LESSER'
warning: no files found matching 'ez_setup.py'
warning: no files found matching 'gensim/models/doc2vec_inner.c'
adding license file 'COPYING'
writing manifest file 'gensim.egg-info/SOURCES.txt'
/Users/dsc/.pyenv/versions/3.11.0/envs/gensim-test/lib/python3.11/site-packages/setuptools/command/build_py.py:202: SetuptoolsDeprecationWarning: Installing 'gensim.test.test_data' as data is deprecated, please list it in `packages`.
!!
############################
# Package would be ignored #
############################
Python recognizes 'gensim.test.test_data' as an importable package,
but it is not listed in the `packages` configuration of setuptools.
'gensim.test.test_data' has been automatically added to the distribution only
because it may contain data files, but this behavior is likely to change
in future versions of setuptools (and therefore is considered deprecated).
Please make sure that 'gensim.test.test_data' is included as a package by using
the `packages` configuration field or the proper discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" and "data files" on setuptools
documentation page.
!!
check.warn(importable)
/Users/dsc/.pyenv/versions/3.11.0/envs/gensim-test/lib/python3.11/site-packages/setuptools/command/build_py.py:202: SetuptoolsDeprecationWarning: Installing 'gensim.test.test_data.DTM' as data is deprecated, please list it in `packages`.
!!
############################
# Package would be ignored #
############################
Python recognizes 'gensim.test.test_data.DTM' as an importable package,
but it is not listed in the `packages` configuration of setuptools.
'gensim.test.test_data.DTM' has been automatically added to the distribution only
because it may contain data files, but this behavior is likely to change
in future versions of setuptools (and therefore is considered deprecated).
Please make sure that 'gensim.test.test_data.DTM' is included as a package by using
the `packages` configuration field or the proper discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" and "data files" on setuptools
documentation page.
!!
check.warn(importable)
/Users/dsc/.pyenv/versions/3.11.0/envs/gensim-test/lib/python3.11/site-packages/setuptools/command/build_py.py:202: SetuptoolsDeprecationWarning: Installing 'gensim.test.test_data.PathLineSentences' as data is deprecated, please list it in `packages`.
!!
############################
# Package would be ignored #
############################
Python recognizes 'gensim.test.test_data.PathLineSentences' as an importable package,
but it is not listed in the `packages` configuration of setuptools.
'gensim.test.test_data.PathLineSentences' has been automatically added to the distribution only
because it may contain data files, but this behavior is likely to change
in future versions of setuptools (and therefore is considered deprecated).
Please make sure that 'gensim.test.test_data.PathLineSentences' is included as a package by using
the `packages` configuration field or the proper discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" and "data files" on setuptools
documentation page.
!!
check.warn(importable)
/Users/dsc/.pyenv/versions/3.11.0/envs/gensim-test/lib/python3.11/site-packages/setuptools/command/build_py.py:202: SetuptoolsDeprecationWarning: Installing 'gensim.test.test_data.old_d2v_models' as data is deprecated, please list it in `packages`.
!!
############################
# Package would be ignored #
############################
Python recognizes 'gensim.test.test_data.old_d2v_models' as an importable package,
but it is not listed in the `packages` configuration of setuptools.
'gensim.test.test_data.old_d2v_models' has been automatically added to the distribution only
because it may contain data files, but this behavior is likely to change
in future versions of setuptools (and therefore is considered deprecated).
Please make sure that 'gensim.test.test_data.old_d2v_models' is included as a package by using
the `packages` configuration field or the proper discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" and "data files" on setuptools
documentation page.
!!
check.warn(importable)
/Users/dsc/.pyenv/versions/3.11.0/envs/gensim-test/lib/python3.11/site-packages/setuptools/command/build_py.py:202: SetuptoolsDeprecationWarning: Installing 'gensim.test.test_data.old_w2v_models' as data is deprecated, please list it in `packages`.
!!
############################
# Package would be ignored #
############################
Python recognizes 'gensim.test.test_data.old_w2v_models' as an importable package,
but it is not listed in the `packages` configuration of setuptools.
'gensim.test.test_data.old_w2v_models' has been automatically added to the distribution only
because it may contain data files, but this behavior is likely to change
in future versions of setuptools (and therefore is considered deprecated).
Please make sure that 'gensim.test.test_data.old_w2v_models' is included as a package by using
the `packages` configuration field or the proper discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" and "data files" on setuptools
documentation page.
!!
check.warn(importable)
copying gensim/_matutils.c -> build/lib.macosx-12.6-x86_64-cpython-311/gensim
copying gensim/_matutils.pyx -> build/lib.macosx-12.6-x86_64-cpython-311/gensim
copying gensim/similarities/fastss.c -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/similarities
creating build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/EN.1-10.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/IT.1-10.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/OPUS_en_it_europarl_train_one2ten.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/alldata-id-10.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/atmodel_3_0_1_model -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/atmodel_3_0_1_model.expElogbeta.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/atmodel_3_0_1_model.id2word -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/atmodel_3_0_1_model.state -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/bgwiki-latest-pages-articles-shortened.xml.bz2 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/compatible-hash-true.model -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/cp852_fasttext.bin -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/crime-and-punishment.bin -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/crime-and-punishment.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/crime-and-punishment.vec -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/d2v-lee-v0.13.0 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/doc2vec_old -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/doc2vec_old_sep -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/doc2vec_old_sep.syn0_lockf.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/doc2vec_old_sep.syn1neg.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/dtm_test.dict -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/dtm_test.mm -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/ensemblelda -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/enwiki-latest-pages-articles1.xml-p000000010p000030302-shortened.bz2 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/enwiki-table-markup.xml.bz2 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/euclidean_vectors.bin -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/fasttext_old -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/fasttext_old_sep -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/fasttext_old_sep.syn0_lockf.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/fasttext_old_sep.syn1neg.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/fb-ngrams.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/ft_kv_3.6.0.model.gz -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/ft_model_2.3.0 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/head500.noblanks.cor -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/head500.noblanks.cor.bz2 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/head500.noblanks.cor_tfidf.model -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/head500.noblanks.cor_wordids.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/high_precision.kv.bin -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/high_precision.kv.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/large_tag_doc_10_iter50 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/lda_3_0_1_model -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/lda_3_0_1_model.expElogbeta.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/lda_3_0_1_model.id2word -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/lda_3_0_1_model.state -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/ldamodel_python_2_7 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/ldamodel_python_2_7.expElogbeta.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/ldamodel_python_2_7.id2word -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/ldamodel_python_2_7.state -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/ldamodel_python_3_5 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/ldamodel_python_3_5.expElogbeta.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/ldamodel_python_3_5.id2word -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/ldamodel_python_3_5.state -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/ldavowpalwabbit.dict.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/ldavowpalwabbit.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/lee.cor -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/lee_background.cor -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/lee_fasttext -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/lee_fasttext.bin -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/lee_fasttext.vec -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/lee_fasttext_new.bin -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/miIslita.cor -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/mini_newsgroup -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/nmf_model -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/non_ascii_fasttext.bin -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/old_keyedvectors_320.dat -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/pang_lee_polarity.cor -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/pang_lee_polarity_fasttext.bin -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/pang_lee_polarity_fasttext.vec -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/para2para_text1.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/para2para_text2.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/phraser-3.6.0.model -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/phraser-no-common-terms.pkl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/phraser-no-scoring.pkl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/phraser-scoring-str.pkl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/phrases-3.6.0.model -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/phrases-no-common-terms.pkl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/phrases-no-scoring.pkl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/phrases-scoring-str.pkl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/poincare_cp852.tsv -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/poincare_hypernyms.tsv -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/poincare_hypernyms_large.tsv -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/poincare_test_3.4.0 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/poincare_utf8.tsv -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/poincare_vectors.bin -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/pre_0_13_2_model -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/pre_0_13_2_model.state -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/pretrained.vec -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/questions-words.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/reproduce.dat -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/reproduce.dat.gz -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/similarities0-1.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/simlex999.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/small_tag_doc_5_iter50 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/test_corpus_ok.mm -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/test_corpus_small.mm -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/test_glove.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/test_mmcorpus_corrupt.mm -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/test_mmcorpus_no_index.mm -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/test_mmcorpus_no_index.mm.bz2 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/test_mmcorpus_no_index.mm.gz -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/test_mmcorpus_overflow.mm -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/test_mmcorpus_with_index.mm -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/test_mmcorpus_with_index.mm.index -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.blei -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.blei.index -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.blei.vocab -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.low -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.low.index -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.mallet -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.mallet.index -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.mm -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.mm.index -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.svmlight -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.svmlight.index -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.uci -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.uci.index -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.uci.vocab -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/testcorpus.xml.bz2 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/tfidf_model.tst -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/tfidf_model.tst.bz2 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/tfidf_model_3_2.tst -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/toy-data.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/toy-model-pretrained.bin -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/toy-model.bin -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/toy-model.vec -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/varembed_lee_subcorpus.cor -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/varembed_morfessor.bin -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/varembed_vectors.pkl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/w2v-lee-v0.12.0 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/w2v_keyedvectors_load_test.modeldata -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/w2v_keyedvectors_load_test.vocab -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_3.3 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_old -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_old_sep -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_old_sep.syn0_lockf.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_old_sep.syn1neg.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_c -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_py2 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_py3 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_py3_4 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py2 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py2.neg_labels.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py2.syn0.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py2.syn0_lockf.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py2.syn1neg.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py3 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py3.neg_labels.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py3.syn0.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py3.syn0_lockf.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py3.syn1neg.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py3_4 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py3_4.neg_labels.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py3_4.syn0.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py3_4.syn0_lockf.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/word2vec_pre_kv_sep_py3_4.syn1neg.npy -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
copying gensim/test/test_data/wordsim353.tsv -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data
creating build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/DTM
copying gensim/test/test_data/DTM/ldaseq_3_0_1_model -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/DTM
copying gensim/test/test_data/DTM/sstats_test.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/DTM
creating build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/PathLineSentences
copying gensim/test/test_data/PathLineSentences/1.txt -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/PathLineSentences
copying gensim/test/test_data/PathLineSentences/2.txt.bz2 -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/PathLineSentences
creating build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_0.12.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_0.12.1.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_0.12.2.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_0.12.3.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_0.12.4.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_0.13.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_0.13.1.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_0.13.2.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_0.13.3.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_0.13.4.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_1.0.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_1.0.1.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_2.0.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_2.1.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_2.2.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_2.3.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_3.0.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_3.1.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_3.2.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_3.3.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
copying gensim/test/test_data/old_d2v_models/d2v_3.4.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_d2v_models
creating build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_0.12.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_0.12.1.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_0.12.2.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_0.12.3.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_0.12.4.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_0.13.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_0.13.1.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_0.13.2.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_0.13.3.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_0.13.4.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_1.0.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_1.0.1.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_2.0.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_2.1.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_2.2.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_2.3.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_3.0.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_3.1.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_3.2.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_3.3.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/test/test_data/old_w2v_models/w2v_3.4.0.mdl -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/test/test_data/old_w2v_models
copying gensim/models/doc2vec_corpusfile.cpp -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/doc2vec_corpusfile.pyx -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/doc2vec_inner.cpp -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/doc2vec_inner.pxd -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/doc2vec_inner.pyx -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/fast_line_sentence.h -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/fasttext_corpusfile.cpp -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/fasttext_corpusfile.pyx -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/fasttext_inner.c -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/fasttext_inner.pxd -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/fasttext_inner.pyx -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/nmf_pgd.c -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/nmf_pgd.pyx -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/stdint_wrapper.h -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/voidptr.h -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/word2vec_corpusfile.cpp -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/word2vec_corpusfile.pxd -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/word2vec_corpusfile.pyx -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/word2vec_inner.c -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/word2vec_inner.pxd -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/models/word2vec_inner.pyx -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/models
copying gensim/corpora/_mmreader.c -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
copying gensim/corpora/_mmreader.pyx -> build/lib.macosx-12.6-x86_64-cpython-311/gensim/corpora
running build_ext
building 'gensim.models.word2vec_inner' extension
creating build/temp.macosx-12.6-x86_64-cpython-311
creating build/temp.macosx-12.6-x86_64-cpython-311/gensim
creating build/temp.macosx-12.6-x86_64-cpython-311/gensim/models
clang -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Users/dsc/.pyenv/versions/3.11.0/envs/gensim-test/include -I/Users/dsc/.pyenv/versions/3.11.0/include/python3.11 -I/Users/dsc/.pyenv/versions/3.11.0/envs/gensim-test/lib/python3.11/site-packages/numpy/core/include -c gensim/models/word2vec_inner.c -o build/temp.macosx-12.6-x86_64-cpython-311/gensim/models/word2vec_inner.o
In file included from gensim/models/word2vec_inner.c:706:
In file included from /Users/dsc/.pyenv/versions/3.11.0/envs/gensim-test/lib/python3.11/site-packages/numpy/core/include/numpy/arrayobject.h:5:
In file included from /Users/dsc/.pyenv/versions/3.11.0/envs/gensim-test/lib/python3.11/site-packages/numpy/core/include/numpy/ndarrayobject.h:12:
In file included from /Users/dsc/.pyenv/versions/3.11.0/envs/gensim-test/lib/python3.11/site-packages/numpy/core/include/numpy/ndarraytypes.h:1948:
/Users/dsc/.pyenv/versions/3.11.0/envs/gensim-test/lib/python3.11/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings]
#warning "Using deprecated NumPy API, disable it with " \
^
gensim/models/word2vec_inner.c:12424:5: error: incomplete definition of type 'struct _frame'
__Pyx_PyFrame_SetLineNumber(py_frame, py_line);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
gensim/models/word2vec_inner.c:457:62: note: expanded from macro '__Pyx_PyFrame_SetLineNumber'
#define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
~~~~~~~^
/Users/dsc/.pyenv/versions/3.11.0/include/python3.11/pytypedefs.h:22:16: note: forward declaration of 'struct _frame'
typedef struct _frame PyFrameObject;
^
1 warning and 1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> gensim
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
[notice] A new release of pip available: 22.3 -> 22.3.1
[notice] To update, run: python3.11 -m pip install --upgrade pip
```
#### Versions
Please provide the output of:
```python
>>> import platform; print(platform.platform())
macOS-13.0-x86_64-i386-64bit
>>> import sys; print("Python", sys.version)
Python 3.11.0 (main, Oct 28 2022, 09:50:19) [Clang 14.0.0 (clang-1400.0.29.201)]
>>> import struct; print("Bits", 8 * struct.calcsize("P"))
Bits 64
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.23.4
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.9.3
```
(I skipped the remaining commands as gensim was not installed)
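A hedged note on the failure itself: the clang error points into Cython-generated C (`__Pyx_PyFrame_SetLineNumber` writing to `struct _frame`), and CPython 3.11 made frame internals opaque, so the `.c` files shipped in the sdist appear to predate 3.11. Once a release regenerated with a 3.11-compatible Cython is out, upgrading should suffice:
```
pip install --upgrade gensim
```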
|
closed
|
2022-11-07T16:02:29Z
|
2022-12-06T12:24:44Z
|
https://github.com/piskvorky/gensim/issues/3401
|
[] |
davechallis
| 2
|
graphql-python/graphene-django
|
django
| 961
|
Implementing custom Visitors
|
I'm trying to make use of the visitor pattern for a project. Just by declaring a custom visitor I can see it being registered, but I have no idea how to make it visible to the graphene flow so that it gets called and runs my enter/leave methods. Any help is appreciated.
Thanks
Edit: this added the wrong label, as I might have started this in the wrong place. I was trying to change the label to "HELP_NEEDED" but I can't seem to be able to do it. Sorry
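For context, a minimal standalone visitor with graphql-core 3's `visit` API (which graphene 3 sits on) looks like the sketch below; how to hook it into graphene-django's execution, e.g. as a validation rule, is exactly the open question and depends on the version:
```python
from graphql import parse, visit, Visitor

class FieldLogger(Visitor):
    # invoked for every Field node while walking the parsed document AST
    def enter_field(self, node, key, parent, path, ancestors):
        print("entering field:", node.name.value)

visit(parse("{ hello world }"), FieldLogger())
```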
|
open
|
2020-05-13T16:51:06Z
|
2020-09-18T21:14:21Z
|
https://github.com/graphql-python/graphene-django/issues/961
|
[
"question",
"wontfix"
] |
rdmrocha
| 1
|
blacklanternsecurity/bbot
|
automation
| 1,680
|
Masscan command not found
|
**Describe the bug**
What happened?
Masscan did not run as expected
**Expected behavior**
What was supposed to happen?
Masscan was supposed to run and detect open ports.
**BBOT Command**
bbot -m portscan -t evilcorp.com
**OS, BBOT Installation Method + Version**
OS: Ubuntu, Installation Method: pip, BBOT Version: v2.0.0
**Logs**
[VERB] portscan: sudo: masscan: command not found
|
closed
|
2024-08-19T15:42:16Z
|
2024-08-26T19:44:15Z
|
https://github.com/blacklanternsecurity/bbot/issues/1680
|
[
"bug"
] |
TheFunky1Markimark
| 5
|
horovod/horovod
|
tensorflow
| 2,946
|
the reference Dockerfile is not found
|
To streamline the installation process on GPU machines, we have published the reference Dockerfile so you can get started with Horovod in minutes. The container includes Examples in the /examples directory.
Pre-built Docker containers with Horovod are available on DockerHub.
Building
Before building, you can modify Dockerfile.gpu to your liking, e.g. select a different CUDA, TensorFlow or Python version.
```
$ mkdir horovod-docker-gpu
$ wget -O horovod-docker-gpu/Dockerfile https://raw.githubusercontent.com/horovod/horovod/master/Dockerfile.gpu
$ docker build -t horovod:latest horovod-docker-gpu
```
For users without GPUs available in their environments, we've also published a CPU Dockerfile you can build and run similarly.
|
closed
|
2021-05-29T07:15:45Z
|
2021-06-03T17:35:26Z
|
https://github.com/horovod/horovod/issues/2946
|
[
"enhancement"
] |
xxyp
| 1
|
tflearn/tflearn
|
data-science
| 951
|
Save model and load it on another model with same structure
|
I have two DNN-models with the same structure and they are trained independently, but I want to transfer the weights of the first model to the second model at runtime.
```
import tflearn
from tflearn.layers.core import input_data, fully_connected
from tflearn.layers.estimator import regression
#import copy
def create_model(shape):
    fc = fully_connected(shape, n_units=12, activation="relu")
    fc2 = fully_connected(fc, n_units=2, activation="softmax")
    regressor = regression(fc2)
    return tflearn.DNN(regressor)
shape = input_data([None, 2])
model1 = create_model(shape) #first model
model2 = create_model(shape) #second model
"""
...
train model1
...
"""
path = "models/test_model.tfl"
model1.save(path)
model2.load(path, weights_only=True) #transfer the weights of the first model
#model2 = copy.deepcopy(model1) throws Error because of thread.Lock Object
```
If I run this I get the following Error:
```
2017-11-05 17:42:47.978858: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\framework\op_kernel.cc:1192] Not found: Key FullyConnected_3/b not found in checkpoint
2017-11-05 17:42:47.981882: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\framework\op_kernel.cc:1192] Not found: Key FullyConnected_3/W not found in checkpoint
2017-11-05 17:42:47.982010: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\framework\op_kernel.cc:1192] Not found: Key FullyConnected_2/b not found in checkpoint
2017-11-05 17:42:47.982181: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\framework\op_kernel.cc:1192] Not found: Key FullyConnected_2/W not found in checkpoint
Traceback (most recent call last):
File "D:\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1323, in _do_call
return fn(*args)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1302, in _run_fn
status, run_metadata)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: Key FullyConnected_3/b not found in checkpoint
[[Node: save_6/RestoreV2_7 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save_6/Const_0_0, save_6/RestoreV2_7/tensor_names, save_6/RestoreV2_7/shape_and_slices)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/Jonas/PycharmProjects/ReinforcementLearning/TflearnSaveLoadTest.py", line 20, in <module>
model2.load(path, weights_only=True)
File "D:\Python\Python36\lib\site-packages\tflearn\models\dnn.py", line 308, in load
self.trainer.restore(model_file, weights_only, **optargs)
File "D:\Python\Python36\lib\site-packages\tflearn\helpers\trainer.py", line 492, in restore
self.restorer_trainvars.restore(self.session, model_file)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 1666, in restore
{self.saver_def.filename_tensor_name: save_path})
File "D:\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 889, in run
run_metadata_ptr)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1317, in _do_run
options, run_metadata)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key FullyConnected_3/b not found in checkpoint
[[Node: save_6/RestoreV2_7 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save_6/Const_0_0, save_6/RestoreV2_7/tensor_names, save_6/RestoreV2_7/shape_and_slices)]]
Caused by op 'save_6/RestoreV2_7', defined at:
File "C:/Users/Jonas/PycharmProjects/ReinforcementLearning/TflearnSaveLoadTest.py", line 16, in <module>
model2 = create_model(shape)
File "C:/Users/Jonas/PycharmProjects/ReinforcementLearning/TflearnSaveLoadTest.py", line 11, in create_model
return tflearn.DNN(regressor)
File "D:\Python\Python36\lib\site-packages\tflearn\models\dnn.py", line 65, in __init__
best_val_accuracy=best_val_accuracy)
File "D:\Python\Python36\lib\site-packages\tflearn\helpers\trainer.py", line 155, in __init__
allow_empty=True)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 1218, in __init__
self.build()
File "D:\Python\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 1227, in build
self._build(self._filename, build_save=True, build_restore=True)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 1263, in _build
build_save=build_save, build_restore=build_restore)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 751, in _build_internal
restore_sequentially, reshape)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 427, in _AddRestoreOps
tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 267, in restore_op
[spec.tensor.dtype])[0])
File "D:\Python\Python36\lib\site-packages\tensorflow\python\ops\gen_io_ops.py", line 1020, in restore_v2
shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 2956, in create_op
op_def=op_def)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1470, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
NotFoundError (see above for traceback): Key FullyConnected_3/b not found in checkpoint
[[Node: save_6/RestoreV2_7 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save_6/Const_0_0, save_6/RestoreV2_7/tensor_names, save_6/RestoreV2_7/shape_and_slices)]]
```
I've also tried to use the deepcopy function of the copy module to duplicate the model, but that doesn't seem to work:
`TypeError: can't pickle _thread.lock objects`
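A workaround I'm considering (an untested sketch): copy the weights tensor-by-tensor with `DNN.get_weights`/`set_weights` instead of going through a checkpoint, so variable names never have to match. This assumes `create_model` is refactored to also return its layer objects; `model1_layers`/`model2_layers` below are hypothetical:
```python
# fully_connected layers expose their parameters as .W and .b,
# so the values can be read from model1 and assigned into model2 directly.
for src_layer, dst_layer in zip(model1_layers, model2_layers):
    model2.set_weights(dst_layer.W, model1.get_weights(src_layer.W))
    model2.set_weights(dst_layer.b, model1.get_weights(src_layer.b))
```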
|
open
|
2017-11-05T17:13:42Z
|
2017-11-07T06:28:13Z
|
https://github.com/tflearn/tflearn/issues/951
|
[] |
TheRealfanibu
| 1
|
marimo-team/marimo
|
data-science
| 3,640
|
SQL Engines break base duckdb from df
|
### Describe the bug
Hmm, ran into a new issue with engines when trying to update the md docs.
````md
```sql {.marimo}
SELECT GREATEST(a, b), SQRT(c) from {random_numbers}
```
```sql {.marimo query="random_numbers"}
SELECT i AS id,
random() AS a,
random() AS b,
random() AS c
FROM range(1,101) i;
```
````
Fails with
```txt
Traceback (most recent call last):
File "/home/dylan/src/marimo/marimo/_runtime/executor.py", line 141, in execute_cell
exec(cell.body, glbls)
Cell marimo:///home/dylan/src/marimo/marimo/_tutorials/markdown_format.md#cell=cell-21
, line 1, in <module>
_df = mo.sql(
^^^^^^^
File "/home/dylan/src/marimo/marimo/_sql/sql.py", line 73, in sql
df = _execute_query(query, sql_engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dylan/src/marimo/marimo/_sql/sql.py", line 153, in _execute_query
return eval(
^^^^^
File "<string>", line 1, in <module>
File "/home/dylan/src/marimo/marimo/_sql/engines.py", line 43, in execute
relation = duckdb.sql(query)
^^^^^^^^^^^^^^^^^
duckdb.duckdb.ParserException: Parser Error: syntax error at or near ":"
```
If we actually look, `query` has already been evaluated (I just threw a print on line 42):
```
SELECT GREATEST(a, b), SQRT(c) from shape: (100, 4)
┌───────────┬──────────┬──────────┬──────────┐
│ id ┆ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- ┆ --- │
│ struct[1] ┆ f64 ┆ f64 ┆ f64 │
╞═══════════╪══════════╪══════════╪══════════╡
│ {1} ┆ 0.969375 ┆ 0.31721 ┆ 0.189477 │
│ {2} ┆ 0.264934 ┆ 0.960304 ┆ 0.000486 │
│ {3} ┆ 0.305962 ┆ 0.908223 ┆ 0.386767 │
│ {4} ┆ 0.994923 ┆ 0.291543 ┆ 0.060228 │
│ {5} ┆ 0.113762 ┆ 0.009761 ┆ 0.962222 │
│ … ┆ … ┆ … ┆ … │
│ {96} ┆ 0.53154 ┆ 0.88564 ┆ 0.377939 │
│ {97} ┆ 0.516885 ┆ 0.567692 ┆ 0.845767 │
│ {98} ┆ 0.284095 ┆ 0.379701 ┆ 0.769615 │
│ {99} ┆ 0.183121 ┆ 0.361992 ┆ 0.125663 │
│ {100} ┆ 0.877826 ┆ 0.284728 ┆ 0.061547 │
└───────────┴──────────┴──────────┴──────────┘
```
I doubt this is particular to md; it's just the only place I've tried it so far.
### Environment
Head
### Code to reproduce
````md
```sql {.marimo}
SELECT GREATEST(a, b), SQRT(c) from {random_numbers}
```
```sql {.marimo query="random_numbers"}
SELECT i AS id,
random() AS a,
random() AS b,
random() AS c
FROM range(1,101) i;
```
````
@Light2Dark @mscolnick
But super cool :eyes: excited to try it with sqlalchemy
|
closed
|
2025-01-31T19:04:34Z
|
2025-01-31T22:36:47Z
|
https://github.com/marimo-team/marimo/issues/3640
|
[
"bug"
] |
dmadisetti
| 3
|
huggingface/pytorch-image-models
|
pytorch
| 1,723
|
[FEATURE] Python 3.11 support
|
**Is your feature request related to a problem? Please describe.**
PyTorch 2.0 was released yesterday, adding Python 3.11 support. However, the latest release of timm does not yet support Python 3.11 (#1530).
**Describe the solution you'd like**
I hate to be that guy, but could we get a new release soon that includes #1649 so we can use Python 3.11? Apologies if a new release is already being planned, I don't want to rush you.
**Describe alternatives you've considered**
Alternative is Python 3.10.
**Additional context**
We're trying to update TorchGeo to Python 3.11 (https://github.com/microsoft/torchgeo/pull/1180), but rely on timm for its amazing models, so we're stuck until there's a new release.
|
closed
|
2023-03-16T21:34:05Z
|
2023-03-24T01:07:18Z
|
https://github.com/huggingface/pytorch-image-models/issues/1723
|
[
"enhancement"
] |
adamjstewart
| 6
|
feder-cr/Jobs_Applier_AI_Agent_AIHawk
|
automation
| 705
|
[FEATURE]: Job Suitability Const
|
### Feature summary
Job Suitability Const
### Feature description
Instead of hard-coding the job suitability requirement to 7, I'll change it to a const in the app config, which will allow users to set it however they like (see the sketch below).
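A rough sketch of what this could look like (all names below are illustrative, not the actual config keys):
```python
# app_config.py (hypothetical)
JOB_SUITABILITY_SCORE = 7  # minimum suitability score required to apply

# at the call site (evaluate_job / apply_to_job are placeholders)
if evaluate_job(job) >= config.JOB_SUITABILITY_SCORE:
    apply_to_job(job)
```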
### Motivation
tunability
### Alternatives considered
I think this already is the alternative to https://github.com/feder-cr/Auto_Jobs_Applier_AIHawk/issues/669
This could be a command line arg instead but I think with too many command line args we are just going to clutter the whole thing.
### Additional context
_No response_
|
closed
|
2024-11-01T22:38:23Z
|
2024-11-07T00:42:42Z
|
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/705
|
[
"enhancement"
] |
FrancescoVassalli
| 2
|
aio-libs/aiomysql
|
sqlalchemy
| 64
|
Ping should not ignore CancelledError
|
This part:
``` Python
try:
    yield from self._execute_command(COMMAND.COM_PING, "")
    yield from self._read_ok_packet()
except Exception:
```
in the `ping` function will result in strange behavior when the coroutine is cancelled: `CancelledError` should be propagated outside of this try/except without any reconnect attempt.
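A minimal sketch of the suggested fix, mirroring the fragment above (illustrative, not the actual aiomysql source):
```python
import asyncio

try:
    yield from self._execute_command(COMMAND.COM_PING, "")
    yield from self._read_ok_packet()
except asyncio.CancelledError:
    raise  # let cancellation propagate; do not reconnect
except Exception:
    pass  # existing reconnect/error handling stays here
```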
|
open
|
2016-02-16T15:17:17Z
|
2022-01-13T01:01:15Z
|
https://github.com/aio-libs/aiomysql/issues/64
|
[
"bug"
] |
tvoinarovskyi
| 0
|
Lightning-AI/pytorch-lightning
|
pytorch
| 20,053
|
Can I nest LightningModules inside child modules?
|
### Bug description
Suppose I have a `LightningModule` (parent) that contains a `nn.Module` (child), which in turn contains another `LightningModule` (grandchild). Calling `.log` inside the `LightningModule` (the grandchild) results in the following warning:
> You are trying to `self.log()` but the `self.trainer` reference is not registered on the model yet. This is most likely because the model hasn't been passed to the `Trainer`
The trainer is only set on the direct `children` of the parent `LightningModule`, not all the descendants, since the `trainer.setter` uses `self.children()` rather than `self.modules()`: https://github.com/Lightning-AI/pytorch-lightning/blob/3730e980e388c23f7e9d1f535793e8d614633362/src/lightning/pytorch/core/module.py#L221-L226
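For context, `children()` yields only direct submodules, while `modules()` walks the full tree; a quick self-contained illustration:
```python
import torch.nn as nn

parent = nn.Sequential(nn.Sequential(nn.Linear(1, 1)))
print([type(m).__name__ for m in parent.children()])
# ['Sequential']  -- the nested Linear is never visited
print([type(m).__name__ for m in parent.modules()])
# ['Sequential', 'Sequential', 'Linear']  -- modules() includes every descendant
```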
### What version are you seeing the problem on?
master
### How to reproduce the bug
```python
# %%
import lightning as L
import torch
from torch import nn
class GrandChild(L.LightningModule):
    def dummy_log(self):
        self.log("foo", 1)


class Child(nn.Module):
    def __init__(self):
        super().__init__()
        self.module = nn.Linear(1, 1)
        self.grandchild = GrandChild()

    def forward(self):
        self.grandchild.dummy_log()
        return 1


class Parent(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.child = Child()

    def training_step(self, batch, batch_idx):
        return self.child()

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        return optimizer

    def train_dataloader(self):
        return torch.utils.data.DataLoader(
            torch.utils.data.TensorDataset(torch.randn(10, 1)), batch_size=1
        )


# model
parent = Parent()

# train model
trainer = L.Trainer()
trainer.fit(model=parent)

optimizer = parent.configure_optimizers()
loss = parent.training_step(batch=None, batch_idx=0)
```
### Error messages and logs
```
You are trying to `self.log()` but the `self.trainer` reference is not registered on the model yet. This is most likely because the model hasn't been passed to the `Trainer`
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA A100-SXM4-80GB
- available: True
- version: 12.1
* Lightning:
- lightning: 2.2.1
- lightning-utilities: 0.11.2
- pytorch-lightning: 2.2.1
- torch: 2.3.1
- torchmetrics: 1.3.2
- torchvision: 0.18.1
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.11.9
- release: 5.15.0-113-generic
- version: #123-Ubuntu SMP Mon Jun 10 08:16:17 UTC 2024
</details>
### More info
_No response_
cc @carmocca @justusschock @awaelchli @borda
|
open
|
2024-07-05T15:53:34Z
|
2024-07-09T02:27:20Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20053
|
[
"question",
"lightningmodule",
"ver: 2.2.x"
] |
jackdent
| 4
|
lensacom/sparkit-learn
|
scikit-learn
| 77
|
Poor performances
|
Hi to all!
I just started digging into Spark machine learning, coming from scikit-learn. I tried to fit a linear SVC with both scikit-learn and sparkit-learn, but splearn remains slower than scikit. How is this possible? (I am attaching my code and results.)
```python
import time as t
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from splearn.svm import SparkLinearSVC
from splearn.rdd import ArrayRDD, DictRDD
import numpy as np

X, y = make_classification(n_samples=20000, n_classes=2)
print 'Dataset created. # of samples: ', X.shape[0]

skstart = t.time()
dt = DecisionTreeClassifier()
local_clf = LinearSVC()
local_clf.fit(X, y)
sktime = t.time() - skstart
print 'Scikit-learn fitting time: ', sktime

spstart = t.time()
X_rdd = sc.parallelize(X, 20)
y_rdd = sc.parallelize(y, 20)
Z = DictRDD((X_rdd, y_rdd),
            columns=('X', 'y'),
            dtype=[np.ndarray, np.ndarray])
distr_clf = SparkLinearSVC()
distr_clf.fit(Z, np.unique(y))
sptime = t.time() - spstart
print 'Spark time: ', sptime
```
```
============== RESULTS =================
Dataset created. # of samples:  20000
Scikit-learn fitting time:  3.03552293777
Spark time:  3.919039011
```
Or for fewer samples:
```
Dataset created. # of samples:  2000
Scikit-learn fitting time:  0.244801998138
Spark time:  3.15833210945
```
|
closed
|
2017-06-16T14:19:51Z
|
2017-10-09T12:52:37Z
|
https://github.com/lensacom/sparkit-learn/issues/77
|
[] |
orfi2017
| 3
|
dnouri/nolearn
|
scikit-learn
| 154
|
Prevent Layers from being named `max` or `batch`
|
Prevent Layers from being named `max` or `batch` as these will conflict with NN params.
|
open
|
2015-09-21T23:23:01Z
|
2015-09-21T23:23:01Z
|
https://github.com/dnouri/nolearn/issues/154
|
[] |
cancan101
| 0
|
skypilot-org/skypilot
|
data-science
| 4,421
|
[Core] Failure in pytorch distributed training code failed to get a job into FAILED state
|
<!-- Describe the bug report / feature request here -->
A user reported that although their distributed training code failed due to an NCCL error on a worker node, SkyPilot did not set the job to the FAILED state.
<!-- If relevant, fill in versioning info to help us troubleshoot -->
_Version & Commit info:_
* `sky -v`: PLEASE_FILL_IN
* `sky -c`: PLEASE_FILL_IN
|
open
|
2024-11-27T02:21:16Z
|
2024-12-19T23:08:43Z
|
https://github.com/skypilot-org/skypilot/issues/4421
|
[
"triage"
] |
Michaelvll
| 1
|
littlecodersh/ItChat
|
api
| 346
|
Does itchat have an API for recalling sent messages?
|
I've successfully sent messages with itchat, but I sometimes need to recall a message that has already been sent. Is there a recall API?
Update:
```python
@itchat.msg_register(NOTE)
def msg_back(msg):
    e = xml.etree.ElementTree.fromstring(msg['Content'].encode('utf8'))
    for oldmsgid in e.findall("./revokemsg/oldmsgid"):
        print oldmsgid.text
```
This already gives me the ID of the message I want to recall, but it's unclear which method to call to actually recall it.
|
closed
|
2017-05-04T13:05:04Z
|
2017-06-09T03:02:45Z
|
https://github.com/littlecodersh/ItChat/issues/346
|
[
"question"
] |
zhongwf
| 2
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 332
|
Anyone willing to pick this up?
|
It's always sad when a really cool open source project gets abandoned to go commercial. Is there anyone else who is willing to pick this up and keep it going?
|
closed
|
2020-04-29T15:33:19Z
|
2020-07-04T19:41:48Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/332
|
[] |
nmcbride
| 27
|
pandas-dev/pandas
|
data-science
| 61,010
|
PERF: bottleneck in `where()`
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this issue exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this issue exists on the main branch of pandas.
### Reproducible Example
```python
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randn(1, 1_000_000))
mask = df > 0.5
```
```python
%%timeit
_ = df.where(mask)
# 693 ms ± 3.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
perf result taken from pyinstrument:

This issue seems to be related to this:
https://github.com/pandas-dev/pandas/blob/d1ec1a4c9b58a9ebff482af2b918094e39d87893/pandas/core/generic.py#L9735-L9737
When the dataframe is wide (many columns), the per-column overhead of `is_bool_dtype` accumulates. Would it be better to use `cond.dtypes.unique()` instead?
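A sketch of the suggested change (illustrative; the surrounding pandas code is paraphrased here, not quoted):
```python
from pandas.api.types import is_bool_dtype

# current pattern: one is_bool_dtype call per column, costly for wide frames
# all(is_bool_dtype(dtype) for dtype in cond.dtypes)

# suggested: deduplicate dtypes first, so each distinct dtype is checked once
all(is_bool_dtype(dtype) for dtype in cond.dtypes.unique())
```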
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.14
python-bits : 64
OS : Linux
OS-release : 5.15.0-122-generic
Version : #132-Ubuntu SMP Thu Aug 29 13:45:52 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0
pip : 24.0
Cython : 3.0.7
sphinx : 7.3.7
IPython : 8.25.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.6.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : 3.9.2
numba : 0.60.0
numexpr : 2.10.0
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : 2.9.9
pymysql : 1.4.6
pyarrow : 16.1.0
pyreadstat : None
pytest : 8.2.2
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.0
sqlalchemy : 2.0.31
tables : 3.9.2
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.22.0
tzdata : 2024.1
qtpy : 2.4.1
pyqt5 : None
</details>
### Prior Performance
_No response_
|
closed
|
2025-02-26T09:47:27Z
|
2025-02-28T18:23:02Z
|
https://github.com/pandas-dev/pandas/issues/61010
|
[
"Performance",
"Needs Triage"
] |
auderson
| 5
|
google-deepmind/graph_nets
|
tensorflow
| 128
|
Is there relationship between model and form of base graph?
|
Hi, I'm interested in your novel work.
I tried changing some conditions in 'demo/physics.ipynb'.
I wondered whether it would still predict velocity well when I change the initial configuration of the spring system.
For example, I changed the function 'base_graph' so that only the first mass is fixed (i.e., the last mass is no longer fixed),
and I added a damping force alongside Hooke's law in the SpringSimulator.
I thought this was too small a change to affect anything, but the inference result did not follow the target and spread out.
Is there a relationship between the model structure and the form of the initial condition (such as the base graph)?
And how can I change it to fit the target?
|
open
|
2020-10-04T15:02:00Z
|
2020-10-06T16:59:01Z
|
https://github.com/google-deepmind/graph_nets/issues/128
|
[] |
cheezzjazz
| 2
|
ultralytics/yolov5
|
deep-learning
| 13,508
|
A problem about calculating confusion matrices
|
### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
_No response_
### Bug
In `yolov5/utils/metrics.py`:
```python
class ConfusionMatrix:
    """Generates and visualizes a confusion matrix for evaluating object detection classification performance."""

    def __init__(self, nc, conf=0.25, iou_thres=0.45):
        """Initializes ConfusionMatrix with given number of classes, confidence, and IoU threshold."""
        self.matrix = np.zeros((nc + 1, nc + 1))
        self.nc = nc  # number of classes
        self.conf = conf
        self.iou_thres = iou_thres

    def process_batch(self, detections, labels):
        """
        Return intersection-over-union (Jaccard index) of boxes.

        Both sets of boxes are expected to be in (x1, y1, x2, y2) format.

        Arguments:
            detections (Array[N, 6]), x1, y1, x2, y2, conf, class
            labels (Array[M, 5]), class, x1, y1, x2, y2

        Returns:
            None, updates confusion matrix accordingly
        """
        if detections is None:
            gt_classes = labels.int()
            for gc in gt_classes:
                self.matrix[self.nc, gc] += 1  # background FN
            return

        detections = detections[detections[:, 4] > self.conf]
        gt_classes = labels[:, 0].int()
        detection_classes = detections[:, 5].int()
        iou = box_iou(labels[:, 1:], detections[:, :4])

        x = torch.where(iou > self.iou_thres)
        if x[0].shape[0]:
            matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()
            if x[0].shape[0] > 1:
                matches = matches[matches[:, 2].argsort()[::-1]]
                matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
                matches = matches[matches[:, 2].argsort()[::-1]]
                matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
        else:
            matches = np.zeros((0, 3))

        n = matches.shape[0] > 0
        m0, m1, _ = matches.transpose().astype(int)
        for i, gc in enumerate(gt_classes):
            j = m0 == i
            if n and sum(j) == 1:
                self.matrix[detection_classes[m1[j]], gc] += 1  # correct
            else:
                self.matrix[self.nc, gc] += 1  # true background

        if n:
            for i, dc in enumerate(detection_classes):
                if not any(m1 == i):
                    self.matrix[dc, self.nc] += 1  # predicted background
```
When no detection matches any ground truth above the IoU threshold (e.g., IoU is 0), the unmatched detections are never counted as FPs, because the final loop only runs when `n` is true. Is this a deliberate design choice or a bug? The confusion matrix in YOLOv11 is not quite the same.
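A possible fix sketch (my own guess, not the upstream code): run the final loop unconditionally, since with an empty `matches`, `m1` is an empty array and `any(m1 == i)` is already `False` for every detection:
```python
# replace the `if n:` guarded loop at the end of process_batch with:
for i, dc in enumerate(detection_classes):
    if not any(m1 == i):
        self.matrix[dc, self.nc] += 1  # unmatched detection -> background FP
```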
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
open
|
2025-02-12T01:58:28Z
|
2025-02-16T19:01:55Z
|
https://github.com/ultralytics/yolov5/issues/13508
|
[
"bug",
"detect"
] |
SwustLiC
| 3
|
davidsandberg/facenet
|
tensorflow
| 503
|
Error while generating embeddings
|
Hello,
I got this error while generating embeddings:
```
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Users\gengstah\Anaconda3\envs\facenet\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Users\gengstah\Anaconda3\envs\facenet\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "contributed\face-recognition.py", line 84, in capture
faces = self.face_recognition.identify(frame)
File "C:\Users\gengstah\python-workspace\facenet\contributed\face.py", line 80, in identify
face.embedding = self.encoder.generate_embedding(face)
File "C:\Users\gengstah\python-workspace\facenet\contributed\face.py", line 110, in generate_embedding
images_placeholder = tf.get_default_graph().get_tensor_by_name("input:0")
File "C:\Users\gengstah\Anaconda3\envs\facenet\lib\site-packages\tensorflow\python\framework\ops.py", line 2734, in get_tensor_by_name
return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
File "C:\Users\gengstah\Anaconda3\envs\facenet\lib\site-packages\tensorflow\python\framework\ops.py", line 2584, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "C:\Users\gengstah\Anaconda3\envs\facenet\lib\site-packages\tensorflow\python\framework\ops.py", line 2627, in _as_graph_element_locked
"graph." % (repr(name), repr(op_name)))
KeyError: "The name 'input:0' refers to a Tensor which does not exist. The operation, 'input', does not exist in the graph."
```
I rewrote `contributed/real_time_face_recognition.py` to use tkinter. Since the continuous loop of grabbing an image from the camera, predicting the face, and displaying it in a tk.Label (tkinter) was making the application unresponsive, I decided to run these steps in a separate thread. After doing so, I get the error above. Here is a snippet of what I have done so far:
```
def __init__(self, master=None):
    super().__init__(master)
    print("Loading Face Recognition...")
    self.face_recognition = face.Recognition()
    print("Done")

def start(self):
    print("start")
    self.stop_event = threading.Event()
    self.channel1 = threading.Thread(target=self.capture, args=("0", self.face_recognition))
    self.channel1.start()

def capture(self, channel, face_recognition):
    frame_interval = 3  # Number of frames after which to run face detection
    fps_display_interval = 5  # seconds
    frame_rate = 0
    frame_count = 0
    start_time = time.time()
    while not self.stop_event.is_set():
        try:
            url_response = urllib.request.urlopen("[SNAPSHOT_IMAGE_FROM_IP_CAMERA]")
        except Exception as e:
            print("Failed to get feed from channel {}: {}".format(channel, e))
            break
        img_array = np.array(bytearray(url_response.read()), dtype=np.uint8)
        frame = cv2.imdecode(img_array, -1)

        # Identify faces
        if (frame_count % frame_interval) == 0:
            faces = face_recognition.identify(frame)

        # Check our current fps
        end_time = time.time()
        if (end_time - start_time) > fps_display_interval:
            frame_rate = int(frame_count / (end_time - start_time))
            start_time = time.time()
            frame_count = 0

        self.add_overlays(frame, faces, frame_rate)
        frame_count += 1
        b, g, r = cv2.split(frame)
        frame = cv2.merge((r, g, b))
        img = Image.fromarray(frame)
        imagetk = ImageTk.PhotoImage(image=img)
        if channel == "0":
            self.imageview1.configure(image=imagetk)
            self.imageview1.image = imagetk
```
EDIT 1: I am using the frozen graph provided by David and generated my own classifier. The model and classifier are working nicely with my webcam.
EDIT 2: I tried storing the face.Recognition object in thread-local storage and tested it in one thread. It works, but having more than one thread (more than one face.Recognition object) eats up all my memory. How can I use only one face.Recognition for many threads without getting the error above?
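For context, TensorFlow's default graph is thread-local, which would explain why `input:0` cannot be found from the worker thread. Below is a minimal runnable sketch (TF 1.x) of sharing one graph and session across threads; applying the same pattern to `face.py` is an assumption on my part, not something from the facenet codebase:
```python
import threading
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, name="input")
    y = tf.identity(x, name="output")
sess = tf.Session(graph=graph)  # a Session object is safe to share across threads

def worker():
    with graph.as_default():  # make this graph the default in the worker thread
        out = graph.get_tensor_by_name("output:0")
        print(sess.run(out, feed_dict={"input:0": 2.0}))

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```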
|
closed
|
2017-10-27T06:54:47Z
|
2017-10-27T11:13:23Z
|
https://github.com/davidsandberg/facenet/issues/503
|
[] |
gengstah
| 1
|
iperov/DeepFaceLab
|
machine-learning
| 5,336
|
how to create a new project?
|
Do I need to create a new folder and copy the **_internal** folder with all the .bat files into the newly created folder?
Is this the right way to do it?
|
closed
|
2021-05-24T11:16:47Z
|
2023-06-09T16:11:40Z
|
https://github.com/iperov/DeepFaceLab/issues/5336
|
[] |
chshouyu
| 2
|
robinhood/faust
|
asyncio
| 637
|
The point of using concurrency on agents?
|
Although in some cases agents/processors may have some I/O wait cycles (like a disk write, etc.), in general I feel they involve computation and hence are mostly CPU-bound. In such a case, is the concurrency parameter of much use, especially since all agents run in a single thread?
Would it not be better to utilise multiple CPUs/GPUs to move things faster? Maybe an integration with Ray?
|
open
|
2020-08-21T10:41:48Z
|
2020-08-25T13:12:31Z
|
https://github.com/robinhood/faust/issues/637
|
[] |
CowBoy4mH3LL
| 3
|
serengil/deepface
|
deep-learning
| 454
|
GPU Configuration
|
Dear Sefik,
Can you please recommend a GPU hardware configuration for ArcFace and the RetinaFace detector?
Also, how do we enable GPU usage in the code? (A common pattern is sketched below.)
Thanks
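For reference, a common pattern I've seen sketched elsewhere (assuming the usual TensorFlow behaviour of picking up an available GPU automatically; not confirmed for deepface specifically):
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # pin GPU 0; "" would force CPU
# must be set before TensorFlow is imported

from deepface import DeepFace

result = DeepFace.verify(
    "img1.jpg", "img2.jpg",
    model_name="ArcFace",
    detector_backend="retinaface",
)
```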
|
closed
|
2022-04-12T16:08:09Z
|
2022-04-15T20:58:34Z
|
https://github.com/serengil/deepface/issues/454
|
[
"question"
] |
shafichoudhary
| 1
|
adbar/trafilatura
|
web-scraping
| 379
|
Doesn't seem to work with recent charset-normalizer
|
I get an error when I use trafilatura with version 3.1.0 of the charset-normalizer package, but reverting to v2.0.4 seems to work fine.
|
closed
|
2023-06-20T18:56:06Z
|
2023-06-22T07:43:29Z
|
https://github.com/adbar/trafilatura/issues/379
|
[
"feedback"
] |
Stevod
| 2
|
fugue-project/fugue
|
pandas
| 201
|
[FEATURE] Dask repartitioning improvement
|
**Describe the solution you'd like**
Let's see how we can make even repartition happen on dask
|
closed
|
2021-05-09T17:29:42Z
|
2021-05-10T21:03:21Z
|
https://github.com/fugue-project/fugue/issues/201
|
[
"enhancement",
"dask"
] |
goodwanghan
| 0
|
onnx/onnxmltools
|
scikit-learn
| 233
|
Insufficient error handling in save_model
|
`onnxmltools.save_model` does not seem to raise exceptions on incorrect input.
**Example**
```py
import onnxmltools
model = None
onnxmltools.save_model(model, './estimator.onnx')
```
**Expected result**
A Python exception with a message mentioning that `model` is not a valid onnxml model.
**Actual result**
No Python exception, only a printed message:
```
Failed trying to save file './estimator.onnx'.
```
**Version**
onnxmltools version 1.3.1
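A sketch of the kind of up-front validation I'd expect (illustrative only, not the actual onnxmltools internals):
```python
def save_model(model, path):
    # An ONNX model is a protobuf ModelProto; reject anything else early
    # with a clear exception instead of printing and returning silently.
    if model is None or not hasattr(model, "SerializeToString"):
        raise ValueError("expected an ONNX ModelProto, got %r" % type(model))
    with open(path, "wb") as f:
        f.write(model.SerializeToString())
```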
|
closed
|
2019-02-08T17:09:43Z
|
2019-02-19T21:25:38Z
|
https://github.com/onnx/onnxmltools/issues/233
|
[] |
rth
| 1
|
SYSTRAN/faster-whisper
|
deep-learning
| 189
|
Lower the host memory footprint?
|
Running the small model only takes up about 1.32 GB on my GPU but 2.7 GB of my host machine's normal memory. Any thoughts on how to reduce the normal-memory footprint?
Thanks!
|
closed
|
2023-04-27T22:44:55Z
|
2023-05-04T18:12:58Z
|
https://github.com/SYSTRAN/faster-whisper/issues/189
|
[] |
daxaxelrod
| 1
|
vitalik/django-ninja
|
rest-api
| 1,160
|
[BUG] `ModelSchema` produces `id > (integer | null)` openapi
|
**Describe the bug**
Having this model definition:
```python
class MyModel(ModelSchema):
    class Meta:
        model = MyModel
        fields = ["id"]
```
Produces the following definition in Swagger: `id > (integer | null)`.
However this definition:
```python
class MyModel(ModelSchema):
    id: int

    class Meta:
        model = MyModel
```
Produces `id* integer` as expected. The SQL column is the default: `id bigint NOT NULL`.
**Versions (please complete the following information):**
- Python version: 3.12.2
- Django version: 5.0.2
- Django-Ninja version: 1.1.0
- Pydantic version: 2.6.4
Probably a duplicate of https://github.com/vitalik/django-ninja/issues/907
|
open
|
2024-05-10T09:45:42Z
|
2025-02-21T19:36:23Z
|
https://github.com/vitalik/django-ninja/issues/1160
|
[] |
viktorvsk
| 2
|
wagtail/wagtail
|
django
| 12,848
|
Update TypedTableBlock.normalize to return empty TypedTable for None
|
Follow-up to #12808, to be picked up after #12827 is fixed.
As per https://github.com/wagtail/wagtail/pull/12808#discussion_r1943446266 - methods such as `get_prep_value` and `clean`, which accept block-native values, currently have the logic:
if value:
    iterate over table contents
else:
    treat as an empty table
We can update `TypedTableBlock.normalize` to return an empty `TypedTable` when passed `None` as input, thus removing `None` as a possible block-native value. This will allow us to eliminate the `if value` check from these methods.
(`to_python` should be left unchanged, so that any existing `None` values in the database are still handled.)
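A sketch of the suggested `normalize` (illustrative; the exact `TypedTable` constructor arguments below are an assumption, not taken from the source):
```python
def normalize(self, value):
    if value is None:
        # return an empty table so None is no longer a block-native value
        return TypedTable(columns=[], row_data=[])
    return value
```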
|
open
|
2025-02-05T18:46:52Z
|
2025-02-21T07:19:48Z
|
https://github.com/wagtail/wagtail/issues/12848
|
[
"type:Cleanup/Optimisation",
"component:Streamfield"
] |
gasman
| 2
|
fugue-project/fugue
|
pandas
| 421
|
[FEATURE] Add the namespace concept to Fugue extensions
|
Fugue has very flexible ways to parse custom extension expressions. We are working with more partners to enable their extensions for Fugue SQL, so there should be a more standard way to host extensions provided by different partners.
We should use a `Tuple[str, Any]` to represent such extensions, where the first element indicates the domain (provider) of the extension, and the second value can be used by the domain owner to construct the extension.
To enable this in Fugue SQL, we need a new syntax: `USING domain:ext_expr`, where `:` separates the domain from the extension expression. If a domain is provided, the Fugue SQL parser should parse this as a tuple `(domain, ext_expr)`; otherwise, just as the string `ext_expr`.
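A sketch of how the parse could behave (illustrative pseudocode, not the actual parser):
```python
def parse_using(expr: str):
    """Parse the expression after USING into (domain, ext_expr) or a plain string."""
    if ":" in expr:
        domain, ext_expr = expr.split(":", 1)
        return (domain, ext_expr)  # e.g. "acme:my_transformer" -> ("acme", "my_transformer")
    return expr  # no domain given: plain extension expression string
```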
|
closed
|
2023-01-22T23:00:11Z
|
2023-02-16T07:49:18Z
|
https://github.com/fugue-project/fugue/issues/421
|
[
"enhancement",
"core feature"
] |
goodwanghan
| 0
|
streamlit/streamlit
|
data-visualization
| 10,775
|
Support `use_container_width=False` in `st.table()`
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
I would like to have the option `st.table(df, use_container_width=False)`.
### Why?
I appreciate the `use_container_width=False` in `st.dataframe()` and I would like to have the option in `st.table()`.
### How?
_No response_
### Additional Context
In the meantime, here is a workaround:
```py
import pandas as pd
import streamlit as st
st.html("""
<style>
div[data-testid="stElementContainer"]:nth-child(4) > div[data-testid="stTable"] {
max-width: fit-content;
}
</style>
""")
df = pd.DataFrame({"x": [0.1, 0.2, 0.3], "y": [0.4, 0.5, 0.6]})
st.code("st.table()")
st.table(df)
st.table(df) # Equivalent of use_container_width=False done with the CSS above.
st.divider()
st.code("st.dataframe()")
st.dataframe(df)
st.dataframe(df, use_container_width=False)
```

|
open
|
2025-03-14T02:12:31Z
|
2025-03-14T02:29:46Z
|
https://github.com/streamlit/streamlit/issues/10775
|
[
"type:enhancement",
"feature:st.table"
] |
JosephMarinier
| 1
|
sinaptik-ai/pandas-ai
|
data-science
| 782
|
PandaAI With AzureOpenAI
|
Hello everyone, can anyone please tell me how to resolve the error below?

I have tried it many times. If any of you have tried generating graphs using AzureOpenAI, could you please share the prompts you used? I am using GPT-3.5 on AzureOpenAI.
|
closed
|
2023-11-27T06:57:01Z
|
2024-06-01T00:20:59Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/782
|
[] |
sumeet-cresen
| 14
|
deezer/spleeter
|
tensorflow
| 350
|
Feature Request/Suggestion: Audience Track
|
Possible to learn feedback from Audience such a cheers. whistles and clapping and isolate to its own track
|
closed
|
2020-04-27T15:29:07Z
|
2020-06-12T15:18:41Z
|
https://github.com/deezer/spleeter/issues/350
|
[
"enhancement",
"feature"
] |
kylemj89
| 2
|
python-restx/flask-restx
|
flask
| 454
|
Werkzeug 2.1.2 Did not attempt to load JSON data
|
### **Code**
```python
from flask import Flask
from flask_restx import Api, Resource
from werkzeug.middleware.proxy_fix import ProxyFix
app = Flask(__name__)
app.wsgi_app = ProxyFix(app.wsgi_app)
api = Api(
    app,
    version="1.0",
    title="Foo",
    description="A simple foo",
)

ns = api.namespace("Foo", description="bar")


@ns.route("/")
class Foo(Resource):
    """Shows a list of all todos, and lets you POST to add new tasks"""

    get_parser = api.parser()
    get_parser.add_argument(
        "foo",
        type=bool,
        default=True,
    )

    @ns.doc("foo", parser=get_parser)
    def get(self):
        """List all tasks"""
        self.get_parser.parse_args()
        return "foo"


if __name__ == "__main__":
    app.run(debug=True)
```
### **Repro Steps** (if applicable)
1. Run the app
2. Then try out the only endpoint with the `foo` query param set as `true`/`false`
3. Broken!
### **Expected Behavior**
I would expect this to just return an empty list (i.e. not really do anything).
### **Actual Behavior**
```json
{
"message": "Did not attempt to load JSON data because the request Content-Type was not 'application/json'."
}
```
### **Error Messages/Stack Trace**
`parse_args` raises this error in `reqparse.py`
### **Environment**
- Python version 3.8
- Flask version 2.1.2
- Flask-RESTX version 0.5.1
- Werkzeug 2.1.2
|
open
|
2022-07-14T15:28:37Z
|
2025-01-16T04:35:35Z
|
https://github.com/python-restx/flask-restx/issues/454
|
[
"bug"
] |
jbmoorhouse
| 6
|
iperov/DeepFaceLab
|
deep-learning
| 5,201
|
Head extraction size
|
## Expected behavior
Extract head aligned with more tight crop of the head for frontal shots, so we can use lower model resolution and faster training, rather than trying to compensate with higher res.
## Actual behavior
Frontal aligned are extracted at 40% of aligned frame, so there is ~60% of frame resolution wasted.
## Steps to reproduce
extract head (aligned)

|
closed
|
2020-12-15T14:48:11Z
|
2023-06-08T22:39:54Z
|
https://github.com/iperov/DeepFaceLab/issues/5201
|
[] |
zabique
| 5
|
tox-dev/tox
|
automation
| 3,456
|
Missing environment variable doesn't throw an error in Tox 4+
|
## Issue
Per the documentation, a missing environment variable should throw an error: https://tox.wiki/en/latest/config.html#environment-variable-substitutions. However, this doesn't seem to be the case in tox 4+; the value is just retrieved as an empty string.
## Environment
Provide at least:
- OS: macOS Sonoma 14.7.1
<details open>
<summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary>
```console
Package Version
------------- -------
cachetools 5.5.0
chardet 5.2.0
colorama 0.4.6
distlib 0.3.9
filelock 3.16.1
packaging 24.2
pip 24.3.1
platformdirs 4.3.6
pluggy 1.5.0
py 1.11.0
pyproject-api 1.8.0
six 1.17.0
tox 4.0.0
virtualenv 20.28.0
```
</details>
## Output of running tox
<details open>
<summary>Output of <code>tox -rvv</code></summary>
In 3.x:
```console
ERROR: python: unresolvable substitution(s):
commands: 'FOO'
Environment variables are missing or defined recursively.
```
In 4.x:
```console
Python 3.12.5
py: commands[1]> echo bar
bar
py: commands[2]> echo
```
</details>
## Minimal example
<!-- If possible, provide a minimal reproducer for the issue. -->
Where neither variable is defined:
```ini
[tox]
skipsdist = true

[testenv]
allowlist_externals =
    echo
setenv =
    TEST_ENVIRONMENT={env:FOO}
commands =
    python --version
    echo {env:FOO:bar}
    echo {env:TEST_ENVIRONMENT:baz}
```
Not sure if this is intended but at least the documentation should be updated - if it's intended, then is there any way to mark a variable as required?
|
closed
|
2024-12-05T23:14:16Z
|
2025-02-15T18:20:36Z
|
https://github.com/tox-dev/tox/issues/3456
|
[] |
plondino
| 2
|
sgl-project/sglang
|
pytorch
| 4,312
|
Trouble with Install
|
### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
Following instructions from [olmocr](https://github.com/allenai/olmocr) for installing sgl-kernel and got:

Tried:
```
git clone https://github.com/sgl-project/sglang.git
cd sglang/sgl-kernel
pip install . --no-deps
```
but received CUDA_HOME errors. In an effort to fix this, I modified setup.py by adding the following (after `root=Path(...`):
```
os.environ["CUDA_HOME"] = "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.8"
print(f"CUDA_HOME in build: {os.environ.get('CUDA_HOME')}")
```
but continue to get errors related to CUDA_HOME. Here is the (verbose) output:
```
Using pip 25.0.1 from C:\ProgramData\miniforge3\envs\olmocr\Lib\site-packages\pip (python 3.11)
Processing c:\windows\system32\olmocr\sglang\sgl-kernel
Running command pip subprocess to install build dependencies
Using pip 25.0.1 from C:\ProgramData\miniforge3\envs\olmocr\Lib\site-packages\pip (python 3.11)
Collecting setuptools>=75.0
Obtaining dependency information for setuptools>=75.0 from https://files.pythonhosted.org/packages/37/66/d2d7e6ad554f3a7c7297c3f8ef6e22643ad3d35ef5c63bf488bc89f32f31/setuptools-76.0.0-py3-none-any.whl.metadata
Using cached setuptools-76.0.0-py3-none-any.whl.metadata (6.7 kB)
Collecting scikit-build-core>=0.10
Obtaining dependency information for scikit-build-core>=0.10 from https://files.pythonhosted.org/packages/0a/ba/b37b9802f503894a46ef34aaa5851344cde48b39ab0af5057a6ee4f0d631/scikit_build_core-0.11.0-py3-none-any.whl.metadata
Using cached scikit_build_core-0.11.0-py3-none-any.whl.metadata (21 kB)
Collecting torch==2.5.1
Obtaining dependency information for torch==2.5.1 from https://files.pythonhosted.org/packages/0d/4a/e51420d46cfc90562e85af2fee912237c662ab31140ab179e49bd69401d6/torch-2.5.1-cp311-cp311-win_amd64.whl.metadata
Using cached torch-2.5.1-cp311-cp311-win_amd64.whl.metadata (28 kB)
Collecting wheel
Obtaining dependency information for wheel from https://files.pythonhosted.org/packages/0b/2c/87f3254fd8ffd29e4c02732eee68a83a1d3c346ae39bc6822dcbcb697f2b/wheel-0.45.1-py3-none-any.whl.metadata
Using cached wheel-0.45.1-py3-none-any.whl.metadata (2.3 kB)
Collecting filelock (from torch==2.5.1)
Obtaining dependency information for filelock from https://files.pythonhosted.org/packages/89/ec/00d68c4ddfedfe64159999e5f8a98fb8442729a63e2077eb9dcd89623d27/filelock-3.17.0-py3-none-any.whl.metadata
Using cached filelock-3.17.0-py3-none-any.whl.metadata (2.9 kB)
Collecting typing-extensions>=4.8.0 (from torch==2.5.1)
Obtaining dependency information for typing-extensions>=4.8.0 from https://files.pythonhosted.org/packages/26/9f/ad63fc0248c5379346306f8668cda6e2e2e9c95e01216d2b8ffd9ff037d0/typing_extensions-4.12.2-py3-none-any.whl.metadata
Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting networkx (from torch==2.5.1)
Obtaining dependency information for networkx from https://files.pythonhosted.org/packages/b9/54/dd730b32ea14ea797530a4479b2ed46a6fb250f682a9cfb997e968bf0261/networkx-3.4.2-py3-none-any.whl.metadata
Using cached networkx-3.4.2-py3-none-any.whl.metadata (6.3 kB)
Collecting jinja2 (from torch==2.5.1)
Obtaining dependency information for jinja2 from https://files.pythonhosted.org/packages/62/a1/3d680cbfd5f4b8f15abc1d571870c5fc3e594bb582bc3b64ea099db13e56/jinja2-3.1.6-py3-none-any.whl.metadata
Using cached jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)
Collecting fsspec (from torch==2.5.1)
Obtaining dependency information for fsspec from https://files.pythonhosted.org/packages/56/53/eb690efa8513166adef3e0669afd31e95ffde69fb3c52ec2ac7223ed6018/fsspec-2025.3.0-py3-none-any.whl.metadata
Using cached fsspec-2025.3.0-py3-none-any.whl.metadata (11 kB)
Collecting sympy==1.13.1 (from torch==2.5.1)
Obtaining dependency information for sympy==1.13.1 from https://files.pythonhosted.org/packages/b2/fe/81695a1aa331a842b582453b605175f419fe8540355886031328089d840a/sympy-1.13.1-py3-none-any.whl.metadata
Using cached sympy-1.13.1-py3-none-any.whl.metadata (12 kB)
Collecting mpmath<1.4,>=1.1.0 (from sympy==1.13.1->torch==2.5.1)
Obtaining dependency information for mpmath<1.4,>=1.1.0 from https://files.pythonhosted.org/packages/43/e3/7d92a15f894aa0c9c4b49b8ee9ac9850d6e63b03c9c32c0367a13ae62209/mpmath-1.3.0-py3-none-any.whl.metadata
Using cached mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)
Collecting packaging>=21.3 (from scikit-build-core>=0.10)
Obtaining dependency information for packaging>=21.3 from https://files.pythonhosted.org/packages/88/ef/eb23f262cca3c0c4eb7ab1933c3b1f03d021f2c48f54763065b6f0e321be/packaging-24.2-py3-none-any.whl.metadata
Using cached packaging-24.2-py3-none-any.whl.metadata (3.2 kB)
Collecting pathspec>=0.10.1 (from scikit-build-core>=0.10)
Obtaining dependency information for pathspec>=0.10.1 from https://files.pythonhosted.org/packages/cc/20/ff623b09d963f88bfde16306a54e12ee5ea43e9b597108672ff3a408aad6/pathspec-0.12.1-py3-none-any.whl.metadata
Using cached pathspec-0.12.1-py3-none-any.whl.metadata (21 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch==2.5.1)
Obtaining dependency information for MarkupSafe>=2.0 from https://files.pythonhosted.org/packages/da/b8/3a3bd761922d416f3dc5d00bfbed11f66b1ab89a0c2b6e887240a30b0f6b/MarkupSafe-3.0.2-cp311-cp311-win_amd64.whl.metadata
Using cached MarkupSafe-3.0.2-cp311-cp311-win_amd64.whl.metadata (4.1 kB)
Using cached torch-2.5.1-cp311-cp311-win_amd64.whl (203.1 MB)
Using cached sympy-1.13.1-py3-none-any.whl (6.2 MB)
Using cached setuptools-76.0.0-py3-none-any.whl (1.2 MB)
Using cached scikit_build_core-0.11.0-py3-none-any.whl (179 kB)
Using cached wheel-0.45.1-py3-none-any.whl (72 kB)
Using cached packaging-24.2-py3-none-any.whl (65 kB)
Using cached pathspec-0.12.1-py3-none-any.whl (31 kB)
Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Using cached filelock-3.17.0-py3-none-any.whl (16 kB)
Using cached fsspec-2025.3.0-py3-none-any.whl (193 kB)
Using cached jinja2-3.1.6-py3-none-any.whl (134 kB)
Using cached networkx-3.4.2-py3-none-any.whl (1.7 MB)
Using cached MarkupSafe-3.0.2-cp311-cp311-win_amd64.whl (15 kB)
Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Installing collected packages: mpmath, wheel, typing-extensions, sympy, setuptools, pathspec, packaging, networkx, MarkupSafe, fsspec, filelock, scikit-build-core, jinja2, torch
Creating C:\Users\tennant\AppData\Local\Temp\pip-build-env-0hsjw9y9\overlay\Scripts
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchaudio 2.6.0+cu126 requires torch==2.6.0+cu126, but you have torch 2.5.1 which is incompatible.
torchvision 0.21.0+cu126 requires torch==2.6.0+cu126, but you have torch 2.5.1 which is incompatible.
Successfully installed MarkupSafe-3.0.2 filelock-3.17.0 fsspec-2025.3.0 jinja2-3.1.6 mpmath-1.3.0 networkx-3.4.2 packaging-24.2 pathspec-0.12.1 scikit-build-core-0.11.0 setuptools-76.0.0 sympy-1.13.1 torch-2.5.1 typing-extensions-4.12.2 wheel-0.45.1
Installing build dependencies ... done
Running command Getting requirements to build wheel
C:\Users\tennant\AppData\Local\Temp\pip-build-env-0hsjw9y9\overlay\Lib\site-packages\torch\_subclasses\functional_tensor.py:295: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\utils\tensor_numpy.cpp:84.)
cpu = _conversion_method_template(device=torch.device("cpu"))
CUDA_HOME in build: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.8
Traceback (most recent call last):
File "C:\ProgramData\miniforge3\envs\olmocr\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 389, in <module>
main()
File "C:\ProgramData\miniforge3\envs\olmocr\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 373, in main
json_out["return_val"] = hook(**hook_input["kwargs"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ProgramData\miniforge3\envs\olmocr\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 143, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tennant\AppData\Local\Temp\pip-build-env-0hsjw9y9\overlay\Lib\site-packages\setuptools\build_meta.py", line 334, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tennant\AppData\Local\Temp\pip-build-env-0hsjw9y9\overlay\Lib\site-packages\setuptools\build_meta.py", line 304, in _get_build_requires
self.run_setup()
File "C:\Users\tennant\AppData\Local\Temp\pip-build-env-0hsjw9y9\overlay\Lib\site-packages\setuptools\build_meta.py", line 320, in run_setup
exec(code, locals())
File "<string>", line 210, in <module>
File "C:\Users\tennant\AppData\Local\Temp\pip-build-env-0hsjw9y9\overlay\Lib\site-packages\torch\utils\cpp_extension.py", line 1078, in CUDAExtension
library_dirs += library_paths(cuda=True)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tennant\AppData\Local\Temp\pip-build-env-0hsjw9y9\overlay\Lib\site-packages\torch\utils\cpp_extension.py", line 1216, in library_paths
paths.append(_join_cuda_home(lib_dir))
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tennant\AppData\Local\Temp\pip-build-env-0hsjw9y9\overlay\Lib\site-packages\torch\utils\cpp_extension.py", line 2416, in _join_cuda_home
raise OSError('CUDA_HOME environment variable is not set. '
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: 'C:\ProgramData\miniforge3\envs\olmocr\python.exe' 'C:\ProgramData\miniforge3\envs\olmocr\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py' get_requires_for_build_wheel 'C:\Users\tennant\AppData\Local\Temp\tmpsqkqyg9x'
cwd: C:\Windows\System32\olmocr\sglang\sgl-kernel
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
```
Verified that CUDA_HOME is being set correctly in the build environment. Not sure how to proceed.
### Reproduction
Unsuccessfully tried:
`pip install sgl-kernel==0.0.3.post1 --force-reinstall --no-deps
`
Also unsuccessfully tried:
```
git clone https://github.com/sgl-project/sglang.git
cd sglang/sgl-kernel
pip install . --no-deps
```
### Environment
sgl-kernel never installed.
Microsoft Windows 11 Enterprise
10.0.22631 N/A Build 22631
CUDA version: 12.8
PyTorch version: 2.6.0+cu126
|
open
|
2025-03-11T14:11:13Z
|
2025-03-11T19:41:33Z
|
https://github.com/sgl-project/sglang/issues/4312
|
[] |
cdtennant
| 1
|
xlwings/xlwings
|
automation
| 1,617
|
Settings hierarchy overrides empty values
|
If you have an `xlwings.conf` sheet with all settings in there, empty values get overridden if you have them specified further down in the hierarchy (e.g. in a user config file). For example:
sheet
```
Conda Path | (empty)
```
user config file
```
"Conda Path","C:\some\path"
```
Will use `C:\some\path` instead of using the empty value as it should.
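A sketch of the intended lookup semantics (illustrative, not the xlwings source): presence at a higher level of the hierarchy should win, even when the stored value is empty:
```python
def get_setting(name, sheet_conf, user_conf, default=None):
    if name in sheet_conf:   # an explicitly empty value still takes precedence
        return sheet_conf[name]
    if name in user_conf:
        return user_conf[name]
    return default
```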
|
closed
|
2021-06-11T07:56:25Z
|
2021-06-15T08:13:03Z
|
https://github.com/xlwings/xlwings/issues/1617
|
[
"bug"
] |
fzumstein
| 0
|
jina-ai/serve
|
fastapi
| 5,916
|
Streaming for gRPC for Deployment
|
closed
|
2023-06-19T13:01:35Z
|
2023-07-25T10:07:37Z
|
https://github.com/jina-ai/serve/issues/5916
|
[] |
alaeddine-13
| 0
|
|
deezer/spleeter
|
deep-learning
| 884
|
Cannot do `poetry add spleeter` due to `tensorflow-io-gcs-filesystem` error
|
- [✅] I didn't find a similar issue already open.
- [✅] I read the documentation (README AND Wiki)
- [✅ ] I have installed FFMpeg
- [✅ ] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others)
## Description
If I run `poetry add spleeter` I get the following error
```
Unable to find installation candidates for tensorflow-io-gcs-filesystem (0.34.0)
• Installing opt-einsum (3.3.0)
• Installing rfc3986 (1.5.0)
• Installing tensorboard (2.9.1)
• Installing tensorflow-estimator (2.9.0)
• Installing tensorflow-io-gcs-filesystem (0.34.0): Failed
RuntimeError
Unable to find installation candidates for tensorflow-io-gcs-filesystem (0.34.0)
• Installing libclang (16.0.6): Downloading... 100%
• Installing opt-einsum (3.3.0)
• Installing rfc3986 (1.5.0)
• Installing tensorboard (2.9.1)
• Installing tensorflow-estimator (2.9.0)
• Installing tensorflow-io-gcs-filesystem (0.34.0): Failed
RuntimeError
```
I think this is due to you defining `tensorflow = "^2.5.0, <2.10.0"` in `pyproject.toml` but not `tensorflow-io-gcs-filesystem`.
From [here](https://pypi.org/project/tensorflow-io-gcs-filesystem/) we see the correct version is `tensorflow-io-gcs-filesystem = "0.27.0"`.
Please add that line to `pyproject.toml`. I added the following to my own `pyproject.toml` and it fixed the problem
```
tensorflow = "^2.5.0, <2.10.0"
tensorflow-io-gcs-filesystem = "0.27.0"
spleeter = "^2.4.0"
```
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Windows |
| Installation type | Poetry |
| RAM available | 32 GB|
| Hardware spec | RTX 3070 Ti GPU, 12th Gen Intel i7|
|
open
|
2023-12-06T15:42:50Z
|
2023-12-06T15:43:51Z
|
https://github.com/deezer/spleeter/issues/884
|
[
"bug",
"invalid"
] |
codeananda
| 0
|
widgetti/solara
|
jupyter
| 756
|
GridDraggable not working on colab
|
see #323
|
open
|
2024-08-28T09:02:52Z
|
2024-09-02T07:35:29Z
|
https://github.com/widgetti/solara/issues/756
|
[] |
maartenbreddels
| 3
|
sktime/sktime
|
scikit-learn
| 7,228
|
[BUG] `DartsRegressionModel` can not be used if `u8darts` is not installed even if `darts` is installed
|
In the `darts` adapter, `u8darts` is listed as the dependency since it has a smaller dependency set than the full `darts` package. But `darts` is a proper superset of `u8darts`, and both get imported as `import darts`. Therefore, as a user, if I have `darts` installed, my expectation is that I should be able to use `DartsRegressionModel`. However, that fails with `ModuleNotFoundError`.
To reproduce, please follow the following steps:
1. Activate a python environment where `sktime` is installed, `u8darts` is not installed, and `darts` is installed.
2. Run following code snippet.
3. It should fail with `` ModuleNotFoundError: DartsRegressionModel requires package 'u8darts>=0.29' to be present in the python environment, but 'u8darts>=0.29' was not found. 'u8darts>=0.29' is a soft dependency and not included in the base sktime installation. Please run: `pip install u8darts>=0.29` to install the u8darts>=0.29 package. To install all soft dependencies, run: `pip install sktime[all_extras]` ``.
```python3
from sktime.forecasting.darts import DartsRegressionModel
DartsRegressionModel()
```
I am currently on `main` branch at commit 0f75b7ad0.
|
closed
|
2024-10-05T19:50:02Z
|
2025-03-22T15:52:10Z
|
https://github.com/sktime/sktime/issues/7228
|
[
"bug",
"module:forecasting"
] |
yarnabrina
| 20
|
RayVentura/ShortGPT
|
automation
| 85
|
❓ [Question]: installation error
|
### Your Question
<string>:45: RuntimeWarning: Pillow 9.0.0 does not support Python 3.11 and does not provide prebuilt Windows binaries. We do not recommend building from source on Windows.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pillow
Failed to build pillow
ERROR: Could not build wheels for pillow, which is required to install pyproject.toml-based projects
|
closed
|
2023-08-05T19:35:24Z
|
2023-08-06T08:19:03Z
|
https://github.com/RayVentura/ShortGPT/issues/85
|
[
"question"
] |
DingSenSong
| 3
|
ni1o1/transbigdata
|
data-visualization
| 92
|
Extra grid cells appear when dividing the grid with custom bounds
|
When using custom bounds to divide the grid, the generated cells extend beyond the specified bounds (the white area in the figure).
I suspect this may be caused by the coordinate system? What is the default coordinate system of `area_to_grid`?
Code:
```
grid, params = tbd.area_to_grid(boundary, accuracy=3000, method='rect')
# grid parameters; for square grids the method parameter is 'rect'
pprint.pprint(params)
# grid geometry
grid.head()
```
Output:
```
{'deltalat': 0.02697963123853744,
'deltalon': 0.033639196976294326,
'gridsize': 3000,
'method': 'rect',
'slat': 36.636559,
'slon': 116.950786,
'theta': 0}
```

|
open
|
2023-11-05T09:30:43Z
|
2023-11-06T12:15:54Z
|
https://github.com/ni1o1/transbigdata/issues/92
|
[] |
mokeeqian
| 2
|
hack4impact/flask-base
|
flask
| 55
|
ImportError: No module named faker
|
Do other people get this when setting up? I don't see `faker` in either of the requirements files.
|
closed
|
2016-10-04T18:45:46Z
|
2016-10-05T05:24:05Z
|
https://github.com/hack4impact/flask-base/issues/55
|
[
"bug"
] |
yoninachmany
| 3
|
jupyter/docker-stacks
|
jupyter
| 1,723
|
[BUG] - permission denied error with datascience-notebook
|
### What docker image(s) are you using?
datascience-notebook
### OS system and architecture running docker image
Fedora CoreOS 35
### What Docker command are you running?
Whatever is the default in Zero To JupyterHub version 1.2.0...
### How to Reproduce the problem?
1. Launch an R server using the `datascience-notebook` image
2. Try to run an R command e.g. `capabilities()`
### Command output
_No response_
### Expected behavior
R command is executed and output is shown
### Actual behavior
No output is visible in the notebook, and in the server logs there are errors:
```
[I 2022-06-13 15:48:33.536 SingleUserLabApp restarter:75] AsyncIOLoopKernelRestarter: restarting kernel (5/5), new random ports
[D 2022-06-13 15:48:33.547 SingleUserLabApp manager:386] Starting kernel: ['R', '--slave', '-e', 'IRkernel::main()', '--args', '/home/jovyan/.local/share/jupyter/runtime/kernel-928c71a4-9525-49a1-a3fd-792974bd24bf.json']
[D 2022-06-13 15:48:33.548 SingleUserLabApp connect:604] Connecting to: tcp://127.0.0.1:37933
OMP: Error #179: Function Can't open SHM2 failed:
OMP: System error #13: Permission denied
```
### Anything else?
Test R commands do work on this same JupyterHub installation using the `r-notebook` image.
|
closed
|
2022-06-13T17:16:00Z
|
2022-09-27T10:28:31Z
|
https://github.com/jupyter/docker-stacks/issues/1723
|
[
"type:Bug",
"status:Need Info"
] |
verdurin
| 5
|
dask/dask
|
pandas
| 11,230
|
Roundtripping timezone-aware DataFrame through parquet doesn't preserve timestamp resolution
|
While diagnosing some of the failures we're seeing over in https://github.com/coiled/dask-bigquery/pull/81, I stumbled across an issue with roundtripping timezone-aware timeseries data through parquet with Dask. Here's a minimal reproducer:
```python
import random
import pandas as pd
import dask.dataframe as dd
# Generate some random synthetic data
records = [
{
"number": random.randint(0, 100),
"timestamp": pd.Timestamp.utcnow(),
"idx": i,
}
for i in range(10)
]
df = pd.DataFrame(records)
# Change timestamp resolution to us (this is important)
df["timestamp"] = df["timestamp"].astype("datetime64[us, UTC]")
# Roundtrip through parquet with Dask
ddf = dd.from_pandas(df, npartitions=2)
outdir = "test.parquet"
ddf.to_parquet(outdir)
ddf2 = dd.read_parquet(outdir)
dd.utils.assert_eq(ddf, ddf2, check_divisions=False)
```
which raises this error:
```
Traceback (most recent call last):
File "/Users/james/projects/dask/dask/test.py", line 24, in <module>
dd.utils.assert_eq(ddf, ddf2, check_divisions=False)
File "/Users/james/projects/dask/dask/dask/dataframe/utils.py", line 603, in assert_eq
tm.assert_frame_equal(
File "/Users/james/mambaforge/envs/dask-py312/lib/python3.12/site-packages/pandas/_testing/asserters.py", line 1279, in assert_frame_equal
assert_series_equal(
File "/Users/james/mambaforge/envs/dask-py312/lib/python3.12/site-packages/pandas/_testing/asserters.py", line 975, in assert_series_equal
assert_attr_equal("dtype", left, right, obj=f"Attributes of {obj}")
File "/Users/james/mambaforge/envs/dask-py312/lib/python3.12/site-packages/pandas/_testing/asserters.py", line 421, in assert_attr_equal
raise_assert_detail(obj, msg, left_attr, right_attr)
File "/Users/james/mambaforge/envs/dask-py312/lib/python3.12/site-packages/pandas/_testing/asserters.py", line 614, in raise_assert_detail
raise AssertionError(msg)
AssertionError: Attributes of DataFrame.iloc[:, 1] (column name="timestamp") are different
Attribute "dtype" are different
[left]: datetime64[us, UTC]
[right]: datetime64[ns, UTC]
```
Note the initial `ddf` DataFrame has `us` resolution, but after roundtripping through parquet, the `ddf2` DataFrame has `ns` resolution.
A couple of additional observations:
1. The equivalent `pandas` code (i.e. removing `dd.from_pandas`) doesn't raise an error.
2. If I remove timezone information altogether (e.g. use `pd.Timestamp.now()` instead of `pd.Timestamp.utcnow()`), then this also doesn't raise an error.
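Continuing from the reproducer above, a hedged interim workaround (not a fix for the underlying dtype handling) is to cast the column back after reading:
```python
# Workaround sketch: the roundtrip promotes the column to ns resolution,
# so cast it back to us after reading; the values themselves are unchanged.
ddf2 = dd.read_parquet(outdir)
ddf2["timestamp"] = ddf2["timestamp"].astype("datetime64[us, UTC]")
dd.utils.assert_eq(ddf, ddf2, check_divisions=False)  # should now pass
```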
cc @phofl @fjetter
|
closed
|
2024-07-16T21:30:09Z
|
2024-07-17T16:25:48Z
|
https://github.com/dask/dask/issues/11230
|
[
"dataframe"
] |
jrbourbeau
| 0
|
lk-geimfari/mimesis
|
pandas
| 644
|
Refactor issue templates
|
The "Create issue" page is misleading:
<img width="784" alt="2019-03-01 12 28 16" src="https://user-images.githubusercontent.com/4660275/53629325-538dd400-3c1e-11e9-802c-8b809fc065fc.png">
What is this "Custom template"? I am not sure that this is a good DX.
And I propose to rename it.
|
closed
|
2019-03-01T09:34:41Z
|
2019-03-01T10:17:43Z
|
https://github.com/lk-geimfari/mimesis/issues/644
|
[
"bug",
"docs"
] |
sobolevn
| 1
|
voxel51/fiftyone
|
computer-vision
| 5,079
|
[BUG] CloseSession Event is never triggered on Chrome and Firefox
|
Using **FiftyOne 1.0.1** on **Windows 10** with **Python 3.10**, I want to continue my script after closing the browser session. Unfortunately, the `CloseSession` event is never triggered, hence `session.wait(1)` waits forever.
### Code to reproduce issue
**Chrome v130.0.6723.117** is set as the *default browser*, with

```python
import fiftyone as fo
dataset = fo.Dataset()
if __name__ == '__main__':
print("Launch...")
session = fo.launch_app(dataset) # , browser=r"C:\Program Files\Google\Chrome\Application\chrome.exe %s")
session.wait(1)
print("Yipee")
```
Same with **Firefox v132.0.1**
### System information
- **OS Platform and Distribution** Windows 10 Pro 22H2:
- **Python version** (`python --version`): Python 3.10.9
- **FiftyOne version** (`fiftyone --version`): FiftyOne v1.0.1, Voxel51, Inc.
- **FiftyOne installed from** (pip or source):
Can anyone please give me a hint about what's missing?
|
open
|
2024-11-08T09:43:38Z
|
2025-01-22T02:13:55Z
|
https://github.com/voxel51/fiftyone/issues/5079
|
[
"bug"
] |
JRGit4UE
| 1
|
httpie/cli
|
api
| 1,577
|
JSON properties are shuffled
|
Hi,
I just executed this:
```
http http://localhost:9021/api/development
```
And i get this output:
```
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Sun, 19 May 2024 10:58:34 GMT
Server: Kestrel
Transfer-Encoding: chunked
[
{
"allowedIps": null,
"configurationFile": "/tmp/proxies/ups1.conf",
"createdAt": "2024-05-18T18:13:44.204+00:00",
"id": "60ac07ef-5a80-4352-8b03-2ab8f68ef68f",
"password": "string",
"port": 1001,
"updatedAt": "2024-05-18T18:13:44.204+00:00",
"upstream": "ups1",
"username": "string"
}
]
```
I don't understand why the properties are shuffled; they should be like this:
```json
[
{
"id": "60ac07ef-5a80-4352-8b03-2ab8f68ef68f",
"createdAt": "2024-05-18T18:13:44.204+00:00",
"updatedAt": "2024-05-18T18:13:44.204+00:00",
"upstream": "ups1",
"port": 1001,
"username": "string",
"password": "string",
"allowedIps": null,
"configurationFile": "/tmp/proxies/ups1.conf"
}
]
```
Regards
|
closed
|
2024-05-19T11:00:16Z
|
2024-10-30T10:53:37Z
|
https://github.com/httpie/cli/issues/1577
|
[
"bug",
"new"
] |
imclint21
| 5
|
charlesq34/pointnet
|
tensorflow
| 73
|
How can I use the model I trained before to test a picture I took in the real world?
|
How can I use the model I trained before to test a picture I took in the real world?
|
open
|
2018-01-24T14:36:17Z
|
2021-09-02T08:56:52Z
|
https://github.com/charlesq34/pointnet/issues/73
|
[] |
Bigwode
| 1
|
pyg-team/pytorch_geometric
|
deep-learning
| 9,892
|
After calling NeighborLoader, my file logger prints to the console
|
### 🐛 Describe the bug
After the call below, my file logger starts printing to the console and the log format changes. Apart from that, everything is fine. Could you tell me whether this is a bug or whether I did something wrong?
### Versions
```
train_loader = NeighborLoader(
dataset._data,
num_neighbors = {key: NEIGHBOR_SIZES for key in dataset._data.edge_types},
batch_size=BATCH_SIZE,
input_nodes=('user', train_mask)
)
```
My logger:
```
def get_file_pylogger(filepath: str, name: str = __name__) -> logging.Logger:
"""Initialize .log file logger"""
formatter = logging.Formatter(
fmt="%(asctime)s %(levelname)s %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
logger = logging.getLogger(name)
file_handler = logging.FileHandler(filepath, "a+")
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
logger.setLevel(logging.DEBUG)
return logger
```
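In case it helps triage, a hedged workaround sketch: if some import is attaching a handler to the root logger, stopping propagation on the file logger should keep its records out of the console. This is standard `logging` behavior, not PyG-specific; the file name and logger name below are illustrative.
```python
# Workaround sketch: stop records from bubbling up to the root logger,
# whose console handler (added elsewhere) would otherwise print them.
logger = get_file_pylogger("train.log", name="my_app")
logger.propagate = False  # the file handler still writes; the console stays quiet
```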
|
open
|
2024-12-26T06:13:59Z
|
2024-12-28T19:21:19Z
|
https://github.com/pyg-team/pytorch_geometric/issues/9892
|
[
"bug",
"loader"
] |
munkhbuyan
| 3
|
python-gino/gino
|
sqlalchemy
| 648
|
sssssssssssssssssssssssssss
|
* GINO version:
* Python version:
* asyncpg version:
* aiocontextvars version:
* PostgreSQL version:
### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
|
closed
|
2020-04-05T07:33:51Z
|
2020-04-05T15:52:15Z
|
https://github.com/python-gino/gino/issues/648
|
[] |
reinoldus
| 2
|
Lightning-AI/LitServe
|
api
| 218
|
Decorator design for `LitServer`
|
Hello there! Great release. Just dropping in to suggest a stylistic refactor. Let me know what you think 😄
## 🚀 Feature
Provide a decorator/pythonic pattern to instantiate your API Server.
### Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
Mostly aesthetics, readability and comfort.
### Pitch
<!-- A clear and concise description of what you want to happen. -->
Now:
```python
import litserve as ls
class MyAPI(ls.LitAPI):
...
if __name__ == "__main__":
server = ls.LitServer(MyAPI(), ...)
```
The proposal
```python
import litserve as ls
@ls.server(accelerator="auto", max_batch_size=1, ...)
class MyAPI(ls.LitAPI):
...
```
Advantages:
* keeps configurations coupled to the service.
* A bit more pythonic/less typing
* You don't need to rewrite a lot of code: the decorator basically does `server = ls.LitServer(MyAPI(), ...)` and returns it for you.
* You could even go as far as dropping the requirement that `MyAPI` inherit from `LitAPI`, because the decorator logic could handle the validation (say: `LitServer.from_api(...)`). I guess that using an ABC also provides you with static analysis (e.g. "you did not implement X").
* this plays along well with uvicorn, since you can launch a server from a function returning one using `--application-factory=...` ([ref](https://www.uvicorn.org/#application-factories)).
Cons:
* Two ways of doing the same thing, I guess?
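For concreteness, a hedged sketch of what the decorator could do; `server` and `create_server` are made-up names, not existing LitServe API:
```python
# Sketch only: the decorator captures the server configuration and exposes
# a factory that does `ls.LitServer(MyAPI(), ...)` on demand.
import litserve as ls

def server(**server_kwargs):
    def wrap(api_cls):
        def create_server():
            return ls.LitServer(api_cls(), **server_kwargs)
        api_cls.create_server = staticmethod(create_server)
        return api_cls
    return wrap

@server(accelerator="auto", max_batch_size=1)
class MyAPI(ls.LitAPI):
    ...
```
Exposing a zero-argument factory like this is also what the uvicorn application-factory style mentioned above relies on.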
|
closed
|
2024-08-23T13:11:32Z
|
2024-08-23T15:57:37Z
|
https://github.com/Lightning-AI/LitServe/issues/218
|
[
"enhancement",
"help wanted"
] |
baggiponte
| 1
|
iterative/dvc
|
machine-learning
| 9,843
|
stage environment
|
Passing dependencies and outputs to a DVC process requires quite a bit of boilerplate at the moment.
```yaml
stage:
train:
cmd: >
python -m stages.train \
--training-data=data/training.json \
--validation-data=data/validation.json \
--model model/model.joblib \
--metrics=metrics/training.json
deps:
- data/training.json
- data/validation.json
outs:
- model/model.joblib
metrics:
- metrics/training.json
```
The example repos are inconsistent about when and which paths to pass as parameters or arguments. This is also a bit awkward and error prone, so we can use variables to make it more robust:
```yaml
vars:
TRAINING_DATA_PATH: data/training.json
VALIDATION_DATA_PATH: data/validation.json
MODEL_PATH: model/model.joblib
TRAINING_METRICS_PATH: metrics/training.json
stage:
train:
cmd: >
python -m stages.train \
--training-data=${TRAINING_DATA_PATH} \
--validation-data=${VALIDATION_DATA_PATH} \
--model ${MODEL_PATH} \
--metrics=${TRAINING_METRICS_PATH}
deps:
- ${TRAINING_DATA_PATH}
- ${VALIDATION_DATA_PATH}
outs:
- ${MODEL_PATH}
metrics:
- ${TRAINING_METRICS_PATH}
```
This is more robust, but it starts getting very "boilerplatey".
My suggestion is to provide an environment to the stage, which passes variables to the stage command.
```yaml
vars:
TRAINING_DATA_PATH: data/training.json
VALIDATION_DATA_PATH: data/validation.json
MODEL_PATH: model/model.joblib
TRAINING_METRICS_PATH: metrics/training.json
stage:
train:
cmd: python -m stages.train
env:
training_data: ${TRAINING_DATA_PATH}
validation_data: ${VALIDATION_DATA_PATH}
model: ${MODEL_PATH}
metrics: ${TRAINING_METRICS_PATH}
outs:
- ${MODEL_PATH}
metrics:
- ${TRAINING_METRICS_PATH}
```
This could make any parameters or variables available to the stage without parameter passing:
```python
model = train(os.environ['training_data'])
model.save(os.environ['model'])
...
```
And dvc can assume that any path not listed as an output or metric is a stage dependency.
A different option could be avoiding all parameter passing and adding some magic DVC environment:
```yaml
vars:
TRAINING_DATA_PATH: data/training.json
VALIDATION_DATA_PATH: data/validation.json
MODEL_PATH: model/model.joblib
TRAINING_METRICS_PATH: metrics/training.json
stage:
train:
cmd: python -m stages.train
deps:
- ${TRAINING_DATA_PATH}
- ${VALIDATION_DATA_PATH}
outs:
- ${MODEL_PATH}
metrics:
- ${TRAINING_METRICS_PATH}
```
which could be used by the stage:
```python
model = train(os.environ['DEPS_TRAINING_DATA_PATH'])
model.save(os.environ['OUTS_MODEL_PATH'])
...
```
|
open
|
2023-08-15T11:49:04Z
|
2023-08-16T09:36:17Z
|
https://github.com/iterative/dvc/issues/9843
|
[
"feature request",
"discussion",
"A: templating"
] |
janrito
| 3
|
huggingface/diffusers
|
deep-learning
| 10,107
|
gradient checkpointing runs during validations in the training examples
|
### Describe the bug
Gradient checkpointing is enabled via `self.training`, but the validation logging then also unnecessarily goes through this code path.
I found that I have to disable it even when running under `torch.no_grad()`; I looked and saw that the official examples do not do this either.
Disabling it gives a substantial performance boost, more similar to a normal inference script running outside of a training loop.
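A hedged sketch of the kind of guard I mean; `unet` is an illustrative name, and only `eval()`/`train()`/`no_grad()` are standard PyTorch API:
```python
# Sketch: toggle train mode around validation so `self.training` is False,
# which skips the gradient-checkpointing code path during logging.
import torch

def run_validation(unet, batch):
    unet.eval()    # self.training -> False, checkpointing is skipped
    with torch.no_grad():
        out = unet(**batch)
    unet.train()   # restore training mode for the next optimizer step
    return out
```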
### Reproduction
Add print statements to the checkpointing function.
### Logs
_No response_
### System Info
-
### Who can help?
@linoytsaban @yiyixuxu
|
closed
|
2024-12-03T22:10:25Z
|
2025-01-03T16:01:36Z
|
https://github.com/huggingface/diffusers/issues/10107
|
[
"bug",
"stale"
] |
bghira
| 4