| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
tartiflette/tartiflette
|
graphql
| 140
|
Enhance customization imports
|
Currently, to use a custom scalar, directive, resolver, or subscription, we need to manually import those elements at the same level as the engine initialization.
This is not very practical and forces us to make imports that are otherwise unused.
Shouldn't we provide an API at engine initialization time that takes a list of modules to import dynamically?
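For illustration, here is a rough sketch of the kind of API I have in mind (names are hypothetical, not an existing tartiflette interface): the engine would receive a list of module paths and import them itself, so their decorators register the custom scalars/directives/resolvers as a side effect.
```python
# Hypothetical sketch only, not the current tartiflette API: an engine that
# dynamically imports a list of modules at initialization time.
from importlib import import_module

class Engine:
    def __init__(self, sdl, modules=None):
        for module_path in modules or []:
            # Importing the module triggers the @Resolver / @Scalar / @Directive
            # decorators it contains, registering them with the engine.
            import_module(module_path)
        self._sdl = sdl

engine = Engine(
    sdl="type Query { hello: String }",
    modules=["myapp.resolvers", "myapp.scalars"],  # hypothetical module paths
)
```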
|
closed
|
2019-03-05T14:34:14Z
|
2019-03-07T12:19:25Z
|
https://github.com/tartiflette/tartiflette/issues/140
|
[
"enhancement",
"question"
] |
Maximilien-R
| 4
|
exaloop/codon
|
numpy
| 605
|
cannot import name 'RollingMedian' from 'vec'
|
Here is my `extension.codon` file
```
import collections

class RollingMedian:
    n: int
    data: collections.deque[float]

    def __init__(self, n: int = 10):
        self.n = n
        self.data = collections.deque(maxlen=n)

    def input(self, value: float) -> float:
        self.data.append(value)
        return self.get_median()

    def get_median(self) -> float:
        sorted_data = sorted(self.data)
        mid = len(sorted_data) // 2
        if len(sorted_data) % 2 == 0:
            return (sorted_data[mid - 1] + sorted_data[mid]) / 2.0
        else:
            return sorted_data[mid]
```
I ran `python3 setup.py build_ext --inplace` and got:
```
Found Codon: /Users/river/.codon
running build_ext
/Users/river/.codon/bin/codon build -release --relocation-model=pic -pyext -o build/lib.macosx-11.0-arm64-cpython-310/vec.cpython-310-darwin.so.o -module vec extension.codon
clang -bundle -undefined dynamic_lookup -Wl,-rpath,/Users/river/miniforge3/lib -L/Users/river/miniforge3/lib -Wl,-rpath,/Users/river/miniforge3/lib -L/Users/river/miniforge3/lib -Wl,-rpath,@loader_path build/lib.macosx-11.0-arm64-cpython-310/vec.cpython-310-darwin.so.o -L/Users/river/.codon/lib/codon -Wl,-rpath,/Users/river/.codon/lib/codon -lcodonrt -o build/lib.macosx-11.0-arm64-cpython-310/vec.cpython-310-darwin.so
ld: warning: duplicate -rpath '/Users/river/miniforge3/lib' ignored
ld: warning: object file (/Users/river/Desktop/pyext/build/lib.macosx-11.0-arm64-cpython-310/vec.cpython-310-darwin.so.o) was built for newer 'macOS' version (14.0) than being linked (11.0)
copying build/lib.macosx-11.0-arm64-cpython-310/vec.cpython-310-darwin.so ->
```
Then I tried to `from vec import RollingMedian`, but got:
```
Traceback (most recent call last):
File "/Users/river/Desktop/pyext/test.py", line 1, in <module>
from vec import RollingMedian
ImportError: cannot import name 'RollingMedian' from 'vec' (/Users/river/Desktop/pyext/vec.cpython-310-darwin.so)
```
|
closed
|
2024-11-08T18:51:38Z
|
2024-11-12T05:55:53Z
|
https://github.com/exaloop/codon/issues/605
|
[] |
River-Shi
| 6
|
oegedijk/explainerdashboard
|
dash
| 54
|
ModuleNotFoundError: No module named 'numba.serialize'
|
What is the correct version of the numba package to run `ClassifierExplainer.from_file`? When I try to run the code below, I get the following message: ModuleNotFoundError: No module named 'numba.serialize'. My current version of numba is 0.52.0.
**Attempted code**:
```python
from flask import Flask
from explainerdashboard import ClassifierExplainer, ExplainerDashboard

app = Flask(__name__)
explainer = ClassifierExplainer.from_file("explainer.joblib")
db = ExplainerDashboard(explainer, server=app, url_base_pathname="/dashboard/")

@app.route('/dashboard')
def return_dashboard():
    return db.app.index()

app.run()
```
|
closed
|
2020-12-31T19:21:11Z
|
2021-01-02T20:13:49Z
|
https://github.com/oegedijk/explainerdashboard/issues/54
|
[] |
mvpalheta
| 3
|
tqdm/tqdm
|
jupyter
| 1,398
|
tqdm by default rounds up and reports 100% before actually finishing
|
If there are enough steps in the loop that the increment is small enough, `tqdm` can show `100%` even though the process hasn't finished yet. This is the default behavior, and I think it should be changed so the bar stops at 99% until the loop actually finishes.
MVE:
```python
from time import sleep
from tqdm.cli import tqdm

n = 1000
for idx in tqdm(range(n)):
    if idx == n-1:
        break
    sleep(1/n)
```
<img width="481" alt="image" src="https://user-images.githubusercontent.com/37869250/204011511-c8ed015d-aeea-4382-a0ec-01b690608b06.png">
I know it's possible to change the bar format to show decimals, e.g.
```python
from time import sleep
from tqdm.cli import tqdm

n = 1000
for idx in tqdm(range(n), bar_format="{desc}: {percentage:.1f}%|{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}]"):
    if idx == n-1:
        break
    sleep(1/n)
```
<img width="371" alt="image" src="https://user-images.githubusercontent.com/37869250/204011558-ff7f10af-1434-4119-9654-426d217eb736.png">
but this probably isn't an acceptable solution for everyone.
|
open
|
2022-11-25T15:02:10Z
|
2022-11-25T15:02:20Z
|
https://github.com/tqdm/tqdm/issues/1398
|
[] |
mlubej
| 0
|
dot-agent/nextpy
|
fastapi
| 77
|
Migrate Nextpy to Pydantic v2 for Enhanced Performance and Compatibility
|
It's time to upgrade Nextpy to Pydantic v2. This migration is crucial to leverage the latest performance improvements and ensure compatibility with other libraries that are also moving to Pydantic v2.
### Expected Benefits
- **Performance Improvements**: Pydantic v2 comes with significant enhancements in performance, which can positively impact the overall efficiency of Nextpy.
- **Better Compatibility**: Keeping up with the latest version ensures that Nextpy remains compatible with other tools and libraries in the ecosystem that rely on Pydantic.
- **Access to New Features**: Pydantic v2 introduces new features and improvements, which can be beneficial for future development and feature enhancements in Nextpy.
### Potential Challenges & Blockers
- **Dependencies on Other Libraries**: Some dependencies like `sqlmodel` might have compatibility issues that need to be addressed.
- **Internal API Changes**: Pydantic v2 has made changes to some of its internal APIs (e.g., `ModelField` no longer exists). We need to find suitable alternatives or workarounds for these changes; a small example of the kind of change involved is sketched below.
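As an illustration only (example code, not Nextpy's actual usage): Pydantic v1 exposed per-field metadata as `ModelField` objects under `Model.__fields__`, while v2 exposes `FieldInfo` objects under `Model.model_fields`.
```python
# Illustration of one v1 -> v2 difference (not Nextpy code).
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int = 0

# Pydantic v1 (ModelField objects):
#     for name, model_field in User.__fields__.items():
#         print(name, model_field.outer_type_, model_field.default)

# Pydantic v2 (FieldInfo objects):
for name, field_info in User.model_fields.items():
    print(name, field_info.annotation, field_info.default)
```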
### Call for Contributions
We invite contributors to join in on this upgrade process. Whether you have experience with Pydantic internals or are new to it, your input and help would be valuable.
- If you have experience with Pydantic v2 or its internals, your guidance can help overcome specific challenges.
- For those who are new, this could be a great learning opportunity and a way to contribute significantly to the Nextpy project.
### Progress Tracking
- [ ] Assess the impact of migration on existing codebase
- [ ] Identify and resolve dependency issues with `sqlmodel`
- [ ] Update the Nextpy codebase to adapt to Pydantic v2 API changes
- [ ] Thorough testing to ensure stability post-migration
- [ ] Update documentation to reflect changes
### Collaboration and Updates
- For ongoing discussions, please refer to this thread.
- Contributors working on related tasks are encouraged to share updates and findings here.
- Any significant breakthroughs or challenges can be discussed in follow-up comments.
### Conclusion
Migrating to Pydantic v2 is an important step for the future of Nextpy. It ensures that our framework stays up-to-date with the latest advancements and continues to integrate smoothly within the broader Python ecosystem.
|
open
|
2023-12-13T14:35:20Z
|
2023-12-13T14:36:01Z
|
https://github.com/dot-agent/nextpy/issues/77
|
[
"enhancement",
"help wanted"
] |
anubrag
| 0
|
csu/quora-api
|
web-scraping
| 27
|
GET question/<question> returns only want_answers, topics and answer_count
|
http://quora-api.herokuapp.com/questions/If-space-is-3-dimensional-can-time-also-be-3-dimensional
The answer wiki, question text, and question details are missing.
|
closed
|
2015-01-06T07:10:17Z
|
2015-01-07T02:22:31Z
|
https://github.com/csu/quora-api/issues/27
|
[] |
rohithpr
| 1
|
LAION-AI/Open-Assistant
|
python
| 2,657
|
I have found serious spelling errors in the 'Chat' section, how should I proceed?
|
In the Chat section, where it is possible to interact with different models, I have found serious, childlike spelling mistakes. As an example, here is the following paragraph, with some corrections in bold below it. They are mainly punctuation errors, but also missing accents, never mind style. How should I proceed? I see no way of flagging "this contains errors".
> La escritura del número depende del contexto y de la convención estilística que se siga. En general, los números enteros hasta diez se pueden escribir tanto en numéricos (1-9) como en ordinales (1º -10º). A partir de esa cifra suele ser más común verlos escritos en forma numérica, aunque dependiendo del idioma, su cultura e incluso su país puede variar mucho la forma habitual de representarlo escrito. Volviendo a tu pregunta concreta, tanto 5 como Cinco son válidos para referirse a ese numero y dependerá también del uso que le vaya a dar al texto donde lo intenta incluir para decidir si llevarlo escrito de una u otra manera. Otra opción podría ser quinientos, qué representa otro tipo de números a la vez que mantiene la diferencia entre ambos sistemas aritméticos. Si deseas saber más alternativas para representar alguno de estos números, estaré encantado de ayudarte. Simplemente házmelo saber en este mismo chat :)
> La escritura del número depende del contexto y de la convención estilística que se siga. En general, los números enteros hasta diez se pueden escribir tanto en numéricos (1-9) como en ordinales (1º -10º). A partir de esa cifra**,** suele ser más común verlos escritos en forma numérica, aunque dependiendo del idioma, su cultura e incluso su país puede variar mucho la forma habitual de representarlo escrito. Volviendo a tu pregunta concreta, tanto 5 como **cinco** son válidos para referirse a ese numero y dependerá también del uso que le vaya a dar al texto donde lo intenta incluir para decidir si llevarlo escrito de una u otra manera. Otra opción podría ser quinientos, **que** representa otro tipo de números a la vez que mantiene la diferencia entre ambos sistemas aritméticos. Si deseas saber más alternativas para representar alguno de estos números, estaré encantado de ayudarte. Simplemente**,** házmelo saber en este mismo chat**.** :)
|
closed
|
2023-04-17T10:26:54Z
|
2023-04-29T21:18:04Z
|
https://github.com/LAION-AI/Open-Assistant/issues/2657
|
[] |
Euklidiadas
| 2
|
ploomber/ploomber
|
jupyter
| 323
|
Ploomber scaffold raises PermissionError on Windows & WSL2
|
Hi there,
I suspect this is a Windows permissions issue specific to my laptop, so I fear it may be impossible to reproduce!
Nonetheless, I'm raising it as an issue in case others have the same problem, hopefully with a simple solution.
I installed ploomber via conda on both OSes.
---
On Windows:
```
➜ ploomber scaffold
Enter project name:
* Alphanumeric
* Lowercase
* Underscores allowed
* First character cannot be numeric
Enter project name: x
Traceback (most recent call last):
File "C:\Users\Rowan\mambaforge\envs\urban-atlas\Scripts\ploomber-script.py", line 9, in <module>
sys.exit(cmd_router())
File "C:\Users\Rowan\mambaforge\envs\urban-atlas\lib\site-packages\ploomber\cli\cli.py", line 120, in cmd_router
cli()
File "C:\Users\Rowan\mambaforge\envs\urban-atlas\lib\site-packages\click\core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "C:\Users\Rowan\mambaforge\envs\urban-atlas\lib\site-packages\click\core.py", line 1062, in main
rv = self.invoke(ctx)
File "C:\Users\Rowan\mambaforge\envs\urban-atlas\lib\site-packages\click\core.py", line 1668, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Users\Rowan\mambaforge\envs\urban-atlas\lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\Rowan\mambaforge\envs\urban-atlas\lib\site-packages\click\core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "C:\Users\Rowan\mambaforge\envs\urban-atlas\lib\site-packages\ploomber\cli\cli.py", line 72, in scaffold
scaffold_project.cli(project_path=None,
File "C:\Users\Rowan\mambaforge\envs\urban-atlas\lib\site-packages\ploomber_scaffold\scaffold.py", line 176, in cli
File "C:\Users\Rowan\mambaforge\envs\urban-atlas\lib\contextlib.py", line 117, in __enter__
return next(self.gen) File "C:\Users\Rowan\mambaforge\envs\urban-atlas\lib\importlib\resources.py", line 175, in _path_from_reader
opener_reader = reader.open_resource(norm_resource)
File "<frozen importlib._bootstrap_external>", line 1055, in open_resource
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\Rowan\\mambaforge\\envs\\urban-atlas\\lib\\site-packages\\ploomber_scaffold\\template'
```
```
ploomber 0.12.5
ploomber-scaffold 0.2.2
```
On WSL2:
```
❯ ploomber scaffold
Enter project name:
* Alphanumeric
* Lowercase
* Underscores allowed
* First character cannot be numeric
Enter project name: x
Traceback (most recent call last):
File "/home/wsl-rowanm/mambaforge/envs/urban-atlas/bin/ploomber", line 10, in <module>
sys.exit(cmd_router())
File "/home/wsl-rowanm/mambaforge/envs/urban-atlas/lib/python3.9/site-packages/ploomber/cli/cli.py", line 120, in cmd_router
cli()
File "/home/wsl-rowanm/mambaforge/envs/urban-atlas/lib/python3.9/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/home/wsl-rowanm/mambaforge/envs/urban-atlas/lib/python3.9/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/home/wsl-rowanm/mambaforge/envs/urban-atlas/lib/python3.9/site-packages/click/core.py", line 1668, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/wsl-rowanm/mambaforge/envs/urban-atlas/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/wsl-rowanm/mambaforge/envs/urban-atlas/lib/python3.9/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/home/wsl-rowanm/mambaforge/envs/urban-atlas/lib/python3.9/site-packages/ploomber/cli/cli.py", line 72, in scaffold
scaffold_project.cli(project_path=None,
File "/home/wsl-rowanm/mambaforge/envs/urban-atlas/lib/python3.9/site-packages/ploomber_scaffold/scaffold.py", line 176, in cli
copy_template(project_path, package=package, conda=conda)
File "/home/wsl-rowanm/mambaforge/envs/urban-atlas/lib/python3.9/site-packages/ploomber_scaffold/scaffold.py", line 25, in copy_template
with resources.path(ploomber_scaffold, 'template') as path_to_template:
File "/home/wsl-rowanm/mambaforge/envs/urban-atlas/lib/python3.9/contextlib.py", line 117, in __enter__
return next(self.gen)
File "/home/wsl-rowanm/mambaforge/envs/urban-atlas/lib/python3.9/importlib/resources.py", line 175, in _path_from_reader
opener_reader = reader.open_resource(norm_resource)
File "<frozen importlib._bootstrap_external>", line 1055, in open_resource
IsADirectoryError: [Errno 21] Is a directory: '/home/wsl-rowanm/mambaforge/envs/urban-atlas/lib/python3.9/site-packages/ploomber_scaffold/template'
```
```
ploomber 0.12.5
ploomber-scaffold 0.2.2
```
---
- [x] ploomber-scaffold raises IsADirectoryError on Linux on ``ploomber scaffold``
- [ ] ploomber-scaffold raises PermissionError on Windows on ``ploomber scaffold``
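Not part of my original report, but a minimal sketch of a possible workaround, assuming the failure comes from `importlib.resources.path()` being asked to resolve a directory: `importlib.resources.files()` (Python 3.9+) handles package directories directly.
```python
# Workaround sketch (assumption: the error comes from resources.path() on a
# directory). importlib.resources.files() returns a Traversable that can point
# at the installed template directory without opening a single resource file.
from importlib import resources

import ploomber_scaffold

template_dir = resources.files(ploomber_scaffold) / "template"
print(list(template_dir.iterdir()))  # contents of the packaged template directory
```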
|
closed
|
2021-09-02T13:20:25Z
|
2021-09-02T14:54:18Z
|
https://github.com/ploomber/ploomber/issues/323
|
[] |
rdmolony
| 8
|
Avaiga/taipy
|
automation
| 1,764
|
Change format for editable date and date ranges
|
### Description
The goal would be to answer/extend [this need](https://github.com/Avaiga/taipy/issues/1037) for editable dates as well. This is not only for display purposes: the input order itself should be changeable from MM/DD/YYYY to DD/MM/YYYY, or other formats.
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
- [ ] Check if a new demo could be provided based on this, or if legacy demos could benefit from it.
- [ ] Ensure any change is well documented.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional)
|
closed
|
2024-09-09T12:55:45Z
|
2024-09-27T12:59:58Z
|
https://github.com/Avaiga/taipy/issues/1764
|
[
"🖰 GUI",
"🟩 Priority: Low",
"✨New feature"
] |
FlorianJacta
| 8
|
aiogram/aiogram
|
asyncio
| 1,178
|
types.ChatMemberUpdated Can't parse entities: unsupported start tag "aiogram.types.user.user" at byte offset 1
|
## Context
aiogram.utils.exceptions.CantParseEntities: Can't parse entities: unsupported start tag "aiogram.types.user.user" at byte offset 1
```python
@dp.chat_member_handler(IsGroup())
async def handle_chat_member_updated(message: types.ChatMemberUpdated):
    chat_id = message.chat.id
    user_id = message.from_user.id
    new_chat_member = message.new_chat_member
    old_chat_member = message.old_chat_member
    if new_chat_member:
        # User joined or was added to the chat
        if new_chat_member.user.id == bot.id:
            # Bot was added to the chat
            print(f"Bot joined the chat. Chat ID: {chat_id}")
        else:
            # User joined the chat
            print(f"User {new_chat_member.user.username} (ID: {user_id}) joined the chat. Chat ID: {chat_id}")
    elif old_chat_member:
        # User was removed or left the chat
        if old_chat_member.user.id == bot.id:
            # Bot was removed from the chat
            print(f"Bot was removed from the chat. Chat ID: {chat_id}")
        else:
            # User left the chat
            print(f"User {old_chat_member.user.username} (ID: {user_id}) left the chat. Chat ID: {chat_id}")
```
|
closed
|
2023-05-14T10:25:49Z
|
2023-06-01T18:43:35Z
|
https://github.com/aiogram/aiogram/issues/1178
|
[
"needs triage"
] |
azimxxm
| 3
|
scrapy/scrapy
|
web-scraping
| 6,686
|
MetaContract JSON Decoder
|
### Issue: Custom JSON Decoder
**Description**
I'm encountering an issue with the meta contract when passing it `playwright_page_methods`, which requires a custom JSON decoder.
Is there any way to work around this restriction without writing a custom JSON decoder?
```python
from scrapy_playwright.page import PageMethod

def start_requests(self) -> Iterable[Request]:
    yield scrapy.Request(
        url=self.source,
        callback=self.parse,
        meta={
            "playwright": True,
            "playwright_context_kwargs": {
                "java_script_enabled": False
            },
            "playwright_page_methods": [
                PageMethod("wait_for_selector", "//article[@id='primary-content']")
            ]
        }
    )

def parse(self, response: Response) -> Iterable[Request]:
    """
    @url https://www.miamiherald.com/latest-news/
    @meta {"playwright":true,"playwright_context_kwargs":{"java_script_enabled":false}}
    @returns requests 1
    """
```
|
closed
|
2025-02-20T05:36:25Z
|
2025-02-21T02:27:00Z
|
https://github.com/scrapy/scrapy/issues/6686
|
[
"contracts"
] |
Ehsan-U
| 4
|
seleniumbase/SeleniumBase
|
pytest
| 2,203
|
"Customize Chrome to give your browser a new look" pop-up is now appearing
|
## "Customize Chrome to give your browser a new look" pop-up is now appearing
It must be raining pop-ups today, because earlier I encountered https://github.com/seleniumbase/SeleniumBase/issues/2201, and now I'm encountering this:
<img width="500" alt="Screenshot 2023-10-20 at 2 29 05 PM" src="https://github.com/seleniumbase/SeleniumBase/assets/6788579/f356b2b5-79c6-4030-8335-f85862563f30">
Thanks to the info in https://github.com/GoogleChrome/chrome-launcher/blob/main/docs/chrome-flags-for-tools.md, I can remove this pop-up by using: `--ash-no-nudges` in Chromium options.
It appears that the latest Chromium release has added multiple pop-ups that haven't been seen before.
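For example (a sketch, assuming the flag can be passed through SeleniumBase's `chromium_arg` option; the exact option name is an assumption here, so check the docs):
```python
# Sketch: passing --ash-no-nudges through SeleniumBase (option name assumed).
from seleniumbase import SB

with SB(chromium_arg="--ash-no-nudges") as sb:
    sb.open("https://www.google.com")
```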
|
closed
|
2023-10-20T18:57:49Z
|
2025-02-24T13:26:06Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2203
|
[
"bug"
] |
mdmintz
| 2
|
PokemonGoF/PokemonGo-Bot
|
automation
| 5,851
|
[ERROR] Sentry responded with an error: 'ascii' codec can't decode byte 0x9c in position 1
|
Running the bot on a fresh install with the Niantic API check disabled gives:
[ERROR] Sentry responded with an error: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128) (url: https://app.getsentry.com/api/90254/store/)
Is this related to the API?
Full error:
```
File "pokecli.py", line 856, in <module>
main()
File "pokecli.py", line 195, in main
bot = start_bot(bot, config)
File "pokecli.py", line 147, in start_bot
bot.start()
File "/Users/Me/PokemonGo-Bot/pokemongo_bot/__init__.py", line 149, in start
init_inventory(self)
File "/Users/Me/PokemonGo-Bot/pokemongo_bot/inventory.py", line 1416, in init_inventory
_inventory = Inventory(bot)
File "/Users/Me/PokemonGo-Bot/pokemongo_bot/inventory.py", line 1260, in __init__
self.refresh()
File "/Users/Me/PokemonGo-Bot/pokemongo_bot/inventory.py", line 1270, in refresh
i.refresh(inventory)
File "/Users/Me/PokemonGo-Bot/pokemongo_bot/inventory.py", line 75, in refresh
self._data = self.retrieve_data(inventory)
File "/Users/Me/PokemonGo-Bot/pokemongo_bot/inventory.py", line 71, in retrieve_data
ret[key] = self.parse(item)
File "/Users/Me/PokemonGo-Bot/pokemongo_bot/inventory.py", line 490, in parse
return Pokemon(item)
File "/Users/Me/PokemonGo-Bot/pokemongo_bot/inventory.py", line 991, in __init__
assert max(int(self.cp_exact), 10) == self.cp
AssertionError
[2016-12-26 11:57:54] [sentry.errors] [ERROR] Sentry responded with an error: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128) (url: https://app.getsentry.com/api/90254/store/)
Traceback (most recent call last):
File "/Users/Me/PokemonGo-Bot/lib/python2.7/site-packages/raven/transport/threaded.py", line 174, in send_sync
super(ThreadedHTTPTransport, self).send(data, headers)
File "/Users/Me/PokemonGo-Bot/lib/python2.7/site-packages/raven/transport/http.py", line 47, in send
ca_certs=self.ca_certs,
File "/Users/Me/PokemonGo-Bot/lib/python2.7/site-packages/raven/utils/http.py", line 66, in urlopen
return opener.open(url, data, timeout)
File "/Users/Me/PokemonGo-Bot/lib/python2.7/site-packages/future/backports/urllib/request.py", line 494, in open
response = self._open(req, data)
File "/Users/Me/PokemonGo-Bot/lib/python2.7/site-packages/future/backports/urllib/request.py", line 512, in _open
'_open', req)
File "/Users/Me/PokemonGo-Bot/lib/python2.7/site-packages/future/backports/urllib/request.py", line 466, in _call_chain
result = func(*args)
File "/Users/Me/PokemonGo-Bot/lib/python2.7/site-packages/raven/utils/http.py", line 46, in https_open
return self.do_open(ValidHTTPSConnection, req)
File "/Users/Me/PokemonGo-Bot/lib/python2.7/site-packages/future/backports/urllib/request.py", line 1284, in do_open
h.request(req.get_method(), req.selector, req.data, headers)
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1057, in request
self._send_request(method, url, body, headers)
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1097, in _send_request
self.endheaders(body)
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1053, in endheaders
self._send_output(message_body)
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 895, in _send_output
msg += message_body
UnicodeDecodeError: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128)
[2016-12-26 11:57:54] [sentry.errors.uncaught] [ERROR] [u'AssertionError', u' File "pokecli.py", line 856, in <module>', u' File "pokecli.py", line 195, in main', u' File "pokecli.py", line 147, in start_bot', u' File "pokemongo_bot/__init__.py", line 149, in start', u' File "pokemongo_bot/inventory.py", line 1416, in init_inventory', u' File "pokemongo_bot/inventory.py", line 1260, in __init__', u' File "pokemongo_bot/inventory.py", line 1270, in refresh', u' File "pokemongo_bot/inventory.py", line 75, in refresh', u' File "pokemongo_bot/inventory.py", line 71, in retrieve_data', u' File "pokemongo_bot/inventory.py", line 490, in parse', u' File "pokemongo_bot/inventory.py", line 991, in __init__']
ma 26 dec 2016 11:57:54 CET Pokebot Stopped.
Press any button or wait 20 seconds to continue.
```
|
closed
|
2016-12-26T10:59:47Z
|
2017-01-14T12:56:01Z
|
https://github.com/PokemonGoF/PokemonGo-Bot/issues/5851
|
[] |
helderdb
| 1
|
ipyflow/ipyflow
|
jupyter
| 12
|
support class definitions
|
closed
|
2020-04-30T15:12:49Z
|
2020-05-07T04:25:45Z
|
https://github.com/ipyflow/ipyflow/issues/12
|
[] |
smacke
| 0
|
|
huggingface/datasets
|
pandas
| 6,690
|
Add function to convert a script-dataset to Parquet
|
Add function to convert a script-dataset to Parquet and push it to the Hub, analogously to the Space: "Convert a Hugging Face dataset to Parquet"
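For reference, the manual equivalent today is roughly the following (sketch, not the proposed API; dataset names are placeholders): load the script-based dataset and call `push_to_hub`, which uploads it as Parquet shards.
```python
# Rough manual equivalent of the requested helper (placeholder names).
from datasets import load_dataset

ds = load_dataset("username/some_script_dataset")        # script-based dataset
ds.push_to_hub("username/some_script_dataset-parquet")   # stored as Parquet on the Hub
```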
|
closed
|
2024-02-23T10:28:20Z
|
2024-04-12T15:27:05Z
|
https://github.com/huggingface/datasets/issues/6690
|
[
"enhancement"
] |
albertvillanova
| 0
|
jina-ai/serve
|
deep-learning
| 5,729
|
docs: create new sub-section or relocate retries section
|
Currently the [retry handling section](https://docs.jina.ai/concepts/client/callbacks/#transient-fault-handling-with-retries) is placed under the `Callbacks` sub-section, which is not the right location.
|
closed
|
2023-03-01T08:35:34Z
|
2023-06-15T00:19:46Z
|
https://github.com/jina-ai/serve/issues/5729
|
[
"Stale"
] |
girishc13
| 1
|
alteryx/featuretools
|
scikit-learn
| 1,922
|
The usage of custom feature primitives in `calculate_feature_matrix`
|
Hello, guys!
My issue is the following: I have done some research and created some custom primitives for my ML task, successfully used them with `dfs`, and saved the feature definitions with `save_features`. But then I wanted to reuse those custom primitives with the `calculate_feature_matrix` function. However, when I try to load `feature_defs` back, I get the following error:
`RuntimeError: Primitive "LastTime" in module "__main__" not found`, where `LastTime` is my custom primitive's name.
This error is understandable, but I still haven't found any information about how to add my custom primitives to the scope of primitives available to featuretools. Looking forward to your help!
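Maybe something along these lines would work (just a guess on my part; `custom_primitives` is a hypothetical module name): define the primitive in an importable module instead of `__main__`, and import it before loading the saved features.
```python
# Guess / sketch: keep the primitive importable outside __main__ so that the
# saved feature definitions can resolve "LastTime" when they are loaded back.
import custom_primitives  # noqa: F401  hypothetical module defining LastTime
from featuretools import load_features, calculate_feature_matrix

feature_defs = load_features("feature_defs.json")
feature_matrix = calculate_feature_matrix(
    features=feature_defs,
    entityset=es,  # the same EntitySet used when the features were created
)
```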
|
closed
|
2022-02-22T13:40:07Z
|
2022-02-28T15:23:10Z
|
https://github.com/alteryx/featuretools/issues/1922
|
[
"documentation"
] |
VGODIE
| 2
|
mwaskom/seaborn
|
matplotlib
| 3,475
|
This is the Excel plot; the labels are perfectly aligned under the bars
|

|
closed
|
2023-09-15T17:14:18Z
|
2023-09-15T21:41:15Z
|
https://github.com/mwaskom/seaborn/issues/3475
|
[] |
Utsav-2301
| 0
|
JaidedAI/EasyOCR
|
pytorch
| 1,292
|
How do I control the maximum and minimum font size to be read?
|
I have an image like this

It always reads it as @61.23.
I just want it to read 61.23.
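In case it helps frame the question: I could post-filter the results by bounding-box height as a proxy for font size (sketch below, thresholds made up), but a built-in option would be nicer.
```python
# Sketch: filter EasyOCR detections by bounding-box height (a proxy for font size).
import easyocr

reader = easyocr.Reader(["en"])
results = reader.readtext("image.png")  # list of (bbox, text, confidence)

min_height, max_height = 20, 80  # pixel thresholds, made up for illustration
for bbox, text, confidence in results:
    ys = [point[1] for point in bbox]
    if min_height <= (max(ys) - min(ys)) <= max_height:
        print(text)
```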
|
open
|
2024-08-07T06:22:34Z
|
2024-10-10T05:52:33Z
|
https://github.com/JaidedAI/EasyOCR/issues/1292
|
[] |
ishandutta2007
| 2
|
apify/crawlee-python
|
web-scraping
| 480
|
Create a new guide for session management
|
- We should create a new documentation guide on how to work with sessions (`SessionPool`).
- Inspiration: https://crawlee.dev/docs/guides/session-management
|
closed
|
2024-08-30T12:04:41Z
|
2025-01-06T13:59:14Z
|
https://github.com/apify/crawlee-python/issues/480
|
[
"documentation",
"t-tooling"
] |
vdusek
| 2
|
vitalik/django-ninja
|
pydantic
| 1,410
|
Avoid magic in DjangoGetter for FieldFile
|
I would like to create a custom schema for `FieldFile` that returns something like
```
{
    name: "...",
    url: "...",            // content-disposition: inline
    download_url: "...",   // content-disposition: attachment
    content_type: "...",   // guessed based on extension
}
```
I would expect this not to be a problem: just define a schema for the `FileField` property. However, `DjangoGetter` automatically converts the result to `result.url`, meaning you lose the original `FieldFile` and are stuck with only the url.
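For illustration, the schema I would like to be able to write is roughly this (hypothetical, and exactly what the `DjangoGetter` conversion currently prevents, since the resolvers never see the `FieldFile`):
```python
# Hypothetical target schema (illustration only; DjangoGetter hands the schema
# fieldfile.url instead of the FieldFile, so the resolvers below can't work as-is).
import mimetypes
from ninja import Schema

class FieldFileOut(Schema):
    name: str
    url: str
    download_url: str
    content_type: str

    @staticmethod
    def resolve_download_url(obj) -> str:
        # e.g. a URL served with Content-Disposition: attachment
        return f"{obj.url}?download=1"

    @staticmethod
    def resolve_content_type(obj) -> str:
        return mimetypes.guess_type(obj.name)[0] or "application/octet-stream"
```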
|
open
|
2025-02-16T21:29:58Z
|
2025-02-16T21:29:58Z
|
https://github.com/vitalik/django-ninja/issues/1410
|
[] |
jakajancar
| 0
|
huggingface/datasets
|
tensorflow
| 6,451
|
Unable to read "marsyas/gtzan" data
|
Hi, this is my code and the error:
```
from datasets import load_dataset
gtzan = load_dataset("marsyas/gtzan", "all")
```
[error_trace.txt](https://github.com/huggingface/datasets/files/13464397/error_trace.txt)
[audio_yml.txt](https://github.com/huggingface/datasets/files/13464410/audio_yml.txt)
Python 3.11.5
Jupyter Notebook 6.5.4
Windows 10
I'm able to download and work with other datasets, but not this one. For example, both these below work fine:
```
from datasets import load_dataset
dataset = load_dataset("facebook/voxpopuli", "pl", split="train", streaming=True)
minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
Thanks for your help
https://huggingface.co/datasets/marsyas/gtzan/tree/main
|
closed
|
2023-11-25T15:13:17Z
|
2023-12-01T12:53:46Z
|
https://github.com/huggingface/datasets/issues/6451
|
[] |
gerald-wrona
| 3
|
voila-dashboards/voila
|
jupyter
| 664
|
Whitelisting favicon
|
```
WARNING:tornado.general:403 GET /voila/files/favicon.ico (::1): File not whitelisted
```
|
open
|
2020-07-29T17:38:29Z
|
2025-01-07T17:51:50Z
|
https://github.com/voila-dashboards/voila/issues/664
|
[] |
SylvainCorlay
| 4
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 518
|
Error for InferenceModel and its `trunk` parameter
|
## version
- pytorch-metric-learning 1.5.2
- python 3.8.9
- OS: ubuntu 18.04 LTS
- CUDA Version 10.1.243
- faiss-cpu 1.7.1.post2
- faiss-gpu 1.7.2
## Description
After executing all cells in `MetricLossOnly.ipynb`, I got some model weights.
- `embedder_optimizer_best1.pth`
- `trunk_optimizer_best1.pth`
There is also an `Inference.ipynb`, and I want to combine them. Then I thought: what should I specify for the `trunk` parameter?
```
inference_model = InferenceModel(
    trunk=trunk,
    match_finder=match_finder,
    data_device="cpu",
)
```
`MetricLossOnly.ipynb` defines separate `trunk` and `embedder` models and puts them into a model dict as `models = {"trunk": trunk, "embedder": embedder}`. In this situation, when defining the `InferenceModel`, it seems one should pass `models` instead of `trunk`, because the trunk alone lacks the embedder phase.
When training the knn model, I got an error.
It says `'dict' object has no attribute 'eval'`.
```
inference_model.train_knn(test_dataset)
```
> ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-9-1d9cd8e901b0> in <module>
1 # create faiss index
----> 2 inference_model.train_knn(test_dataset)
~/.pyenv/versions/3.8.9/lib/python3.8/site-packages/pytorch_metric_learning/utils/inference.py in train_knn(self, inputs, batch_size)
97
98 def train_knn(self, inputs, batch_size=64):
---> 99 self.call_knn(self.knn_func.train, inputs, batch_size)
100
101 def add_to_knn(self, inputs, batch_size=64):
~/.pyenv/versions/3.8.9/lib/python3.8/site-packages/pytorch_metric_learning/utils/inference.py in call_knn(self, func, inputs, batch_size)
103
104 def call_knn(self, func, inputs, batch_size):
--> 105 embeddings = self.get_embeddings_from_tensor_or_dataset(inputs, batch_size)
106 func(embeddings)
107
~/.pyenv/versions/3.8.9/lib/python3.8/site-packages/pytorch_metric_learning/utils/inference.py in get_embeddings_from_tensor_or_dataset(self, inputs, batch_size)
91 dataloader = torch.utils.data.DataLoader(inputs, batch_size=batch_size)
92 for inp, _ in dataloader:
---> 93 embeddings.append(self.get_embeddings(inp))
94 else:
95 raise TypeError(f"Indexing {type(inputs)} is not supported.")
~/.pyenv/versions/3.8.9/lib/python3.8/site-packages/pytorch_metric_learning/utils/inference.py in get_embeddings(self, x)
114 if isinstance(x, torch.Tensor):
115 x = c_f.to_device(x, device=self.data_device, dtype=self.dtype)
--> 116 self.trunk.eval()
117 self.embedder.eval()
118 with torch.no_grad():
AttributeError: 'dict' object has no attribute 'eval'
How can I avoid this error?
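A sketch of what I imagine could work (assuming inference should run the trunk followed by the embedder): wrap both models in one `torch.nn.Sequential` and pass that as `trunk` instead of the dict.
```python
# Sketch (assumption): give InferenceModel a single nn.Module instead of the
# models dict, since a dict has no .eval(); trunk/embedder come from the notebook.
import torch
from pytorch_metric_learning.utils.inference import InferenceModel

combined_model = torch.nn.Sequential(trunk, embedder)
inference_model = InferenceModel(
    trunk=combined_model,
    match_finder=match_finder,
    data_device="cpu",
)
inference_model.train_knn(test_dataset)
```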
|
closed
|
2022-08-25T08:47:18Z
|
2022-08-25T13:58:22Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/518
|
[
"question"
] |
makkimaki
| 1
|
ets-labs/python-dependency-injector
|
asyncio
| 178
|
Create DependenciesContainer provider
|
Based on the conversation in #177, there is a need to create a ``DependenciesContainer`` provider.
Example:
```python
# Example code
class Adapters(containers.DeclarativeContainer):
    email_sender = providers.Singleton(SmtpEmailSender)

class TestAdapters(containers.DeclarativeContainer):
    email_sender = providers.Singleton(EchoEmailSender)

class UseCases(containers.DeclarativeContainer):
    adapters = providers.DependenciesContainer()  # Want to use container as dependency
    signup = providers.Factory(SignupUseCase, email_sender=adapters.email_sender)

use_cases = UseCases(adapters=Adapters)
# or
use_cases = UseCases(adapters=TestAdapters)

# Another file, views.py
from .containers import use_cases

use_case = use_cases.signup()
use_case.execute()
```
|
closed
|
2018-01-17T13:42:59Z
|
2018-01-27T05:52:39Z
|
https://github.com/ets-labs/python-dependency-injector/issues/178
|
[
"feature"
] |
rmk135
| 5
|
PaddlePaddle/ERNIE
|
nlp
| 286
|
GPU Memory error while running script/zh_task/ernie_base/run_drcd.sh
|
I was trying to develop a custom machine comprehension model on SQuAD v1 data by running `script/zh_task/ernie_base/run_drcd.sh`, and encountered the following error. Any help would be appreciated.
```
+ export FLAGS_eager_delete_tensor_gb=0
+ FLAGS_eager_delete_tensor_gb=0
+ export FLAGS_sync_nccl_allreduce=1
+ FLAGS_sync_nccl_allreduce=1
+ export CUDA_VISIBLE_DEVICES=0,1,2,3
+ CUDA_VISIBLE_DEVICES=0,1,2,3
+ python -u run_mrc.py --use_cuda true --train_set /home/ubuntu/cibin/squad_v1_1__data/train.json --batch_size 16 --in_tokens false --use_fast_executor true --checkpoints ./checkpoints --vocab_path /home/ubuntu/cibin/ERNIE/pretrained_model/vocab.txt --ernie_config_path /home/ubuntu/cibin/ERNIE/pretrained_model/ernie_config.json --do_train true --do_val true --do_test true --verbose true --save_steps 1000 --validation_steps 100 --warmup_proportion 0.0 --weight_decay 0.01 --epoch 2 --max_seq_len 512 --do_lower_case true --doc_stride 128 --dev_set /home/ubuntu/cibin/squad_v1_1__data/dev.json --test_set /home/ubuntu/cibin/squad_v1_1__data/test.json --learning_rate 5e-5 --num_iteration_per_drop_scope 1 --init_pretraining_params /home/ubuntu/cibin/ERNIE/pretrained_model/params --skip_steps 10
attention_probs_dropout_prob: 0.1
hidden_act: gelu
hidden_dropout_prob: 0.1
hidden_size: 768
initializer_range: 0.02
max_position_embeddings: 512
num_attention_heads: 12
num_hidden_layers: 12
sent_type_vocab_size: 4
task_type_vocab_size: 16
vocab_size: 30522
------------------------------------------------
Device count: 4
Num train examples: 1483
Max train steps: 46
Num warmup steps: 0
memory_optimize is deprecated. Use CompiledProgram and Executor
Theoretical memory usage in training: 13971.085 - 14636.375 MB
W0819 05:09:46.604622 511 device_context.cc:259] Please NOTE: device: 0, CUDA Capability: 61, Driver API Version: 10.1, Runtime API Version: 10.0
W0819 05:09:46.606772 511 device_context.cc:267] device: 0, cuDNN Version: 7.6.
Load pretraining parameters from /home/ubuntu/cibin/libor/github/ERNIE/pretrained_model/params.
I0819 05:09:49.962049 511 parallel_executor.cc:329] The number of CUDAPlace, which is used in ParallelExecutor, is 4. And the Program will be copied 4 copies
I0819 05:09:51.959648 511 build_strategy.cc:340] SeqOnlyAllReduceOps:0, num_trainers:1
W0819 05:09:55.706979 569 system_allocator.cc:121] Cannot malloc 9770.01 MB GPU memory. Please shrink FLAGS_fraction_of_gpu_memory_to_use or FLAGS_initial_gpu_memory_in_mb or FLAGS_reallocate_gpu_memory_in_mbenvironment variable to a lower value. Current FLAGS_fraction_of_gpu_memory_to_use value is 0.92. Current FLAGS_initial_gpu_memory_in_mb value is 0. Current FLAGS_reallocate_gpu_memory_in_mb value is 0
F0819 05:09:55.707295 569 legacy_allocator.cc:201] Cannot allocate 139.869873MB in GPU 1, available 648.500000MBtotal 11721506816GpuMinChunkSize 256.000000BGpuMaxChunkSize 9.541025GBGPU memory used: 9.507799GB
*** Check failure stack trace: ***
@ 0x7f4d748e639d google::LogMessage::Fail()
@ 0x7f4d748e9e4c google::LogMessage::SendToLog()
@ 0x7f4d748e5ec3 google::LogMessage::Flush()
@ 0x7f4d748eb35e google::LogMessageFatal::~LogMessageFatal()
@ 0x7f4d768c77d4 paddle::memory::legacy::Alloc<>()
@ 0x7f4d768c7ab5 paddle::memory::allocation::LegacyAllocator::AllocateImpl()
@ 0x7f4d768bbbd5 paddle::memory::allocation::AllocatorFacade::Alloc()
@ 0x7f4d768bbd5a paddle::memory::allocation::AllocatorFacade::AllocShared()
@ 0x7f4d764b489c paddle::memory::AllocShared()
@ 0x7f4d7688d924 paddle::framework::Tensor::mutable_data()
@ 0x7f4d74b90ba5 paddle::operators::MatMulGradKernel<>::MatMul()
@ 0x7f4d74b90e1f paddle::operators::MatMulGradKernel<>::CalcInputGrad()
@ 0x7f4d74b912e7 paddle::operators::MatMulGradKernel<>::Compute()
@ 0x7f4d74b916f3 _ZNSt17_Function_handlerIFvRKN6paddle9framework16ExecutionContextEEZNKS1_24OpKernelRegistrarFunctorINS0_8platform9CUDAPlaceELb0ELm0EINS0_9operators16MatMulGradKernelINS7_17CUDADeviceContextEfEENSA_ISB_dEENSA_ISB_NS7_7float16EEEEEclEPKcSI_iEUlS4_E_E9_M_invokeERKSt9_Any_dataS4_
@ 0x7f4d7682f187 paddle::framework::OperatorWithKernel::RunImpl()
@ 0x7f4d7682f561 paddle::framework::OperatorWithKernel::RunImpl()
@ 0x7f4d7682cb5c paddle::framework::OperatorBase::Run()
@ 0x7f4d7662805a paddle::framework::details::ComputationOpHandle::RunImpl()
@ 0x7f4d7661aa00 paddle::framework::details::OpHandleBase::Run()
@ 0x7f4d765fbd76 paddle::framework::details::FastThreadedSSAGraphExecutor::RunOpSync()
@ 0x7f4d765fa9df paddle::framework::details::FastThreadedSSAGraphExecutor::RunOp()
@ 0x7f4d765fad9f _ZNSt17_Function_handlerIFvvESt17reference_wrapperISt12_Bind_simpleIFS1_ISt5_BindIFZN6paddle9framework7details28FastThreadedSSAGraphExecutor10RunOpAsyncEPSt13unordered_mapIPNS6_12OpHandleBaseESt6atomicIiESt4hashISA_ESt8equal_toISA_ESaISt4pairIKSA_SC_EEESA_RKSt10shared_ptrINS5_13BlockingQueueImEEEEUlvE_vEEEvEEEE9_M_invokeERKSt9_Any_data
@ 0x7f4d749d3b53 std::_Function_handler<>::_M_invoke()
@ 0x7f4d74869c47 std::__future_base::_State_base::_M_do_set()
@ 0x7f4dc650da99 __pthread_once_slow
@ 0x7f4d765f6422 _ZNSt13__future_base11_Task_stateISt5_BindIFZN6paddle9framework7details28FastThreadedSSAGraphExecutor10RunOpAsyncEPSt13unordered_mapIPNS4_12OpHandleBaseESt6atomicIiESt4hashIS8_ESt8equal_toIS8_ESaISt4pairIKS8_SA_EEES8_RKSt10shared_ptrINS3_13BlockingQueueImEEEEUlvE_vEESaIiEFvvEE6_M_runEv
@ 0x7f4d7486b1c4 _ZZN10ThreadPoolC1EmENKUlvE_clEv
@ 0x7f4da839cc80 (unknown)
@ 0x7f4dc65066ba start_thread
@ 0x7f4dc623c41d clone
@ (nil) (unknown)
script/zh_task/ernie_base/run_drcd.sh: line 50: 511 Aborted (core dumped) python -u run_mrc.py --use_cuda true --train_set ${TASK_DATA_PATH1}/train.json --batch_size 16 --in_tokens false --use_fast_executor true --checkpoints ./checkpoints --vocab_path ${MODEL_PATH}/vocab.txt --ernie_config_path ${MODEL_PATH}/ernie_config.json --do_train true --do_val true --do_test true --verbose true --save_steps 1000 --validation_steps 100 --warmup_proportion 0.0 --weight_decay 0.01 --epoch 2 --max_seq_len 512 --do_lower_case true --doc_stride 128 --dev_set ${TASK_DATA_PATH}/dev.json --test_set ${TASK_DATA_PATH}/test.json --learning_rate 5e-5 --num_iteration_per_drop_scope 1 --init_pretraining_params ${MODEL_PATH}/params --skip_steps 10
```
|
closed
|
2019-08-19T06:23:22Z
|
2019-08-23T06:33:00Z
|
https://github.com/PaddlePaddle/ERNIE/issues/286
|
[] |
cibinjohn
| 2
|
kaliiiiiiiiii/Selenium-Driverless
|
web-scraping
| 35
|
detected with RDP
|
When I used Selenium on my own device it was undetected, but when I used it over RDP it was detected. When I click the checkbox manually it passes, though. Is there any way to make the checkbox pass automatically when it fails?
And thanks for your efforts.
|
closed
|
2023-08-20T07:48:17Z
|
2023-08-20T14:37:14Z
|
https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/35
|
[] |
DeltaDarkness
| 3
|
InstaPy/InstaPy
|
automation
| 6,619
|
Got an issue: AttributeError: 'NoneType' object has no attribute 'get'
|
Here's the full console output:
```
PS C:\Users\a\Desktop> py bot.py
InstaPy Version: 0.6.16
.. .. .. .. .. .. .. .. ._.
Workspace in use: "C:/Users/a/InstaPy"
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
INFO [2022-06-11 19:23:28] [itayabergell] Session started!
oooooooooooooooooooooooooooooooooooooooooooooooooooooo
INFO [2022-06-11 19:23:36] [i] - Cookie file for user '...' loaded...
..................................................................
INFO [2022-06-11 19:24:03] Logged in successfully!
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
INFO [2022-06-11 19:24:03] [i] Saving account progress...
INFO [2022-06-11 19:24:56] [ Failed to get followers count of 'b'...'' ~empty list
INFO [2022-06-11 19:25:46] [] Failed to get following count of 'b'...'' ~empty list
WARNING [2022-06-11 19:25:46] [...] Unable to save account progress, skipping data update
b"'NoneType' object has no attribute 'get'"
Traceback (most recent call last):
File "C:\Users\Desktop\bot.py", line 9, in
session.login()
File "C:\Users\AppData\Local\Programs\Python\Python310\lib\site-packages\instapy\instapy.py", line 475, in login
self.followed_by = log_follower_num(self.browser, self.username, self.logfolder)
File "C:\Usersי\AppData\Local\Programs\Python\Python310\lib\site-packages\instapy\print_log_writer.py", line 21, in log_follower_num
followed_by = getUserData("graphql.user.edge_followed_by.count", browser)
File "C:\Users\AppData\Local\Programs\Python\Python310\lib\site-packages\instapy\util.py", line 501, in getUserData
get_key = shared_data.get("entry_data").get("ProfilePage")
AttributeError: 'NoneType' object has no attribute 'get'
PS C:\Users\a\Desktop>
```
|
open
|
2022-06-11T18:34:40Z
|
2024-10-15T12:23:57Z
|
https://github.com/InstaPy/InstaPy/issues/6619
|
[] |
xProSen
| 8
|
davidsandberg/facenet
|
tensorflow
| 416
|
An error happens when the model is loading
|
```
Model directory: /home/zts/PycharmProjects/facenet/models/facenet/20170512-110547
Metagraph file: model-20170512-110547.meta
Checkpoint file: model-20170512-110547.ckpt-250000
Traceback (most recent call last):
File "src/validate_on_lfw.py", line 113, in <module>
main(parse_arguments(sys.argv[1:]))
File "src/validate_on_lfw.py", line 57, in main
facenet.load_model(args.model)
File "/home/zts/PycharmProjects/facenet/src/facenet.py", line 388, in load_model
saver = tf.train.import_meta_graph(os.path.join(model_exp, meta_file))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1526, in import_meta_graph
**kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/meta_graph.py", line 502, in import_scoped_meta_graph
producer_op_list=producer_op_list)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/importer.py", line 258, in import_graph_def
op_def = op_dict[node.op]
KeyError: u'VariableV2'
```
|
closed
|
2017-08-11T08:54:33Z
|
2017-10-21T10:04:32Z
|
https://github.com/davidsandberg/facenet/issues/416
|
[] |
Paleve
| 1
|
satwikkansal/wtfpython
|
python
| 66
|
Ebook version?
|
Hey guys,
I was preparing a PDF of the collection when a friend of mine recommended putting in some extra effort to release an ebook version as well.
I have absolutely no experience with this, and I wonder whether it would be overkill. That's why I've opened this issue: maybe someone with prior experience can help me figure out what to do and whether it actually makes sense 😅
|
open
|
2018-01-30T19:45:13Z
|
2024-10-17T06:41:17Z
|
https://github.com/satwikkansal/wtfpython/issues/66
|
[] |
satwikkansal
| 11
|
pydantic/logfire
|
fastapi
| 850
|
weird behaviour with uvicorn
|
### Description
If I run uvicorn normally (e.g. `uv run uvicorn app:app`), I see two logs, one from logfire and one from uvicorn
```
16:11:29.487 GET /
INFO: 172.17.0.1:59302 - "GET / HTTP/1.1" 404 Not Found
```
If I run uvicorn with `--log-level warning`, then I see neither. This seems wrong.
### Python, Logfire & OS Versions, related packages (not required)
```TOML
logfire="3.5.3"
platform="macOS-14.6.1-arm64-arm-64bit-Mach-O"
python="3.13.0 (main, Oct 16 2024, 08:05:40) [Clang 18.1.8 ]"
[related_packages]
requests="2.32.3"
pydantic="2.10.6"
protobuf="5.29.3"
rich="13.9.4"
executing="2.2.0"
opentelemetry-api="1.30.0"
opentelemetry-exporter-otlp-proto-common="1.30.0"
opentelemetry-exporter-otlp-proto-http="1.30.0"
opentelemetry-instrumentation="0.51b0"
opentelemetry-instrumentation-asgi="0.51b0"
opentelemetry-instrumentation-httpx="0.51b0"
opentelemetry-instrumentation-starlette="0.51b0"
opentelemetry-instrumentation-system-metrics="0.51b0"
opentelemetry-proto="1.30.0"
opentelemetry-sdk="1.30.0"
opentelemetry-semantic-conventions="0.51b0"
opentelemetry-util-http="0.51b0"
```
|
open
|
2025-02-08T16:16:14Z
|
2025-02-09T08:48:01Z
|
https://github.com/pydantic/logfire/issues/850
|
[] |
samuelcolvin
| 1
|
litestar-org/litestar
|
pydantic
| 3,249
|
Enhancement: Add plugin for grafana loki
|
### Summary
[Grafana Loki](https://grafana.com/docs/loki/latest/) is a compact log aggregator. Logs of different sources (files, streams, etc.) are ingested by a client (e.g., [Grafana Promtail](https://grafana.com/docs/loki/latest/send-data/promtail/)) and pushed to the Loki API. The logs are indexed by labels, not content. Loki stores the data as chunks on the local file system or an S3 bucket. The stored data can be visualized via Grafana or pulled from the Loki API for use in a different application.
I played around with this as a weekend project to look for a simpler alternative to the ELK stack. There are already multiple agent solutions to ingest logs from k8s pods or docker compose services but I wanted to learn more about how the API works and how I can use it for custom logging of timeseries data.
My question is whether it would be desirable to add this to the litestar library as a `contrib` package. Since the client would not add any new dependencies, it would be just another "nice to have".
### Basic Example
I wrote a Loki client that can manage a queue to send messages concurrently to the main workflow (to not block the route handler). It also offers a method to send a message immediately instead of queueing it. I wrote a minimal litestar demo that showcases its use. I also appended my docker compose file that I used to set up a Grafana and Grafana loki instance.
Note: I use loguru here for convenience; it is not essential to the client.
**Loki Client**
```python
import asyncio
import copy
from datetime import datetime
import time
import threading
from dataclasses import dataclass, field, InitVar
from contextlib import asynccontextmanager

import httpx
from loguru import logger
from litestar import Litestar
from litestar.config.app import AppConfig
from litestar.di import Provide
from litestar.plugins import InitPluginProtocol


@dataclass
class LokiValue:
    """Container for a timestamped log line."""
    time_value: InitVar[int | datetime]
    line: str
    time_ns: str = field(init=False)

    def __post_init__(self, time_value: int | datetime):
        if isinstance(time_value, int):
            self.time_ns = str(time_value)
        else:
            self.time_ns = str(time_value.timestamp() * 1e9)

    def loki_format(self) -> list[str]:
        data = [self.time_ns, self.line]
        return data

    @classmethod
    def timestamped_line(cls, line: str):
        return cls(
            time_value=time.time_ns(),
            line=line,
        )


@dataclass
class LokiMessage:
    """Container for a full Loki message including labels and one or more values"""
    labels: dict[str, str]
    values: list[LokiValue]

    def as_payload(self):
        data = {
            "streams": [{
                "stream": self.labels,
                "values": [v.loki_format() for v in self.values]
            }]
        }
        return data


@dataclass
class LokiMessageTemplate:
    """Convenience class to pre-define a set of labels and use it to create Loki messages ad-hoc
    without needing to set the labels again and again.
    """
    labels: dict[str, str]

    def to_message(self, values: list[LokiValue]) -> LokiMessage:
        return LokiMessage(
            labels=self.labels,
            values=values
        )

    def timestamped_message(self, line: str, extra_labels: dict[str, str] = None):
        value = LokiValue.timestamped_line(line)
        if extra_labels:
            labels = copy.copy(self.labels)
            labels.update(extra_labels)
            return LokiMessage(
                labels=labels,
                values=[value]
            )
        else:
            return self.to_message([value])


class LokiQueueHandler:
    def __init__(
        self,
        address: str = "http://localhost:3100",
        *,
        extra_labels: dict | None = None,
    ):
        """Async loki client that pushes messages from a queue or sends a message immediately on request."""
        self._address = address
        self._endpoint = "/loki/api/v1/push"
        self._full_address = self._address + self._endpoint
        self._headers = {'Content-type': 'application/json'}
        self._extra_labels = extra_labels
        self._queue: asyncio.Queue[LokiMessage] = asyncio.Queue()
        self._queue_run = threading.Event()
        self._queue_run.set()

    def put_nowait(self, msg: LokiMessage):
        """Place a message in the message queue that will be sent in the near future."""
        self._queue.put_nowait(msg)
        logger.debug("Message put in queue")

    async def write(self, msg: LokiMessage):
        """Send a message immediately."""
        await self._send_message(msg)

    async def _send_message(self, msg: LokiMessage):
        """Send a message to the loki endpoint. Bypasses the queue."""
        async with httpx.AsyncClient(base_url=self._address, headers=self._headers) as client:
            r = await client.post(url=self._endpoint, json=msg.as_payload())
            logger.info(f"message sent. {r}. {r.text}")

    async def handle_queue(self):
        """While the event is set, fetch items from the queue and send them to the Loki endpoint."""
        logger.info("Queue started")
        while self._queue_run.wait(0):
            try:
                msg = self._queue.get_nowait()
            except asyncio.QueueEmpty:
                await asyncio.sleep(1)
            else:
                logger.info("Received message from queue")
                await self._send_message(msg)
        logger.info("Queue ended.")


class LokiQueuePlugin(LokiQueueHandler, InitPluginProtocol):
    """Registers a lifespan that runs the Loki queue. Adds a dependency to the app so that any route handler
    can send messages to loki or put them in the queue."""

    @asynccontextmanager
    async def lifespan(self, _: Litestar):
        task = asyncio.create_task(self.handle_queue())
        yield
        self._queue_run.clear()
        try:
            await task
        except Exception:
            pass

    def get_handle(self):
        return self

    def on_app_init(self, app_config: AppConfig) -> AppConfig:
        app_config.dependencies["loki"] = Provide(self.get_handle, sync_to_thread=True)
        app_config.lifespan.append(self.lifespan)
        return app_config
```
**Litestar App**
```python
from litestar import Litestar, get
from loki_queue import LokiQueuePlugin, LokiMessageTemplate

loki_plugin = LokiQueuePlugin()
template = LokiMessageTemplate(
    {"app": "litestar_demo"}
)

@get("/")
async def home(loki: LokiQueuePlugin) -> str:
    message = template.timestamped_message("Hello World")
    loki.put_nowait(message)
    return "Loki message triggered."

@get("/user/{user:str}")
async def home_user(user: str, loki: LokiQueuePlugin) -> str:
    message = template.timestamped_message(f"Hello {user}", extra_labels={"user": user})
    loki.put_nowait(message)
    return f"Loki message triggered by user {user}."

app = Litestar(
    route_handlers=[
        home,
        home_user
    ],
    plugins=[loki_plugin]
)
```
**docker-compose.yml**
```yaml
version: "2"

services:
  grafana:
    image: grafana/grafana
    container_name: grafana
    restart: unless-stopped
    ports:
      - '3000:3000'
    volumes:
      - grafana-storage:/var/lib/grafana
    environment:
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin

  loki:
    image: grafana/loki
    container_name: loki
    restart: unless-stopped
    volumes:
      - ./config/loki_config.yaml:/etc/loki/local-config.yaml
      - loki-storage:/loki
    ports:
      - '3100:3100'
    command: -config.file=/etc/loki/local-config.yaml

volumes:
  grafana-storage: {}
  loki-storage: {}
```
### Drawbacks and Impact
I just started learning about Grafana Loki, so I'm not 100% sure if this is the best use of it. Maybe it's better just to log and ingest everything. On the other hand, while it seems easy to get a basic setup running, even with one of the other agents, it might take longer to understand all the different configuration details and options.
Using this plugin as a starter seems way simpler to me, since it's more "what you see is what you get".
### Unresolved questions
The example provided is my first iteration and can use some improvement. E.g., it's probably better to use Pydantic instead of dataclass for input validation. Loki is quite strict on the formatting, so passing a timestamp as an integer instead of a string will not fly.
|
closed
|
2024-03-24T18:57:06Z
|
2025-03-20T15:54:31Z
|
https://github.com/litestar-org/litestar/issues/3249
|
[
"Enhancement"
] |
aranvir
| 5
|
keras-team/keras
|
pytorch
| 20,521
|
EarlyStopping instance is not reusable
|
From my observations, it seems that we cannot use the same instance of the EarlyStopping callback when training multiple models. My assumption is that it saves the best value from one training run and uses it for comparison during the training of the next model.
In `on_train_begin` there is the comment "# Allow instances to be re-used"; however, the value `self.best` is not cleared. I think it should be reinitialized with `self.best = (float("inf") if self.monitor_op == ops.less else -float("inf"))`, as in `__init__()`.
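A sketch of a possible workaround until this is fixed (assuming `monitor_op` is set as described above; otherwise just create a fresh callback per `fit()` call):
```python
# Workaround sketch: reset `best` whenever training starts so the same
# EarlyStopping instance can be reused across multiple model.fit() calls.
import keras
from keras import ops

class ReusableEarlyStopping(keras.callbacks.EarlyStopping):
    def on_train_begin(self, logs=None):
        super().on_train_begin(logs)
        # Mirror the initialization from __init__ (see the suggestion above).
        self.best = float("inf") if self.monitor_op == ops.less else -float("inf")
```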
|
closed
|
2024-11-19T23:32:16Z
|
2024-11-22T17:52:43Z
|
https://github.com/keras-team/keras/issues/20521
|
[] |
maciejbiesek
| 1
|
robotframework/robotframework
|
automation
| 4,473
|
Robot debugger is not stopped at "IF" and "ELSE IF"
|
Steps to reproduce:
1) Create test case with **false** If-condition, for example:
```
1 Test_if
2 IF 9 % 2 == 0
3 ...
4 ELSE IF 9 % 4 == 0
5 ...
6 ELSE
7 ...
8 END
```
2) Set breakpoints at lines 2 and 4, they are both false
3) Launch debug of this test case
Expected result: Debugger will be stopped at line 2, after user clicks "Continue" button, debugger will be stopped at line 4.
Actual result: Debugger will **not** be stopped at lines 2 and 4.
Another case:
Create test case with **true** If-condition, for example:
```
1 Test_if
2 IF 9 % 3 == 0
3 ...
4 ELSE IF 9 % 4 == 0
5 ...
6 ELSE
7 ...
8 END
```
2) Set breakpoints at lines 2 and 4, condition of "IF" is true, condition of "ELSE IF" is false
3) Launch debug of this test case
Expected result: Debugger will be stopped at line 2, after user clicks "Continue" button, debugger will be stopped at line 4.
Actual result: Debugger will be stopped at line 2; after the user clicks the "Continue" button, the debugger will **not** be stopped at line 4.
So the debugger stops only at IF conditions that are true. The same holds for ELSE IF conditions: the debugger stops when the condition is true and does **not** stop when it is false.
Version of robotframework python lib - 5.0.1
|
closed
|
2022-09-21T10:32:39Z
|
2022-09-21T12:36:36Z
|
https://github.com/robotframework/robotframework/issues/4473
|
[] |
granicayv
| 2
|
jupyter/docker-stacks
|
jupyter
| 2,031
|
[BUG] - Content Security Policy directive: "frame-ancestors 'self'" when trying to create iframe in Jupyter Notebook
|
### What docker image(s) are you using?
base-notebook
### Host OS system
Ubuntu 22.04
### Host architecture
x86_64
### What Docker command are you running?
docker run -it --rm -p 8888:8888 jupyter/base-notebook start-notebook.py --ServerApp.base_url=/jupyter/ --ServerApp.allow_origin='*' --ServerApp.disable_check_xsrf=True --ServerApp.allow_remote_access=True --ServerApp.tornado_settings=/{'headers'={'Content-Security-Policy':"frame-ancestors http://localhost:4200"}}/
### How to Reproduce the problem?
1. Run docker with command using the command above.

2. Get token (for example `18ee1a18726cfde363660db90ac167bc0f720a93067987cf`) and paste it into src in iframe:
` <iframe src="http://localhost:8888/jupyter/lab?token=18ee1a18726cfde363660db90ac167bc0f720a93067987cf"></iframe>`
3. Catch an error

### Command output
_No response_
### Expected behavior
_No response_
### Actual behavior
Catch error **Refused to frame 'http://localhost:8888/' because an ancestor violates the following Content Security Policy directive: "frame-ancestors 'self'".**

### Anything else?
I saw the solution given in #1963. However, I get the same problem. I think there is a typo in the attribute `--ServerApp.tornado_settings=/{'headers'={'Content-Security-Policy':"frame-ancestors http://localhost:4200"}}/`.
### Latest Docker version
- [X] I've updated my Docker version to the latest available, and the issue still persists
|
closed
|
2023-11-14T13:13:32Z
|
2023-12-01T01:08:47Z
|
https://github.com/jupyter/docker-stacks/issues/2031
|
[
"type:Bug",
"status:Need Info"
] |
IDianaM
| 4
|
Lightning-AI/pytorch-lightning
|
pytorch
| 20,258
|
Registered buffers not moved to correct device when using DeepSpeed Stage 3
|
### Bug description
Using the DeepSpeed `Strategy` configuration
```yaml
_target_: lightning.pytorch.strategies.DeepSpeedStrategy
zero_optimization: true
stage: 3
allgather_bucket_size: 2e8
reduce_bucket_size: 2e8
offload_optimizer: false
offload_parameters: false
partition_activations: false
cpu_checkpointing: false
contiguous_gradients: false
overlap_comm: false
```
I am experiencing an issue (specifically with DeepSpeed stage 3, not stages 1-2) where tensors registered as buffers within the sub-`nn.Module`s of my `LightningModule`'s main `lit_module.network` `nn.Module` are not moved by `register_buffer()` to the correct device when training `lit_module.network`. In particular, I am trying to register buffers as
```python
distance_bins_tensor = tensor([0.0, 1.0, 2.0, 3.0])
self.register_buffer("distance_bins", distance_bins_tensor)
```
within the various submodules of my `lit_module.network`. When my optimizer tries to perform a step, I get the error
```bash
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:6 and cpu!
```
when trying to use these registered buffers e.g., by multiplying them by feature tensors loaded onto (in this case) `cuda:6`.
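For concreteness, here is a minimal hypothetical sketch of the failing pattern (the class, layer, and attribute names below are made up for illustration and are not my actual model):
```python
import torch
from torch import nn, tensor
import lightning as L


class DistanceModule(nn.Module):
    """Hypothetical submodule that registers a buffer as described above."""

    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 4)
        distance_bins_tensor = tensor([0.0, 1.0, 2.0, 3.0])
        self.register_buffer("distance_bins", distance_bins_tensor)

    def forward(self, x):
        # With ZeRO stage 3, `self.distance_bins` stays on CPU while `x` is on GPU,
        # which triggers the device-mismatch error during the optimizer step.
        return self.proj(x) * self.distance_bins


class LitModel(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.network = DistanceModule()

    def training_step(self, batch, batch_idx):
        return self.network(batch).sum()

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())
```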
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA A100 80GB PCIe
- NVIDIA A100 80GB PCIe
- available: True
- version: 11.8
* Lightning:
- adam-atan2-pytorch: 0.0.10
- alphafold3-pytorch: 0.0.41
- alphafold3-pytorch-lightning-hydra: 0.1.111
- frame-averaging-pytorch: 0.0.19
- lightning: 2.4.0
- lightning-utilities: 0.11.6
- pytorch-lightning: 2.4.0
- rotary-embedding-torch: 0.6.1
- torch: 2.3.0+cu118
- torch-geometric: 2.5.3
- torchaudio: 2.3.0+cu118
- torchmetrics: 1.4.1
- torchtyping: 0.1.4
- torchvision: 0.18.0+cu118
* Packages:
- adam-atan2-pytorch: 0.0.10
- aiofiles: 23.2.1
- aiohttp: 3.9.5
- aiosignal: 1.3.1
- alembic: 1.13.1
- alphafold3-pytorch: 0.0.41
- alphafold3-pytorch-lightning-hydra: 0.1.111
- annotated-types: 0.7.0
- antlr4-python3-runtime: 4.9.3
- anyio: 4.4.0
- appdirs: 1.4.4
- argcomplete: 3.3.0
- asttokens: 2.4.1
- async-timeout: 4.0.3
- attrs: 23.2.0
- autopage: 0.5.2
- beartype: 0.18.5
- beautifulsoup4: 4.12.3
- biopandas: 0.5.1.dev0
- biopython: 1.83
- bioservices: 1.11.2
- cattrs: 23.2.3
- certifi: 2024.8.30
- cfgv: 3.4.0
- chardet: 5.2.0
- charset-normalizer: 3.3.2
- click: 8.1.7
- cliff: 4.7.0
- cmaes: 0.10.0
- cmd2: 2.4.3
- colorama: 0.4.6
- colorlog: 6.8.2
- colt5-attention: 0.11.0
- comm: 0.2.2
- contourpy: 1.2.1
- cycler: 0.12.1
- debugpy: 1.8.1
- decorator: 5.1.1
- deepdiff: 7.0.1
- deepspeed: 0.15.0
- distlib: 0.3.8
- docker-pycreds: 0.4.0
- easydev: 0.13.2
- einops: 0.8.0
- einx: 0.2.2
- environs: 11.0.0
- exceptiongroup: 1.2.1
- executing: 2.0.1
- fastapi: 0.112.2
- ffmpy: 0.4.0
- filelock: 3.13.1
- fonttools: 4.52.4
- frame-averaging-pytorch: 0.0.19
- freetype-py: 2.3.0
- frozendict: 2.4.4
- frozenlist: 1.4.1
- fsspec: 2024.2.0
- gemmi: 0.6.6
- gevent: 24.2.1
- gitdb: 4.0.11
- gitpython: 3.1.43
- gradio: 4.43.0
- gradio-client: 1.3.0
- gradio-molecule3d: 0.0.5
- graphein: 1.7.6
- greenlet: 3.0.3
- grequests: 0.7.0
- h11: 0.14.0
- hjson: 3.1.0
- httpcore: 1.0.5
- httpx: 0.27.2
- huggingface-hub: 0.23.4
- hydra-colorlog: 1.2.0
- hydra-core: 1.3.2
- hydra-optuna-sweeper: 1.2.0
- identify: 2.5.36
- idna: 3.7
- importlib-resources: 6.4.4
- iniconfig: 2.0.0
- ipykernel: 6.29.4
- ipython: 8.24.0
- jaxtyping: 0.2.28
- jedi: 0.19.1
- jinja2: 3.1.3
- joblib: 1.4.2
- jupyter-client: 8.6.2
- jupyter-core: 5.7.2
- kiwisolver: 1.4.5
- lightning: 2.4.0
- lightning-utilities: 0.11.6
- line-profiler: 4.1.3
- local-attention: 1.9.1
- loguru: 0.7.2
- looseversion: 1.1.2
- lxml: 5.2.2
- mako: 1.3.5
- markdown-it-py: 3.0.0
- markupsafe: 2.1.5
- marshmallow: 3.21.3
- matplotlib: 3.8.4
- matplotlib-inline: 0.1.7
- mdurl: 0.1.2
- mmtf-python: 1.1.3
- mpmath: 1.3.0
- msgpack: 1.0.8
- multidict: 6.0.5
- multipledispatch: 1.0.0
- munkres: 1.1.4
- nest-asyncio: 1.6.0
- networkx: 3.2.1
- ninja: 1.11.1.1
- nodeenv: 1.8.0
- numpy: 1.23.5
- nvidia-cublas-cu11: 11.11.3.6
- nvidia-cuda-cupti-cu11: 11.8.87
- nvidia-cuda-nvrtc-cu11: 11.8.89
- nvidia-cuda-runtime-cu11: 11.8.89
- nvidia-cudnn-cu11: 8.7.0.84
- nvidia-cufft-cu11: 10.9.0.58
- nvidia-curand-cu11: 10.3.0.86
- nvidia-cusolver-cu11: 11.4.1.48
- nvidia-cusparse-cu11: 11.7.5.86
- nvidia-ml-py: 12.560.30
- nvidia-nccl-cu11: 2.20.5
- nvidia-nvtx-cu11: 11.8.86
- omegaconf: 2.3.0
- optree: 0.11.0
- optuna: 2.10.1
- ordered-set: 4.1.0
- orjson: 3.10.7
- packaging: 24.0
- pandas: 1.5.3
- parso: 0.8.4
- pbr: 6.0.0
- pdbeccdutils: 0.8.5
- pexpect: 4.9.0
- pillow: 10.2.0
- pip: 24.0
- pipx: 1.5.0
- platformdirs: 4.2.2
- plotly: 5.22.0
- pluggy: 1.5.0
- polars: 1.3.0
- pre-commit: 3.7.1
- prettytable: 3.10.0
- prompt-toolkit: 3.0.45
- protobuf: 4.25.4
- psutil: 5.9.8
- ptyprocess: 0.7.0
- pure-eval: 0.2.2
- py-cpuinfo: 9.0.0
- pycairo: 1.26.0
- pydantic: 2.8.2
- pydantic-core: 2.20.1
- pydub: 0.25.1
- pygments: 2.18.0
- pyparsing: 3.1.2
- pyperclip: 1.8.2
- pytest: 8.2.1
- python-dateutil: 2.9.0
- python-dotenv: 1.0.1
- python-multipart: 0.0.9
- pytorch-lightning: 2.4.0
- pytz: 2024.1
- pyyaml: 6.0.1
- pyzmq: 26.0.3
- rdkit: 2024.3.2
- reportlab: 4.1.0
- requests: 2.32.2
- requests-cache: 1.2.0
- retrying: 1.3.4
- rich: 13.7.1
- rich-click: 1.8.2
- rlpycairo: 0.2.0
- rootutils: 1.0.7
- rotary-embedding-torch: 0.6.1
- ruff: 0.6.4
- scikit-learn: 1.5.0
- scipy: 1.13.1
- seaborn: 0.13.2
- semantic-version: 2.10.0
- sentry-sdk: 2.12.0
- setproctitle: 1.3.3
- setuptools: 70.0.0
- sh: 2.0.7
- shellingham: 1.5.4
- shortuuid: 1.0.13
- six: 1.16.0
- smmap: 5.0.1
- sniffio: 1.3.1
- soupsieve: 2.5
- sqlalchemy: 2.0.30
- stack-data: 0.6.3
- starlette: 0.38.4
- stevedore: 5.2.0
- suds-community: 1.1.2
- sympy: 1.12
- taylor-series-linear-attention: 0.1.12
- tenacity: 8.3.0
- threadpoolctl: 3.5.0
- timeout-decorator: 0.5.0
- tomli: 2.0.1
- tomlkit: 0.12.0
- torch: 2.3.0+cu118
- torch-geometric: 2.5.3
- torchaudio: 2.3.0+cu118
- torchmetrics: 1.4.1
- torchtyping: 0.1.4
- torchvision: 0.18.0+cu118
- tornado: 6.4
- tqdm: 4.66.4
- traitlets: 5.14.3
- triton: 2.3.0
- typeguard: 2.13.3
- typer: 0.12.5
- typing-extensions: 4.11.0
- tzdata: 2024.1
- unicodedata2: 15.1.0
- url-normalize: 1.4.3
- urllib3: 2.2.1
- userpath: 1.9.2
- uvicorn: 0.30.6
- virtualenv: 20.26.2
- wandb: 0.16.6
- wcwidth: 0.2.13
- websockets: 12.0
- wget: 3.2
- wheel: 0.43.0
- wrapt: 1.16.0
- xarray: 2024.3.0
- xmltodict: 0.13.0
- yarl: 1.9.4
- zope.event: 5.0
- zope.interface: 6.4.post2
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.10.14
- release: 4.18.0-553.16.1.el8_10.x86_64
- version: #1 SMP Thu Aug 8 07:11:46 EDT 2024
</details>
### More info
_No response_
|
open
|
2024-09-06T23:01:29Z
|
2025-02-23T06:27:59Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20258
|
[
"bug",
"needs triage",
"ver: 2.4.x"
] |
amorehead
| 3
|
taverntesting/tavern
|
pytest
| 410
|
Support for specifying a file as the body of an upload
|
I'm aware it's possible to use the `files` property to specify a map of files to be uploaded as part of a multipart file upload.
What I need is the ability to specify the content of a file as the _entire_ body, as described here:
https://docs.microsoft.com/en-us/rest/api/storageservices/put-blob
Here's the sample request that MS provides in their docs:
```
Request Syntax:
PUT https://myaccount.blob.core.windows.net/mycontainer/myblockblob HTTP/1.1
Request Headers:
x-ms-version: 2015-02-21
x-ms-date: <date>
Content-Type: text/plain; charset=UTF-8
x-ms-blob-content-disposition: attachment; filename="fname.ext"
x-ms-blob-type: BlockBlob
x-ms-meta-m1: v1
x-ms-meta-m2: v2
Authorization: SharedKey myaccount:YhuFJjN4fAR8/AmBrqBz7MG2uFinQ4rkh4dscbj598g=
Content-Length: 11
Request Body:
hello world
```
(in this case, the content of the file would be "hello world")
I may be missing something, in which case apologies! So far in my reading it seems that this is not yet supported. I think tavern would need to [stream the upload](https://2.python-requests.org//en/master/user/advanced/#streaming-uploads) instead of using [multi-part encoded file uploads](https://2.python-requests.org//en/master/user/advanced/#post-multiple-multipart-encoded-files)
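For reference, the raw-requests behaviour I'm after looks like this (a sketch only; the URL and headers follow the Azure example above):
```python
import requests

# Stream the file itself as the entire request body (not multipart/form-data).
with open("fname.ext", "rb") as f:
    resp = requests.put(
        "https://myaccount.blob.core.windows.net/mycontainer/myblockblob",
        data=f,  # requests streams the file object as the raw body
        headers={
            "x-ms-blob-type": "BlockBlob",
            "Content-Type": "text/plain; charset=UTF-8",
        },
    )
print(resp.status_code)
```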
|
closed
|
2019-08-14T19:40:16Z
|
2020-05-01T15:53:23Z
|
https://github.com/taverntesting/tavern/issues/410
|
[
"Type: Enhancement"
] |
joeapearson
| 2
|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 11,220
|
we set is_dml for fromstatement but not is_select
|
### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/11219
<div type='discussions-op-text'>
<sup>Originally posted by **samscott89** April 1, 2024</sup>
Is it intentional that `is_select` returns `False` when using `from_statement` in the following? If so, is there a suggested way in the `do_orm_execute` code to determine whether the query is a select?
Example code:
```py
from sqlalchemy.orm import declarative_base, sessionmaker
from sqlalchemy import Column, String, event, select, text, create_engine
engine = create_engine("sqlite:///:memory:", echo=True, future=True)
Session = sessionmaker(engine, expire_on_commit=False)
Base = declarative_base()
class Foo(Base):
__tablename__ = "foo"
name = Column(String, primary_key=True)
Base.metadata.create_all(engine)
was_select = None
@event.listens_for(Session, "do_orm_execute")
def do_orm_execute(execute_state):
global was_select
was_select = execute_state.is_select
print(execute_state.is_select)
session = Session()
with Session() as session:
with session.begin():
query = select(Foo).filter_by(name="bar")
session.execute(query)
assert was_select == True # succeeds
query = select(Foo).from_statement(text("select * from foo where name = 'bar'"))
session.execute(query)
assert was_select == True, "expected was_select to be True"
```
Edit to add: on digging a little more, it's pretty clear the root of this is that `FromStatement.is_select` returns `False`:
```
(Pdb) p execute_state.statement
<sqlalchemy.orm.context.FromStatement object at 0x104442e70>
(Pdb) p execute_state.statement.is_select
False
```
Updated the title to reflect what I think is the main problem. I'll keep the code as-is though, because it shows the context of what I'm trying to achieve.
Note: this is specific to SQLAlchemy 2.0; on 1.4 the above returns `True` for `is_select`.</div>
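In the meantime, a workaround sketch I'm considering (it treats any `FromStatement` as a select, which over-approximates if non-SELECT text is ever wrapped; the `FromStatement` import path is taken from the pdb output above and may be internal API):
```python
from sqlalchemy.orm.context import FromStatement

@event.listens_for(Session, "do_orm_execute")
def do_orm_execute(execute_state):
    global was_select
    stmt = execute_state.statement
    # FromStatement.is_select is False on 2.0, so also count FromStatement as a select here.
    was_select = execute_state.is_select or isinstance(stmt, FromStatement)
```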
|
closed
|
2024-04-01T21:28:04Z
|
2024-04-02T20:34:30Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/11220
|
[
"bug",
"orm",
"events",
"near-term release"
] |
zzzeek
| 3
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,224
|
after driver.get('url') sometimes i get this error
|
```
Exception ignored in: <function Chrome.__del__ at 0x000001711AF5D6C0>
Traceback (most recent call last):
File "C:\Program Files\Python311\Lib\site-packages\undetected_chromedriver\__init__.py", line 788, in __del__
File "C:\Program Files\Python311\Lib\site-packages\undetected_chromedriver\__init__.py", line 743, in quit
OSError: [WinError 6] The handle is invalid
```
It occurs sometimes.
I get it with and without options. I'm using the latest Chrome and Windows 10 1909.
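For now I'm working around it by quitting explicitly instead of relying on `__del__` (a sketch with a placeholder URL); I'm not sure this is the right fix:
```python
import undetected_chromedriver as uc

driver = uc.Chrome()
try:
    driver.get("https://example.com")  # placeholder URL
finally:
    # Quit explicitly while the interpreter is still fully alive,
    # instead of relying on __del__ at shutdown.
    driver.quit()
```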
|
open
|
2023-05-01T12:47:03Z
|
2023-05-01T12:47:03Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1224
|
[] |
rastinder
| 0
|
mwouts/itables
|
jupyter
| 6
|
Refreshing the page breaks itables
|
Steps to reproduce:
1. Open https://mybinder.org/v2/gh/mwouts/itables/master?filepath=README.md
2. Run the first cell
3. Refresh the page
4. Re-run the first cell
|
closed
|
2019-08-27T00:59:56Z
|
2020-12-22T22:07:42Z
|
https://github.com/mwouts/itables/issues/6
|
[] |
neon-ninja
| 2
|
raphaelvallat/pingouin
|
pandas
| 402
|
Two new test failures in Fedora Rawhide
|
Two tests (`TestRegression::test_linear_regression` and `TestMultivariate::test_box_m`) recently started failing in Fedora Rawhide. This link shows what dependency updates *could* be responsible: https://koschei.fedoraproject.org/build/17077907.
A failing build log is attached, [build.log](https://github.com/raphaelvallat/pingouin/files/14115002/build.log).
I’ve determined that the regression in `TestMultivariate::test_box_m` is associated with the update of openblas from 0.3.25 to 0.3.26. The test still passes when I set `export FLEXIBLAS=netlib` to run the tests.
Setting `export FLEXIBLAS=netlib` does *not* fix `TestRegression::test_linear_regression`; it still fails on all applicable architectures (`x86_64`, `ppc64le`, `aarch64`, and `s390x`).
It’s not immediately obvious to me what to do about either of these, and I know you don’t have a lot of time to work on pingouin right now. For now, I’m going to run the tests under netlib BLAS, and I’m going to skip `TestRegression::test_linear_regression`. I am very happy to do any testing that might help.
|
closed
|
2024-01-31T17:29:15Z
|
2024-09-03T09:52:39Z
|
https://github.com/raphaelvallat/pingouin/issues/402
|
[] |
musicinmybrain
| 3
|
opengeos/leafmap
|
jupyter
| 435
|
Error after upgrading leafmap to 0.20.0
|
The following script was working fine with leafmap 0.19.0:
```python
import leafmap, os, time

m = leafmap.Map()
m.add_basemap('SATELLITE')
m.add_shp(in_shp=extent_vector, layer_name="Extent shapefile")
m
```
But with version 0.20.0 the following error pops up:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File ~\miniconda3\envs\gee\lib\site-packages\leafmap\common.py:1443, in shp_to_geojson(in_shp, out_json, encoding, **kwargs)
1442 try:
-> 1443 import geopandas as gpd
1445 except Exception:
File ~\miniconda3\envs\gee\lib\site-packages\geopandas\__init__.py:1
----> 1 from geopandas._config import options # noqa
3 from geopandas.geoseries import GeoSeries # noqa
File ~\miniconda3\envs\gee\lib\site-packages\geopandas\_config.py:109
104 compat.set_use_pygeos(value)
107 use_pygeos = Option(
108 key="use_pygeos",
--> 109 default_value=_default_use_pygeos(),
110 doc=(
111 "Whether to use PyGEOS to speed up spatial operations. The default is True "
112 "if PyGEOS is installed, and follows the USE_PYGEOS environment variable "
113 "if set."
114 ),
115 validator=_validate_bool,
116 callback=_callback_use_pygeos,
117 )
120 options = Options({"display_precision": display_precision, "use_pygeos": use_pygeos})
File ~\miniconda3\envs\gee\lib\site-packages\geopandas\_config.py:95, in _default_use_pygeos()
94 def _default_use_pygeos():
---> 95 import geopandas._compat as compat
97 return compat.USE_PYGEOS
File ~\miniconda3\envs\gee\lib\site-packages\geopandas\_compat.py:9
8 import pandas as pd
----> 9 import pyproj
10 import shapely
File ~\miniconda3\envs\gee\lib\site-packages\pyproj\__init__.py:49
47 import warnings
---> 49 import pyproj.network
50 from pyproj._datadir import ( # noqa: F401 pylint: disable=unused-import
51 _pyproj_global_context_initialize,
52 set_use_global_context,
53 )
File ~\miniconda3\envs\gee\lib\site-packages\pyproj\network.py:10
8 import certifi
---> 10 from pyproj._network import ( # noqa: F401 pylint: disable=unused-import
11 _set_ca_bundle_path,
12 is_network_enabled,
13 set_network_enabled,
14 )
17 def set_ca_bundle_path(ca_bundle_path: Union[Path, str, bool, None] = None) -> None:
ImportError: DLL load failed while importing _network: The specified module could not be found.
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
File ~\miniconda3\envs\gee\lib\site-packages\leafmap\common.py:1446, in shp_to_geojson(in_shp, out_json, encoding, **kwargs)
1445 except Exception:
-> 1446 raise ImportError(
1447 "Geopandas is required to perform reprojection of the data. See https://geopandas.org/install.html"
1448 )
1450 try:
ImportError: Geopandas is required to perform reprojection of the data. See https://geopandas.org/install.html
During handling of the above exception, another exception occurred:
Exception Traceback (most recent call last)
Cell In[3], line 4
1 m = leafmap.Map()
2 m.add_basemap('SATELLITE')
----> 4 m.add_shp(in_shp=extent_vector, layer_name="Extent shapefile")
6 m
File ~\miniconda3\envs\gee\lib\site-packages\leafmap\leafmap.py:2153, in Map.add_shp(self, in_shp, layer_name, style, hover_style, style_callback, fill_colors, info_mode, encoding)
2150 if not os.path.exists(in_shp):
2151 raise FileNotFoundError("The provided shapefile could not be found.")
-> 2153 geojson = shp_to_geojson(in_shp, encoding=encoding)
2154 self.add_geojson(
2155 geojson,
2156 layer_name,
(...)
2162 encoding,
2163 )
File ~\miniconda3\envs\gee\lib\site-packages\leafmap\common.py:1475, in shp_to_geojson(in_shp, out_json, encoding, **kwargs)
1472 return out_dict
1474 except Exception as e:
-> 1475 raise Exception(e)
Exception: Geopandas is required to perform reprojection of the data. See https://geopandas.org/install.html
```
|
closed
|
2023-04-25T04:59:49Z
|
2023-04-25T15:06:53Z
|
https://github.com/opengeos/leafmap/issues/435
|
[
"bug"
] |
ravishbapna
| 2
|
ageitgey/face_recognition
|
python
| 732
|
how to connect sql database and grab image and information?
|
I need to connect a MySQL database so that when I open my camera it grabs my information from the database.
Is that possible?
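For context, the rough flow I have in mind looks like this (the schema, credentials, and column names below are made up; guest encodings are assumed to be stored as raw bytes of 128 float64 values):
```python
import cv2
import numpy as np
import face_recognition
import mysql.connector

# Hypothetical table: guests(id INT, name VARCHAR, encoding BLOB)
db = mysql.connector.connect(host="localhost", user="me", password="secret", database="hotel")
cur = db.cursor()
cur.execute("SELECT id, name, encoding FROM guests")
rows = cur.fetchall()

known_names = [name for _, name, _ in rows]
known_encodings = [np.frombuffer(blob, dtype=np.float64) for _, _, blob in rows]

# Grab one frame from the camera and try to match it against the stored encodings.
video = cv2.VideoCapture(0)
ok, frame = video.read()
video.release()
if ok:
    rgb = np.ascontiguousarray(frame[:, :, ::-1])  # OpenCV gives BGR; face_recognition expects RGB
    for encoding in face_recognition.face_encodings(rgb):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        if True in matches:
            print("Recognized:", known_names[matches.index(True)])
```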
|
closed
|
2019-01-31T09:10:38Z
|
2020-08-14T16:21:37Z
|
https://github.com/ageitgey/face_recognition/issues/732
|
[] |
anasalzuvix
| 1
|
gradio-app/gradio
|
deep-learning
| 9,924
|
When trying to contribute, gradio might be imported from the wrong place
|
### Describe the bug
Users have to tell Python where to find the cloned git version of gradio; otherwise it uses an installation from somewhere else on the machine.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
- Follow the CONTRIBUTING.md
- Get to the spot where you try to run the simple chatbot:
`gradio demo/chatbot_simple/run.py`
- Hit this error if there isn't an environment variable somewhere already set up to tell python where to look for gradio.
### Screenshot
_No response_
### Logs
```shell
(.venv) (base) me@my-computer gradio % gradio demo/chatbot_simple/run.py
Watching: '/Users/margaretmitchell/HuggingFace/git/gradio/demo/chatbot_simple' '/Users/margaretmitchell/HuggingFace/git/gradio'
Traceback (most recent call last):
File "/Users/margaretmitchell/HuggingFace/git/gradio/demo/chatbot_simple/run.py", line 6, in <module>
chatbot = gr.Chatbot(type="messages")
File "/Users/margaretmitchell/HuggingFace/git/gradio/.venv/lib/python3.10/site-packages/gradio/component_meta.py", line 157, in wrapper
return fn(self, **kwargs)
TypeError: Chatbot.__init__() got an unexpected keyword argument 'type'
IMPORTANT: You are using gradio version 4.19.1, however version 4.44.1 is available, please upgrade.
```
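For reference, a quick way to confirm which installation Python is actually picking up at that point (diagnostic only, not a fix):
```python
import gradio
print(gradio.__version__, gradio.__file__)
# If this prints a path under .venv/.../site-packages rather than the cloned repo,
# the development checkout is not the copy being imported.
```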
### System Info
```shell
From cloned git repo, most recent commit: c0cf80bddd99ad0f836e618cc3d2b13e73cb5611
```
### Severity
I can work around it
|
closed
|
2024-11-10T03:46:20Z
|
2024-11-12T20:44:58Z
|
https://github.com/gradio-app/gradio/issues/9924
|
[
"bug"
] |
meg-huggingface
| 0
|
HumanSignal/labelImg
|
deep-learning
| 307
|
Rotating Bounding Box (Enhancement)
|
<!--
Please provide as much as detail and example as you can.
You can add screenshots if appropriate.
-->
It would be great to add a feature for rotating the bounding box. Most of the items I label are at arbitrary rotations (see attached image).

|
closed
|
2018-05-30T23:47:56Z
|
2018-05-30T23:49:28Z
|
https://github.com/HumanSignal/labelImg/issues/307
|
[] |
jamesthesken
| 0
|
keras-team/keras
|
machine-learning
| 20,804
|
Results of the Conv2D layer are not identical across backends
|
The [docs](https://keras.io/keras_3/) state:
> numerics are identical across backends [...]
> [...] up to 1e-7 precision in float32, per function execution
However, this minimal example does not confirm this:
```python3
import os.path
import numpy as np
from keras import layers, Model
from keras.src.saving import load_model
np.random.seed(0)
data = np.random.rand(1, 256, 256, 1024)
if os.path.isfile("model.keras"):
model = load_model("model.keras")
else:
inputs = layers.Input(shape=(256, 256, 1024))
outputs = layers.Conv2D(1024, kernel_size=(4, 7), padding="same", dilation_rate=(3, 2))(inputs)
model = Model(inputs=[inputs], outputs=outputs)
model.save("model.keras")
print(np.sum([data]))
print(os.environ["KERAS_BACKEND"])
print(np.sum(np.array(model([data]))))
```
Output with `tensorflow` backend:
```bash
KERAS_BACKEND=tensorflow python main.py
```
```
33550919.07926151
tensorflow
58094.56
```
Output with `jax` backend:
```bash
KERAS_BACKEND=jax python main.py
```
```
33550919.07926151
jax
58094.523
```
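For context, the relative difference of the two printed sums is roughly 6e-7 (a quick check using the numbers above; note this is a sum over 256·256·1024 output values, so it is not directly a per-element comparison):
```python
tf_sum, jax_sum = 58094.56, 58094.523
print(abs(tf_sum - jax_sum) / abs(tf_sum))  # ~6.4e-07
```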
Versions used:
```bash
python -c "import keras; import tensorflow; import jax; print(keras.__version__); print(tensorflow.__version__); print(jax.__version__)"
```
```
3.8.0
2.18.0
0.5.0
```
|
closed
|
2025-01-23T17:35:53Z
|
2025-02-06T20:34:55Z
|
https://github.com/keras-team/keras/issues/20804
|
[
"type:Bug"
] |
Dobiasd
| 6
|
ageitgey/face_recognition
|
machine-learning
| 715
|
How to deal with 1000 face images,
|
* face_recognition version: 1.2.3.
* Python version: 2.7
* Operating System: Mac
### Description
I want to recognize the faces of guests checking in at a hotel and update their check-in status in a database. I have images of 1000 guests. Will this library work with 1K images? Is there any performance impact, or is there a better way to do this?
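For context, the approach I'm considering is to precompute the 1000 encodings once and compare each new face against them in a single vectorized call. A rough sketch (file names are illustrative; 0.6 is the library's default tolerance):
```python
import numpy as np
import face_recognition

# Precompute encodings for all guest photos once (e.g. at application start).
guest_files = ["guests/guest_0001.jpg", "guests/guest_0002.jpg"]  # ... up to 1000
names, encodings = [], []
for path in guest_files:
    image = face_recognition.load_image_file(path)
    encs = face_recognition.face_encodings(image)
    if encs:
        names.append(path)
        encodings.append(encs[0])
known = np.array(encodings)

# At check-in time: the expensive step is detecting/encoding the new photo,
# not comparing against 1000 stored encodings, which is a fast numpy operation.
probe_image = face_recognition.load_image_file("checkin.jpg")
probe = face_recognition.face_encodings(probe_image)[0]
distances = face_recognition.face_distance(known, probe)
best = int(np.argmin(distances))
if distances[best] < 0.6:
    print("Matched:", names[best])
```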
|
closed
|
2019-01-08T04:20:33Z
|
2020-01-08T07:16:09Z
|
https://github.com/ageitgey/face_recognition/issues/715
|
[] |
verma171
| 3
|
sqlalchemy/alembic
|
sqlalchemy
| 764
|
Add type annotations for generated files when Python > 3
|
**Is your feature request related to a problem? Please describe.**
`env.py` and `script.py.mako` do not include type annotations when they are generated by `alembic init`.
**Describe the solution you'd like**
If `alembic` is used with Python 3, `alembic init` could generate a version of `env.py` and `script.py.mako` that includes type annotations (namely, `-> None` for `upgrade`, `downgrade`, `run_migrations_offline`, `run_migrations_online`).
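For illustration, the annotated part of `script.py.mako` could look like this (a sketch; the mako placeholders are shown as I understand the current template, so treat the exact bodies as an assumption):
```python
def upgrade() -> None:
    ${upgrades if upgrades else "pass"}


def downgrade() -> None:
    ${downgrades if downgrades else "pass"}
```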
**Describe alternatives you've considered**
`alembic` could wait for SQLAlchemy 2, which will only support Python 3. That's a bit sad because Python 3 is so pervasive, and because we know which Python version `alembic init` is run with, so we could take advantage of that to generate the right files.
**Have a nice day!**
Thanks for all the great work!
|
closed
|
2020-11-30T12:42:56Z
|
2022-04-21T16:56:46Z
|
https://github.com/sqlalchemy/alembic/issues/764
|
[
"motivated volunteers requested",
"pep 484"
] |
charlax
| 11
|
tortoise/tortoise-orm
|
asyncio
| 836
|
`.first()` does not apply, and updates all instead
|
In my FastAPI app, the following code is ran:
```py
await Job.filter(pending=False, closed=False, gpu=True).order_by("number").first().update(completor=client.uuid, pending=True)
```
For some reason, this updates **all** the `Job` instances, instead of just the first one after ordering.
I've attempted replacing `.first()` with `.limit(1)` but it got the same results.
Is there a better way to do this, or is this an issue on the tortoise side of things?
P.S. It's important to mention that this needs to do the filtering and the update atomically, or there's a slight chance that multiple people using the website are given the same `Job`. Unless there's a way to combine the two operations into a single call with tortoise?
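For reference, the non-atomic alternative I'm considering as a workaround (it is not a single UPDATE, but it uses a row lock via `select_for_update`, assuming the database backend supports it):
```python
from tortoise.transactions import in_transaction

async with in_transaction():
    job = (
        await Job.filter(pending=False, closed=False, gpu=True)
        .order_by("number")
        .select_for_update()   # lock the row so concurrent requests can't grab the same Job
        .first()
    )
    if job is not None:
        job.completor = client.uuid
        job.pending = True
        await job.save(update_fields=["completor", "pending"])
```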
|
closed
|
2021-07-25T22:25:55Z
|
2021-07-26T07:26:14Z
|
https://github.com/tortoise/tortoise-orm/issues/836
|
[] |
TheoCoombes
| 2
|
qubvel-org/segmentation_models.pytorch
|
computer-vision
| 58
|
torchvision 0.4
|
Why do we have a constraint on the torchvision version?
https://github.com/qubvel/segmentation_models.pytorch/blob/master/requirements.txt#L1
|
closed
|
2019-09-17T19:35:45Z
|
2019-10-15T15:05:36Z
|
https://github.com/qubvel-org/segmentation_models.pytorch/issues/58
|
[] |
ternaus
| 4
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 887
|
Expression 'paInvalidSampleRate' failed
|
My headphones work in demo_cli.py but not in demo_toolbox.
Can anyone help me debug this?
```
Expression 'paInvalidSampleRate' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 2050
Expression 'PaAlsaStreamComponent_InitialConfigure( &self->playback, outParams, self->primeBuffers, hwParamsPlayback, &realSr )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 2724
Expression 'PaAlsaStream_Configure( stream, inputParameters, outputParameters, sampleRate, framesPerBuffer, &inputLatency, &outputLatency, &hostBufferSizeMode )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 2845
Error opening OutputStream: Invalid sample rate [PaErrorCode -9997]
```
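In case it's useful, here is a small device check I can run (this assumes the toolbox plays audio through the `sounddevice` package; the 16000 Hz below is just a guess at the toolbox's playback rate):
```python
import sounddevice as sd

print(sd.query_devices())                    # list devices and their default sample rates
sd.check_output_settings(samplerate=16000)   # raises PortAudioError if the default output can't do 16 kHz
```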
Thanks
|
closed
|
2021-11-06T22:27:19Z
|
2021-11-08T20:30:10Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/887
|
[] |
0xrushi
| 5
|
litestar-org/litestar
|
pydantic
| 3,199
|
Enhancement: Support for RFC 7807 / 9457
|
### Summary
A Boolean option to enable [RFC7807](https://datatracker.ietf.org/doc/html/rfc7807#section-6.1) support might be a good way to enable this otherwise breaking change.
i.e. `app = Litestar(rfc7807=True)`
Perhaps some of the common exceptions (like ValueError) could have:
- **type** URI enpoint / page provided by Litestar (plugin?)
- **title** field set automatically
- **status** field set automatically (use same value determination as current)
- **detail** field set automatically
A new exception class, perhaps `JsonProblem(HTTPException)`, which automatically sets "application/problem+json" in the response headers and has fields for type, title, status, etc.
This would be nice to have and complement OpenAPI / Swagger support with the emerging standard for error responses.
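As a rough illustration of the shape this could take with the current API, here is a sketch of an app-level handler that emits an RFC 7807-style body (the field mapping is my assumption, not existing Litestar behaviour):
```python
from litestar import Litestar, Request, Response
from litestar.exceptions import HTTPException


def problem_handler(request: Request, exc: HTTPException) -> Response:
    # Build an RFC 7807-style problem document from the exception.
    return Response(
        content={
            "type": "about:blank",
            "title": exc.__class__.__name__,
            "status": exc.status_code,
            "detail": exc.detail,
        },
        status_code=exc.status_code,
        media_type="application/problem+json",
    )


app = Litestar(route_handlers=[], exception_handlers={HTTPException: problem_handler})
```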
### Basic Example
_No response_
### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_
|
closed
|
2024-03-13T16:45:08Z
|
2025-03-20T15:54:28Z
|
https://github.com/litestar-org/litestar/issues/3199
|
[
"Enhancement",
"Help Wanted :sos:"
] |
skewty
| 16
|
AUTOMATIC1111/stable-diffusion-webui
|
pytorch
| 16,222
|
[Feature Request]: Keep file name toggle on extras batch from directory
|
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Within `Extras > Batch from directory`, add a toggle to keep the source file names when writing result images instead of naming them with a numeric id.
The lack of this feature has forced me to rename those images manually.
### Proposed workflow
1. Go to `Extras > Batch from directory`
2. Enable `Keep source image names` toggle
3. Generate images as normal
### Additional information

|
closed
|
2024-07-17T12:27:06Z
|
2024-07-17T15:39:04Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16222
|
[
"enhancement"
] |
PereViader
| 1
|
reloadware/reloadium
|
flask
| 1
|
Create Wheels for 32bit Python on Windows
|
I'm stuck in a 32-bit Python environment (3.7), and would love to use Reloadium, but there are no wheels available for that architecture. Upon installing the IntelliJ plugin, the 0.0.1 version from PyPI was installed into my environment, but it lacks the necessary imports and did not work.
|
closed
|
2022-03-03T14:41:45Z
|
2022-03-10T02:09:53Z
|
https://github.com/reloadware/reloadium/issues/1
|
[] |
jgiers9872
| 1
|
pallets-eco/flask-sqlalchemy
|
flask
| 768
|
Documention about raw connections with multiples databases
|
Flask-SQLAlchemy supports multiple connections through binds, but documentation on how to use raw connections with them is missing.
The issue [#107](https://github.com/pallets/flask-sqlalchemy/issues/107) shows you can get the engine for a bind with just `db.get_engine(app, 'oh_my_bind').execute(...)`.
The documentation is missing a section about this nice feature.
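For example, such a documentation section could show something like this (bind names and URIs are made up; `engine.execute` is the pre-SQLAlchemy-2.0 style that matches the Flask-SQLAlchemy versions current at the time of writing):
```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///main.db"
app.config["SQLALCHEMY_BINDS"] = {"oh_my_bind": "sqlite:///other.db"}
db = SQLAlchemy(app)

with app.app_context():
    engine = db.get_engine(app, bind="oh_my_bind")  # engine for the named bind
    result = engine.execute("SELECT 1")             # raw SQL against that bind
    print(result.fetchall())
```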
|
closed
|
2019-07-30T14:41:12Z
|
2022-10-03T00:21:57Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/768
|
[
"docs"
] |
rafaelreuber
| 2
|
quokkaproject/quokka
|
flask
| 565
|
create tinydb debug toolbar
|
Add tinydb to flask debug toolbar
https://github.com/rochacbruno/quokka_ng/issues/69
|
closed
|
2018-02-07T01:36:02Z
|
2018-02-07T01:39:33Z
|
https://github.com/quokkaproject/quokka/issues/565
|
[
"1.0.0",
"hacktoberfest"
] |
rochacbruno
| 0
|
saulpw/visidata
|
pandas
| 2,218
|
sample_data is missing from sdist
|
**Small description**
When using the sdist, there is no `sample_data` directory. This causes the tests to create the test data from scratch and then fail because the expected information isn't there.
**Expected result**
Tests pass.
**Actual result with screenshot**
Running `pytest` produces:
```
=================================== FAILURES ===================================
__________________________ TestFeatures.test_features __________________________
self = <visidata.tests.test_features.TestFeatures object at 0x7f0562954bc0>
mock_screen = <Mock id='139661071287984'>
def test_features(self, mock_screen):
tests = [
(mod, getattr(mod, k))
for mod in visidata.vd.importedModules
for k in dir(mod)
if k.startswith('test_')
]
for mod, testfunc in tests:
print(mod, testfunc.__name__)
visidata.vd.resetVisiData()
visidata.vd.scr = mock_screen
> testfunc(visidata.vd)
visidata/tests/test_features.py:17:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
visidata/features/slide.py:124: in test_slide_keycol_1
t('', 'OrderDate Region Rep Item Units Unit_Cost Total')
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
vdx = '', golden = 'OrderDate Region Rep Item Units Unit_Cost Total'
def t(vdx, golden):
global vd
vd = visidata.vd.resetVisiData()
vd.runvdx(setup_vdx)
vd.runvdx(vdx)
colnames = [c.name for c in vd.sheet.visibleCols]
> assert colnames == golden.split(), ' '.join(colnames)
E AssertionError: H
visidata/features/slide.py:112: AssertionError
----------------------------- Captured stdout call -----------------------------
<module 'visidata.fuzzymatch' from '/builddir/build/BUILD/visidata-3.0/visidata/fuzzymatch.py'> test_fuzzymatch
<module 'visidata.features.slide' from '/builddir/build/BUILD/visidata-3.0/visidata/features/slide.py'> test_slide_keycol_1
cmdlog open-file
sample key-col
sample key-col
sample key-col
----------------------------- Captured stderr call -----------------------------
sample_data/sample.tsv does not exist, creating new sheet
sample has no column "OrderDate"
sample has no column "Region"
sample has no column "Rep"
```
**Additional context**
VisiData 3.0 and Python 3.12.
|
closed
|
2024-01-01T08:05:33Z
|
2024-01-04T05:53:08Z
|
https://github.com/saulpw/visidata/issues/2218
|
[
"bug",
"fixed"
] |
QuLogic
| 3
|
vaexio/vaex
|
data-science
| 2,157
|
[BUG-REPORT] Trying to install version 4.11.1 gives error 'ssize_t': undeclared identifier
|
**Description**
Run `python.exe -m pip install --upgrade vaex-core vaex-viz vaex-jupyter vaex-server vaex-hdf5 vaex-astro vaex-ml`
**Software information**
- Vaex version (`import vaex; vaex.__version__`): trying to install 4.11.1 with no current version installed
- Vaex was installed via: pip
- OS: Windows 10
**Additional information**
Pip install gives this error:
```
Collecting vaex-core
Using cached vaex-core-4.11.1.tar.gz (2.5 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting vaex-viz
Using cached vaex_viz-0.5.2-py3-none-any.whl (19 kB)
Collecting vaex-jupyter
Using cached vaex_jupyter-0.8.0-py3-none-any.whl (43 kB)
Collecting vaex-server
Using cached vaex_server-0.8.1-py3-none-any.whl (23 kB)
Collecting vaex-hdf5
Using cached vaex_hdf5-0.12.3-py3-none-any.whl (16 kB)
Collecting vaex-astro
Using cached vaex_astro-0.9.1-py3-none-any.whl (20 kB)
Collecting vaex-ml
Using cached vaex_ml-0.18.0-py3-none-any.whl (58 kB)
Requirement already satisfied: nest-asyncio>=1.3.3 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-core) (1.5.5)
Requirement already satisfied: numpy>=1.16 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-core) (1.22.4+mkl)
Requirement already satisfied: rich in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-core) (12.3.0)
Collecting frozendict!=2.2.0
Using cached frozendict-2.3.4-cp310-cp310-win_amd64.whl (35 kB)
Requirement already satisfied: future>=0.15.2 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-core) (0.18.2)
Requirement already satisfied: pyarrow>=5.0.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-core) (7.0.0)
Requirement already satisfied: six in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-core) (1.16.0)
Requirement already satisfied: dask!=2022.4.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-core) (2022.4.2)
Requirement already satisfied: tabulate>=0.8.3 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-core) (0.8.9)
Requirement already satisfied: pandas in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-core) (1.4.2)
Requirement already satisfied: pydantic>=1.8.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-core) (1.8.2)
Requirement already satisfied: pyyaml in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-core) (6.0)
Collecting aplus
Using cached aplus-0.11.0-py3-none-any.whl
Collecting progressbar2
Using cached progressbar2-4.0.0-py2.py3-none-any.whl (26 kB)
Collecting blake3
Using cached blake3-0.3.1-cp310-none-win_amd64.whl (193 kB)
Requirement already satisfied: filelock in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-core) (3.6.0)
Requirement already satisfied: cloudpickle in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-core) (2.0.0)
Requirement already satisfied: requests in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-core) (2.28.1)
Requirement already satisfied: pillow in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-viz) (9.1.1)
Requirement already satisfied: matplotlib>=1.3.1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-viz) (3.5.1)
Requirement already satisfied: bqplot>=0.10.1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-jupyter) (0.12.33)
Requirement already satisfied: ipympl in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-jupyter) (0.9.1)
Collecting ipyvolume>=0.4
Using cached ipyvolume-0.5.2-py2.py3-none-any.whl (2.9 MB)
Collecting ipyvuetify<2,>=1.2.2
Using cached ipyvuetify-1.8.2-1-py2.py3-none-any.whl (11.7 MB)
Requirement already satisfied: ipyleaflet in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-jupyter) (0.14.0)
Requirement already satisfied: xarray in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-jupyter) (2022.3.0)
Requirement already satisfied: cachetools in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-server) (5.1.0)
Requirement already satisfied: tornado>4.1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-server) (6.1)
Requirement already satisfied: fastapi in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-server) (0.78.0)
Requirement already satisfied: uvicorn[standard] in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-server) (0.17.6)
Requirement already satisfied: h5py>=2.9 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-hdf5) (3.6.0)
Requirement already satisfied: astropy in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-astro) (5.0.4)
Requirement already satisfied: numba in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-ml) (0.55.1)
Requirement already satisfied: traitlets in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-ml) (5.1.1)
Requirement already satisfied: jinja2 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from vaex-ml) (3.0.3)
Requirement already satisfied: ipywidgets>=7.5.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from bqplot>=0.10.1->vaex-jupyter) (7.7.0)
Requirement already satisfied: traittypes>=0.0.6 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from bqplot>=0.10.1->vaex-jupyter) (0.2.1)
Requirement already satisfied: partd>=0.3.10 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from dask!=2022.4.0->vaex-core) (1.2.0)
Requirement already satisfied: fsspec>=0.6.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from dask!=2022.4.0->vaex-core) (2022.3.0)
Requirement already satisfied: toolz>=0.8.2 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from dask!=2022.4.0->vaex-core) (0.11.2)
Requirement already satisfied: packaging>=20.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from dask!=2022.4.0->vaex-core) (21.3)
Collecting ipywebrtc
Using cached ipywebrtc-0.6.0-py2.py3-none-any.whl (260 kB)
Collecting pythreejs>=1.0.0
Using cached pythreejs-2.3.0-py2.py3-none-any.whl (3.4 MB)
Collecting ipyvue<2,>=1.5
Using cached ipyvue-1.7.0-py2.py3-none-any.whl (2.7 MB)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from matplotlib>=1.3.1->vaex-viz) (1.4.2)
Requirement already satisfied: cycler>=0.10 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from matplotlib>=1.3.1->vaex-viz) (0.11.0)
Requirement already satisfied: pyparsing>=2.2.1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from matplotlib>=1.3.1->vaex-viz) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from matplotlib>=1.3.1->vaex-viz) (2.8.2)
Requirement already satisfied: fonttools>=4.22.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from matplotlib>=1.3.1->vaex-viz) (4.31.2)
Requirement already satisfied: pytz>=2020.1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from pandas->vaex-core) (2022.1)
Requirement already satisfied: typing-extensions>=3.7.4.3 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from pydantic>=1.8.0->vaex-core) (4.2.0)
Requirement already satisfied: pyerfa>=2.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from astropy->vaex-astro) (2.0.0.1)
Requirement already satisfied: starlette==0.19.1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from fastapi->vaex-server) (0.19.1)
Requirement already satisfied: anyio<5,>=3.4.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from starlette==0.19.1->fastapi->vaex-server) (3.5.0)
Requirement already satisfied: ipython-genutils in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipympl->vaex-jupyter) (0.2.0)
Requirement already satisfied: ipython<9 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipympl->vaex-jupyter) (7.32.0)
Requirement already satisfied: MarkupSafe>=2.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from jinja2->vaex-ml) (2.1.1)
Collecting numpy>=1.16
Using cached numpy-1.21.6-cp310-cp310-win_amd64.whl (14.0 MB)
Requirement already satisfied: setuptools in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from numba->vaex-ml) (62.3.2)
Requirement already satisfied: llvmlite<0.39,>=0.38.0rc1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from numba->vaex-ml) (0.38.0)
Collecting python-utils>=3.0.0
Using cached python_utils-3.3.3-py2.py3-none-any.whl (23 kB)
Requirement already satisfied: certifi>=2017.4.17 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from requests->vaex-core) (2022.5.18.1)
Requirement already satisfied: idna<4,>=2.5 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from requests->vaex-core) (3.3)
Requirement already satisfied: charset-normalizer<3,>=2 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from requests->vaex-core) (2.0.12)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from requests->vaex-core) (1.26.9)
Requirement already satisfied: pygments<3.0.0,>=2.6.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from rich->vaex-core) (2.11.2)
Requirement already satisfied: commonmark<0.10.0,>=0.9.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from rich->vaex-core) (0.9.1)
Requirement already satisfied: h11>=0.8 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from uvicorn[standard]->vaex-server) (0.12.0)
Requirement already satisfied: asgiref>=3.4.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from uvicorn[standard]->vaex-server) (3.5.0)
Requirement already satisfied: click>=7.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from uvicorn[standard]->vaex-server) (8.0.4)
Requirement already satisfied: websockets>=10.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from uvicorn[standard]->vaex-server) (10.3)
Requirement already satisfied: colorama>=0.4 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from uvicorn[standard]->vaex-server) (0.4.4)
Requirement already satisfied: watchgod>=0.6 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from uvicorn[standard]->vaex-server) (0.8.2)
Requirement already satisfied: httptools>=0.4.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from uvicorn[standard]->vaex-server) (0.4.0)
Requirement already satisfied: python-dotenv>=0.13 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from uvicorn[standard]->vaex-server) (0.19.2)
Requirement already satisfied: matplotlib-inline in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipython<9->ipympl->vaex-jupyter) (0.1.3)
Requirement already satisfied: backcall in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipython<9->ipympl->vaex-jupyter) (0.2.0)
Requirement already satisfied: jedi>=0.16 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipython<9->ipympl->vaex-jupyter) (0.18.1)
Requirement already satisfied: decorator in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipython<9->ipympl->vaex-jupyter) (4.4.2)
Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipython<9->ipympl->vaex-jupyter) (3.0.29)
Requirement already satisfied: pickleshare in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipython<9->ipympl->vaex-jupyter) (0.7.5)
Requirement already satisfied: widgetsnbextension~=3.6.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (3.6.0)
Requirement already satisfied: jupyterlab-widgets>=1.0.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (1.1.0)
Requirement already satisfied: nbformat>=4.2.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (5.3.0)
Requirement already satisfied: ipykernel>=4.5.1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (6.13.0)
Requirement already satisfied: locket in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from partd>=0.3.10->dask!=2022.4.0->vaex-core) (1.0.0)
Collecting ipydatawidgets>=1.1.1
Using cached ipydatawidgets-4.3.1.post1-py2.py3-none-any.whl (271 kB)
Requirement already satisfied: sniffio>=1.1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from anyio<5,>=3.4.0->starlette==0.19.1->fastapi->vaex-server) (1.2.0)
Requirement already satisfied: jupyter-client>=6.1.12 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipykernel>=4.5.1->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (7.3.0)
Requirement already satisfied: psutil in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipykernel>=4.5.1->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (5.9.0)
Requirement already satisfied: debugpy>=1.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from ipykernel>=4.5.1->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (1.6.0)
Requirement already satisfied: parso<0.9.0,>=0.8.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from jedi>=0.16->ipython<9->ipympl->vaex-jupyter) (0.8.3)
Requirement already satisfied: fastjsonschema in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from nbformat>=4.2.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (2.15.3)
Requirement already satisfied: jsonschema>=2.6 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from nbformat>=4.2.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (4.4.0)
Requirement already satisfied: jupyter-core in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from nbformat>=4.2.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (4.10.0)
Requirement already satisfied: wcwidth in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython<9->ipympl->vaex-jupyter) (0.2.5)
Requirement already satisfied: notebook>=4.4.1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (6.4.11)
Requirement already satisfied: attrs>=17.4.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from jsonschema>=2.6->nbformat>=4.2.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (21.4.0)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from jsonschema>=2.6->nbformat>=4.2.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (0.18.1)
Requirement already satisfied: entrypoints in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from jupyter-client>=6.1.12->ipykernel>=4.5.1->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (0.4)
Requirement already satisfied: pyzmq>=22.3 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from jupyter-client>=6.1.12->ipykernel>=4.5.1->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (22.3.0)
Requirement already satisfied: pywin32>=1.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from jupyter-core->nbformat>=4.2.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (303)
Requirement already satisfied: terminado>=0.8.3 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (0.13.3)
Requirement already satisfied: prometheus-client in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (0.14.1)
Requirement already satisfied: nbconvert>=5 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (6.5.0)
Requirement already satisfied: argon2-cffi in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (21.3.0)
Requirement already satisfied: Send2Trash>=1.8.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (1.8.0)
Requirement already satisfied: jupyterlab-pygments in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from nbconvert>=5->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (0.2.2)
Requirement already satisfied: beautifulsoup4 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from nbconvert>=5->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (4.11.1)
Requirement already satisfied: defusedxml in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from nbconvert>=5->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (0.7.1)
Requirement already satisfied: nbclient>=0.5.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from nbconvert>=5->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (0.5.13)
Requirement already satisfied: pandocfilters>=1.4.1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from nbconvert>=5->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (1.5.0)
Requirement already satisfied: bleach in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from nbconvert>=5->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (5.0.0)
Requirement already satisfied: tinycss2 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from nbconvert>=5->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (1.1.1)
Requirement already satisfied: mistune<2,>=0.8.1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from nbconvert>=5->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (0.8.4)
Requirement already satisfied: pywinpty>=1.1.0 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from terminado>=0.8.3->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (2.0.5)
Requirement already satisfied: argon2-cffi-bindings in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from argon2-cffi->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (21.2.0)
Requirement already satisfied: cffi>=1.0.1 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from argon2-cffi-bindings->argon2-cffi->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (1.15.0)
Requirement already satisfied: soupsieve>1.2 in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from beautifulsoup4->nbconvert>=5->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (2.3.2.post1)
Requirement already satisfied: webencodings in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from bleach->nbconvert>=5->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (0.5.1)
Requirement already satisfied: pycparser in c:\wpy64-31040\python-3.10.4.amd64\lib\site-packages (from cffi>=1.0.1->argon2-cffi-bindings->argon2-cffi->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets>=7.5.0->bqplot>=0.10.1->vaex-jupyter) (2.21)
Building wheels for collected packages: vaex-core
Building wheel for vaex-core (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for vaex-core (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [294 lines of output]
<string>:4: DeprecationWarning: the imp module is deprecated in favour of importlib and slated for removal in Python 3.12; see the module's documentation for alternative uses
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-310
creating build\lib.win-amd64-cpython-310\vaex
copying vaex\agg.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\array_types.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\asyncio.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\benchmark.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\cache.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\column.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\config.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\convert.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\cpu.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataframe.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataframe_protocol.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataset.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataset_misc.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataset_mmap.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataset_utils.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\datatype.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\datatype_test.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\delayed.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\docstrings.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\encoding.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\events.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\execution.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\export.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\expression.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\expresso.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\formatting.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\functions.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\geo.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\grids.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\groupby.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\hash.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\image.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\itertools.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\join.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\json.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\kld.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\legacy.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\logging.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\memory.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\meta.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\metal.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\misc_cmdline.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\multiprocessing.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\multithreading.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\parallelize.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\progress.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\promise.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\registry.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\rolling.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\samp.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\schema.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\scopes.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\selections.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\serialize.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\settings.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\shift.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\stat.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\strings.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\struct.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\tasks.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\utils.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\version.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\_version.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\__init__.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\__main__.py -> build\lib.win-amd64-cpython-310\vaex
creating build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\convert.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\dataset.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\numpy_dispatch.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\opener.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\utils.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\utils_test.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\_version.py -> build\lib.win-amd64-cpython-310\vaex\arrow
creating build\lib.win-amd64-cpython-310\vaex\core
copying vaex\core\_version.py -> build\lib.win-amd64-cpython-310\vaex\core
copying vaex\core\__init__.py -> build\lib.win-amd64-cpython-310\vaex\core
creating build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\asyncio.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\cache.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\column.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\gcs.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\s3.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\s3arrow.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\s3fs.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\s3_test.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\__init__.py -> build\lib.win-amd64-cpython-310\vaex\file
creating build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\all.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\cmodule.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\dataset.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\expresso.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\misc.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\plot.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\ui.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\__init__.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\__main__.py -> build\lib.win-amd64-cpython-310\vaex\test
creating build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\bokeh.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\common.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\ipyvolume.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\jprops.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\readcol.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\__init__.py -> build\lib.win-amd64-cpython-310\vaex\ext
creating build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\expressions.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\ordereddict.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\pandawrap.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\parallelize.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\progressbar.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\samp.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\__init__.py -> build\lib.win-amd64-cpython-310\vaex\misc
creating build\lib.win-amd64-cpython-310\vaex\datasets
copying vaex\datasets\__init__.py -> build\lib.win-amd64-cpython-310\vaex\datasets
running egg_info
writing vaex_core.egg-info\PKG-INFO
writing dependency_links to vaex_core.egg-info\dependency_links.txt
writing entry points to vaex_core.egg-info\entry_points.txt
writing requirements to vaex_core.egg-info\requires.txt
writing top-level names to vaex_core.egg-info\top_level.txt
reading manifest file 'vaex_core.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.c' under directory 'vendor'
warning: no files found matching '*.h' under directory 'src'
warning: no files found matching '*.c' under directory 'src'
adding license file 'LICENSE.txt'
writing manifest file 'vaex_core.egg-info\SOURCES.txt'
C:\Users\User\AppData\Local\Temp\pip-build-env-2lzeauxq\overlay\Lib\site-packages\setuptools\command\build_py.py:153: SetuptoolsDeprecationWarning: Installing 'vaex.test.files' as data is deprecated, please list it in `packages`.
!!
############################
# Package would be ignored #
############################
Python recognizes 'vaex.test.files' as an importable package,
but it is not listed in the `packages` configuration of setuptools.
'vaex.test.files' has been automatically added to the distribution only
because it may contain data files, but this behavior is likely to change
in future versions of setuptools (and therefore is considered deprecated).
Please make sure that 'vaex.test.files' is included as a package by using
the `packages` configuration field or the proper discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" and "data files" on setuptools
documentation page.
!!
check.warn(importable)
copying vaex\pcre.dll -> build\lib.win-amd64-cpython-310\vaex
copying vaex\pcrecpp.dll -> build\lib.win-amd64-cpython-310\vaex
copying vaex\vcruntime140_1.dll -> build\lib.win-amd64-cpython-310\vaex
creating build\lib.win-amd64-cpython-310\vaex\test\files
copying vaex\test\files\gaia-small-colfits-basic.fits -> build\lib.win-amd64-cpython-310\vaex\test\files
copying vaex\test\files\gaia-small-colfits-plus.fits -> build\lib.win-amd64-cpython-310\vaex\test\files
copying vaex\test\files\gaia-small-fits-basic.fits -> build\lib.win-amd64-cpython-310\vaex\test\files
copying vaex\test\files\gaia-small-fits-plus.fits -> build\lib.win-amd64-cpython-310\vaex\test\files
copying vaex\test\files\gaia-small-votable.vot -> build\lib.win-amd64-cpython-310\vaex\test\files
copying vaex\test\files\default_amuse_plummer.hdf5 -> build\lib.win-amd64-cpython-310\vaex\test\files
copying vaex\datasets\iris.hdf5 -> build\lib.win-amd64-cpython-310\vaex\datasets
copying vaex\datasets\titanic.hdf5 -> build\lib.win-amd64-cpython-310\vaex\datasets
running build_ext
building 'vaex.vaexfast' extension
creating build\temp.win-amd64-cpython-310
creating build\temp.win-amd64-cpython-310\Release
creating build\temp.win-amd64-cpython-310\Release\src
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\User\AppData\Local\Temp\pip-build-env-2lzeauxq\overlay\Lib\site-packages\numpy\core\include -IC:\WPy64-31040\python-3.10.4.amd64\include -IC:\WPy64-31040\python-3.10.4.amd64\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc\vaexfast.cpp /Fobuild\temp.win-amd64-cpython-310\Release\src\vaexfast.obj /EHsc
vaexfast.cpp
src\vaexfast.cpp(18): warning C4005: 'INFINITY': macro redefinition
C:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt\corecrt_math.h(88): note: see previous definition of 'INFINITY'
C:\Users\User\AppData\Local\Temp\pip-build-env-2lzeauxq\overlay\Lib\site-packages\numpy\core\include\numpy\npy_1_7_deprecated_api.h(14) : Warning Msg: Using deprecated NumPy API, disable it with #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
src\vaexfast.cpp(201): warning C4244: 'argument': conversion from '__int64' to 'int', possible loss of data
src\vaexfast.cpp(532): warning C4244: 'argument': conversion from '__int64' to 'const int', possible loss of data
src\vaexfast.cpp(956): warning C4244: '=': conversion from 'Py_ssize_t' to 'int', possible loss of data
src\vaexfast.cpp(1798): warning C4244: 'argument': conversion from '__int64' to 'int', possible loss of data
src\vaexfast.cpp(1798): warning C4244: 'argument': conversion from '__int64' to 'int', possible loss of data
src\vaexfast.cpp(64): warning C4244: '=': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(198): note: see reference to function template instantiation 'void object_to_numpy1d_nocopy<double>(T *&,PyObject *,__int64 &,int &,int)' being compiled
with
[
T=double
]
src\vaexfast.cpp(88): warning C4244: '=': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(280): note: see reference to function template instantiation 'void object_to_numpy1d_nocopy_endian<double>(T *&,PyObject *,__int64 &,bool &,int &,int)' being compiled
with
[
T=double
]
src\vaexfast.cpp(105): warning C4244: 'initializing': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(644): note: see reference to function template instantiation 'void object_to_numpy2d_nocopy<double>(T *&,PyObject *,int &,int &,int)' being compiled
with
[
T=double
]
src\vaexfast.cpp(108): warning C4244: 'initializing': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(667): warning C4244: 'initializing': conversion from 'const double' to 'float', possible loss of data
src\vaexfast.cpp(775): note: see reference to function template instantiation 'void histogram2d_f4<__int64>(const float *__restrict const ,const float *__restrict const ,const float *const ,const __int64,bool,bool,bool,Tout *__restrict const ,const int,const int,const double,const double,const double,const double,const __int64,const __int64)' being compiled
with
[
Tout=__int64
]
src\vaexfast.cpp(667): warning C4244: 'initializing': conversion from 'const double' to 'const float', possible loss of data
src\vaexfast.cpp(668): warning C4244: 'initializing': conversion from 'const double' to 'float', possible loss of data
src\vaexfast.cpp(668): warning C4244: 'initializing': conversion from 'const double' to 'const float', possible loss of data
src\vaexfast.cpp(669): warning C4244: 'initializing': conversion from 'const double' to 'float', possible loss of data
src\vaexfast.cpp(669): warning C4244: 'initializing': conversion from 'const double' to 'const float', possible loss of data
src\vaexfast.cpp(670): warning C4244: 'initializing': conversion from 'const double' to 'float', possible loss of data
src\vaexfast.cpp(670): warning C4244: 'initializing': conversion from 'const double' to 'const float', possible loss of data
src\vaexfast.cpp(671): warning C4244: 'initializing': conversion from 'double' to 'float', possible loss of data
src\vaexfast.cpp(671): warning C4244: 'initializing': conversion from 'double' to 'const float', possible loss of data
src\vaexfast.cpp(672): warning C4244: 'initializing': conversion from 'double' to 'float', possible loss of data
src\vaexfast.cpp(672): warning C4244: 'initializing': conversion from 'double' to 'const float', possible loss of data
src\vaexfast.cpp(133): warning C4244: 'initializing': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(887): note: see reference to function template instantiation 'void object_to_numpy3d_nocopy<double>(T *&,PyObject *,int &,int &,int &,int)' being compiled
with
[
T=double
]
src\vaexfast.cpp(136): warning C4244: 'initializing': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(139): warning C4244: 'initializing': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(174): warning C4244: '=': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(983): note: see reference to function template instantiation 'void object_to_numpyNd_nocopy<double>(T *&,PyObject *,int,int &,int *,__int64 *,int)' being compiled
with
[
T=double
]
src\vaexfast.cpp(1335): warning C4244: '=': conversion from 'Py_ssize_t' to 'int', possible loss of data
src\vaexfast.cpp(2072): note: see reference to function template instantiation 'PyObject *statisticNd_<double,NPY_DOUBLE>(PyObject *,PyObject *)' being compiled
src\vaexfast.cpp(1338): warning C4244: '=': conversion from 'Py_ssize_t' to 'int', possible loss of data
src\vaexfast.cpp(1149): warning C4244: 'initializing': conversion from 'double' to 'T', possible loss of data
with
[
T=float
]
src\vaexfast.cpp(1271): note: see reference to function template instantiation 'void statisticNd<T,op_add1<T,double,endian>,endian>(const T *__restrict const [],const T *__restrict const [],__int64,const int,const int,double *__restrict const ,const __int64 *__restrict const ,const int *__restrict const ,const T *__restrict const ,const T *__restrict const ,int)' being compiled
with
[
T=float,
endian=functor_double_to_native
]
src\vaexfast.cpp(1308): note: see reference to function template instantiation 'void statisticNd_wrap_template_endian<T,functor_double_to_native>(const T *const [],const T *const [],__int64,int,int,double *,__int64 [],int [],T [],T [],int,int)' being compiled
with
[
T=float
]
src\vaexfast.cpp(1402): note: see reference to function template instantiation 'void statisticNd_wrap_template<T>(const T *const [],const T *const [],__int64,int,int,double *,__int64 [],int [],T [],T [],bool,int,int)' being compiled
with
[
T=float
]
src\vaexfast.cpp(2073): note: see reference to function template instantiation 'PyObject *statisticNd_<float,NPY_FLOAT>(PyObject *,PyObject *)' being compiled
src\vaexfast.cpp(1178): warning C4244: 'initializing': conversion from 'double' to 'T', possible loss of data
with
[
T=float
]
src\vaexfast.cpp(1198): warning C4244: 'initializing': conversion from 'double' to 'T', possible loss of data
with
[
T=float
]
src\vaexfast.cpp(1216): warning C4244: 'initializing': conversion from 'double' to 'T', possible loss of data
with
[
T=float
]
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\link.exe" /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:C:\WPy64-31040\python-3.10.4.amd64\libs /LIBPATH:C:\WPy64-31040\python-3.10.4.amd64 /LIBPATH:C:\WPy64-31040\python-3.10.4.amd64\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\lib\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\lib\um\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.19041.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.19041.0\um\x64" /EXPORT:PyInit_vaexfast build\temp.win-amd64-cpython-310\Release\src\vaexfast.obj /OUT:build\lib.win-amd64-cpython-310\vaex\vaexfast.cp310-win_amd64.pyd /IMPLIB:build\temp.win-amd64-cpython-310\Release\src\vaexfast.cp310-win_amd64.lib
Creating library build\temp.win-amd64-cpython-310\Release\src\vaexfast.cp310-win_amd64.lib and object build\temp.win-amd64-cpython-310\Release\src\vaexfast.cp310-win_amd64.exp
Generating code
Finished generating code
building 'vaex.superstrings' extension
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\User\AppData\Local\Temp\pip-build-env-2lzeauxq\overlay\Lib\site-packages\numpy\core\include -Ivendor/pybind11/include -Ivendor/pybind11/include -Ivendor/string-view-lite/include -Ivendor/boost -IC:\WPy64-31040\python-3.10.4.amd64\include -IC:\WPy64-31040\python-3.10.4.amd64\Library\include -Ivendor\pcre\Library\include -IC:\WPy64-31040\python-3.10.4.amd64\include -IC:\WPy64-31040\python-3.10.4.amd64\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc\string_utils.cpp /Fobuild\temp.win-amd64-cpython-310\Release\src\string_utils.obj /EHsc
string_utils.cpp
C:\Users\User\AppData\Local\Temp\pip-install-ax3b7ho9\vaex-core_99be9e7b90ff48fa91d344a9b93eeaf8\src\string_utils.hpp(208): warning C4244: '=': conversion from 'char32_t' to 'char', possible loss of data
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\User\AppData\Local\Temp\pip-build-env-2lzeauxq\overlay\Lib\site-packages\numpy\core\include -Ivendor/pybind11/include -Ivendor/pybind11/include -Ivendor/string-view-lite/include -Ivendor/boost -IC:\WPy64-31040\python-3.10.4.amd64\include -IC:\WPy64-31040\python-3.10.4.amd64\Library\include -Ivendor\pcre\Library\include -IC:\WPy64-31040\python-3.10.4.amd64\include -IC:\WPy64-31040\python-3.10.4.amd64\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc\strings.cpp /Fobuild\temp.win-amd64-cpython-310\Release\src\strings.obj /EHsc
strings.cpp
vendor/pybind11/include\pybind11/numpy.h(35): error C2065: 'ssize_t': undeclared identifier
vendor/pybind11/include\pybind11/numpy.h(35): error C2338: ssize_t != Py_intptr_t
C:\Users\User\AppData\Local\Temp\pip-install-ax3b7ho9\vaex-core_99be9e7b90ff48fa91d344a9b93eeaf8\src\string_utils.hpp(208): warning C4244: '=': conversion from 'char32_t' to 'char', possible loss of data
vendor\pcre\Library\include\pcrecpp.h(701): warning C4251: 'pcrecpp::RE::pattern_': class 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>' needs to have dll-interface to be used by clients of class 'pcrecpp::RE'
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include\xstring(4871): note: see declaration of 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>'
src\strings.cpp(273): warning C4018: '>': signed/unsigned mismatch
src\strings.cpp(282): warning C4018: '>': signed/unsigned mismatch
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for vaex-core
Failed to build vaex-core
ERROR: Could not build wheels for vaex-core, which is required to install pyproject.toml-based projects
```
|
closed
|
2022-08-06T16:06:22Z
|
2022-08-06T16:18:47Z
|
https://github.com/vaexio/vaex/issues/2157
|
[] |
Code4SAFrankie
| 1
|
aeon-toolkit/aeon
|
scikit-learn
| 1,696
|
[BUG] Arsenal predict not working properly?!
|
### Describe the issue
I was using sktime for Arsenal, but because of the split (for the reasons stated by Tony in [this](https://github.com/aeon-toolkit/aeon/issues/456) issue) and because I wanted to test whether aeon is faster than sktime, I decided to try aeon. As aeon is a fork of sktime the basics are the same, so switching was fairly easy. Well, that's what I thought. It turns out that the predict of the Arsenal algorithm now expects another type. I'm still trying to use predict and can't get it to work.
For Arsenal, I'm fitting it in a different file and saving the result with the joblib.dump function. The generated file is then loaded back in the main script, in which I try to predict the class of my measurement. This always gives me the following error:
```
TypeError: No matching definition for argument type(s) array(float64, 1d, C), array(float64, 1d, C), Tuple(array(int32, 1d, C), array(int32, 1d, C), array(float32, 1d, C)), Tuple(array(int32, 1d, C), array(int32, 1d, C), array(float32, 1d, C)), int64
```
I'm trying to predict it like that:
```
acf = joblib.load("arsenal.joblib")
data = np.array(['0.0', '0.0', '0.0', ..., '0.0'], dtype=np.float64)
acf.predict(data)
```
The list contains 200 strings all being 0.0
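For reference, this is how I understand aeon expects the input to be shaped: a 2D `(n_cases, n_timepoints)` or 3D `(n_cases, n_channels, n_timepoints)` array rather than a single 1D series. The file name and series length below are assumptions, so this is only a sketch:
```python
# Sketch: reshape one univariate series into the collection layout aeon expects.
import joblib
import numpy as np

acf = joblib.load("arsenal.joblib")            # assumed file name
series = np.zeros(200, dtype=np.float64)       # one series of length 200
X = series.reshape(1, 1, -1)                   # (n_cases, n_channels, n_timepoints)
print(acf.predict(X))
```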
After that I went to the [examples directory in the aeon git-repo](https://github.com/aeon-toolkit/aeon/blob/main/examples/classification/convolution_based.ipynb) and tried to use the example written there. While reading this example I noticed that the line in "In [2]:" should be ```motions_test, motions_test_labels = load_basic_motions(split="test")``` instead of ```motions_test, motions_test_labels = load_basic_motions(split="train")```, if I'm not completely misunderstanding the example. But that is just a side note.
So I tried the example, and with it I no longer got the above-mentioned error but instead a new one:
```
Terminating: fork() called from a process already using Gnu OpenMP, this is unsafe.
```
I assume this is a direct consequence of using Python's multiprocessing module to run the prediction in parallel with another module?!
Is this a bug or am I doing something completely wrong? As I said, the example works great in sktime.
### Suggest a potential alternative/fix
_No response_
### Additional context
_No response_
|
open
|
2024-06-18T12:13:36Z
|
2024-09-25T10:33:16Z
|
https://github.com/aeon-toolkit/aeon/issues/1696
|
[
"bug",
"classification",
"multithreading"
] |
tim-bsm
| 28
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 1,149
|
How to Train Our Custom Neural Voice model
|
Please help me by explaining it with a step-by-step tutorial.
|
closed
|
2022-12-14T09:41:20Z
|
2023-01-08T08:55:16Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1149
|
[] |
vivek1432ps
| 0
|
jupyter/nbgrader
|
jupyter
| 1,042
|
UnicodeDecodeError appeared on CentOS,but not appeared on MacOS or Ubuntu
|
```shell
[INFO] Updating/creating assignment 'test': {}
[INFO] Converting notebook /home/itutor/instructor01/assignments/source/./test/test-01.ipynb
[ERROR] There was an error processing assignment: /home/itutor/instructor01/assignments/source/./test
[ERROR] Traceback (most recent call last):
File "/home/itutor/anaconda3/envs/test-nbgrader/lib/python3.6/site-packages/nbgrader/converters/base.py", line 293, in convert_notebooks
self.convert_single_notebook(notebook_filename)
File "/home/itutor/anaconda3/envs/test-nbgrader/lib/python3.6/site-packages/nbgrader/converters/base.py", line 249, in convert_single_notebook
output, resources = self.exporter.from_filename(notebook_filename, resources=resources)
File "/home/itutor/anaconda3/envs/test-nbgrader/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 174, in from_filename
return self.from_file(f, resources=resources, **kw)
File "/home/itutor/anaconda3/envs/test-nbgrader/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 192, in from_file
return self.from_notebook_node(nbformat.read(file_stream, as_version=4), resources=resources, **kw)
File "/home/itutor/anaconda3/envs/test-nbgrader/lib/python3.6/site-packages/nbconvert/exporters/notebook.py", line 31, in from_notebook_node
nb_copy, resources = super(NotebookExporter, self).from_notebook_node(nb, resources, **kw)
File "/home/itutor/anaconda3/envs/test-nbgrader/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 134, in from_notebook_node
nb_copy, resources = self._preprocess(nb_copy, resources)
File "/home/itutor/anaconda3/envs/test-nbgrader/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 311, in _preprocess
nbc, resc = preprocessor(nbc, resc)
File "/home/itutor/anaconda3/envs/test-nbgrader/lib/python3.6/site-packages/nbconvert/preprocessors/base.py", line 47, in __call__
return self.preprocess(nb, resources)
File "/home/itutor/anaconda3/envs/test-nbgrader/lib/python3.6/site-packages/nbgrader/preprocessors/headerfooter.py", line 24, in preprocess
header_nb = read_nb(fh, as_version=current_nbformat)
File "/home/itutor/anaconda3/envs/test-nbgrader/lib/python3.6/site-packages/nbgrader/nbgraderformat/v1.py", line 124, in read_v1
nb = _read(source, as_version, **kwargs)
File "/home/itutor/anaconda3/envs/test-nbgrader/lib/python3.6/site-packages/nbformat/__init__.py", line 141, in read
return reads(fp.read(), as_version, **kwargs)
File "/home/itutor/anaconda3/envs/test-nbgrader/lib/python3.6/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe4 in position 90: ordinal not in range(128)
[ERROR] There was an error processing assignment 'test' for student '.'
[ERROR] Please see the the above traceback for details on the specific errors on the above failures.
```
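The `ascii` codec in the traceback suggests the notebook is being read with the system default encoding; on a CentOS box running under the C/POSIX locale that default is ASCII, while macOS and Ubuntu typically run with a UTF-8 locale. A quick way to check what Python picks up (a sketch, not an nbgrader API):
```python
# Sketch: show which encoding Python will use for plain open() calls.
import locale
import sys

print(locale.getpreferredencoding(False))  # e.g. 'ANSI_X3.4-1968' under the C locale
print(sys.getfilesystemencoding())
# Exporting LANG=en_US.UTF-8 (or LC_ALL) before running nbgrader normally
# switches the first value to 'UTF-8'.
```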
|
closed
|
2018-11-02T07:46:15Z
|
2018-12-15T16:40:54Z
|
https://github.com/jupyter/nbgrader/issues/1042
|
[
"question"
] |
mqyang56
| 0
|
HumanSignal/labelImg
|
deep-learning
| 445
|
Wrong image dimensions in xml file
|
We found out that sometimes the image dimensions specified in the xml file (width and height) don't correspond to the real dimensions of the image. Sometimes they're off by a few pixels, sometimes by hundreds of pixels. On a large dataset, we found this happening for around 2% of the labeled images.
- **OS: Windows 10**
- **PyQt version: 5.10.1**
|
open
|
2019-02-20T10:24:21Z
|
2019-02-20T10:24:21Z
|
https://github.com/HumanSignal/labelImg/issues/445
|
[] |
oscmansan
| 0
|
serengil/deepface
|
machine-learning
| 1,098
|
Error message on verify function
|
In relation to #1056:
in 0.0.86, calling DeepFace.verify when no face is detected in either image throws a ValueError with the message `Exception while processing img1_path`.
This seems to have appeared after the [#1072](https://github.com/serengil/deepface/pull/1072/) merge.
|
closed
|
2024-03-11T10:33:51Z
|
2024-03-11T18:49:57Z
|
https://github.com/serengil/deepface/issues/1098
|
[
"question"
] |
sanket-valani-tss
| 11
|
microsoft/MMdnn
|
tensorflow
| 666
|
Incorrect results with converted model [Caffe to Keras]
|
Platform (like ubuntu 16.04/win10):
Ubuntu 18.04
Python version:
3.6
Source framework with version (like Tensorflow 1.4.1 with GPU):
Caffe, GPU
Destination framework with version (like CNTK 2.3 with GPU):
Keras, GPU
I have a modified GoogLeNet, in which I changed the input structure and the final number of classes. This works perfectly in Caffe. I converted this to Keras using the command:
mmconvert -sf caffe -in deploy.prototxt -iw caffe.caffemodel -df keras -om ./keras.model
I resize the input image (grayscale) to the desired dimensions, switch the channels to either first dimension or third (for caffe and keras), and pass it to both keras and caffe, like so:
Caffe:
```python
def check_keras(arr):
    arr = arr[:, None, ...]
    net.blobs['data'].data[...] = arr[0]
    out = net.forward()
    return np.argmax(out['softmax'])
```
Keras:
```python
def check_keras(arr):
    y_train_pred_keras = m.predict(arr[..., None])
    return np.argmax(y_train_pred_keras, axis=1)
```
For the same image, this returns different results.
For Caffe:
[[2.8024021e-01 1.7262013e-04 7.1958715e-01]]
Predicted - class 2
For Keras:
[[0.8009005 0.00084618 0.19825332]]
Predicted - class 0
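One way to narrow this down might be to compare a named intermediate layer in both frameworks on the same input; `net` and `m` are the Caffe network and converted Keras model from above, and the layer/blob name `conv1` plus the 224x224 input size are assumptions rather than actual model details:
```python
# Sketch: probe matching layers in Caffe (NCHW) and Keras (NHWC) with the same input.
import numpy as np
from keras.models import Model

img = np.random.rand(1, 224, 224, 1).astype(np.float32)       # assumed grayscale input, channels last

net.blobs['data'].data[...] = np.transpose(img, (0, 3, 1, 2))  # Caffe wants channels first
net.forward()
caffe_act = net.blobs['conv1'].data                            # assumed blob name

probe = Model(inputs=m.input, outputs=m.get_layer('conv1').output)
keras_act = probe.predict(img)

# Convert the Keras NHWC activations back to NCHW before comparing.
print(np.abs(caffe_act - np.transpose(keras_act, (0, 3, 1, 2))).max())
```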
Any help is appreciated.
|
open
|
2019-05-31T11:34:14Z
|
2019-06-03T09:40:04Z
|
https://github.com/microsoft/MMdnn/issues/666
|
[] |
vedantbhatia
| 1
|
flairNLP/flair
|
nlp
| 2,648
|
RuntimeError
|
Hello,
when I increase the size of the training dataset I get the following error; how can I solve the problem?
***************
RuntimeError Traceback (most recent call last)
<ipython-input-16-bb761bddd0eb> in <module>
14 max_epochs=100, # very few epochs of fine-tuning
15 #train_with_dev=True,
---> 16 train_with_test=True
17 #shuffle=False,
18 #train_with_test = True,
/opt/conda/lib/python3.7/site-packages/flair/trainers/trainer.py in train(self, base_path, learning_rate, mini_batch_size, mini_batch_chunk_size, max_epochs, train_with_dev, train_with_test, monitor_train, monitor_test, main_evaluation_metric, scheduler, anneal_factor, patience, min_learning_rate, initial_extra_patience, optimizer, cycle_momentum, warmup_fraction, embeddings_storage_mode, checkpoint, save_final_model, anneal_with_restarts, anneal_with_prestarts, anneal_against_dev_loss, batch_growth_annealing, shuffle, param_selection_mode, write_weights, num_workers, sampler, use_amp, amp_opt_level, eval_on_train_fraction, eval_on_train_shuffle, save_model_each_k_epochs, tensorboard_comment, use_swa, use_final_model_for_eval, gold_label_dictionary_for_eval, create_file_logs, create_loss_file, epoch, use_tensorboard, tensorboard_log_dir, metrics_for_tensorboard, optimizer_state_dict, scheduler_state_dict, save_optimizer_state, **kwargs)
465
466 # forward pass
--> 467 loss = self.model.forward_loss(batch_step)
468
469 if isinstance(loss, Tuple):
/opt/conda/lib/python3.7/site-packages/flair/models/sequence_tagger_model.py in forward_loss(self, data_points, sort)
392 self, data_points: Union[List[Sentence], Sentence], sort=True
393 ) -> torch.tensor:
--> 394 features = self.forward(data_points)
395 return self._calculate_loss(features, data_points)
396
/opt/conda/lib/python3.7/site-packages/flair/models/sequence_tagger_model.py in forward(self, sentences)
457 rnn_output, hidden = self.rnn(packed, initial_hidden_state)
458 else:
--> 459 rnn_output, hidden = self.rnn(packed)
460
461 sentence_tensor, output_lengths = torch.nn.utils.rnn.pad_packed_sequence(
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/rnn.py in forward(self, input, hx)
583 else:
584 result = _VF.lstm(input, batch_sizes, hx, self._flat_weights, self.bias,
--> 585 self.num_layers, self.dropout, self.training, self.bidirectional)
586 output = result[0]
587 hidden = result[1:]
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
|
closed
|
2022-02-26T14:00:17Z
|
2022-08-01T05:48:22Z
|
https://github.com/flairNLP/flair/issues/2648
|
[
"question",
"wontfix"
] |
Astudnew
| 2
|
paperless-ngx/paperless-ngx
|
django
| 7,686
|
[Documentation] PAPERLESS_GPG_DECRYPTOR in Docs but not in Code
|
### Description
`PAPERLESS_GPG_DECRYPTOR` appears in docs but not in code. I think `PAPERLESS_ENABLE_GPG_DECRYPTOR` is meant.
### Steps to reproduce
n/a
### Webserver logs
```bash
n/a
```
### Browser logs
_No response_
### Paperless-ngx version
n/a
### Host OS
n/a
### Installation method
Docker - official image
### System status
_No response_
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description.
|
closed
|
2024-09-11T17:32:03Z
|
2024-10-12T03:06:08Z
|
https://github.com/paperless-ngx/paperless-ngx/issues/7686
|
[
"documentation"
] |
stevenengland
| 2
|
hankcs/HanLP
|
nlp
| 777
|
Word segmentation after named entity recognition is unsatisfactory
|
<!--
The notes and version number are required, otherwise there will be no reply. If you want a quick reply, please fill in the template carefully. Thank you for your cooperation.
-->
## Notes
Please confirm the following notes:
* I have carefully read the following documents and did not find an answer:
  - [Main README](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer either.
* I understand that the open-source community is a voluntary community built on shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I have entered an x in these brackets to confirm the items above.
## Version
<!-- For release versions, give the jar file name without the extension; for the GitHub repository version, state whether it is the master or portable branch -->
The current latest version is:
The version I am using is: the master branch
I added a new named-entity type for residential-complex names. When ViterbiSegment segments "地址是星海城三期",
it recognizes the complex names
[星海城/nbd] and [星海城三期/nbd], but the segmentation result is
[地址/n, 是/vshi, 星海/n, 城/n, 三期/nbdp]
How can I make the result be
[地址/n, 是/vshi, 星海城三期/nbd]
|
closed
|
2018-03-30T09:02:19Z
|
2020-01-01T10:50:37Z
|
https://github.com/hankcs/HanLP/issues/777
|
[
"ignored"
] |
YannLex
| 4
|
Miserlou/Zappa
|
flask
| 1,241
|
TracingConfig error when deploying
|
I keep getting this exact error when using `zappa deploy`:
> Traceback (most recent call last):
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/zappa/cli.py", line 2610, in handle
sys.exit(cli.handle())
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/zappa/cli.py", line 505, in handle
self.dispatch_command(self.command, stage)
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/zappa/cli.py", line 539, in dispatch_command
self.deploy(self.vargs['zip'])
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/zappa/cli.py", line 769, in deploy
self.lambda_arn = self.zappa.create_lambda_function(**kwargs)
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/zappa/core.py", line 1048, in create_lambda_function
response = self.lambda_client.create_function(**kwargs)
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/botocore/client.py", line 253, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/botocore/client.py", line 518, in _make_api_call
api_params, operation_model, context=request_context)
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/botocore/client.py", line 573, in _convert_to_request_dict
api_params, operation_model)
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/botocore/validate.py", line 291, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
ParamValidationError: Parameter validation failed:
Unknown parameter in input: "TracingConfig", must be one of: FunctionName, Runtime, Role, Handler, Code, Description, Timeout, MemorySize, Publish, VpcConfig, DeadLetterConfig, Environment, KMSKeyArn
I also get this error when running 'zappa invoke production --raw`:
> Traceback (most recent call last):
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/zappa/cli.py", line 2610, in handle
sys.exit(cli.handle())
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/zappa/cli.py", line 505, in handle
self.dispatch_command(self.command, stage)
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/zappa/cli.py", line 561, in dispatch_command
no_color=self.vargs['no_color'],
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/zappa/cli.py", line 1223, in invoke
invocation_type='RequestResponse',
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/zappa/core.py", line 1153, in invoke_lambda_function
Payload=payload
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/botocore/client.py", line 253, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/judah/.local/share/virtualenvs/kpfyes2-l82cPnw3/lib/python2.7/site-packages/botocore/client.py", line 544, in _make_api_call
raise error_class(parsed_response, operation_name)
ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the Invoke operation: Function not found: arn:aws:lambda:us-west-1:key_for_aws:function:projectname-production
My project is a single app.py file, and my zappa config looks like this:
```json
{
    "production": {
        "app_function": "app.app",
        "aws_region": "us-west-1",
        "profile_name": "default",
        "project_name": "projejct_name",
        "runtime": "python2.7",
        "s3_bucket": "bucket_name"
    }
}
```
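For what it's worth, `TracingConfig` is rejected by botocore's client-side parameter validation, which usually means the installed botocore predates X-Ray tracing support for Lambda's `create_function`. A quick version check (a sketch, run in the same virtualenv that runs `zappa deploy`) can confirm whether upgrading boto3/botocore is needed:
```python
# Sketch: print the boto3/botocore versions Zappa is actually using.
import boto3
import botocore

print(boto3.__version__, botocore.__version__)
```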
|
closed
|
2017-11-15T17:28:51Z
|
2017-11-15T18:01:04Z
|
https://github.com/Miserlou/Zappa/issues/1241
|
[] |
judah-caruso
| 6
|
modelscope/modelscope
|
nlp
| 967
|
How frequently does ModelScope sync its mirror of the Hugging Face model hub?
|
Flux has been updated quite frequently lately. For example, the Flux ControlNet-related models under https://huggingface.co/Shakker-Labs are not yet available on ModelScope.
|
closed
|
2024-08-29T09:47:06Z
|
2024-08-29T10:28:34Z
|
https://github.com/modelscope/modelscope/issues/967
|
[] |
zhangvia
| 1
|
Colin-b/pytest_httpx
|
pytest
| 127
|
Switch from setup.py to a build library
|
Current solution is deprecated and we need something more future proof
|
closed
|
2023-11-13T19:34:18Z
|
2024-02-21T17:57:39Z
|
https://github.com/Colin-b/pytest_httpx/issues/127
|
[
"enhancement",
"good first issue"
] |
Colin-b
| 1
|
miguelgrinberg/Flask-Migrate
|
flask
| 413
|
QUESTION: Not able to rename a Table name.
|
Whenever I try to rename a table, the generated migration does not use `op.rename_table`; it drops the table and creates a new one instead.
In my case I don't want my existing data to be lost; I want it to persist. But whenever I just rename the model and run `flask db migrate` and then `flask db upgrade`, I encounter this error:
```
sqlalchemy.exc.IntegrityError: (MySQLdb._exceptions.IntegrityError) (1217, 'Cannot delete or update a parent row: a foreign key constraint fails')
[SQL: DROP TABLE person]
(Background on this error at: http://sqlalche.me/e/14/gkpj)
```
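For the record, a hand-edited migration along these lines keeps the data by renaming in place instead of dropping and recreating; the target name `people` is only an assumed example, not the actual model name:
```python
# Sketch of a manually edited Alembic migration: replace the autogenerated
# drop/create operations with an in-place rename.
from alembic import op


def upgrade():
    op.rename_table('person', 'people')


def downgrade():
    op.rename_table('people', 'person')
```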
|
closed
|
2021-06-24T06:01:11Z
|
2021-06-25T15:13:44Z
|
https://github.com/miguelgrinberg/Flask-Migrate/issues/413
|
[
"question"
] |
aalokrmb
| 4
|
liangliangyy/DjangoBlog
|
django
| 151
|
A question
|
./manage.py makemigrations
Traceback (most recent call last):
File "./manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/home/chalaza/anaconda3/lib/python3.6/site-packages/django/core/management/__init__.py", line 371, in execute_from_command_line
utility.execute()
File "/home/chalaza/anaconda3/lib/python3.6/site-packages/django/core/management/__init__.py", line 347, in execute
django.setup()
File "/home/chalaza/anaconda3/lib/python3.6/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/chalaza/anaconda3/lib/python3.6/site-packages/django/apps/registry.py", line 89, in populate
app_config = AppConfig.create(entry)
File "/home/chalaza/anaconda3/lib/python3.6/site-packages/django/apps/config.py", line 90, in create
module = import_module(entry)
File "/home/chalaza/anaconda3/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'pagedown'
|
closed
|
2018-08-05T12:10:22Z
|
2018-08-06T03:00:00Z
|
https://github.com/liangliangyy/DjangoBlog/issues/151
|
[] |
evaloui
| 2
|
seleniumbase/SeleniumBase
|
pytest
| 2,427
|
Set download folder
|
How can I define the directory in which the files will be downloaded when interacting with the pages? by default I know it is "downloaded_files" in the current folder, but I want to keep my project folder free of files being downloaded, in fact I would like them to be saved in f'{gettempdir()}/chrome_downloads'. Is this possible?
Thanks for your amazing work!
|
closed
|
2024-01-13T18:00:28Z
|
2025-03-18T07:45:31Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2427
|
[
"duplicate",
"question"
] |
FRIKIdelTO
| 8
|
comfyanonymous/ComfyUI
|
pytorch
| 7,142
|
No such file or directory: 'I:\\AI\\ComfyUI\\output\\ComfyUI_02638_.png [output]'
|
### Expected Behavior
node load image from bug
### Actual Behavior

### Steps to Reproduce
0
### Debug Logs
```powershell
File "I:\AI\ComfyUI\.ext\Lib\site-packages\PIL\Image.py", line 3431, in open
fp = builtins.open(filename, "rb")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'I:\\AI\\ComfyUI\\output\\ComfyUI_02638_.png [output]'
Prompt executed in 0.01 seconds
got prompt
!!! Exception during processing !!! [Errno 2] No such file or directory: 'I:\\AI\\ComfyUI\\output\\ComfyUI_02661_.png [output]'
Traceback (most recent call last):
File "I:\AI\ComfyUI\node_helpers.py", line 21, in pillow
x = fn(arg)
^^^^^^^
File "I:\AI\ComfyUI\.ext\Lib\site-packages\PIL\Image.py", line 3431, in open
fp = builtins.open(filename, "rb")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'I:\\AI\\ComfyUI\\output\\ComfyUI_02661_.png [output]'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "I:\AI\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\AI\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\AI\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "I:\AI\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\AI\ComfyUI\nodes.py", line 1791, in load_image_output
return self.load_image(f"{image}")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\AI\ComfyUI\nodes.py", line 1663, in load_image
img = node_helpers.pillow(Image.open, image_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\AI\ComfyUI\node_helpers.py", line 25, in pillow
x = fn(arg)
^^^^^^^
File "I:\AI\ComfyUI\.ext\Lib\site-packages\PIL\Image.py", line 3431, in open
fp = builtins.open(filename, "rb")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'I:\\AI\\ComfyUI\\output\\ComfyUI_02661_.png [output]'
Prompt executed in 0.03 seconds
```
### Other
_No response_
|
closed
|
2025-03-09T09:38:17Z
|
2025-03-11T08:30:26Z
|
https://github.com/comfyanonymous/ComfyUI/issues/7142
|
[
"Potential Bug"
] |
bihailantian655
| 3
|
ionelmc/pytest-benchmark
|
pytest
| 181
|
Remove 'See calibration_ and FAQ_.' from project summary
|
It would be awesome if 'See calibration_ and FAQ_.' could be removed from the `description` in `setup.py`.
I could not find a place in which the links would work, but they get shown everywhere, where there is the description of the project, such as on the [project page on pypi](https://pypi.org/project/pytest-benchmark/) and the current [plugin list](https://plugincompat.herokuapp.com/) of pytest, which is getting reworked as of pytest-dev/pytest#5105 .
before:
```python
description='A ``pytest`` fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer. '
'See calibration_ and FAQ_.',
```
after:
```python
description='A ``pytest`` fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer. ',
```
It is also in the `README.rst` and `.cookiecutterrc`. I can understand if updating the cookiecutter details and the docs is undesired, but i see no problem in updating the package metadata.
I would be happy to open a PR if you agree with this proposal!
|
closed
|
2020-09-29T11:42:49Z
|
2020-09-29T19:26:15Z
|
https://github.com/ionelmc/pytest-benchmark/issues/181
|
[] |
mcsitter
| 3
|
jpadilla/django-rest-framework-jwt
|
django
| 52
|
DRF 3.0.1: ImportError: ImportError: cannot import name smart_text.
|
Hi there, I've just upgraded to the latest Django Rest Framework, and am getting this error...
```
ImportError: Could not import 'rest_framework_jwt.authentication.JSONWebTokenAuthentication' for API setting 'DEFAULT_AUTHENTICATION_CLASSES'. ImportError: cannot import name smart_text.
```
I believe this will happen on a clean install of DRF and DRF-JWT, but if it doesn't let me know and I'll provide more details.
|
closed
|
2014-12-11T22:06:20Z
|
2014-12-11T23:07:59Z
|
https://github.com/jpadilla/django-rest-framework-jwt/issues/52
|
[
"bug"
] |
KevinGrahamFoster
| 3
|
QuivrHQ/quivr
|
api
| 3,115
|
Notion pages upload bug
|
The page name should be trimmed so that the file extension can be appended.
|
closed
|
2024-08-30T08:25:58Z
|
2024-12-03T16:06:58Z
|
https://github.com/QuivrHQ/quivr/issues/3115
|
[
"bug",
"Stale"
] |
linear[bot]
| 2
|
amidaware/tacticalrmm
|
django
| 1,819
|
URL Actions Permissions / Global Settings Sub-Permissions
|
**Is your feature request related to a problem? Please describe.**
We are working on technician permissions for a new hire coming onboard soon, and I noticed that he won't be able to view URL Actions without being given the "View Global Settings" permission. We use URL Actions currently just to take control of an agent using ScreenConnect. This permission issue is troubling because he would then be able to see our email alert/SMTP settings as well as the SMS Twilio key.
**Describe the solution you'd like**
Breaking down these global setting permissions further for Email, SMS, MeshCentral, key store, url actions, etc.
**Describe alternatives you've considered**
Implementing something like #667 with the ability to set it to a URL action that doesn't care about the 'View Global Settings' permission.
Alternatively, make the "Run URL Actions" permission allow viewing only URL actions if the 'View Global Settings' permission is not checked.
Or any other solution you may think of.
**Additional context**
Add any other context or screenshots about the feature request here.
|
closed
|
2024-03-29T19:49:03Z
|
2024-04-09T01:10:50Z
|
https://github.com/amidaware/tacticalrmm/issues/1819
|
[
"bug"
] |
ZzBombardierzZ
| 2
|
opengeos/leafmap
|
plotly
| 160
|
Add a toolbar for the plotly backend
|
Demo

|
closed
|
2021-12-29T16:05:50Z
|
2022-01-03T14:33:10Z
|
https://github.com/opengeos/leafmap/issues/160
|
[
"Feature Request"
] |
giswqs
| 1
|
viewflow/viewflow
|
django
| 464
|
DependentModelSelect and AjaxModelSelect not compatible
|
I have a ModelViewset.
This will work:
```
form_widgets = {
'primary_contact': DependentModelSelect(
depends_on='customer',
queryset=lambda customer: customer.contacts
),
}
```
this will also work
```
form_widgets = {
'customer': AjaxModelSelect(lookups=['name__istartswith'],),
}
```
but this won't work
```
form_widgets = {
'primary_contact': DependentModelSelect(
depends_on='customer',
queryset=lambda customer: customer.contacts
),
'customer': AjaxModelSelect(lookups=['name__istartswith'],),
}
```
Any hints?
|
closed
|
2024-08-05T06:43:47Z
|
2024-08-15T15:13:58Z
|
https://github.com/viewflow/viewflow/issues/464
|
[
"request/bug",
"dev/forms"
] |
joff13
| 2
|
TencentARC/GFPGAN
|
deep-learning
| 481
|
Favourite picture
|
Please help solve the picture upload problem.
|
open
|
2024-01-02T06:34:55Z
|
2024-01-20T17:25:44Z
|
https://github.com/TencentARC/GFPGAN/issues/481
|
[] |
nuruddin96031
| 1
|
graphql-python/graphene-django
|
django
| 1,530
|
Error debugging
|
**problem**
The code raised an error, and Graphene-Django only returned the error through the response, which made debugging harder.
**django setting**
Directly raising the error in the development environment, controlled by a global Django setting, would be a good choice.
|
open
|
2024-07-25T06:24:30Z
|
2024-07-25T06:24:30Z
|
https://github.com/graphql-python/graphene-django/issues/1530
|
[
"✨enhancement"
] |
HuLight
| 0
|
huggingface/datasets
|
machine-learning
| 7,363
|
ImportError: To support decoding images, please install 'Pillow'.
|
### Describe the bug
Following this tutorial locally using a MacBook and VSCode: https://huggingface.co/docs/diffusers/en/tutorials/basic_training
This line of code: `for i, image in enumerate(dataset[:4]["image"]):`
throws: `ImportError: To support decoding images, please install 'Pillow'.`
Pillow is installed.
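A quick sanity check (sketch) that Pillow is installed into the same interpreter the VSCode/Jupyter kernel is using:
```python
# Sketch: confirm which Python the kernel runs and whether it can see Pillow.
import sys
print(sys.executable)

import PIL
print(PIL.__version__)  # an ImportError here means this environment lacks Pillow
```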
### Steps to reproduce the bug
Run the tutorial
### Expected behavior
Images should be rendered
### Environment info
MacBook, VSCode
|
open
|
2025-01-08T02:22:57Z
|
2025-02-07T07:30:33Z
|
https://github.com/huggingface/datasets/issues/7363
|
[] |
jamessdixon
| 3
|
dynaconf/dynaconf
|
django
| 462
|
Regression detected [was] Case insensitive access of structures inside lists
|
**Describe the bug**
When I access a Box that is stored in a BoxList the access becomes case sensitive. I know about DynaBox, but for some reason the list access returns a vanilla Box and not a DynaBox.
Background: I need to parse more or less complex data from the config (routing stuff) and enrich the data structures with defaults after parsing. Therefore I want to change the settings from within code.
If something like this is out of scope for Dynaconf, could someone recommend an alternative approach? Maybe only store user provided routing settings and all the other general simple configs like logging level in Dynaconf and manage the routing config elsewhere?
**To Reproduce**
Steps to reproduce the behavior:
1. Run the following code placed in `tmp.py` with pytest `pytest tmp.py`:
```python
import pytest
from dynaconf.vendor.box import BoxList, Box, BoxKeyError  # BoxKeyError needed for pytest.raises below
from dynaconf.utils.boxing import DynaBox
def test_accessing_dynabox_inside_boxlist_inside_dynabox():
data = DynaBox({"nested": [{"deeper": "nest"}]})
assert data.nested[0].deeper == "nest"
assert data.NESTED[0].deeper == "nest"
with pytest.raises(BoxKeyError):
assert data.NESTED[0].DEEPER == "nest"
data = DynaBox({"nested": [DynaBox({"deeper": "nest"})]})
assert data.nested[0].deeper == "nest"
assert data.NESTED[0].deeper == "nest"
with pytest.raises(BoxKeyError):
assert data.NESTED[0].DEEPER == "nest"
```
Even though I am passing in a DynaBox it gets changed to a Box
Dynaconf 3.1.2
|
closed
|
2020-10-25T19:31:35Z
|
2021-03-08T18:50:18Z
|
https://github.com/dynaconf/dynaconf/issues/462
|
[
"bug",
"enhancement"
] |
trallnag
| 9
|
OpenInterpreter/open-interpreter
|
python
| 876
|
interpreter --os --model local
|
### Describe the bug
not working like the interpreter local does
### Reproduce
not sure
### Expected behavior
should work
### Screenshots
_No response_
### Open Interpreter version
.2
### Python version
.01
### Operating System name and version
14
### Additional context
_No response_
|
closed
|
2024-01-06T10:23:35Z
|
2024-04-04T20:58:00Z
|
https://github.com/OpenInterpreter/open-interpreter/issues/876
|
[
"Documentation",
"Enhancement",
"Local Model"
] |
jmanhype
| 6
|
zappa/Zappa
|
flask
| 892
|
[Migrated] Zappa requires ec2:Describe* permissions all of a sudden
|
Originally from: https://github.com/Miserlou/Zappa/issues/2151 by [m90](https://github.com/m90)
Starting some time tonight (ie. 10.08. to 11.08.) our automated Zappa deployment started failing with the following message:
```
botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the UpdateFunctionConfiguration operation:
Your access has been denied by EC2, please make sure your request credentials have permission to DescribeSecurityGroups for sg-1111111.
EC2 Error Code: UnauthorizedOperation. EC2 Error Message: You are not authorized to perform this operation.
```
which we could "fix" by allowing `ec2:Describe*` (`DescribeSecurityGroups` itself was not enough, it kept asking for more and more things) for the user that drives the deploys.
It would be very interesting to understand why this happens (and why it wasn't needed before) though. The Lambdas we deploy using Zappa are in a VPC and talk to a RDS instance, we are using Zappa 0.51.0. No change to our codebase has been introduced that could possibly cause this.
Is there any reason this happens?
|
closed
|
2021-02-20T13:03:25Z
|
2022-08-18T12:21:28Z
|
https://github.com/zappa/Zappa/issues/892
|
[] |
jneves
| 1
|
man-group/arctic
|
pandas
| 178
|
ChunkStore - Append not working correctly
|
Behavior of append is incorrect - should simply append whatever data is in the dataframe to the symbol.
|
closed
|
2016-07-19T17:10:12Z
|
2016-07-20T15:03:20Z
|
https://github.com/man-group/arctic/issues/178
|
[
"bug"
] |
bmoscon
| 1
|
recommenders-team/recommenders
|
machine-learning
| 1,853
|
[FEATURE] Enable secure external execution of tests
|
### Description
<!--- Describe your expected feature in detail -->
Since we removed the pull_request_target trigger on GitHub (see #1840), pull requests from external contributors won't trigger the tests. We need to think of ways to enable this.
### Expected behavior with the suggested feature
<!--- For example: -->
<!--- *Adding algorithm xxx will help people understand more about xxx use case scenarios. -->
### Other Comments
|
open
|
2022-11-17T08:51:26Z
|
2024-05-13T14:57:49Z
|
https://github.com/recommenders-team/recommenders/issues/1853
|
[
"enhancement"
] |
miguelgfierro
| 8
|
Textualize/rich
|
python
| 2,530
|
[BUG] Incorrect width calculation with Tamil language
|
You may find a solution to your problem in the [docs](https://rich.readthedocs.io/en/latest/introduction.html) or [issues](https://github.com/textualize/rich/issues).
**Describe the bug**
When markdown text is written in Tamil, the width calculation is incorrect. This is very easy to see when the text is written in a Panel such as done in the included screenshot.

This is the first language in which I run into this issue. In addition to the English default, I have used French, Italian, Spanish, Hebrew, and Russian without noticing any such problems.
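A small reproduction of the width calculation itself (a sketch; the sample word is only an example):
```python
# Sketch: compare the number of code points with the cell width Rich assumes.
from rich.cells import cell_len

word = "தமிழ்"  # Tamil sample containing combining marks
print(len(word), cell_len(word))
```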
**Platform**
<details>
<summary>Click to expand</summary>
What platform (Win/Linux/Mac) are you running on? What terminal software are you using?
```
python -m rich.diagnose
╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮
│ A high level console interface. │
│ │
│ ╭──────────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=88 ColorSystem.TRUECOLOR> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = 'truecolor' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 45 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=88, height=45), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=88, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=45, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=88, height=45) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 88 │
╰──────────────────────────────────────────────────────────────────────────────────╯
╭── <class 'rich._windows.WindowsConsoleFeatures'> ───╮
│ Windows features available. │
│ │
│ ╭─────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=True, truecolor=True) │ │
│ ╰─────────────────────────────────────────────────╯ │
│ │
│ truecolor = True │
│ vt = True │
╰─────────────────────────────────────────────────────╯
╭────── Environment Variables ───────╮
│ { │
│ 'TERM': None, │
│ 'COLORTERM': None, │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰────────────────────────────────────╯
platform="Windows"
```
```
> python -m pip list
Package Version
------------------ -------
appdirs 1.4.4
asttokens 2.0.5
colorama 0.4.5
commonmark 0.9.1
distro 1.6.0
executing 0.8.3
friendly-styles 0.2
friendly-traceback 0.6.0
pip 22.2.2
platformdirs 2.5.2
psutil 5.8.0
pure-eval 0.2.1
py-cpuinfo 8.0.0
Pygments 2.13.0
rich 12.5.1
setuptools 58.1.0
six 1.16.0
stack-data 0.5.0
textual 0.1.15
tiptop 0.1.2
```
</details>
|
closed
|
2022-09-19T16:12:43Z
|
2022-09-20T11:05:03Z
|
https://github.com/Textualize/rich/issues/2530
|
[
"Needs triage"
] |
aroberge
| 8
|
databricks/koalas
|
pandas
| 1,579
|
Bug import
|
https://colab.research.google.com/drive/1CmWY3mKhhlnLEy7G9yGAoIMORwrj3T9R?usp=sharing

import databricks.koalas as ks

```
ImportError                               Traceback (most recent call last)
<ipython-input-20-0376b7c81ae0> in <module>()
----> 1 import databricks.koalas as ks

6 frames
/content/spark-3.0.0-preview2-bin-hadoop3.2/python/pyspark/sql/utils.py in require_minimum_pyarrow_version()
    175     if LooseVersion(pyarrow.__version__) < LooseVersion(minimum_pyarrow_version):
    176         raise ImportError("PyArrow >= %s must be installed; however, "
--> 177                           "your version was %s." % (minimum_pyarrow_version, pyarrow.__version__))
    178     if os.environ.get("ARROW_PRE_0_15_IPC_FORMAT", "0") == "1":
    179         raise RuntimeError("Arrow legacy IPC format is not supported in PySpark, "

ImportError: PyArrow >= 0.15.1 must be installed; however, your version was 0.14.1.

---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
```
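A quick check (sketch) of the PyArrow version the Colab runtime actually imports:
```python
# Sketch: Spark 3.0.0-preview2 requires PyArrow >= 0.15.1, so print what is installed.
import pyarrow
print(pyarrow.__version__)
```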
|
closed
|
2020-06-15T00:56:06Z
|
2020-06-16T10:25:46Z
|
https://github.com/databricks/koalas/issues/1579
|
[
"not a koalas issue"
] |
JoseRFJuniorLLMs
| 1
|
mars-project/mars
|
numpy
| 2,464
|
[BUG] Hang when creating mars worker
|
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
An error was found in the worker log as shown below; however, the main process was still alive, which led to the hang.
``` Python
Process MarsActorPool11274289152:
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/opt/conda/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/admin/work/_public-mars-0.7.1.zip/mars/oscar/backends/mars/pool.py", line 163, in _start_sub_pool
asyncio.run(coro)
File "/opt/conda/lib/python3.7/asyncio/runners.py", line 43, in run
return loop.run_until_complete(main)
File "/opt/conda/lib/python3.7/asyncio/base_events.py", line 587, in run_until_complete
return future.result()
File "/home/admin/work/_public-mars-0.7.1.zip/mars/oscar/backends/mars/pool.py", line 177, in _create_sub_pool
'process_index': process_index
File "/home/admin/work/_public-mars-0.7.1.zip/mars/oscar/backends/pool.py", line 574, in create
await asyncio.gather(*create_server_tasks)
File "/home/admin/work/_public-mars-0.7.1.zip/mars/oscar/backends/communication/socket.py", line 206, in create
handle_connection, host=host, port=port, **config)
File "/opt/conda/lib/python3.7/asyncio/streams.py", line 114, in start_server
return await loop.create_server(factory, host, port, **kwds)
File "/opt/conda/lib/python3.7/asyncio/base_events.py", line 1389, in create_server
% (sa, err.strerror.lower())) from None
OSError: [Errno 98] error while attempting to bind on address ('11.9.109.161', 51317): address already in use
```
|
closed
|
2021-09-17T08:33:13Z
|
2021-09-18T13:33:22Z
|
https://github.com/mars-project/mars/issues/2464
|
[
"type: bug",
"mod: deploy"
] |
hekaisheng
| 0
|
babysor/MockingBird
|
pytorch
| 202
|
Incorrect comment in synthesizer/models/tacotron.Encoder
|
https://github.com/babysor/MockingBird/blob/6c8f3f45150122f38f9b895dfe3940d326a9b0cc/synthesizer/models/tacotron.py#L62-L72
The comment on line 63 is wrong:
after expansion, the size of `speaker_embedding` is `(batch_size, num_chars * speaker_embedding_size)`,
not `(batch_size, num_chars * tts_embed_dims)`.
After executing `torch.cat((x, e), 2)`,
the size of `x` is `(batch_size, num_chars, tts_embed_dims+speaker_embedding_size)`.
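A short runnable sketch to illustrate the shapes involved (the dimension values below are arbitrary placeholders, not the project's actual hyperparameters):

```python
import torch

batch_size, num_chars = 2, 5
tts_embed_dims, speaker_embedding_size = 512, 256  # placeholder sizes

x = torch.randn(batch_size, num_chars, tts_embed_dims)          # character embeddings
e = torch.randn(batch_size, num_chars, speaker_embedding_size)  # speaker embedding expanded over num_chars

x = torch.cat((x, e), 2)
print(x.shape)  # torch.Size([2, 5, 768]), i.e. (batch_size, num_chars, tts_embed_dims + speaker_embedding_size)
```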
|
closed
|
2021-11-09T03:57:34Z
|
2021-11-09T07:54:19Z
|
https://github.com/babysor/MockingBird/issues/202
|
[] |
castleKing1997
| 0
|
wkentaro/labelme
|
deep-learning
| 420
|
labelme_json_to_dataset and rename the label file
|
I am building a semantic segmentation dataset myself, but every time I run labelme_json_to_dataset I get four files like this:

I have to rename the label.png file when I assemble the dataset. Can I add the name of the .json file to the label.png file when converting the json to a dataset? Which Python file would I need to modify? Thank you very much.
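One way to handle this outside of labelme itself is a small post-processing script (a rough sketch; the folder names are assumptions, and labelme_json_to_dataset is assumed to write each result into a `<json-stem>_json` directory containing `label.png`):

```python
import os
import shutil

json_dir = "annotations"   # hypothetical: folder holding the original .json files
out_root = "annotations"   # hypothetical: where labelme_json_to_dataset wrote its output dirs
mask_dir = "masks"         # hypothetical: destination for the renamed label images
os.makedirs(mask_dir, exist_ok=True)

for name in os.listdir(json_dir):
    if not name.endswith(".json"):
        continue
    stem = os.path.splitext(name)[0]
    src = os.path.join(out_root, stem + "_json", "label.png")
    if os.path.isfile(src):
        # copy label.png to <original json name>.png so each mask matches its image
        shutil.copy(src, os.path.join(mask_dir, stem + ".png"))
```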
|
closed
|
2019-06-19T07:59:15Z
|
2022-06-30T15:05:28Z
|
https://github.com/wkentaro/labelme/issues/420
|
[] |
Jingchensun
| 7
|
keras-team/autokeras
|
tensorflow
| 1,460
|
Text classifier has trouble running BERT
|
### Bug Description
When I try to run sample code using the text classifier, there is an error loading the BERT checkpoint from Google Cloud Storage:
> "UnimplementedError: File system scheme 'gs' not implemented (file: 'gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-12_H-768_A-12\vocab.txt')"
Complete trace:
[autokeras_error.txt](https://github.com/keras-team/autokeras/files/5671936/autokeras_error.txt)
### Bug Reproduction
Code for reproducing the bug:
```
import autokeras as ak
import numpy as np
clf = ak.TextClassifier(overwrite=True,max_trials=10)
clf.fit(tr_x, tr_y, epochs=1,validation_data=(te_x,te_y))
```
Data used by the code:
```
tr_x = np.array(["If you like adult comedy cartoons, like South Park, then this is nearly a similar format about the small adventures of three teenage girls at Bromwell High. Keisha, Natella and Latrina have given exploding sweets and behaved like bitches, I think Keisha is a good leader. There are also small stories going on with the teachers of the school. There's the idiotic principal, Mr. Bip, the nervous Maths teacher and many others. The cast is also fantastic, Lenny Henry's Gina Yashere, EastEnders Chrissie Watts, Tracy-Ann Oberman, Smack The Pony's Doon Mackichan, Dead Ringers' Mark Perry and Blunder's Nina Conti. I didn't know this came from Canada, but it is very good. Very good!",\
"good movie",
"I basically skimmed through the movie but just enough to catch watch the plot was about. To tell you the truth it was kind of boring to me and at some spots it didn't make sense. The only reason I watched this movie in the first place was to see CHACE CRAWFORD!!! He is so hot, but in this movie his hair was kind of weird. But still hot.<br /><br />However, despite how hot CHACE is, it really did not make up for the film. I guess the plot isn't that bad but what really threw me over was the fact that they cuss in like every sentence. Is it that hard to express your anger without saying the F word every time?The cussing was annoying and the whole flashy, camera shaking thing gave me a headache.<br /><br />All in all, although the plot was OK, I found the film to be a bore and over dramatic. That's why I only cut to scenes with CHACE in it. LOL Anyways, not worth renting unless your a die-hard fan of a specific cast member like I was. Oh yeah the cast was Hot. The girls were HOT!!! But CHACE IS THE BEST!!"])
tr_y=np.array(['pos','pos','neg'])
te_x = np.array(["worst Movie","Nice one"])
te_y = np.array(['bad','good'])
```
### Expected Behavior
The model should run successfully.
### Setup Details
Include the details about the versions of:
- OS type and version:
- Python: 3.7.9
- autokeras: 1.0.12
- keras-tuner:1.0.3
- scikit-learn:0.23.2
- numpy:1.18.5
- pandas:1.1.5
- tensorflow:2.3.0
### Additional context
I followed the installation procedure below using the Anaconda prompt:
```
conda create -n env_autokeras
conda activate env_autokeras
conda install tensorflow
pip install git+https://github.com/keras-team/keras-tuner.git
pip install autokeras
```
|
closed
|
2020-12-10T10:22:15Z
|
2021-06-10T04:15:26Z
|
https://github.com/keras-team/autokeras/issues/1460
|
[
"wontfix"
] |
raghuvarranvh
| 10
|
fugue-project/fugue
|
pandas
| 132
|
[BUG] The README code does not immediately work
|
**Minimal Code To Reproduce**
```python
from typing import Iterable, Dict, Any, List
# Creating sample data
data = [
["A", "2020-01-01", 10],
["A", "2020-01-02", None],
["A", "2020-01-03", 30],
["B", "2020-01-01", 20],
["B", "2020-01-02", None],
["B", "2020-01-03", 40]
]
schema = "id:str,date:date,value:int"
# schema: *, filled:int
def fillna(df:Iterable[Dict[str,Any]],value:int=0) -> Iterable[Dict[str,Any]]:
for row in df:
for col in cols:
row["filled"] = (row["value"] or value)
yield row
with FugueWorkflow() as dag:
df1 = dag.df(data, schema).transform(fillna)
df1.show()
```
**Describe the bug**
1. FugueWorkflow needs to be imported
2. Int will have problems on Dask because it can't hold None. Change the type to double
**Expected behavior**
It should work out of the box. All other code in the README should work as well so just run the code and make sure everything works.
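For reference, a corrected version of the snippet might look like the sketch below (the missing import is added and the value column is typed as double; the inner `for col in cols` loop, which references an undefined name, is dropped here as an assumption about the intent):

```python
from typing import Iterable, Dict, Any
from fugue import FugueWorkflow

data = [
    ["A", "2020-01-01", 10],
    ["A", "2020-01-02", None],
    ["A", "2020-01-03", 30],
    ["B", "2020-01-01", 20],
    ["B", "2020-01-02", None],
    ["B", "2020-01-03", 40]
]
schema = "id:str,date:date,value:double"

# schema: *, filled:double
def fillna(df: Iterable[Dict[str, Any]], value: float = 0) -> Iterable[Dict[str, Any]]:
    for row in df:
        row["filled"] = (row["value"] or value)
        yield row

with FugueWorkflow() as dag:
    df1 = dag.df(data, schema).transform(fillna)
    df1.show()
```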
**Environment (please complete the following information):**
- Backend: All engines
- Backend version: 0.4.9
- Python version: 3.7
- OS: linux/windows: Both
|
closed
|
2021-01-08T16:49:03Z
|
2021-01-10T02:01:36Z
|
https://github.com/fugue-project/fugue/issues/132
|
[
"documentation",
"good first issue"
] |
kvnkho
| 2
|
psf/requests
|
python
| 6,070
|
Migrate test suite to support Flask>2.0
|
As shown in #5846 and #6069, our test suite is now thoroughly broken from Flask 2.0 onwards. We never did a deep dive to understand what caused the breakage and now changes in the underlying modules (markupsafe, werkzeug) are causing further problems. We need to sit down and work out the issue, unblocking upgrading Flask if possible.
|
closed
|
2022-02-19T01:42:34Z
|
2024-08-13T00:03:49Z
|
https://github.com/psf/requests/issues/6070
|
[] |
nateprewitt
| 2
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,383
|
Chromedriver detected when run via SSH, but works well via RDP
|
Good afternoon, I ran into a strange problem on Windows Server 2016.
I am trying to scrape a site that recently added new bot protection. The protection is bypassed successfully with UC when I run the code from inside an RDP session; however, when I run the same script over SSH, I get blocked immediately. I am attaching a video to help illustrate.
In the video I first run the script from an RDP shell, and then connect over SSH to the same remote server where the script lives and run it there.
In the first case the protection is passed; in the second, it blocks.
My code is attached below.
I also want to note that I have tried everything I could: compared the environment of the RDP session and the SSH session, checked every chromedriver option, changed the server IP addresses and window sizes, used both headless and windowed mode, and used different proxies (only residential private proxies), and got the same result everywhere. My impression is that there is a rendering/fingerprint check that somehow differs when UC is launched via SSH.
Or perhaps the check targets the debug port.
Thanks in advance to @ultrafunkamsterdam for an amazing free product, and I'm very sorry if I'm wasting your time on a trivial question, but I have spent more than a day trying to figure this out.
P.S. The protection is configured in such a way that the site may not open from European and American IPs when you try to check it.
```
import undetected_chromedriver as uc
import time
if __name__ == '__main__' :
driver = uc.Chrome(headless=True,version_main=112)
driver.get('https://site.ru/')
time.sleep(2)
driver.save_screenshot('result.png')
driver.close()
```
https://github.com/ultrafunkamsterdam/undetected-chromedriver/assets/46750997/797937b3-73fc-4ab1-84a1-e0741849114f
https://github.com/ultrafunkamsterdam/undetected-chromedriver/assets/46750997/afef8f93-b45e-4370-aa32-7b05f5d3f245
Server Configuration : Windows Server 2016
The machine to which RDP and SSH connection is made Windows 10
Version undetected_chromedriver 3.5.0
chromedriver version 112
selenium version 4.1.0
|
closed
|
2023-07-06T07:24:12Z
|
2023-07-08T15:20:36Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1383
|
[] |
gusd1994
| 5
|
tensorflow/tensor2tensor
|
machine-learning
| 1,010
|
Cannot run tensor2tensor directly
|
### Description
If I try to run tensor2tensor directly (i.e. from Python rather than through the command-line scripts), an exception is thrown at import time because the TensorFlow flag "problem" is defined twice.
Running it directly is useful for debugging tensor2tensor (for understanding how it works and for tracking down problems).
### Environment information
OS: Ubuntu 18.04
```
$ pip freeze | grep tensor
tensor2tensor==1.6.6
tensorboard==1.9.0
tensorflow==1.9.0
tensorflow-hub==0.1.1
tensorflow-tensorboard==1.5.1
$ python -V
Python 3.6.5
```
# Steps to reproduce:
To reproduce, run this test:
```
import unittest
from tensor2tensor.bin import t2t_datagen, t2t_trainer
import tensorflow as tf
class RunT2tDirectly(unittest.TestCase):
def test_running_t2t(self):
try:
tf.app.run(main=t2t_datagen.main, argv=['t2t_datagen'])
tf.app.run(main=t2t_trainer.main, argv=['t2t_trainer'])
except SystemExit as ex:
pass
```
# Error logs:
The exception thrown is:
_Exception: The flag 'problem' is defined twice. First from tensor2tensor.bin.t2t_datagen, Second from tensor2tensor.utils.flags. Description from first occurrence: The name of the problem to generate data for._
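One possible workaround (an untested sketch, not an official tensor2tensor API; it relies on absl's flag registry supporting `in` and `delattr`) is to drop the duplicated flag between the two imports:

```python
from absl import flags

from tensor2tensor.bin import t2t_datagen

# Remove the flags that t2t_datagen already registered and that
# tensor2tensor.utils.flags (pulled in by t2t_trainer) will try to define again.
# 'problem' is the one named in the exception; others may need the same treatment.
FLAGS = flags.FLAGS
for name in ("problem",):
    if name in FLAGS:
        delattr(FLAGS, name)

from tensor2tensor.bin import t2t_trainer
```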
|
open
|
2018-08-21T13:32:45Z
|
2018-08-21T13:39:02Z
|
https://github.com/tensorflow/tensor2tensor/issues/1010
|
[] |
ales004
| 1
|
d2l-ai/d2l-en
|
tensorflow
| 1,816
|
DEEP LEARNING
|
closed
|
2021-06-26T08:42:47Z
|
2021-06-26T21:57:35Z
|
https://github.com/d2l-ai/d2l-en/issues/1816
|
[] |
ardey26
| 1
|
|
tensorflow/tensor2tensor
|
deep-learning
| 1,783
|
Flaky test in evolved_transformer_test.py
|
### Description
The test `EvolvedTransformerTest.testSlowVsFastNoInput` in `tensor2tensor/models/evolved_transformer_test.py` fails every time it is run in isolation, with the following shape-mismatch error:
```python
> self.assertAllClose(slow_res, fast_res)
tensor2tensor/models/evolved_transformer_test.py:194:
E AssertionError: Tuples differ: (3, 3) != (3, 2)
E
E First differing element 1:
E 3
E 2
E
E - (3, 3)
E ? ^
E
E + (3, 2)
E ? ^
E : Shape mismatch: expected (3, 3), got (3, 2) with contents [[5 1]
E [8 5]
E [1 6]].
```
The test, however, passes if I run all the tests in the file together. I think this is because `testSlowVsFast` (which runs before `testSlowVsFastNoInput`) sets the TensorFlow random seed. If I add that seed, this test always passes. Other tests in this file do not fail even without any seed set, though. Does that mean this is a bug?
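A minimal, self-contained illustration of the workaround (TF 1.x API, as in the environment listed below; the seed value is arbitrary): pinning the graph-level seed makes sampled values reproducible regardless of what ran earlier in the process, which is why adding `tf.set_random_seed(...)` at the top of `testSlowVsFastNoInput` makes it pass in isolation.

```python
import tensorflow as tf

def sample_once(seed=None):
    """Build a fresh graph, optionally pin the graph-level seed, and sample."""
    with tf.Graph().as_default():
        if seed is not None:
            tf.set_random_seed(seed)
        x = tf.random_uniform([3], maxval=10, dtype=tf.int32)
        with tf.Session() as sess:
            return sess.run(x)

print(sample_once(seed=1))  # deterministic: same values on every call
print(sample_once(seed=1))  # identical to the line above
print(sample_once())        # unseeded: free to differ between runs
```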
### Environment information
```
Ubuntu 16.04
python 3.7
tensorflow==1.15.0
tensorflow-probability==0.7.0
tensorflow-datasets==1.3.2
tensorflow-estimator==1.15.1
tensorflow-gan==2.0.0
numpy==1.17.4
mesh-tensorflow==0.1.9
```
### For bugs: reproduction and error logs
# Steps to reproduce:
Running the single test using pytest can reproduce this bug like:
`pytest tensor2tensor/models/evolved_transformer_test.py::EvolvedTransformerTest::testSlowVsFastNoInput`
# Error logs:
```python
============================================================================================== test session starts ==============================================================================================
platform linux -- Python 3.7.6, pytest-5.3.2, py-1.8.1, pluggy-0.13.1
rootdir: tensor2tensor
collected 1 item
tensor2tensor/models/evolved_transformer_test.py F [100%]
=================================================================================================== FAILURES ====================================================================================================
_________________________________________________________________________________ EvolvedTransformerTest.testSlowVsFastNoInput __________________________________________________________________________________
self = <tensor2tensor.models.evolved_transformer_test.EvolvedTransformerTest testMethod=testSlowVsFastNoInput>
def testSlowVsFastNoInput(self):
model, features = get_model(transformer.transformer_tiny(), has_input=False)
decode_length = DECODE_LENGTH
out_logits, _ = model(features)
out_logits = tf.squeeze(out_logits, axis=[2, 3])
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=tf.reshape(out_logits, [-1, VOCAB_SIZE]),
labels=tf.reshape(features["targets"], [-1]))
loss = tf.reduce_mean(loss)
apply_grad = tf.train.AdamOptimizer(0.001).minimize(loss)
with self.test_session():
tf.global_variables_initializer().run()
for _ in range(10):
apply_grad.run()
model.set_mode(tf.estimator.ModeKeys.PREDICT)
with tf.variable_scope(tf.get_variable_scope(), reuse=True):
slow_result = model._slow_greedy_infer(features, decode_length)["outputs"]
slow_result = tf.squeeze(slow_result, axis=[2, 3])
fast_result = model._greedy_infer(features, decode_length)["outputs"]
with self.test_session():
slow_res = slow_result.eval()
fast_res = fast_result.eval()
self.assertEqual(slow_res.shape, (BATCH_SIZE, decode_length))
> self.assertAllClose(slow_res, fast_res)
tensor2tensor/models/evolved_transformer_test.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
lib/python3.7/site-packages/tensorflow_core/python/framework/test_util.py:1122: in decorated
return f(*args, **kwds)
lib/python3.7/site-packages/tensorflow_core/python/framework/test_util.py:2411: in assertAllClose
self._assertAllCloseRecursive(a, b, rtol=rtol, atol=atol, msg=msg)
lib/python3.7/site-packages/tensorflow_core/python/framework/test_util.py:2378: in _assertAllCloseRecursive
(path_str, path_str, msg)))
lib/python3.7/site-packages/tensorflow_core/python/framework/test_util.py:2282: in _assertArrayLikeAllClose
self.assertEqual(a.shape, b.shape, shape_mismatch_msg)
lib/python3.7/site-packages/absl/testing/absltest.py:1665: in fail
return super(TestCase, self).fail(self._formatMessage(prefix, msg))
E AssertionError: Tuples differ: (3, 3) != (3, 2)
E
E First differing element 1:
E 3
E 2
E
E - (3, 3)
E ? ^
E
E + (3, 2)
E ? ^
E : Shape mismatch: expected (3, 3), got (3, 2) with contents [[5 1]
E [8 5]
E [1 6]].
================================================================================== 1 failed, 19 warnings in 119.33s (0:01:59) ===================================================================================
```
|
open
|
2020-01-21T00:32:18Z
|
2020-01-21T00:33:45Z
|
https://github.com/tensorflow/tensor2tensor/issues/1783
|
[] |
loopylangur
| 0
|