| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
58,475,837
|
I am trying to learn the functional programming way of doing things in python. I am trying to serialize a list of strings in python using the following code
```
S = ["geeks", "are", "awesome"]
reduce(lambda x, y: (str(len(x)) + '~' + x) + (str(len(y)) + '~' + y), S)
```
I am expecting:
```
5~geeks3~are7~awesome
```
But I am seeing:
```
12~5~geeks3~are7~awesome
```
Can someone point out why? Thanks in advance!
|
2019/10/20
|
[
"https://Stackoverflow.com/questions/58475837",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6101835/"
] |
You need to add the `initializer` parameter - an empty string - to the `reduce()` function. It will be the first argument passed to the `lambda` function, before the values from the list.
```
from functools import reduce
S = ["geeks", "are", "awesome"]
reduce(lambda x, y: x + f'{len(y)}~{y}', S, '')
# 5~geeks3~are7~awesome
```
Equivalent to:
```
((('' + '5~geeks') + '3~are') + '7~awesome')
# 5~geeks3~are7~awesome
```
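For completeness, here is why the original call produced the stray `12~`: without an initializer, `reduce()` starts by feeding the first two list items into the lambda, and on the next step the accumulated string itself gets length-prefixed. A step-by-step trace of the lambda from the question:
```
S = ["geeks", "are", "awesome"]
f = lambda x, y: (str(len(x)) + '~' + x) + (str(len(y)) + '~' + y)

# reduce(f, S) without an initializer performs exactly these two calls:
step1 = f("geeks", "are")      # '5~geeks3~are'  (a 12-character string)
step2 = f(step1, "awesome")    # '12~5~geeks3~are7~awesome'
```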
|
* Here is a solution for Python 3.6+ using an f-string.
```py
>>> S = ["geeks", "are", "awesome"]
>>> ''.join(f'{len(s)}~{s}' for s in S)
'5~geeks3~are7~awesome'
```
|
70,765,867
|
I have been trying to use github actions to deploy a docker image to AWS ECR, but there is a step that is consistently failing.
Here is the portion that is failing:
```
- name: Pulling ECR for updates and instantiating new updated containers.
uses: appleboy/ssh-action@master
with:
host: ${{secrets.STAGING_HOST}}
username: ${{secrets.STAGING_USERNAME}}
key: ${{secrets.STAGING_PEM}}
port: ${{secrets.STAGING_PORT}}
script: |
cd staging
aws ecr get-login-password --region us-east-2 | docker login -u AWS -p-stdin ***.dkr.ecr.us-east-2.amazonaws.com
docker pull ***.dkr.ecr.us-east-2.amazonaws.com/*container name*:latest
docker-compose -f docker-compose.staging.yml up -d
docker rmi $(docker images --filter dangling=true -q 2>/dev/null) 2>/dev/null
docker exec -i *** python manage.py makemigrations *dir name*
docker exec -i *** python manage.py makemigrations accountsettings
docker exec -i *** python manage.py makemigrations payment
docker exec -i *** python manage.py runapscheduler
docker exec -i *** python manage.py migrate
```
Not sure why this is an issue, as GitHub Actions' virtual environments already have the AWS CLI installed (<https://github.com/actions/virtual-environments/blob/main/images/linux/Ubuntu2004-Readme.md>), and I am also using the AWS CLI in other steps of my GitHub Actions workflow with no issue, for example:
```
- name: Build, Tag and Push image to Amazon ECR.
id: build-image
env:
ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
ECR_REPOSITORY: *ecr name*
IMAGE_TAG: latest
run: |
cd *dir name*
docker build -f Dockerfile.staging -t *container name* .
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin ***.dkr.ecr.us-east-2.amazonaws.com
docker tag *container name*:latest ***.dkr.ecr.us-east-2.amazonaws.com/*container name*:latest
docker push ***.dkr.ecr.us-east-2.amazonaws.com/*container name*:latest
```
and the image successfully gets pushed to my AWS ECR.
I have tried to install the AWS CLI as suggested here: [GitHub Action - AWS CLI](https://stackoverflow.com/questions/59166099/github-action-aws-cli), but still to no avail.
Here is the code I used to install the AWS CLI:
```
- name: Installing aws cli via python pip
run: |
python -m pip install --upgrade pip
pip install awscli
```
Here is the full error I have been getting:
```
======END======
err: bash: line 2: aws: command not found
err: WARNING! Using -*** the CLI is insecure. Use --password-stdin.
err: Error response from daemon: login attempt to https://***.dkr.ecr.us-east-2.amazonaws.com/v2/ failed with status: 400 Bad Request
err: Error response from daemon: Head "https://***.dkr.ecr.us-east-2.amazonaws.com/v2/*ecr name*/manifests/latest": no basic auth credentials
err: Pulling web (***.dkr.ecr.us-east-2.amazonaws.com/*ecr-name*:latest)...
err: Head "https://***.dkr.ecr.us-east-2.amazonaws.com/v2/*ecr-name*/manifests/latest": no basic auth credentials
err: Error: No such container: ***
err: Error: No such container: ***
err: Error: No such container: ***
err: Error: No such container: ***
err: Error: No such container: ***
err: Error: No such container: ***
err: Error: No such container: ***
err: Error: No such container: ***
20***/01/19 04:59:42 Process exited with status 1
```
|
2022/01/19
|
[
"https://Stackoverflow.com/questions/70765867",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17970861/"
] |
Welcome to StackOverflow and the joys of programming and the cloud!
It seems that the AWS CLI is failing to configure the access key id and secret on the pipeline. In order to solve this and make it easier to manage in the long run, I would recommend using the pre-built actions from AWS to ease your pipeline's setup process.
The most common way of building a Github action pipeline for pushing images to AWS ECR is by using the following actions:
* `aws-actions/configure-aws-credentials@v1`
* `aws-actions/amazon-ecr-login@v1`
Using these actions together configures the pipeline's shell session with temporary credentials for the AWS CLI and the ECR credentials for the docker login.
```yaml
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ap-south-1
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@v1
- name: Build, tag, and push the image to Amazon ECR
id: build-image
env:
ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
ECR_REPOSITORY: ${{ secrets.REPO_NAME }}
IMAGE_TAG: 1.0
run: |
# Build a docker container and push it to ECR
docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
echo "Pushing image to ECR..."
docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
```
If the guide above is not sufficient and you need help in configuring the access keys and secrets, I would recommend following the blog post [here](https://aws.plainenglish.io/build-a-docker-image-and-publish-it-to-aws-ecr-using-github-actions-f20accd774c3).
|
Actually, I just had to install AWS CLI on my EC2 instance, but thank you so much for the help!
|
55,966,757
|
When (and why) was the Python `__new__()` function introduced?
There are three steps in creating an instance of a class, e.g. `MyClass()`:
* `MyClass.__call__()` is called. This method must be defined in the metaclass of `MyClass`.
* `MyClass.__new__()` is called (by `__call__`). Defined on `MyClass` itself. This creates the instance.
* `MyClass.__init__()` is called (also by `__call__`). This initializes the instance.
Creation of the instance can be influenced either by overloading `__call__` or `__new__`. There usually is little reason to overload `__call__` instead of `__new__` (e.g. [Using the \_\_call\_\_ method of a metaclass instead of \_\_new\_\_?](https://stackoverflow.com/questions/6966772/using-the-call-method-of-a-metaclass-instead-of-new)).
We have some old code (still running strong!) where `__call__` is overloaded. The reason given was that `__new__` was not available at the time. So I tried to learn more about the history of both Python and our code, but I could not figure out when `__new__` was introduced.
`__new__` appears in the [documentation for Python 2.4](https://docs.python.org/2.4/ref/customization.html) and not in those for [Python 2.3](https://docs.python.org/2.3/ref/customization.html), but it does not appear in the [whatsnew](https://docs.python.org/2/whatsnew/2.4.html) of any of the Python 2 versions. The [first commit that introduced `__new__`](https://github.com/python/cpython/commit/6d6c1a35e08b95a83dbe47dbd9e6474daff00354) (Merge of descr-branch back into trunk.) that I could find is from 2001, but the 'back into trunk' message is an indication that there was something before. [PEP 252 (Making Types Look More Like Classes)](https://www.python.org/dev/peps/pep-0252/) and [PEP 253 (Subtyping Built-in Types)](https://www.python.org/dev/peps/pep-0253/) from a few months earlier seem to be relevant.
Learning more about the introduction of `__new__` would teach us more about why Python is the way it is.
---
Edit for clarification:
It seems that `class.__new__` duplicates functionality that is already provided by `metaclass.__call__`. It seems un-Pythonic to add a method only to replicate existing functionality in a better way.
`__new__` is one of the few class methods that you get out of the box (i.e. with `cls` as first argument), thereby introducing complexity that wasn't there before. If the class is the first argument of a function, then it can be argued that the function should be a normal method of the metaclass. But that method did already exist: `__call__()`. I feel like I'm missing something.
>
> There should be one-- and preferably only one --obvious way to do it.
>
>
>
|
2019/05/03
|
[
"https://Stackoverflow.com/questions/55966757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2097/"
] |
The blog post [**`The Inside Story on New-Style Classes`**](http://python-history.blogspot.com/2010/06/inside-story-on-new-style-classes.html)
(from the aptly named **`http://python-history.blogspot.com`**) written by [**`Guido van Rossum`**](https://en.wikipedia.org/wiki/Guido_van_Rossum) (Python's BDFL) provides some good information regarding this subject.
Some relevant quotes:
>
> New-style classes introduced a new class method `__new__()` that lets
> the class author customize how new class instances are created. By
> overriding `__new__()` a class author can implement patterns like the
> Singleton Pattern, return a previously created instance (e.g., from a
> free list), or to return an instance of a different class (e.g., a
> subclass). However, the use of `__new__` has other important
> applications. For example, in the pickle module, `__new__` is used to
> create instances when unserializing objects. In this case, instances
> are created, but the `__init__` method is not invoked.
>
>
> Another use of `__new__` is to help with the subclassing of immutable
> types. By the nature of their immutability, these kinds of objects can
> not be initialized through a standard `__init__()` method. Instead, any
> kind of special initialization must be performed as the object is
> created; for instance, if the class wanted to modify the value being
> stored in the immutable object, the `__new__` method can do this by
> passing the modified value to the base class `__new__` method.
>
>
>
You can read the entire post for more information on this subject.
Another post about [**`New-style Classes`**](http://python-history.blogspot.com/2010/06/new-style-classes.html) which was written along with the above quoted post has some additional information.
**Edit:**
In response to OP's edit and the quote from the Zen of Python, I would say this.
[Zen of Python](https://www.python.org/dev/peps/pep-0020/) was not written by the creator of the language but by Tim Peters and was published only on August 19, 2004. We have to take into account the fact that `__new__` appears only in the documentation of Python 2.4 (which was released on [November 30, 2004](https://en.wikipedia.org/wiki/History_of_Python)), and this particular guideline (or aphorism) did not even exist **publicly** when `__new__` was introduced into the language.
Even if such a document of guidelines existed *informally* before, I do not think that the author(s) intended them to be misinterpreted as a design document for an entire language and ecosystem.
|
I will not explain the history of `__new__` here because I have only used Python since 2005, so after it was introduced into the language. But here is the rationale behind it.
The *normal* configuration method for a new object is the `__init__` method of its class. The object has already been created (usually via an indirect call to `object.__new__`) and the method just *initializes* it. Simply put, if you have a truly immutable object, it is too late.
In that use case the Pythonic way is the `__new__` method, which builds and returns the new object. The nice point is that it is still included in the class definition and does not require a specific metaclass. Standard documentation states:
>
> `__new__()` is intended mainly to allow subclasses of immutable types (like int, str, or tuple) to customize instance creation. It is also commonly overridden in custom metaclasses in order to customize class creation.
>
>
>
Defining a `__call__` method on the metaclass is indeed allowed but is IMHO non-Pythonic, because `__new__` should be enough. In addition, `__init__`, `__new__` and metaclasses each dig deeper inside the internal Python machinery. So the rule should be *do not use `__new__` if `__init__` is enough, and do not use metaclasses if `__new__` is enough*.
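As a small illustration of the immutable-subclass case described in both answers (my own sketch, not taken from either answer):
```
class UpperStr(str):
    """A str subclass that upper-cases its value at creation time."""
    def __new__(cls, value):
        # __init__ would be too late: the str contents are already fixed,
        # so the modified value has to go through the base class __new__.
        return super().__new__(cls, value.upper())

print(UpperStr("hello"))  # -> HELLO
```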
|
40,007,305
|
I am using kivy to create a small Gui for my python program. This Gui is not always visible. So I start it with these settings:
```
Config.set('graphics', 'borderless', True)
Config.set('graphics', 'resizable', False)
Config.set('graphics', 'window_state', 'hidden')
```
However, somewhere in my program I want to make the window visible again. How do I do that? I couldn't find anything that changes the configuration at runtime.
|
2016/10/12
|
[
"https://Stackoverflow.com/questions/40007305",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2129897/"
] |
It seems that if you are using the SDL2 provider you have **hide** and **show** functions on the Window object.
From the kivy.core.window docs:
```
hide() (added in 1.9.0)
    Hides the window. This method should be used on desktop platforms only.
    Note: This feature requires the SDL2 window provider and is currently
    only supported on desktop platforms.

show() (added in 1.9.0)
    Shows the window. This method should be used on desktop platforms only.
    Note: This feature requires the SDL2 window provider and is currently
    only supported on desktop platforms.
```
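A minimal sketch of using those functions at runtime (assuming the SDL2 window provider on a desktop platform):
```
from kivy.core.window import Window

Window.hide()   # keep the GUI invisible for now
# ... later, somewhere else in the program, make it visible again:
Window.show()
```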
|
I'm not familiar with Kivy, but it looks like you just need to set it to visible.
`window_state`: string, one of 'visible', 'hidden', 'maximized' or 'minimized'
from:
<https://kivy.org/docs/_modules/kivy/config.html>
Looking at this GitHub issue: <https://github.com/kivy/kivy/issues/3637>, the methods they're using are `.hide()` and `.show()`.
|
9,724,872
|
I have a python (django) web application. It uses an external web service (Facebook Graph). All the code for making external (http) calls is wrapped in one extra function (called `facebook_api`), which takes some arguments and returns a parsed dict (it does some logging, checks for errors etc.) Around this function, I have written some code. I want to do some unittests of this code.
How can I "mock" this one function with some known input? i.e. I want to provide some input & output for my facebook\_api function (i.e. if you see this input, return this, and if it's this, return that, etc.). I then want to check that my code handles it OK.
I want to provide various different scenarios for my code (what if the API started returning this, or that) and check that my code does what I want. I don't want to actually make external (http) requests.
I tried looking at some mock libraries (e.g. http://www.voidspace.org.uk/python/mock/), but they always talk about creating class objects, or just checking that something was called. I can't see an easy way to say "Intercept this function, and if the input is A, return X, and if it's D, return B".
|
2012/03/15
|
[
"https://Stackoverflow.com/questions/9724872",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/161922/"
] |
I think you are looking for [Mock's side\_effect](http://www.voidspace.org.uk/python/mock/mock.html#mock.Mock.side_effect). For example:
```
from mock import Mock  # on Python 3.3+: from unittest.mock import Mock

def my_facebook_api(input):
if input=='A':
return 'X'
elif input=='B':
return 'D'
facebook_api = Mock(side_effect=my_facebook_api)
```
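A hedged sketch of wiring this into a test, assuming the wrapper is looked up as `myapp.utils.facebook_api` (a hypothetical path; patch wherever your code actually imports it from):
```
from unittest.mock import patch  # on older setups: from mock import patch

def my_facebook_api(input):
    if input == 'A':
        return {'result': 'X'}
    elif input == 'B':
        return {'result': 'D'}

with patch('myapp.utils.facebook_api', side_effect=my_facebook_api) as fake_api:
    run_code_under_test()      # hypothetical: whatever exercises your wrapper
    fake_api.assert_called()   # the calls hit the fake, not Facebook
```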
|
I have been using mockito-python (<http://code.google.com/p/mockito-python/>) with good success. It allows you to specify the behaviour of mocks with a simple syntax (straight from their documentation):
```
>>> dummy = mock()
>>> when(dummy).reply("hi").thenReturn("hello")
>>> when(dummy).reply("bye").thenReturn("good-bye")
>>> dummy.hi()
>>> dummy.reply("hi")
'hello'
>>> dummy.reply("bye")
'good-bye'
```
This of course requires that you are able to replace the object containing facebook\_api with a mock during testing.
|
9,724,872
|
I have a python (django) web application. It uses an external web service (Facebook Graph). All the code for making external (http) calls is wrapped in one extra function (called `facebook_api`), which takes some arguments and returns a parsed dict (it does some logging, checks for errors etc.) Around this function, I have written some code. I want to do some unittests of this code.
How can I "mock" this one function with some known input? i.e. I want to provide some input & output for my facebook\_api function (i.e. if you see this input, return this, and if it's this, return that, etc.). I then want to check that my code handles it OK.
I want to provide various different scenarios for my code (what if the API started returning this, or that) and check that my code does what I want. I don't want to actually make external (http) requests.
I tried looking at some mock libraries (e.g. http://www.voidspace.org.uk/python/mock/), but they always talk about creating class objects, or just checking that something was called. I can't see an easy way to say "Intercept this function, and if the input is A, return X, and if it's D, return B".
|
2012/03/15
|
[
"https://Stackoverflow.com/questions/9724872",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/161922/"
] |
I think you are looking for [Mock's side\_effect](http://www.voidspace.org.uk/python/mock/mock.html#mock.Mock.side_effect). For example:
```
from mock import Mock  # on Python 3.3+: from unittest.mock import Mock

def my_facebook_api(input):
if input=='A':
return 'X'
elif input=='B':
return 'D'
facebook_api = Mock(side_effect=my_facebook_api)
```
|
This is an example with [mock](http://www.voidspace.org.uk/python/mock/):
```
>>> import mock
>>> patcher = mock.patch('django.core.urlresolvers.reverse')
>>> reverse_mock = patcher.start()
>>> reverse_mock.return_value = "/foo/"
>>> from django.core.urlresolvers import reverse
>>> reverse('someview')
'/foo/'
>>> patcher.stop()
```
A few notes:
* `patch` acts as a [function decorator, class decorator or a context manager](http://www.voidspace.org.uk/python/mock/patch.html#mock.patch) (above example could be added to `setUp` method, or use `with` inside test method),
* read [Where to patch](http://www.voidspace.org.uk/python/mock/patch.html#where-to-patch)
|
11,021,853
|
The IPython documentation pages suggest that opening several different sessions of IPython notebook is the only way to interact with saved notebooks in different directories or subdirectories, but this is not explicitly confirmed anywhere.
I am facing a situation where I might need to interact with hundreds of different notebooks, which are classified according to different properties and stored in subdirectories of a main directory. I have set that main directory (let's call it `/main`) in the `ipython_notebook_config.py` configuration file to be the default directory.
When I launch IPython notebook, indeed it displays any saved notebooks that are within `/main` (but *not* saved notebooks within subdirectories within `/main`).
How can I achieve one single IPython dashboard that shows me the notebooks within `/main` *and also* shows subdirectories, lets me expand a subdirectory and choose from its contents, or just shows all notebooks from all subdirectories?
Doing this by launching new instances of IPython every time is completely out of the question.
I'm willing to tinker with source code if I have to for this ability. It's an extremely basic sort of feature, we need it, and it's surprising that it's not just the default IPython behavior. For any amount of saved notebooks over maybe 10 or 15, this feature is *necessary*.
|
2012/06/13
|
[
"https://Stackoverflow.com/questions/11021853",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/567620/"
] |
>
> The IPython documentation pages suggest that opening several different sessions of IPython notebook is the only way to interact with saved notebooks in different directories or subdirectories, but this is not explicitly confirmed anywhere.
>
>
>
Yes, this is a current (*temporary*) limitation of the Notebook server. Multi-directory support is very high on the notebook todo list (unfortunately that list is long, and devs are few and have day jobs); it is just not there yet. By 0.14 (Fall, probably), you should have no reason to be running more than one nb server, but for now that's the only option for multiple directories. All that is missing for a simple first draft is:
1. Associating individual notebooks with directories (fairly trivial), and
2. Web UI for simple filesystem navigation (slightly less trivial).
>
> I'm willing to tinker with source code if I have to for this ability
>
>
>
The limiting factor, if you want to poke around in the source, is the [NotebookManager](https://github.com/ipython/ipython/blob/rel-0.13/IPython/frontend/html/notebook/notebookmanager.py), which is associated with a particular directory. If you tweak the list\_notebooks() method to handle subdirectories, you are 90% there.
I was curious about this as well, so I tossed together a quick example [here](https://github.com/minrk/ipython/tree/nbwalk) that allows you to at least read/run/edit/save notebooks in subdirs (walk depth is limited to 2, but easy to change). Any new notebooks will be in the top-level dir, and there is no UI for moving them around.
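For anyone who wants to poke at the source, here is a rough sketch of the kind of walking logic involved (my own illustration, not the actual NotebookManager code):
```
import os

def list_notebooks(root_dir, max_depth=2):
    """Collect .ipynb paths under root_dir (relative to it), at most max_depth levels deep."""
    notebooks = []
    for dirpath, dirnames, filenames in os.walk(root_dir):
        rel = os.path.relpath(dirpath, root_dir)
        depth = 0 if rel == '.' else rel.count(os.sep) + 1
        if depth >= max_depth:
            dirnames[:] = []  # stop descending below max_depth
        for name in filenames:
            if name.endswith('.ipynb'):
                notebooks.append(os.path.normpath(os.path.join(rel, name)))
    return notebooks
```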
|
The interface and architecture design issues for multiple directory support (and more generally for "project" support) for the IPython notebook are important to get right. A design is described in
[IPEP 16: Notebook multi directory dashboard and URL mapping](https://github.com/ipython/ipython/wiki/IPEP-16%3A-Notebook-multi-directory-dashboard-and-URL-mapping)
and is being discussed at [IPEP 16: Notebook multi directory dashboard and URL mapping · Issue #3166 · ipython/ipython](https://github.com/ipython/ipython/issues/3166)
|
13,586,153
|
**Objectives:**
Implement a program (Java or Python) to retrieve data from videos that I published on my YouTube channel.
This program will be launched daily (1:00 AM).
**Solutions:**
To retrieve YouTube data, including the number of views per day, the YouTube Analytics API is in my opinion the best solution. I use a Google service account ("GoogleCredential") to authenticate:
```
static {
// Build service account credential.
try {
// Create a listener for automatic refresh OAuthAccessToken
List<CredentialRefreshListener> list = new ArrayList<CredentialRefreshListener>();
list.add(new CredentialRefreshListener() {
public void onTokenResponse(Credential credential,
TokenResponse tokenResponse) throws IOException {
System.out.println(tokenResponse.toPrettyString());
}
public void onTokenErrorResponse(Credential credential,
TokenErrorResponse tokenErrorResponse)
throws IOException {
System.err.println("Error: "
+ tokenErrorResponse.toPrettyString());
}
});
// Create a GoogleCredential for authenticate with ServiceAccount
// service
credential = new GoogleCredential.Builder()
.setTransport(HTTP_TRANSPORT)
.setJsonFactory(JSON_FACTORY)
.setServiceAccountId(SERVICE_ACCOUNT_EMAIL)
.setServiceAccountScopes(SCOPES)
.setClock(Clock.SYSTEM)
.setServiceAccountPrivateKeyFromP12File(
new File("key.p12"))
.setRefreshListeners(list).build();
credential.refreshToken();
} catch (GeneralSecurityException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
```
And I execute Youtube Analytics query:
```
YoutubeAnalytics youtubeAnalytics = new YoutubeAnalytics.Builder(HTTP_TRANSPORT, JSON_FACTORY, credential)
.setApplicationName("Test-YouTube-Analytics/1.0").build();
// Create request
credential.refreshToken();
YoutubeAnalyticsRequest<?> request = youtubeAnalytics.reports()
.query("channel==" + channelId, "2012-10-01", "2012-12-01", "views")
.setAlt("json")
.setKey(API_KEY)
.setDimensions("month")
.setPrettyPrint(true);
System.out.println(request.buildHttpRequest().getUrl().toString());
ResultTable first = (ResultTable) request.execute();
}
```
But I get the following error:
```
com.google.api.client.googleapis.json.GoogleJsonResponseException: 500 Internal Server Error
{
"code" : 500,
"errors" : [ {
"domain" : "global",
"message" : "Unknown error occurred on the server.",
"reason" : "internalError"
} ],
"message" : "Unknown error occurred on the server."
}
```
Thanks for your insight!
|
2012/11/27
|
[
"https://Stackoverflow.com/questions/13586153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1856335/"
] |
You can't use a service account when making a YouTube Analytics API request. You need to use an account that is either the owner of the YouTube channel or a content owner associated with the channel, and I don't believe a service account can be either of those things. Please go through the OAuth 2 flow once while signed in as the Google Account that owns the YouTube channel, and the saved OAuth 2 refresh token could then be used repeatedly in the future to get fresh access tokens which can be used to run reports.
Could you please resolve that issue and then try running your report again?
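For illustration, a hedged Python sketch of that one-time OAuth 2 flow (assuming the `google-auth-oauthlib` package and a downloaded `client_secret.json`; the file and variable names are mine, not from the answer):
```
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ["https://www.googleapis.com/auth/yt-analytics.readonly"]

# Run this once, signed in as the Google Account that owns the channel.
flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
credentials = flow.run_local_server(port=0)

# Store the refresh token; the daily 1:00 AM job can reuse it to obtain
# fresh access tokens without any interactive login.
print(credentials.refresh_token)
```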
|
Yes, you can authenticate for any of YouTube's APIs using a service account.
The service account and the account you want to work with have to be in the same CMS. (Note: for YouTube partner channels you will also need to set their content-owner ID when calling the API.)
How it works for me:
I generate an access\_token from the key file that I downloaded from Google Cloud when creating the service account's keys.
[You can read more about server-to-server authentication with OAuth2 here](https://developers.google.com/identity/protocols/oauth2/service-account)
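A hedged Python sketch of that service-account variant (assuming `google-auth` and `google-api-python-client` are installed and a JSON key file was downloaded from the Cloud console; the ids value is a placeholder):
```
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/yt-analytics.readonly"]

credentials = service_account.Credentials.from_service_account_file(
    "key.json", scopes=SCOPES)
analytics = build("youtubeAnalytics", "v2", credentials=credentials)

response = analytics.reports().query(
    ids="contentOwner==CONTENT_OWNER_ID",  # partner/CMS setups address the content owner
    startDate="2012-10-01",
    endDate="2012-12-01",
    metrics="views",
    dimensions="month",
).execute()
print(response)
```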
|
53,259,674
|
Is it possible to put a variable into the path in Python/Linux?
for example :
```
>>>counter = 0;
>>>image = ClImage(file_obj=open('/home/user/image'counter'.jpeg', 'rb'))
```
I get a syntax error when I do that.
|
2018/11/12
|
[
"https://Stackoverflow.com/questions/53259674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9100401/"
] |
You could use an [f-string](https://www.python.org/dev/peps/pep-0498/) if you're working in Python 3.6+.
This is the most efficient method.
```
counter = 0
filepath = f"/home/user/image{counter}.jpeg"
image = ClImage(file_obj=open(filepath, 'rb'))
```
Otherwise the second best would be using the [.format()](https://docs.python.org/3.4/library/functions.html#format) function:
```
counter = 0
filepath = "/home/user/image{0}.jpeg".format(counter)
image = ClImage(file_obj=open(filepath, 'rb'))
```
|
You can use Python's [.format()](https://realpython.com/python-string-formatting/) method:
```
counter = 0
filepath = '/home/user/image{0}.jpeg'.format(counter)
image = ClImage(file_obj=open(filepath, 'rb'))
```
|
53,259,674
|
Is it possible to put a variable into the path in Python/Linux?
for example :
```
>>>counter = 0;
>>>image = ClImage(file_obj=open('/home/user/image'counter'.jpeg', 'rb'))
```
I get a syntax error when I do that.
|
2018/11/12
|
[
"https://Stackoverflow.com/questions/53259674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9100401/"
] |
You could use an [f-string](https://www.python.org/dev/peps/pep-0498/) if you're working in Python 3.6+.
This is the most efficient method.
```
counter = 0
filepath = f"/home/user/image{counter}.jpeg"
image = ClImage(file_obj=open(filepath, 'rb'))
```
Otherwise the second best would be using the [.format()](https://docs.python.org/3.4/library/functions.html#format) function:
```
counter = 0
filepath = "/home/user/image{0}.jpeg".format(counter)
image = ClImage(file_obj=open(filepath, 'rb'))
```
|
You need [string concatenation](https://www.pythonforbeginners.com/concatenation/string-concatenation-and-formatting-in-python).
```
>>>counter = 0;
>>>image = ClImage(file_obj=open('/home/user/image' + str(counter) + '.jpeg', 'rb'))
```
|
36,551,531
|
**My Flume configuration**
```
source_agent.sources = tail
source_agent.sources.tail.type = exec
source_agent.sources.tail.command = python loggen.py
source_agent.sources.tail.batchSize = 1
source_agent.sources.tail.channels = memoryChannel
#memory-channel
source_agent.channels = memoryChannel
source_agent.channels.memoryChannel.type = memory
source_agent.channels.memoryChannel.capacity = 10000
source_agent.channels.memoryChannel.transactionCapacity=10000
source_agent.channels.memoryChannel.byteCapacityBufferPercentage = 20
source_agent.channels.memoryChannel.byteCapacity = 800000
# Send to Flume Collector on saprk sink
source_agent.sinks = spark
source_agent.sinks.spark.type=org.apache.spark.streaming.flume.sink.SparkSink
source_agent.sinks.spark.batchSize=100
source_agent.sinks.spark.channel = memoryChannel
source_agent.sinks.spark.hostname=localhost
source_agent.sinks.spark.port=1234
```
**My Spark-Scala Code**
```
package com.thanga.twtsteam
import org.apache.spark.streaming.flume._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.SparkConf
object SampleStream {
def main(args: Array[String]) {
val conf = new SparkConf().setMaster("local[2]").setAppName("SampleStream")
val ssc = new StreamingContext(conf, Seconds(1))
val flumeStream = FlumeUtils.createPollingStream(ssc, "localhost", 1234)
ssc.stop()
}
}
```
**I am using SBT to build the jar; my SBT configuration is below:**
```
name := "Flume"
version := "1.0"
scalaVersion := "2.10.4"
publishMavenStyle := true
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.4.1"
libraryDependencies += "org.apache.spark" % "spark-streaming_2.10" % "1.4.1"
libraryDependencies += "org.apache.spark" % "spark-streaming-flume_2.10" % "1.4.1"
libraryDependencies += "org.apache.spark" % "spark-streaming-flume-sink_2.10" % "1.4.1"
libraryDependencies += "org.scala-lang" % "scala-library" % "2.10.4"
resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
```
**The problem: I can now build my jar without any error, but while running I am getting the error below:**
```
16/04/11 19:52:56 INFO BlockManagerMaster: Registered BlockManager
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/streaming/flume/FlumeUtils$
at com.thagna.twtsteam.SampleStream$.main(SampleStream.scala:10)
at com.thanga.twtsteam.SampleStream.main(SampleStream.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.streaming.flume.FlumeUtils$
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 11 more
16/04/11 19:52:56 INFO SparkContext: Invoking stop() from shutdown hook
```
**Can anyone help me resolve this?**
|
2016/04/11
|
[
"https://Stackoverflow.com/questions/36551531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3536400/"
] |
You can use the `--jars` option if you are running the job using `spark-submit`.
For example:
```
spark-submit --jars ....../lib/spark-streaming_2.10-1.2.1.2.2.6.0-2800.jar
```
or
add this to your SBT configuration
```
libraryDependencies += "org.apache.spark" %% "spark-streaming-flume" % "2.1.0"
```
<https://spark.apache.org/docs/latest/streaming-flume-integration.html>
|
Add this to your build to get rid of this error:
```
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-flume_2.10 -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-flume_2.10</artifactId>
<version>2.0.0</version>
</dependency>
```
|
27,218,638
|
I need to replace `\` with `\\` in Python using pattern matching.
For example, `$$\a\b\c$$` should be matched and replaced with `$$\\a\\b\\c$$`.
I couldn't use a regular expression to find a match.
```
>>> import re
>>> p = re.compile("\$\$([^$]+)\$\$")
>>> a = "$$\a\b\c$$"
>>> m = p.search(a)
>>> m.group(1)
'\x07\x08\\c'
```
I can't simply make the input a raw string such as `a=r'$$\a\b\c$$'` because it's automatically processed by a markdown processor.
I also found that I couldn't use replace method:
```
>>> a.replace('\\','\\\\')
'$$\x07\x08\\\\c$$'
```
How can I solve this issue?
|
2014/11/30
|
[
"https://Stackoverflow.com/questions/27218638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] |
The reason you're having trouble is that the string you're inputting is `$$\a\b\c$$`, which Python translates to `'$$\x07\x08\\c$$'`; the only backslash actually left in the string is the one in the segment `'\c'`. The best way to deal with this is to input `a` as a raw string:
```
a=r'$$\a\b\c$$'
```
This tells Python to treat the string literal as raw characters (no escape processing). If you're reading the string in from a file, this is done for you automatically, since escape sequences are not processed when reading file contents.
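A small demonstration of the difference (the `\a` and `\b` escapes are applied while the literal is parsed, so that information is gone before any replace can run; newer Python versions also warn about the unrecognized `\c` escape):
```
a = "$$\a\b\c$$"       # escapes processed: \a -> \x07, \b -> \x08
print(list(a))         # ['$', '$', '\x07', '\x08', '\\', 'c', '$', '$']

a = r"$$\a\b\c$$"      # raw string: every backslash survives
print(a.replace('\\', '\\\\'))   # $$\\a\\b\\c$$
```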
|
Split the string with single backslashes, then join the resulting list with double backslashes.
```
s = r'$$\a\b\c$$'
t = r'\\'.join(s.split('\\'))
print('%s -> %s' % (s, t))
```
|
67,687,962
|
I am trying to build a Word2vec model but when I try to reshape the vector for tokens, I am getting this error. Any idea ?
```
wordvec_arrays = np.zeros((len(tokenized_tweet), 100))
for i in range(len(tokenized_tweet)):
wordvec_arrays[i,:] = word_vector(tokenized_tweet[i], 100)
wordvec_df = pd.DataFrame(wordvec_arrays)
wordvec_df.shape
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-101-71156bf1c4a3> in <module>
1 wordvec_arrays = np.zeros((len(tokenized_tweet), 100))
2 for i in range(len(tokenized_tweet)):
----> 3 wordvec_arrays[i,:] = word_vector(tokenized_tweet[i], 100)
4 wordvec_df = pd.DataFrame(wordvec_arrays)
5 wordvec_df.shape
<ipython-input-100-e3a82e60af93> in word_vector(tokens, size)
4 for word in tokens:
5 try:
----> 6 vec += model_w2v[word].reshape((1, size))
7 count += 1.
8 except KeyError: # handling the case where the token is not in vocabulary
TypeError: 'Word2Vec' object is not subscriptable
```
|
2021/05/25
|
[
"https://Stackoverflow.com/questions/67687962",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8020986/"
] |
As of Gensim 4.0 & higher, the `Word2Vec` model doesn't support subscripted (indexed) access (the `['...']` syntax) to individual words. (Previous versions would display a deprecation warning, `Method will be removed in 4.0.0, use self.wv.__getitem__() instead`, for such uses.)
When you want to access a specific word, do it via the `Word2Vec` model's `.wv` property instead, which holds just the word-vectors. So your (unshown) `word_vector()` function should have the line highlighted in the error stack changed to:
```
vec += model_w2v.wv[word].reshape((1, size))
```
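For context, a hedged reconstruction of what the (unshown) `word_vector()` helper might look like with that fix applied; `model_w2v`, `tokens` and `size` are assumed from the question:
```
import numpy as np

def word_vector(tokens, size):
    vec = np.zeros(size).reshape((1, size))
    count = 0.
    for word in tokens:
        try:
            vec += model_w2v.wv[word].reshape((1, size))  # .wv holds the word-vectors
            count += 1.
        except KeyError:  # token not in the vocabulary
            continue
    if count != 0:
        vec /= count
    return vec
```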
|
Use the `.wv` attribute instead, for example:
```
model.wv.get_vector(word)  # or simply model.wv[word]
```
|
67,687,962
|
I am trying to build a Word2vec model but when I try to reshape the vector for tokens, I am getting this error. Any idea ?
```
wordvec_arrays = np.zeros((len(tokenized_tweet), 100))
for i in range(len(tokenized_tweet)):
wordvec_arrays[i,:] = word_vector(tokenized_tweet[i], 100)
wordvec_df = pd.DataFrame(wordvec_arrays)
wordvec_df.shape
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-101-71156bf1c4a3> in <module>
1 wordvec_arrays = np.zeros((len(tokenized_tweet), 100))
2 for i in range(len(tokenized_tweet)):
----> 3 wordvec_arrays[i,:] = word_vector(tokenized_tweet[i], 100)
4 wordvec_df = pd.DataFrame(wordvec_arrays)
5 wordvec_df.shape
<ipython-input-100-e3a82e60af93> in word_vector(tokens, size)
4 for word in tokens:
5 try:
----> 6 vec += model_w2v[word].reshape((1, size))
7 count += 1.
8 except KeyError: # handling the case where the token is not in vocabulary
TypeError: 'Word2Vec' object is not subscriptable
```
|
2021/05/25
|
[
"https://Stackoverflow.com/questions/67687962",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8020986/"
] |
As of Gensim 4.0 & higher, the `Word2Vec` model doesn't support subscripted (indexed) access (the `['...']` syntax) to individual words. (Previous versions would display a deprecation warning, `Method will be removed in 4.0.0, use self.wv.__getitem__() instead`, for such uses.)
When you want to access a specific word, do it via the `Word2Vec` model's `.wv` property instead, which holds just the word-vectors. So your (unshown) `word_vector()` function should have the line highlighted in the error stack changed to:
```
vec += model_w2v.wv[word].reshape((1, size))
```
|
**Since Gensim >= 4.0** I store the words with:
```
vocab = w2v_model.wv.key_to_index.keys()
```
and then iterate over them; the access method has changed:
```
for word in vocab:
w2v_model.wv.get_index(word)
...
```
And finally I created the word-vector matrix without issues.
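Putting that together, a hedged sketch of building the word-vector matrix with the 4.x API (`w2v_model` as above):
```
import numpy as np

vocab = list(w2v_model.wv.key_to_index)                  # all words known to the model
matrix = np.vstack([w2v_model.wv[word] for word in vocab])
print(matrix.shape)                                      # (len(vocab), vector_size)
```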
|
67,687,962
|
I am trying to build a Word2vec model but when I try to reshape the vector for tokens, I am getting this error. Any idea ?
```
wordvec_arrays = np.zeros((len(tokenized_tweet), 100))
for i in range(len(tokenized_tweet)):
wordvec_arrays[i,:] = word_vector(tokenized_tweet[i], 100)
wordvec_df = pd.DataFrame(wordvec_arrays)
wordvec_df.shape
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-101-71156bf1c4a3> in <module>
1 wordvec_arrays = np.zeros((len(tokenized_tweet), 100))
2 for i in range(len(tokenized_tweet)):
----> 3 wordvec_arrays[i,:] = word_vector(tokenized_tweet[i], 100)
4 wordvec_df = pd.DataFrame(wordvec_arrays)
5 wordvec_df.shape
<ipython-input-100-e3a82e60af93> in word_vector(tokens, size)
4 for word in tokens:
5 try:
----> 6 vec += model_w2v[word].reshape((1, size))
7 count += 1.
8 except KeyError: # handling the case where the token is not in vocabulary
TypeError: 'Word2Vec' object is not subscriptable
```
|
2021/05/25
|
[
"https://Stackoverflow.com/questions/67687962",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8020986/"
] |
**Since Gensim >= 4.0** I store the words with:
```
vocab = w2v_model.wv.key_to_index.keys()
```
and then iterate over them; the access method has changed:
```
for word in vocab:
w2v_model.wv.get_index(word)
...
```
And finally I created the word-vector matrix without issues.
|
Use the `.wv` attribute instead, for example:
```
model.wv.get_vector(word)  # or simply model.wv[word]
```
|
58,945,475
|
I'm somewhat new to python:
I'm trying to write a text file into a different format. Given a file of format:
```
[header]
rho = 1.1742817531
mu = 1.71997e-05
q = 411385.1046712013
...
```
I want:
```
[header]
1.1742817531, 1.71997e-05, 411385.1046712013, ...
```
and be able to write successive lines below that.
Right now, I have the following:
```
inFile = open('test.txt', 'r')
f = open('test.txt').readlines()
firstLine = f.pop(0) #removes the first line
D = ''
for line in f:
D = line.strip('\n')
b=D.rfind('=')
c=D[b+2:]
line = inFile.readline()
```
It returns only the last value, "3".
How do I get it to return a string (which will be saved to a new txt file) in the format I want?
Thanks in advance.
|
2019/11/20
|
[
"https://Stackoverflow.com/questions/58945475",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12400757/"
] |
Cache the images on the filesystem when you first download them. When you load an image, check the cache, and download the images only if they're not yet cached. If they are, load them from the filesystem instead.
|
Try using Glide or Picasso to load images in the different list views. Glide internally caches images using their URL as the cache key. That way, once your images are loaded in any of your list views, they are cached for future use in the other list views. However, you will still need to create new instances of the image views, as you will be using a completely different ListView. You CAN create your own factory of image views with the content (image) populated inside them and fetch such views based on unique keys (which you would define yourself, which can be a pain), but that would be overkill for very little benefit.
|
57,464,098
|
I am currently doing some exercises with Kernel Density Estimation and I am trying to run this piece of code:
```py
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
digits = load_digits()
bandwidths = 10 ** np.linspace(0, 2, 100)
grid = GridSearchCV(KDEClassifier(), {'bandwidth': bandwidths}, cv=3)
grid.fit(digits.data, digits.target)
scores = [val.mean_validation_score for val in grid.cv_results_]
```
but as the title says I get an
```py
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-29-15a5f685e6d6> in <module>
8 grid.fit(digits.data, digits.target)
9
---> 10 scores = [val.mean_validation_score for val in grid.cv_results_]
<ipython-input-29-15a5f685e6d6> in <listcomp>(.0)
8 grid.fit(digits.data, digits.target)
9
---> 10 scores = [val.mean_validation_score for val in grid.cv_results_]
AttributeError: 'str' object has no attribute 'mean_validation_score'
```
regarding mean\_validation\_score and I don't understand why. The code is directly out of a book, with a few changes due to running an up-to-date scikit-learn package. Here is the original code snippet:
```py
from sklearn.datasets import load_digits
from sklearn.grid_search import GridSearchCV
digits = load_digits()
bandwidths = 10 ** np.linspace(0, 2, 100)
grid = GridSearchCV(KDEClassifier(), {'bandwidth': bandwidths})
grid.fit(digits.data, digits.target)
scores = [val.mean_validation_score for val in grid.grid_scores_]
```
EDIT:
Forgot to add how `KDEClassifier` is defined:
```py
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.neighbors import KernelDensity
class KDEClassifier(BaseEstimator, ClassifierMixin):
"""Bayesian generative classification based on KDE
Parameters
----------
bandwidth : float
the kernel bandwidth within each class
kernel : str
the kernel name, passed to KernelDensity
"""
def __init__(self, bandwidth=1.0, kernel='gaussian'):
self.bandwidth = bandwidth
self.kernel = kernel
def fit(self, X, y):
self.classes_ = np.sort(np.unique(y))
training_sets = [X[y == yi] for yi in self.classes_]
self.models_ = [KernelDensity(bandwidth=self.bandwidth,
kernel=self.kernel).fit(Xi)
for Xi in training_sets]
self.logpriors_ = [np.log(Xi.shape[0] / X.shape[0])
for Xi in training_sets]
return self
def predict_proba(self, X):
logprobs = np.array([model.score_samples(X)
for model in self.models_]).T
result = np.exp(logprobs + self.logpriors_)
return result / result.sum(1, keepdims=True)
def predict(self, X):
return self.classes_[np.argmax(self.predict_proba(X), 1)]
```
|
2019/08/12
|
[
"https://Stackoverflow.com/questions/57464098",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11491321/"
] |
It's simple; I also faced the same problem. Just replace this line:
```
scores = [val.mean_validation_score for val in grid.cv_results_]
```
with
```
scores = grid.cv_results_.get('mean_test_score').tolist()
```
This is because the old `grid_scores_`/`mean_validation_score` API was removed and `grid.cv_results_` is a dict.
|
The [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) of the `GridSearchCV` object specifies that the attribute `cv_results_` is a dictionary; iterating over a Python dictionary therefore yields its keys as strings, as you can see [here](https://realpython.com/iterate-through-dictionary-python/).
My recommendation is to specify the `scoring` you want to use in the `GridSearchCV` constructor and then have a look at the `cv_results_` dictionary.
Hope it helps.
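A short hedged sketch of reading the scores out of that dictionary (using the `grid` object from the question, after `fit`):
```
# cv_results_ maps column names to arrays with one entry per parameter candidate.
scores = grid.cv_results_['mean_test_score']
best_bandwidth = grid.best_params_['bandwidth']
print(len(scores), best_bandwidth)
```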
|
20,386,727
|
Currently I have data in the following format
```
A
A -> B -> C -> D -> Z
A -> B -> O
A -> X
```
This is stored in a list [line1,line2, and so forth]
Now I want to print this in the following manner
```
A
|- X
|- B
|- O
|- C
|- D
|- Z
```
I'm new to Python. I was thinking of finding '->' in each element of the list and replacing it with a space, but I don't know how to go forward.
|
2013/12/04
|
[
"https://Stackoverflow.com/questions/20386727",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2838679/"
] |
1. You don't modify method parameters, you make copies of them.
2. You don't null-check/empty-check inside the loop, you do it first thing in the method.
3. The standard in a `for loop` is `i < size`, not `size > i`... meh
```
/**
* Splits the string str into individual characters: Small becomes S m a l l
*/
public static String split(final String str)
{
String result = "";
// If parameter is null or empty, return an empty string
if (str == null || str.isEmpty())
return result;
// Go through the parameter's characters, and modify the result
for (int i = 0; i < str.length(); i++)
{
// The new result will be the previous result,
// plus the current character at position i,
// plus a white space.
result = result + str.charAt(i) + " ";
}
return result;
}
```
---
4. Go pro, use `StringBuilder` for the result, and static final constants for empty string and space character.
Peace!
|
Ask yourself a question, where is **s** coming from?
```
char space = s.charAt(); ??? s ???
```
A second question, character at?
```
public static String split(String str){
for(int i = 0; i < str.length(); i++) {
if (str.length() > 0) {
char space = str.charAt(i);
}
}
return str;
}
```
|
20,386,727
|
Currently I have data in the following format
```
A
A -> B -> C -> D -> Z
A -> B -> O
A -> X
```
This is stored in a list [line1,line2, and so forth]
Now I want to print this in the following manner
```
A
|- X
|- B
|- O
|- C
|- D
|- Z
```
I'm new to Python. I was thinking of finding '->' in each element of the list and replacing it with a space, but I don't know how to go forward.
|
2013/12/04
|
[
"https://Stackoverflow.com/questions/20386727",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2838679/"
] |
My solution uses `concat` to build `str2`, and `trim` to remove the trailing white space.
```
public static String split(String str) {
String str2 = "";
for(int i=0; i<str.length(); i++) {
str2 = str2.concat(str.charAt(i)+" ");
}
return str2.trim();
}
```
|
Ask yourself a question, where is **s** coming from?
```
char space = s.charAt(); ??? s ???
```
A second question, character at?
```
public static String split(String str){
for(int i = 0; i < str.length(); i++) {
if (str.length() > 0) {
char space = str.charAt(i);
}
}
return str;
}
```
|
20,386,727
|
Currently I have data in the following format
```
A
A -> B -> C -> D -> Z
A -> B -> O
A -> X
```
This is stored in a list [line1,line2, and so forth]
Now I want to print this in the following manner
```
A
|- X
|- B
|- O
|- C
|- D
|- Z
```
I'm new to Python. I was thinking of finding '->' in each element of the list and replacing it with a space, but I don't know how to go forward.
|
2013/12/04
|
[
"https://Stackoverflow.com/questions/20386727",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2838679/"
] |
1. You don't modify method parameters, you make copies of them.
2. You don't null-check/empty-check inside the loop, you do it first thing in the method.
3. The standard in a `for loop` is `i < size`, not `size > i`... meh
```
/**
* Splits the string str into individual characters: Small becomes S m a l l
*/
public static String split(final String str)
{
String result = "";
// If parameter is null or empty, return an empty string
if (str == null || str.isEmpty())
return result;
// Go through the parameter's characters, and modify the result
for (int i = 0; i < str.length(); i++)
{
// The new result will be the previous result,
// plus the current character at position i,
// plus a white space.
result = result + str.charAt(i) + " ";
}
return result;
}
```
---
4. Go pro, use `StringBuilder` for the result, and static final constants for empty string and space character.
Peace!
|
@Babanfaraj, this is an answer from a newbie like you!
The code is very easy. The corrected program is:
```
class fopl
{
public static void main(String str)
{
int n=str.length();
for (int i = 0;i<n; i++)
{
if (n>=0)
{
String space = str.charAt(i)+" ";
System.out.print(space);
}
}
}
}
```
Happy to help you!
|
20,386,727
|
Currently I have data in the following format
```
A
A -> B -> C -> D -> Z
A -> B -> O
A -> X
```
This is stored in a list [line1,line2, and so forth]
Now I want to print this in the following manner
```
A
|- X
|- B
|- O
|- C
|- D
|- Z
```
I'm new to Python. I was thinking of finding '->' in each element of the list and replacing it with a space, but I don't know how to go forward.
|
2013/12/04
|
[
"https://Stackoverflow.com/questions/20386727",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2838679/"
] |
My solution uses `concat` to build `str2`, and `trim` to remove the trailing white space.
```
public static String split(String str) {
String str2 = "";
for(int i=0; i<str.length(); i++) {
str2 = str2.concat(str.charAt(i)+" ");
}
return str2.trim();
}
```
|
@Babanfaraj, this is an answer from a newbie like you!
The code is very easy. The corrected program is:
```
class fopl
{
public static void main(String str)
{
int n=str.length();
for (int i = 0;i<n; i++)
{
if (n>=0)
{
String space = str.charAt(i)+" ";
System.out.print(space);
}
}
}
}
```
Happy to help you!
|
54,174,950
|
**Context**
I am trying to run my Django application and Postgres database in a docker development environment using docker-compose (it's my first time using Docker).
I want to use my application with a custom role and database both named `teddycrepineau` (as opposed to using the default postgres user and db).
**Goal**
My goal is to deploy a web app powered on the front end by react and the backend by django restapi, the whole running in a docker.
**System/Version**
* python: 3.7
* django: 2.1
* OS: Mac OS High Sierra
**What error am I getting**
When running `docker-compose up` with my custom role and db, I am getting the following error `django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist`. When running the same command with the default role and db `postgres` Django is able to start normally.
My understanding was that running docker-compose up would create the role and db passed as environment variable.
**What I have tried so far**
I read multiple threads on this site, GitHub, and Docker:
* tried to delete my container and rebuilt it with formatting as suggested [here](https://stackoverflow.com/questions/49112545/postgres-and-docker-compose-cant-create-a-custom-role-and-database)
* Went through [this](https://github.com/docker-library/postgres/issues/41) GitHub issue
* Tried to move my environment variables from the `.env` file to the `environment` key inside my `docker-compose.yml` file and rebuild my container
---
Files
-----
**docker-compose.yml**
```
version: '3'
volumes:
postgres_data: {}
services:
postgres:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
env_file: .env
ports:
- "5432"
django:
build:
context: teddycrepineau-backend
dockerfile: teddycrepineau-root/Dockerfile
command: ./teddycrepineau-backend/teddycrepineau-root/start.sh
env_file: .env
volumes:
- .:/teddycrepineau-backend
ports:
- "8000:8000"
depends_on:
- postgres
```
**Dockerfile**
```
FROM python:3.7
ENV PYTHONUNBUFFERED 1
WORKDIR /teddycrepineau-backend/
ADD ./teddycrepineau-root/requirements.txt /teddycrepineau-backend/
RUN pip install -r requirements.txt
ADD . /teddycrepineau-backend/
RUN chmod +x ./teddycrepineau-root/start.sh
```
**start.sh**
```
#!/usr/bin/env bash
python3 ./teddycrepineau-backend/teddycrepineau-root/manage.py runserver
```
**.env**
```
POSTGRES_PASSWORD=
POSTGRES_USER=teddycrepineau
POSTGRES_DB=teddycrepineau
```
---
**EDIT**
My file structure is as follows
```
root
|___ teddycrepineau-backend
|___ teddycrepineau-root
|___ teddycrepineau
|___ Dockerfile
|___ manage.py
|___ start.sh
|___ teddycrepineau-frontend
|___ React-App
|___ .env
|___ docker-compose.yml
```
When I move my docker-compose.yml file inside my backend folder, it starts as expected (though I am not able to access my site when going to `127.0.0.1:8000` but that is mostly a different issue) with custom user and db. When I put my `docker-compose.yml` file to my root folder, I get the error `django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist`
|
2019/01/14
|
[
"https://Stackoverflow.com/questions/54174950",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5022051/"
] |
This happens because your pgsql db was launched without any envs. The pgsql docker image only uses the envs the first time its data volume is initialized; after that it won't recreate the DB and users.
The solution is to remove the pgsql volume so that the next `docker-compose up` gives you a fresh db with the envs read. A simple way to do it is `docker-compose down -v`.
|
Change your env order like this.
```
POSTGRES_DB=teddycrepineau
POSTGRES_USER=teddycrepineau
POSTGRES_PASSWORD=
```
I found it at [this issue](https://github.com/docker-library/postgres/issues/41#issuecomment-382925263). I hope it works.
|
54,174,950
|
**Context**
I am trying to run my Django application and Postgres database in a docker development environment using docker-compose (it's my first time using Docker).
I want to use my application with a custom role and database both named `teddycrepineau` (as opposed to using the default postgres user and db).
**Goal**
My goal is to deploy a web app powered on the front end by react and the backend by django restapi, the whole running in a docker.
**System/Version**
* python: 3.7
* django: 2.1
* OS: Mac OS High Sierra
**What error am I getting**
When running `docker-compose up` with my custom role and db, I am getting the following error `django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist`. When running the same command with the default role and db `postgres` Django is able to start normally.
My understanding was that running docker-compose up would create the role and db passed as environment variable.
**What I have tried so far**
I read multiple threads on this site, GitHub, and Docker:
* tried to delete my container and rebuilt it with formatting as suggested [here](https://stackoverflow.com/questions/49112545/postgres-and-docker-compose-cant-create-a-custom-role-and-database)
* Went through [this](https://github.com/docker-library/postgres/issues/41) GitHub issue
* Tried to move my environment variables from the `.env` file to the `environment` key inside my `docker-compose.yml` file and rebuild my container
---
Files
-----
**docker-compose.yml**
```
version: '3'
volumes:
postgres_data: {}
services:
postgres:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
env_file: .env
ports:
- "5432"
django:
build:
context: teddycrepineau-backend
dockerfile: teddycrepineau-root/Dockerfile
command: ./teddycrepineau-backend/teddycrepineau-root/start.sh
env_file: .env
volumes:
- .:/teddycrepineau-backend
ports:
- "8000:8000"
depends_on:
- postgres
```
**Dockerfile**
```
FROM python:3.7
ENV PYTHONUNBUFFERED 1
WORKDIR /teddycrepineau-backend/
ADD ./teddycrepineau-root/requirements.txt /teddycrepineau-backend/
RUN pip install -r requirements.txt
ADD . /teddycrepineau-backend/
RUN chmod +x ./teddycrepineau-root/start.sh
```
**start.sh**
```
#!/usr/bin/env bash
python3 ./teddycrepineau-backend/teddycrepineau-root/manage.py runserver
```
**.env**
```
POSTGRES_PASSWORD=
POSTGRES_USER=teddycrepineau
POSTGRES_DB=teddycrepineau
```
---
**EDIT**
My file structure is as follows
```
root
|___ teddycrepineau-backend
|___ teddycrepineau-root
|___ teddycrepineau
|___ Dockerfile
|___ manage.py
|___ start.sh
|___ teddycrepineau-frontend
|___ React-App
|___ .env
|___ docker-compose.yml
```
When I move my docker-compose.yml file inside my backend folder, it starts as expected (though I am not able to access my site when going to `127.0.0.1:8000` but that is mostly a different issue) with custom user and db. When I put my `docker-compose.yml` file to my root folder, I get the error `django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist`
|
2019/01/14
|
[
"https://Stackoverflow.com/questions/54174950",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5022051/"
] |
Change your env order like this.
```
POSTGRES_DB=teddycrepineau
POSTGRES_USER=teddycrepineau
POSTGRES_PASSWORD=
```
I find it at [this issue](https://github.com/docker-library/postgres/issues/41#issuecomment-382925263). I hope it works.
|
When you run
```
sudo docker-compose exec web python manage.py migrate
```
you will of course receive
***"django.db.utils.OperationalError: FATAL: role "user" does not exist"***
First, you need to run
```
sudo docker-compose down -v
sudo docker system prune
```
Check the containers; they should be deleted:
```
sudo docker ps -a
```
Then check the images:
```
sudo docker image ls
```
Don't forget to delete the images:
```
sudo docker image rm 3e57319a7a3a
```
Go to the project folder and then run
```
python manage.py migrate
```
If it doesn't work, run
```
python manage.py migrate --run-syncdb
```
and
```
sudo docker-compose up -d --build
sudo docker-compose exec web python manage.py collectstatic --no-input
sudo docker-compose exec web python manage.py makemigrations
sudo docker-compose exec web python manage.py migrate auth
sudo docker-compose exec web python manage.py migrate --run-syncdb
```
|
54,174,950
|
**Context**
I am trying to run my Django application and Postgres database in a docker development environment using docker-compose (it's my first time using Docker).
I want to use my application with a custom role and database both named `teddycrepineau` (as opposed to using the default postgres user and db).
**Goal**
My goal is to deploy a web app powered on the front end by react and the backend by django restapi, the whole running in a docker.
**System/Version**
* python: 3.7
* django: 2.1
* OS: Mac OS High Sierra
**What error am I getting**
When running `docker-compose up` with my custom role and db, I am getting the following error `django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist`. When running the same command with the default role and db `postgres` Django is able to start normally.
My understanding was that running docker-compose up would create the role and db passed as environment variable.
**What I have tried so far**
I read multiple threads on this site, GitHub, and Docker:
* tried to delete my container and rebuilt it with formatting as suggested [here](https://stackoverflow.com/questions/49112545/postgres-and-docker-compose-cant-create-a-custom-role-and-database)
* Went through [this](https://github.com/docker-library/postgres/issues/41) GitHub issue
* Tried to move my environment variables from the `.env` file to the `environment` key inside my `docker-compose.yml` file and rebuild my container
---
Files
-----
**docker-compose.yml**
```
version: '3'
volumes:
postgres_data: {}
services:
postgres:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
env_file: .env
ports:
- "5432"
django:
build:
context: teddycrepineau-backend
dockerfile: teddycrepineau-root/Dockerfile
command: ./teddycrepineau-backend/teddycrepineau-root/start.sh
env_file: .env
volumes:
- .:/teddycrepineau-backend
ports:
- "8000:8000"
depends_on:
- postgres
```
**Dockerfile**
```
FROM python:3.7
ENV PYTHONUNBUFFERED 1
WORKDIR /teddycrepineau-backend/
ADD ./teddycrepineau-root/requirements.txt /teddycrepineau-backend/
RUN pip install -r requirements.txt
ADD . /teddycrepineau-backend/
RUN chmod +x ./teddycrepineau-root/start.sh
```
**start.sh**
```
#!/usr/bin/env bash
python3 ./teddycrepineau-backend/teddycrepineau-root/manage.py runserver
```
**.env**
```
POSTGRES_PASSWORD=
POSTGRES_USER=teddycrepineau
POSTGRES_DB=teddycrepineau
```
---
**EDIT**
My file structure is as follows:
```
root
|___ teddycrepineau-backend
|___ teddycrepineau-root
|___ teddycrepineau
|___ Dockerfile
|___ manage.py
|___ start.sh
|___ teddycrepineau-frontend
|___ React-App
|___ .env
|___ docker-compose.yml
```
When I move my docker-compose.yml file inside my backend folder, it starts as expected (though I am not able to access my site when going to `127.0.0.1:8000` but that is mostly a different issue) with custom user and db. When I put my `docker-compose.yml` file to my root folder, I get the error `django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist`
|
2019/01/14
|
[
"https://Stackoverflow.com/questions/54174950",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5022051/"
] |
Change your env order like this.
```
POSTGRES_DB=teddycrepineau
POSTGRES_USER=teddycrepineau
POSTGRES_PASSWORD=
```
I find it at [this issue](https://github.com/docker-library/postgres/issues/41#issuecomment-382925263). I hope it works.
|
I encountered the issue due to a mismatch between the `$POSTGRES_DB` and `$POSTGRES_USER` variables. By default, psql will attempt to set the database to the same name as the user logging in, so when there is a mismatch between the variables it fails with an error along the lines of psql:
>
> FATAL: database "root" does not exist
>
>
>
I had to edit the init script that I was writing to include the `-d "$POSTGRES_DB"` option like so:
```
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" -d "$POSTGRES_DB" <<-EOSQL
CREATE USER docker;
CREATE DATABASE docker;
GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
EOSQL
```
|
54,174,950
|
**Context**
I am trying to run my Django application and Postgres database in a docker development environment using docker-compose (it's my first time using Docker).
I want to use my application with a custom role and database both named `teddycrepineau` (as opposed to using the default postgres user and db).
**Goal**
My goal is to deploy a web app powered on the front end by react and the backend by django restapi, the whole running in a docker.
**System/Version**
* python: 3.7
* django: 2.1
* OS: Mac OS High Sierra
**What error am I getting**
When running `docker-compose up` with my custom role and db, I am getting the following error `django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist`. When running the same command with the default role and db `postgres` Django is able to start normally.
My understanding was that running docker-compose up would create the role and db passed as environment variable.
**What I have tried so far**
I read multiple threads on this site, GitHub, and Docker:
* tried to delete my container and rebuilt it with formatting as suggested [here](https://stackoverflow.com/questions/49112545/postgres-and-docker-compose-cant-create-a-custom-role-and-database)
* Went through [this](https://github.com/docker-library/postgres/issues/41) GitHub issue
* Tried to move my environment variables from the `.env` file to the `environment` key inside my `docker-compose.yml` file and rebuild my container
---
Files
-----
**docker-compose.yml**
```
version: '3'
volumes:
postgres_data: {}
services:
postgres:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
env_file: .env
ports:
- "5432"
django:
build:
context: teddycrepineau-backend
dockerfile: teddycrepineau-root/Dockerfile
command: ./teddycrepineau-backend/teddycrepineau-root/start.sh
env_file: .env
volumes:
- .:/teddycrepineau-backend
ports:
- "8000:8000"
depends_on:
- postgres
```
**Dockerfile**
```
FROM python:3.7
ENV PYTHONUNBUFFERED 1
WORKDIR /teddycrepineau-backend/
ADD ./teddycrepineau-root/requirements.txt /teddycrepineau-backend/
RUN pip install -r requirements.txt
ADD . /teddycrepineau-backend/
RUN chmod +x ./teddycrepineau-root/start.sh
```
**start.sh**
```
#!/usr/bin/env bash
python3 ./teddycrepineau-backend/teddycrepineau-root/manage.py runserver
```
**.env**
```
POSTGRES_PASSWORD=
POSTGRES_USER=teddycrepineau
POSTGRES_DB=teddycrepineau
```
---
**EDIT**
My file structure is as follows:
```
root
|___ teddycrepineau-backend
|___ teddycrepineau-root
|___ teddycrepineau
|___ Dockerfile
|___ manage.py
|___ start.sh
|___ teddycrepineau-frontend
|___ React-App
|___ .env
|___ docker-compose.yml
```
When I move my docker-compose.yml file inside my backend folder, it starts as expected (though I am not able to access my site when going to `127.0.0.1:8000` but that is mostly a different issue) with custom user and db. When I put my `docker-compose.yml` file to my root folder, I get the error `django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist`
|
2019/01/14
|
[
"https://Stackoverflow.com/questions/54174950",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5022051/"
] |
This happens because your pgsql db was launched without any envs. The pgsql docker image only uses the envs the first time you created the container, after that it won't recreate DB and users.
The solution is to remove the pgsql volume so next time you `docker-compose up` you will have a fresh db with envs read. Simple way to do it is `docker-compose down -v`
|
When you run
```
sudo docker-compose exec web python manage.py migrate
```
you will of course receive
***"django.db.utils.OperationalError: FATAL: role "user" does not exist"***
First, you need to run
```
sudo docker-compose down -v
sudo docker system prune
```
Check the containers; they should be deleted:
```
sudo docker ps -a
```
Then check the images:
```
sudo docker image ls
```
Don't forget to delete the images:
```
sudo docker image rm 3e57319a7a3a
```
Go to the project folder and then run
```
python manage.py migrate
```
If it doesn't work, run
```
python manage.py migrate --run-syncdb
```
and
```
sudo docker-compose up -d --build
sudo docker-compose exec web python manage.py collectstatic --no-input
sudo docker-compose exec web python manage.py makemigrations
sudo docker-compose exec web python manage.py migrate auth
sudo docker-compose exec web python manage.py migrate --run-syncdb
```
|
54,174,950
|
**Context**
I am trying to run my Django application and Postgres database in a docker development environment using docker-compose (it's my first time using Docker).
I want to use my application with a custom role and database both named `teddycrepineau` (as opposed to using the default postgres user and db).
**Goal**
My goal is to deploy a web app powered on the front end by react and the backend by django restapi, the whole running in a docker.
**System/Version**
* python: 3.7
* django: 2.1
* OS: Mac OS High Sierra
**What error am I getting**
When running `docker-compose up` with my custom role and db, I am getting the following error `django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist`. When running the same command with the default role and db `postgres` Django is able to start normally.
My understanding was that running docker-compose up would create the role and db passed as environment variable.
**What I have tried so far**
I read multiple threads on this site, GitHub, and Docker:
* tried to delete my container and rebuilt it with formatting as suggested [here](https://stackoverflow.com/questions/49112545/postgres-and-docker-compose-cant-create-a-custom-role-and-database)
* Went through [this](https://github.com/docker-library/postgres/issues/41) GitHub issue
* Tried to move my environment variables from the `.env` file to the `environment` key inside my `docker-compose.yml` file and rebuild my container
---
Files
-----
**docker-compose.yml**
```
version: '3'
volumes:
postgres_data: {}
services:
postgres:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
env_file: .env
ports:
- "5432"
django:
build:
context: teddycrepineau-backend
dockerfile: teddycrepineau-root/Dockerfile
command: ./teddycrepineau-backend/teddycrepineau-root/start.sh
env_file: .env
volumes:
- .:/teddycrepineau-backend
ports:
- "8000:8000"
depends_on:
- postgres
```
**Dockerfile**
```
FROM python:3.7
ENV PYTHONUNBUFFERED 1
WORKDIR /teddycrepineau-backend/
ADD ./teddycrepineau-root/requirements.txt /teddycrepineau-backend/
RUN pip install -r requirements.txt
ADD . /teddycrepineau-backend/
RUN chmod +x ./teddycrepineau-root/start.sh
```
**start.sh**
```
#!/usr/bin/env bash
python3 ./teddycrepineau-backend/teddycrepineau-root/manage.py runserver
```
**.env**
```
POSTGRES_PASSWORD=
POSTGRES_USER=teddycrepineau
POSTGRES_DB=teddycrepineau
```
---
**EDIT**
My file structure is as follows:
```
root
|___ teddycrepineau-backend
|___ teddycrepineau-root
|___ teddycrepineau
|___ Dockerfile
|___ manage.py
|___ start.sh
|___ teddycrepineau-frontend
|___ React-App
|___ .env
|___ docker-compose.yml
```
When I move my docker-compose.yml file inside my backend folder, it starts as expected (though I am not able to access my site when going to `127.0.0.1:8000` but that is mostly a different issue) with custom user and db. When I put my `docker-compose.yml` file to my root folder, I get the error `django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist`
|
2019/01/14
|
[
"https://Stackoverflow.com/questions/54174950",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5022051/"
] |
This happens because your pgsql db was launched without any envs. The pgsql docker image only uses the envs the first time you created the container, after that it won't recreate DB and users.
The solution is to remove the pgsql volume so next time you `docker-compose up` you will have a fresh db with envs read. Simple way to do it is `docker-compose down -v`
|
I encountered the issue due to a mismatch between the `$POSTGRES_DB` and `$POSTGRES_USER` variables. By default, psql will attempt to set the database to the same name as the user logging in, so when there is a mismatch between the variables it fails with an error along the lines of psql:
>
> FATAL: database "root" does not exist
>
>
>
I had to edit the init script that I was writing to include the `-d "$POSTGRES_DB"` option like so:
```
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" -d "$POSTGRES_DB" <<-EOSQL
CREATE USER docker;
CREATE DATABASE docker;
GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
EOSQL
```
|
47,031,382
|
I am using PyTorch with python3.
I tried the following while in ipdb mode:
```
regions = np.zeros([107,4], dtype='uint8')
torch.from_numpy(regions)
```
This prints the tensor.
However when trying:
```
regions = np.zeros([107,107,4], dtype='uint8')
torch.from_numpy(regions)
```
I get the following error:
```
*** UnicodeEncodeError: 'ascii' codec can't encode character '\u22ee' in position 72: ordinal not in range(128)
```
I am using:
```
numpy==1.11.3
torch==0.2.0.post4
torchvision==0.1.9
```
and python3.5.3
|
2017/10/31
|
[
"https://Stackoverflow.com/questions/47031382",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8683130/"
] |
Please make sure your AWS S3 CORS configuration looks like this:
```
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```
|
One of your uploads is failing. You will need to catch the error from s3Client.uploadPart() and retry.
I recommend the following improvements on the simple code below.
1) Add an increasing timeout for each retry.
2) Process the type of error to determine if a retry will make sense. For some errors you should just report the error and abort.
3) Limit the number of retries to something like 10 to prevent a forever while loop.
```
// repeat the upload until it succeeds.
boolean anotherPass;
do {
anotherPass = false; // assume everythings ok
try {
// Upload part and add response to our list.
partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
} catch (Exception e) {
anotherPass = true; // repeat
}
} while (anotherPass);
```
This Stack Overflow question has code for improving the error handling for your example.
[Problems when uploading large files to Amazon S3](https://stackoverflow.com/questions/4698869/problems-when-uploading-large-files-to-amazon-s3)
|
47,031,382
|
I am using PyTorch with python3.
I tried the following while in ipdb mode:
```
regions = np.zeros([107,4], dtype='uint8')
torch.from_numpy(regions)
```
This prints the tensor.
However when trying:
```
regions = np.zeros([107,107,4], dtype='uint8')
torch.from_numpy(regions)
```
I get the following error:
```
*** UnicodeEncodeError: 'ascii' codec can't encode character '\u22ee' in position 72: ordinal not in range(128)
```
I am using:
```
numpy==1.11.3
torch==0.2.0.post4
torchvision==0.1.9
```
and python3.5.3
|
2017/10/31
|
[
"https://Stackoverflow.com/questions/47031382",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8683130/"
] |
Please make sure your AWS S3 CORS configuration looks like this:
```
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```
|
I was able to resolve this issue by using the **GeneratePresignedUrlRequest** feature of AWS.
But now I am getting a new error: **413 request entity too large (nginx)**. I googled for a solution and found that I need to make changes in the server's nginx.conf file.
Now the question arises: since I am going to have multiple server/load-balancer instances, would I have to set this manually for every instance?
|
47,031,382
|
I am using PyTorch with python3.
I tried the following while in ipdb mode:
```
regions = np.zeros([107,4], dtype='uint8')
torch.from_numpy(regions)
```
This prints the tensor.
However when trying:
```
regions = np.zeros([107,107,4], dtype='uint8')
torch.from_numpy(regions)
```
I get the following error:
```
*** UnicodeEncodeError: 'ascii' codec can't encode character '\u22ee' in position 72: ordinal not in range(128)
```
I am using:
```
numpy==1.11.3
torch==0.2.0.post4
torchvision==0.1.9
```
and python3.5.3
|
2017/10/31
|
[
"https://Stackoverflow.com/questions/47031382",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8683130/"
] |
One of your uploads is failing. You will need to catch the error from s3Client.uploadPart() and retry.
I recommend the following improvements on the simple code below.
1) Add an increasing timeout for each retry.
2) Process the type of error to determine if a retry will make sense. For some errors you should just report the error and abort.
3) Limit the number of retries to something like 10 to prevent a forever while loop.
```
// repeat the upload until it succeeds.
boolean anotherPass;
do {
anotherPass = false; // assume everythings ok
try {
// Upload part and add response to our list.
partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
} catch (Exception e) {
anotherPass = true; // repeat
}
} while (anotherPass);
```
This Stack Overflow question has code for improving the error handling for your example.
[Problems when uploading large files to Amazon S3](https://stackoverflow.com/questions/4698869/problems-when-uploading-large-files-to-amazon-s3)
|
I was able to resolve this issue by using the **GeneratePresignedUrlRequest** feature of AWS.
But now I am getting a new error: **413 request entity too large (nginx)**. I googled for a solution and found that I need to make changes in the server's nginx.conf file.
Now the question arises: since I am going to have multiple server/load-balancer instances, would I have to set this manually for every instance?
|
55,210,888
|
I faced a problem when I installed python-pptx with conda in a clean environment:
`conda install -c conda-forge python-pptx`. After the install finished successfully, I tried to import the pptx module and got the following error:
>
>
> ```
> >>> import pptx
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\__init__.py", line 13, in <module>
> from pptx.api import Presentation # noqa
> File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\api.py", line 17, in <module>
> from .package import Package
> File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\package.py", line 13, in <module>
> from .opc.package import OpcPackage
> File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\opc\package.py", line 13, in <module>
> from .oxml import CT_Relationships, serialize_part_xml
> File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\opc\oxml.py", line 12, in <module>
> from lxml import etree
> ImportError: DLL load failed: The specified module could not be found.
>
> ```
>
>
But if I install this library (python-pptx) with pip like this (also into a new clean environment):
```
pip install python-pptx
```
this works.
I have the following versions: Python 3.7.1, python-pptx 0.6.17, lxml 4.3.0.
Do you have any ideas about this issue?
|
2019/03/17
|
[
"https://Stackoverflow.com/questions/55210888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9869122/"
] |
If you are allowed to use built-in functions, you could do this:
```
idx = s[::-1].find(c[::-1])
return len(s) - (idx + len(c)) if idx >= 0 else -1
```
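For completeness, here is a minimal sketch wrapping that idea in a function (the name `find_last` is only illustrative):
```
def find_last(s, c):
    # Search the reversed haystack for the reversed needle, then map the
    # index back into the original string.
    idx = s[::-1].find(c[::-1])
    return len(s) - (idx + len(c)) if idx >= 0 else -1

print(find_last("aaaaa", "aa"))  # 3
print(find_last("aaaaa", "b"))   # -1
```
This behaves the same as the built-in `s.rfind(c)`.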
|
Your problem is this line:
```
last_position = next_position + len(c)
```
This is skipping potential matches. As it is, your code considers only the first, third, and fifth positions for matches. As you say, the right answer comes from checking the fourth position (index == 3). But you're skipping that because you move the length of the test string each time, rather than moving forward by only one character.
I think you want:
```
last_position = next_position + 1
```
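Putting it together, a minimal sketch of the corrected loop (assuming the rest of the function matches the version in the question) would be:
```
def find_last(s, c):
    last_position = 0
    result = -1
    while True:
        next_position = s.find(c, last_position)
        if next_position == -1:
            break
        result = next_position
        # Advance by one character, not by len(c), so overlapping
        # matches are not skipped.
        last_position = next_position + 1
    return result

print(find_last("aaaaa", "aa"))  # 3
```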
|
55,210,888
|
I faced a problem when I installed python-pptx with conda in a clean environment:
`conda install -c conda-forge python-pptx`. After the install finished successfully, I tried to import the pptx module and got the following error:
>
>
> ```
> >>> import pptx
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\__init__.py", line 13, in <module>
> from pptx.api import Presentation # noqa
> File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\api.py", line 17, in <module>
> from .package import Package
> File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\package.py", line 13, in <module>
> from .opc.package import OpcPackage
> File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\opc\package.py", line 13, in <module>
> from .oxml import CT_Relationships, serialize_part_xml
> File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\opc\oxml.py", line 12, in <module>
> from lxml import etree
> ImportError: DLL load failed: The specified module could not be found.
>
> ```
>
>
But if I install this library (python-pptx) with pip like this (also into a new clean environment):
```
pip install python-pptx
```
this works.
I have the following versions: Python 3.7.1, python-pptx 0.6.17, lxml 4.3.0.
Do you have any ideas about this issue?
|
2019/03/17
|
[
"https://Stackoverflow.com/questions/55210888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9869122/"
] |
If you are allowed to use built-in functions, you could do this:
```
idx = s[::-1].find(c[::-1])
return len(s) - (idx + len(c)) if idx >= 0 else -1
```
|
It's because you're advancing the search position by the length of the found substring, thus missing the last match.
```
def find_last(s, c):
last_position = 0
result = -1
while True:
next_position = s.find(c, last_position)
if next_position == -1:
break
result = next_position
#last_position = next_position + len(c)
last_position += 1
return result
print(find_last("aaaaa", "aa")) # -> 3
```
You could also use the built-in Python function [`rindex()`](https://docs.python.org/3/library/stdtypes.html#str.rindex), which returns the highest index at which the substring is found (i.e. the first match counting from the end of the string)
```
print("aaaaa".rindex("aa")) # -> 3
```
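One caveat: `rindex()` raises a `ValueError` when the substring is not found, whereas the question's function returns -1; `rfind()` matches that behaviour:
```
print("aaaaa".rfind("aa"))  # 3
print("aaaaa".rfind("b"))   # -1
# "aaaaa".rindex("b") would raise ValueError instead of returning -1
```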
|
49,105,693
|
I have the following code:
```
import csv
import requests
from bs4 import BeautifulSoup
import datetime
with open("D:/python/sursa_alimentare.csv", "w+") as f:
writer = csv.writer(f)
writer.writerow(["Descriere", "Pret"])`
```
Because I run this quite often, I want to save the csv file with a name that includes the date and time.
Any help would be appreciated. Thank you.
|
2018/03/05
|
[
"https://Stackoverflow.com/questions/49105693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8947024/"
] |
you have to use `.strftime`
```
filename = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M.csv")
with open(filename, "w+") as f:
writer = csv.writer(f)
writer.writerow(["Descriere", "Pret"])`
```
Here are some details: <https://www.tutorialspoint.com/python/time_strftime.htm>
|
I guess this might help you add datetime to your filename,
```
import csv
import requests
from bs4 import BeautifulSoup
import datetime
file_name = 'sursa_alimentare-'+str(datetime.datetime.now())+'.csv'
with open(file_name, "w+") as f:
writer = csv.writer(f)
writer.writerow(["Descriere", "Pret"])
```
|
49,105,693
|
I have the following code:
```
import csv
import requests
from bs4 import BeautifulSoup
import datetime
with open("D:/python/sursa_alimentare.csv", "w+") as f:
writer = csv.writer(f)
writer.writerow(["Descriere", "Pret"])`
```
Because I run this quite often, I want to save the csv file with a name that includes the date and time.
Any help would be appreciated. Thank you.
|
2018/03/05
|
[
"https://Stackoverflow.com/questions/49105693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8947024/"
] |
Avoid using `datetime.now()` directly, as this will create `:` in the filename, which would fail if used on a Windows file system. Instead use [`strftime()`](https://docs.python.org/3.6/library/datetime.html?highlight=strptime#strftime-and-strptime-behavior) to add the required formatting:
```
from datetime import datetime
filename = datetime.now().strftime('sursa_alimentare-%Y-%m-%d-%H-%M.csv')
with open(filename, "w+") as f_output:
csv_output = csv.writer(f_output)
csv_output.writerow(["Descriere", "Pret"])
```
Giving a filename such as:
```
sursa_alimentare-2018-03-05-09-01.csv
```
|
I guess this might help you add datetime to your filename,
```
import csv
import requests
from bs4 import BeautifulSoup
import datetime
file_name = 'sursa_alimentare-'+str(datetime.datetime.now())+'.csv'
with open(file_name, "w+") as f:
writer = csv.writer(f)
writer.writerow(["Descriere", "Pret"])
```
|
21,265,633
|
I need to read a huge (larger than memory) unquoted TSV file. Fields may contain the string "\n". However, python tries to be clever and split that string in two. So for example a row containing:
```
cat dog fish\nchips 4.50
```
gets split into two lines:
```
['cat', 'dog', 'fish']
['chips', 4.5]
```
What I want is a single line:
```
['cat', 'dog', 'fish\nchips', 4.5]
```
How can I make python stop being clever and just split lines on 0x0a?
My code is:
```
with open(path, 'r') as file:
for line in file:
row = line.split("\t")
```
Quoting the TSV file is not an option since I don't create it myself.
|
2014/01/21
|
[
"https://Stackoverflow.com/questions/21265633",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2400966/"
] |
This already works correctly; a file with a literal `\` followed by a literal `n` character (two bytes) will **never** be seen by Python as a newline.
What you have, then, is a single `\n` character, an actual newline. The *rest* of your file is separated by the conventional Windows `\r\n` line separator.
Use [`io.open()`](http://docs.python.org/2/library/io.html#io.open) to control how newlines are to be treated:
```
import io
with io.open(path, newline='\r\n') as infh:
for line in infh:
row = line.strip().split('\t')
```
Demo:
```
>>> import io
>>> with open('/tmp/test.txt', 'wb') as outfh:
... outfh.write('cat\tdog\tfish\nchips\t4.50\r\nsnake\tegg\tspam\nham\t42.38\r\n')
...
>>> with io.open('/tmp/test.txt', newline='\r\n') as infh:
... for line in infh:
... row = line.strip().split('\t')
... print row
...
[u'cat', u'dog', u'fish\nchips', u'4.50']
[u'snake', u'egg', u'spam\nham', u'42.38']
```
Note that `io.open()` also decodes your file data to unicode; you may need to specify an explicit encoding for non-ASCII file data.
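For non-ASCII data, a minimal sketch would pass the encoding explicitly (`utf-8` here is just an assumed example):
```
import io

# newline='\r\n' keeps bare '\n' characters inside fields intact;
# the encoding should match whatever the file actually uses.
with io.open(path, newline='\r\n', encoding='utf-8') as infh:
    for line in infh:
        row = line.strip().split('\t')
```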
|
If your problem is with `.readline()` and splitting on `\t`, try using the built-in `csv` module:
```
import csv
with open(path, 'r') as file:
reader = csv.reader(file, delimiter='\t')  # or csv.DictReader - I like DictReader
row = next(reader)
```
It handles these things for us.
|
20,998,832
|
I've run the brown-clustering algorithm from <https://github.com/percyliang/brown-cluster> and also a Python implementation <https://github.com/mheilman/tan-clustering>. They both give some sort of binary string and another integer for each unique token. For example:
```
0 the 6
10 chased 3
110 dog 2
1110 mouse 2
1111 cat 2
```
**What does the binary and the integer mean?**
From the first [link](https://github.com/percyliang/brown-cluster), the binary is known as a `bit-string`, see <http://saffron.deri.ie/acl_acl/document/ACL_ANTHOLOGY_ACL_P11-1053/>
But how do I tell from the output that `dog and mouse and cat` is one cluster and `the and chased` is not in the same cluster?
|
2014/01/08
|
[
"https://Stackoverflow.com/questions/20998832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
If I understand correctly, the algorithm gives you a tree and you need to truncate it at some level to get clusters. In the case of these bit strings, you just take the first `L` characters.
For example, cutting at the second character gives you two clusters
```
10 chased
11 dog
11 mouse
11 cat
```
At the third character you get
```
110 dog
111 mouse
111 cat
```
The cutting strategy is a different subject though.
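To make the truncation concrete, here is a minimal Python sketch (the word-to-bit-string mapping is just the example from the question):
```
from collections import defaultdict

bitstrings = {
    "the": "0",
    "chased": "10",
    "dog": "110",
    "mouse": "1110",
    "cat": "1111",
}

def clusters_at(bitstrings, L):
    # Group words whose bit strings share the same first L characters.
    groups = defaultdict(list)
    for word, bits in bitstrings.items():
        groups[bits[:L]].append(word)
    return dict(groups)

print(clusters_at(bitstrings, 2))
# {'0': ['the'], '10': ['chased'], '11': ['dog', 'mouse', 'cat']}
```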
|
My guess is:
According to Figure 2 in [Brown et al 1992](http://acl.ldc.upenn.edu/J/J92/J92-4003.pdf), the clustering is hierarchical and to get from the root to each word "leaf" you have to make an up/down decision. If up is 0 and down is 1, you can represent each word as a bit string.
From <https://github.com/mheilman/tan-clustering/blob/master/class_lm_cluster.py> :
```
# the 0/1 bit to add when walking up the hierarchy
# from a word to the top-level cluster
```
|
20,998,832
|
I've run the brown-clustering algorithm from <https://github.com/percyliang/brown-cluster> and also a Python implementation <https://github.com/mheilman/tan-clustering>. They both give some sort of binary string and another integer for each unique token. For example:
```
0 the 6
10 chased 3
110 dog 2
1110 mouse 2
1111 cat 2
```
**What does the binary and the integer mean?**
From the first [link](https://github.com/percyliang/brown-cluster), the binary is known as a `bit-string`, see <http://saffron.deri.ie/acl_acl/document/ACL_ANTHOLOGY_ACL_P11-1053/>
But how do I tell from the output that `dog and mouse and cat` is one cluster and `the and chased` is not in the same cluster?
|
2014/01/08
|
[
"https://Stackoverflow.com/questions/20998832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
The integers are counts of how many times the word is seen in the document. (I have tested this in the python implementation.)
From the comments at the top of the python implementation:
>
> Instead of using a window (e.g., as in Brown et al., sec. 4), this
> code computed PMI using the probability that two randomly selected
> clusters from the same document will be c1 and c2. Also, since the
> total numbers of cluster tokens and pairs are constant across pairs,
> this code use counts instead of probabilities.
>
>
>
From the code in the python implementation we see that it outputs the word, the bit string and the word counts.
```
def save_clusters(self, output_path):
with open(output_path, 'w') as f:
for w in self.words:
f.write("{}\t{}\t{}\n".format(w, self.get_bitstring(w),
self.word_counts[w]))
```
|
My guess is:
According to Figure 2 in [Brown et al 1992](http://acl.ldc.upenn.edu/J/J92/J92-4003.pdf), the clustering is hierarchical and to get from the root to each word "leaf" you have to make an up/down decision. If up is 0 and down is 1, you can represent each word as a bit string.
From <https://github.com/mheilman/tan-clustering/blob/master/class_lm_cluster.py> :
```
# the 0/1 bit to add when walking up the hierarchy
# from a word to the top-level cluster
```
|
20,998,832
|
I've run the brown-clustering algorithm from <https://github.com/percyliang/brown-cluster> and also a Python implementation <https://github.com/mheilman/tan-clustering>. They both give some sort of binary string and another integer for each unique token. For example:
```
0 the 6
10 chased 3
110 dog 2
1110 mouse 2
1111 cat 2
```
**What does the binary and the integer mean?**
From the first [link](https://github.com/percyliang/brown-cluster), the binary is known as a `bit-string`, see <http://saffron.deri.ie/acl_acl/document/ACL_ANTHOLOGY_ACL_P11-1053/>
But how do I tell from the output that `dog and mouse and cat` is one cluster and `the and chased` is not in the same cluster?
|
2014/01/08
|
[
"https://Stackoverflow.com/questions/20998832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
In Percy Liang's implementation (<https://github.com/percyliang/brown-cluster>), the `-C` parameter allows you to specify the number of word clusters. The output contains all the words in the corpus, together with a bit-string annotating the cluster and the word frequency in the following format: `<bit string> <word> <word frequency>`. The number of distinct bit strings in the output equals the number of desired clusters and the words with the same bit string belong to the same cluster.
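For illustration, a small sketch that parses such an output and groups words by bit string might look like this (the file name `paths` is an assumption about the wcluster output location, not something stated above):
```
from collections import defaultdict

def read_clusters(path):
    # Each line is "<bit string>\t<word>\t<frequency>"; words that share
    # a bit string belong to the same cluster.
    clusters = defaultdict(list)
    with open(path) as f:
        for line in f:
            bits, word, freq = line.rstrip("\n").split("\t")
            clusters[bits].append(word)
    return dict(clusters)

clusters = read_clusters("paths")  # assumed output file name
for bits, words in clusters.items():
    print(bits, words)
```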
|
My guess is:
According to Figure 2 in [Brown et al 1992](http://acl.ldc.upenn.edu/J/J92/J92-4003.pdf), the clustering is hierarchical and to get from the root to each word "leaf" you have to make an up/down decision. If up is 0 and down is 1, you can represent each word as a bit string.
From <https://github.com/mheilman/tan-clustering/blob/master/class_lm_cluster.py> :
```
# the 0/1 bit to add when walking up the hierarchy
# from a word to the top-level cluster
```
|
20,998,832
|
I've run the brown-clustering algorithm from <https://github.com/percyliang/brown-cluster> and also a Python implementation <https://github.com/mheilman/tan-clustering>. They both give some sort of binary string and another integer for each unique token. For example:
```
0 the 6
10 chased 3
110 dog 2
1110 mouse 2
1111 cat 2
```
**What does the binary and the integer mean?**
From the first [link](https://github.com/percyliang/brown-cluster), the binary is known as a `bit-string`, see <http://saffron.deri.ie/acl_acl/document/ACL_ANTHOLOGY_ACL_P11-1053/>
But how do I tell from the output that `dog and mouse and cat` is one cluster and `the and chased` is not in the same cluster?
|
2014/01/08
|
[
"https://Stackoverflow.com/questions/20998832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
Change your command to: `./wcluster --text input.txt --c 3`
`--c number`
This number is the number of clusters, and the default is 50. You can't distinguish the different clusters of words because the default input has only three sentences. Change 50 clusters to 3 clusters and you can tell the difference.
I entered three tweets as the input and gave 3 as the cluster parameter:
[](https://i.stack.imgur.com/W4POn.png)
|
My guess is:
According to Figure 2 in [Brown et al 1992](http://acl.ldc.upenn.edu/J/J92/J92-4003.pdf), the clustering is hierarchical and to get from the root to each word "leaf" you have to make an up/down decision. If up is 0 and down is 1, you can represent each word as a bit string.
From <https://github.com/mheilman/tan-clustering/blob/master/class_lm_cluster.py> :
```
# the 0/1 bit to add when walking up the hierarchy
# from a word to the top-level cluster
```
|
20,998,832
|
I've run the brown-clustering algorithm from <https://github.com/percyliang/brown-cluster> and also a Python implementation <https://github.com/mheilman/tan-clustering>. They both give some sort of binary string and another integer for each unique token. For example:
```
0 the 6
10 chased 3
110 dog 2
1110 mouse 2
1111 cat 2
```
**What does the binary and the integer mean?**
From the first [link](https://github.com/percyliang/brown-cluster), the binary is known as a `bit-string`, see <http://saffron.deri.ie/acl_acl/document/ACL_ANTHOLOGY_ACL_P11-1053/>
But how do I tell from the output that `dog and mouse and cat` is one cluster and `the and chased` is not in the same cluster?
|
2014/01/08
|
[
"https://Stackoverflow.com/questions/20998832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
If I understand correctly, the algorithm gives you a tree and you need to truncate it at some level to get clusters. In the case of these bit strings, you just take the first `L` characters.
For example, cutting at the second character gives you two clusters
```
10 chased
11 dog
11 mouse
11 cat
```
At the third character you get
```
110 dog
111 mouse
111 cat
```
The cutting strategy is a different subject though.
|
The integers are counts of how many times the word is seen in the document. (I have tested this in the python implementation.)
From the comments at the top of the python implementation:
>
> Instead of using a window (e.g., as in Brown et al., sec. 4), this
> code computed PMI using the probability that two randomly selected
> clusters from the same document will be c1 and c2. Also, since the
> total numbers of cluster tokens and pairs are constant across pairs,
> this code use counts instead of probabilities.
>
>
>
From the code in the python implementation we see that it outputs the word, the bit string and the word counts.
```
def save_clusters(self, output_path):
with open(output_path, 'w') as f:
for w in self.words:
f.write("{}\t{}\t{}\n".format(w, self.get_bitstring(w),
self.word_counts[w]))
```
|
20,998,832
|
I've run the brown-clustering algorithm from <https://github.com/percyliang/brown-cluster> and also a Python implementation <https://github.com/mheilman/tan-clustering>. They both give some sort of binary string and another integer for each unique token. For example:
```
0 the 6
10 chased 3
110 dog 2
1110 mouse 2
1111 cat 2
```
**What does the binary and the integer mean?**
From the first [link](https://github.com/percyliang/brown-cluster), the binary is known as a `bit-string`, see <http://saffron.deri.ie/acl_acl/document/ACL_ANTHOLOGY_ACL_P11-1053/>
But how do I tell from the output that `dog and mouse and cat` is one cluster and `the and chased` is not in the same cluster?
|
2014/01/08
|
[
"https://Stackoverflow.com/questions/20998832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
If I understand correctly, the algorithm gives you a tree and you need to truncate it at some level to get clusters. In the case of these bit strings, you just take the first `L` characters.
For example, cutting at the second character gives you two clusters
```
10 chased
11 dog
11 mouse
11 cat
```
At the third character you get
```
110 dog
111 mouse
111 cat
```
The cutting strategy is a different subject though.
|
In Percy Liang's implementation (<https://github.com/percyliang/brown-cluster>), the `-C` parameter allows you to specify the number of word clusters. The output contains all the words in the corpus, together with a bit-string annotating the cluster and the word frequency in the following format: `<bit string> <word> <word frequency>`. The number of distinct bit strings in the output equals the number of desired clusters and the words with the same bit string belong to the same cluster.
|
20,998,832
|
I've run the brown-clustering algorithm from <https://github.com/percyliang/brown-cluster> and also a Python implementation <https://github.com/mheilman/tan-clustering>. They both give some sort of binary string and another integer for each unique token. For example:
```
0 the 6
10 chased 3
110 dog 2
1110 mouse 2
1111 cat 2
```
**What does the binary and the integer mean?**
From the first [link](https://github.com/percyliang/brown-cluster), the binary is known as a `bit-string`, see <http://saffron.deri.ie/acl_acl/document/ACL_ANTHOLOGY_ACL_P11-1053/>
But how do I tell from the output that `dog and mouse and cat` is one cluster and `the and chased` is not in the same cluster?
|
2014/01/08
|
[
"https://Stackoverflow.com/questions/20998832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
If I understand correctly, the algorithm gives you a tree and you need to truncate it at some level to get clusters. In the case of these bit strings, you just take the first `L` characters.
For example, cutting at the second character gives you two clusters
```
10 chased
11 dog
11 mouse
11 cat
```
At the third character you get
```
110 dog
111 mouse
111 cat
```
The cutting strategy is a different subject though.
|
Change your command to: `./wcluster --text input.txt --c 3`
`--c number`
This number is the number of clusters, and the default is 50. You can't distinguish the different clusters of words because the default input has only three sentences. Change 50 clusters to 3 clusters and you can tell the difference.
I entered three tweets as the input and gave 3 as the cluster parameter:
[](https://i.stack.imgur.com/W4POn.png)
|
20,998,832
|
I've run the brown-clustering algorithm from <https://github.com/percyliang/brown-cluster> and also a Python implementation <https://github.com/mheilman/tan-clustering>. They both give some sort of binary string and another integer for each unique token. For example:
```
0 the 6
10 chased 3
110 dog 2
1110 mouse 2
1111 cat 2
```
**What does the binary and the integer mean?**
From the first [link](https://github.com/percyliang/brown-cluster), the binary is known as a `bit-string`, see <http://saffron.deri.ie/acl_acl/document/ACL_ANTHOLOGY_ACL_P11-1053/>
But how do I tell from the output that `dog and mouse and cat` is one cluster and `the and chased` is not in the same cluster?
|
2014/01/08
|
[
"https://Stackoverflow.com/questions/20998832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
In Percy Liang's implementation (<https://github.com/percyliang/brown-cluster>), the `-C` parameter allows you to specify the number of word clusters. The output contains all the words in the corpus, together with a bit-string annotating the cluster and the word frequency in the following format: `<bit string> <word> <word frequency>`. The number of distinct bit strings in the output equals the number of desired clusters and the words with the same bit string belong to the same cluster.
|
The integers are counts of how many times the word is seen in the document. (I have tested this in the python implementation.)
From the comments at the top of the python implementation:
>
> Instead of using a window (e.g., as in Brown et al., sec. 4), this
> code computed PMI using the probability that two randomly selected
> clusters from the same document will be c1 and c2. Also, since the
> total numbers of cluster tokens and pairs are constant across pairs,
> this code use counts instead of probabilities.
>
>
>
From the code in the python implementation we see that it outputs the word, the bit string and the word counts.
```
def save_clusters(self, output_path):
with open(output_path, 'w') as f:
for w in self.words:
f.write("{}\t{}\t{}\n".format(w, self.get_bitstring(w),
self.word_counts[w]))
```
|
20,998,832
|
I've run the brown-clustering algorithm from <https://github.com/percyliang/brown-cluster> and also a Python implementation <https://github.com/mheilman/tan-clustering>. They both give some sort of binary string and another integer for each unique token. For example:
```
0 the 6
10 chased 3
110 dog 2
1110 mouse 2
1111 cat 2
```
**What does the binary and the integer mean?**
From the first [link](https://github.com/percyliang/brown-cluster), the binary is known as a `bit-string`, see <http://saffron.deri.ie/acl_acl/document/ACL_ANTHOLOGY_ACL_P11-1053/>
But how do I tell from the output that `dog and mouse and cat` is one cluster and `the and chased` is not in the same cluster?
|
2014/01/08
|
[
"https://Stackoverflow.com/questions/20998832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
Change your command to: `./wcluster --text input.txt --c 3`
`--c number`
This number is the number of clusters, and the default is 50. You can't distinguish the different clusters of words because the default input has only three sentences. Change 50 clusters to 3 clusters and you can tell the difference.
I entered three tweets as the input and gave 3 as the cluster parameter:
[](https://i.stack.imgur.com/W4POn.png)
|
The integers are counts of how many times the word is seen in the document. (I have tested this in the python implementation.)
From the comments at the top of the python implementation:
>
> Instead of using a window (e.g., as in Brown et al., sec. 4), this
> code computed PMI using the probability that two randomly selected
> clusters from the same document will be c1 and c2. Also, since the
> total numbers of cluster tokens and pairs are constant across pairs,
> this code use counts instead of probabilities.
>
>
>
From the code in the python implementation we see that it outputs the word, the bit string and the word counts.
```
def save_clusters(self, output_path):
with open(output_path, 'w') as f:
for w in self.words:
f.write("{}\t{}\t{}\n".format(w, self.get_bitstring(w),
self.word_counts[w]))
```
|
43,303,575
|
I am trying to install Cassandra on windows 10 localhost.
I am getting error as `Can't detect Python version!`
I am trying this way
Downloaded and extracted Cassandra in `C:\wamp64\apache-cassandra-3.10`
Set `Set-ExecutionPolicy Unrestricted` in Windows powershell
From Windows CMD
```
cd C:\wamp64\apache-cassandra-3.10\bin
C:\wamp64\apache-cassandra-3.10\bin>cassandra.bat -f
```
Cassandra is now running so I stopped it by `Control-C`
Then I try to run `cqlsh` by following command
```
C:\wamp64\apache-cassandra-3.10\bin>cqlsh.bat
```
But I got the error `Can't detect Python version!`
So I downloaded and installed Python 2.7.13 in `C:\wamp64\python`
I added an environment path for Python in System Properties: `C:\wamp64\python\`
I extracted Thrift in `C:\wamp64\python\thrift-0.10.0`
Then I installed Thrift's Python package like this:
```
C:\wamp64\python\thrift-0.10.0>python setup.py install
```
But again I am getting an error when running `cqlsh`:
```
C:\wamp64\apache-cassandra-3.10\bin>cqlsh.bat
Can't detect Python version!
```
Please see and suggest what step I have missed in the Cassandra installation that causes this error.
Thanks
**Edit**
I reinstalled everything from scratch carefully and now I am getting this error:
```
C:\wamp64\apache-cassandra-3.10\pylib>python setup.py install
Traceback (most recent call last):
File "setup.py", line 33, in <module>
ext_modules=get_extensions(),
File "setup.py", line 26, in get_extensions
from Cython.Build import cythonize
ImportError: No module named Cython.Build
C:\wamp64\apache-cassandra-3.10\pylib>cd C:\wamp64\apache-cassandra-3.10\bin
C:\wamp64\apache-cassandra-3.10\bin>python cqlsh localhost 9160
File "cqlsh", line 20
python -c 'import sys; sys.exit(not (0x020700b0 < sys.hexversion < 0x03000000))' 2>/dev/null \
^
SyntaxError: invalid syntax
C:\wamp64\apache-cassandra-3.10\bin>
```
Please see and suggest any possible way to resolve these errors.
Thanks
|
2017/04/09
|
[
"https://Stackoverflow.com/questions/43303575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
I installed the latest version of Apache Cassandra 3.11.9 for Windows. My Python env variable was already set for Python 3 (Python 3.8), as I actively use Python 3.8. I was continuously getting the error, then I installed Python 2 inside 'Apache Cassandra 3.11.9\bin'. I did not need to reset my env variable to Python 2.
More on the solution:
<https://susant.medium.com/simple-way-to-install-cassandra-in-windows-10-6497e93989e6>
|
I think you are following the wrong Python installation procedure. **Please uninstall all Python instances using the Programs and Features section of Control Panel, then install Python obtained from [python.org](https://www.python.org/). Ensure the "add to PATH" option is checked at installation time. Verify the Python installation by typing `python` in a CMD window.**
After that, cd to your Cassandra installation's bin folder and type `cassandra.bat -f`. It will launch a Cassandra server instance. Never stop it, because cqlsh needs a running Cassandra instance.
Then open another CMD window, cd to your Cassandra installation's bin folder, and type `cqlsh`. It will connect to the running Cassandra server instance, and the CMD window will switch to cqlsh console mode.
Successfully tested and verified on Windows 7 64-bit with Python 2.7 64-bit. *If you have time, please check it on Python 3.6 too...*
|
43,303,575
|
I am trying to install Cassandra on windows 10 localhost.
I am getting error as `Can't detect Python version!`
I am trying this way
Downloaded and extracted Cassandra in `C:\wamp64\apache-cassandra-3.10`
Set `Set-ExecutionPolicy Unrestricted` in Windows powershell
From Windows CMD
```
cd C:\wamp64\apache-cassandra-3.10\bin
C:\wamp64\apache-cassandra-3.10\bin>cassandra.bat -f
```
Cassandra is now running so I stopped it by `Control-C`
Then I try to run `cqlsh` by following command
```
C:\wamp64\apache-cassandra-3.10\bin>cqlsh.bat
```
But I got the error `Can't detect Python version!`
So I downloaded and installed Python 2.7.13 in `C:\wamp64\python`
I added an environment path for Python in System Properties: `C:\wamp64\python\`
I extracted Thrift in `C:\wamp64\python\thrift-0.10.0`
Then I installed Thrift's Python package like this:
```
C:\wamp64\python\thrift-0.10.0>python setup.py install
```
But again I am getting an error when running `cqlsh`:
```
C:\wamp64\apache-cassandra-3.10\bin>cqlsh.bat
Can't detect Python version!
```
Please see and suggest what step I have missed in the Cassandra installation that causes this error.
Thanks
**Edit**
I reinstalled everything from scratch carefully and now I am getting this error:
```
C:\wamp64\apache-cassandra-3.10\pylib>python setup.py install
Traceback (most recent call last):
File "setup.py", line 33, in <module>
ext_modules=get_extensions(),
File "setup.py", line 26, in get_extensions
from Cython.Build import cythonize
ImportError: No module named Cython.Build
C:\wamp64\apache-cassandra-3.10\pylib>cd C:\wamp64\apache-cassandra-3.10\bin
C:\wamp64\apache-cassandra-3.10\bin>python cqlsh localhost 9160
File "cqlsh", line 20
python -c 'import sys; sys.exit(not (0x020700b0 < sys.hexversion < 0x03000000))' 2>/dev/null \
^
SyntaxError: invalid syntax
C:\wamp64\apache-cassandra-3.10\bin>
```
Please see and suggest any possible way to resolve these errors.
Thanks
|
2017/04/09
|
[
"https://Stackoverflow.com/questions/43303575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
I installed the latest version of Apache Cassandra 3.11.9 for Windows. My Python env variable was already set for Python 3 (Python 3.8), as I actively use Python 3.8. I was continuously getting the error, then I installed Python 2 inside 'Apache Cassandra 3.11.9\bin'. I did not need to reset my env variable to Python 2.
More on the solution:
<https://susant.medium.com/simple-way-to-install-cassandra-in-windows-10-6497e93989e6>
|
I had this issue as I was running Python 3 and Python 2 on Windows.
It seems the problem was a missing PATH entry for Python 2.
To check, run in cmd:
```
python --version
```
If you get nothing, it means that the PATH is not added.
Note:
To add path when installing Python2 you need to scroll down to Customize Python 2.x.x -> Add python.exe to Path (unlike Python 3 where you have a checkbox)
|
67,281,038
|
I have written code for face recognition in Python.
My code works perfectly as a `.py` file (without any errors or warnings), but after making a `.exe` file out of it through `pyinstaller`, it won't work at all.
I have searched around for this and tried the following methods, but it still won't work.
First method: I made the following changes in the `.spec` file.
(Windows OS)
>
> main.spec file
>
>
>
```
block_cipher = None
face_models = [
('.\\face_recognition_models\\models\\dlib_face_recognition_resnet_model_v1.dat', './face_recognition_models/models'),
('.\\face_recognition_models\\models\\mmod_human_face_detector.dat', './face_recognition_models/models'),
('.\\face_recognition_models\\models\\shape_predictor_5_face_landmarks.dat', './face_recognition_models/models'),
('.\\face_recognition_models\\models\\shape_predictor_68_face_landmarks.dat', './face_recognition_models/models'),
]
a = Analysis(['<your python script name.py>'],
pathex=['<path to working directory>'],
binaries=face_models,
datas=[],
hiddenimports=['scipy._lib.messagestream', 'scipy', 'scipy.signal', 'scipy.signal.bsplines', 'scipy.special', 'scipy.special._ufuncs_cxx',
'scipy.linalg.cython_blas',
'scipy.linalg.cython_lapack',
'scipy.integrate',
'scipy.integrate.quadrature',
'scipy.integrate.odepack',
'scipy.integrate._odepack',
'scipy.integrate.quadpack',
'scipy.integrate._quadpack',
'scipy.integrate._ode',
'scipy.integrate.vode',
'scipy.integrate._dop', 'scipy._lib', 'scipy._build_utils','scipy.__config__',
'scipy.integrate.lsoda', 'scipy.cluster', 'scipy.constants','scipy.fftpack','scipy.interpolate','scipy.io','scipy.linalg','scipy.misc','scipy.ndimage','scipy.odr','scipy.optimize','scipy.setup','scipy.sparse','scipy.spatial','scipy.special','scipy.stats','scipy.version'],
hookspath=[],
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher)
a.datas += Tree('./scipy-extra-dll', prefix=None)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
name='<your python script name>',
debug=False,
strip=False,
upx=True,
runtime_tmpdir=None,
console=True )
```
another method
```
#replaced
datas=[]
#with
datas=[('shape_predictor_68_face_landmarks.dat','./face_recognition_models/models'),('shape_predictor_5_face_landmarks.dat','./face_recognition_models/models'),('mmod_human_face_detector.dat','./face_recognition_models/models'),('dlib_face_recognition_resnet_model_v1.dat','./face_recognition_models/models')]
```
Some people said it might be a problem with the `hook-scipy.py` file, so here it is
>
> hook-scipy.py
>
>
>
```
import os
import glob
from PyInstaller.utils.hooks import get_module_file_attribute
from PyInstaller.compat import is_win
from PyInstaller.utils.hooks import collect_submodules
from PyInstaller.utils.hooks import collect_data_files
hiddenimports = collect_submodules('scipy')
datas = collect_data_files('scipy')
binaries = []
# package the DLL bundle that official scipy wheels for Windows ship
# The DLL bundle will either be in extra-dll on windows proper
# and in .libs if installed on a virtualenv created from MinGW (Git-Bash
# for example)
if is_win:
extra_dll_locations = ['extra-dll', '.libs']
for location in extra_dll_locations:
dll_glob = os.path.join(os.path.dirname(
get_module_file_attribute('scipy')), location, "*.dll")
if glob.glob(dll_glob):
binaries.append((dll_glob, "."))
# collect library-wide utility extension modules
hiddenimports = ['scipy._lib.%s' % m for m in [
'messagestream', "_ccallback_c", "_fpumode"]]
```
please help me to solve this.
|
2021/04/27
|
[
"https://Stackoverflow.com/questions/67281038",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14332805/"
] |
First of all, make sure your function app can be compiled.
Second, the format of your publish URL is fine.
So maybe this problem is not on the Visual Studio side. Please make sure the function app is not stopped or restarting, that the scm site is not protected by networking restrictions, and that you are logged in with the right Microsoft account in VS.
If all of the above still doesn't work, you can try another deploy method, such as the [command line](https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=windows%2Ccsharp%2Cbash#publish) or [FTP](https://learn.microsoft.com/en-us/azure/app-service/deploy-ftp?tabs=portal) and [so on](https://learn.microsoft.com/en-us/azure/azure-functions/functions-deployment-technologies#deployment-technology-details).
And for your situation, if you only made a small change, [incremental deployment](https://learn.microsoft.com/en-us/azure/azure-functions/functions-continuous-deployment) may be a better choice.
|
In my case, opening the Azure Functions app in my browser helped. Until then, it was giving an error when I tried to publish it in Visual Studio.
|
45,457,324
|
I have set up a spark cluster and all the nodes have access to network shared storage where they can access a file to read. I am running this in a python jupyter notebook. It was working a few days ago, and now it stopped working but I'm not sure why, or what I have changed.
I have tried restarting the nodes and master.
I have also tried copying the csv file to a new directory and pointing the spark.read there, but it still gives the same error.
When I delete the csv file, it gives a much shorter error saying 'File not found'
Any help would be greatly appreciated.
This is my code:
```
from pyspark.sql import SparkSession
from pyspark.conf import SparkConf
spark = SparkSession.builder \
.master("spark://IP:PORT") \
.appName("app_1") \
.config(conf=SparkConf()) \
.getOrCreate()
df = spark.read.csv("/nas/file123.csv")
string1 = df.rdd.map(lambda x: x.column1).collect()
```
However, I get this error:
```
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-2-12bd938122cd> in <module>()
29
30
---> 31 string1 = df.rdd.map(lambda x: x.column1).collect()
32
33
/home/hjk/Downloads/spark-2.1.0-bin-hadoop2.7/python/pyspark/rdd.pyc in collect(self)
807 """
808 with SCCallSiteSync(self.context) as css:
--> 809 port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
810 return list(_load_from_socket(port, self._jrdd_deserializer))
811
/usr/local/lib/python2.7/dist-packages/py4j/java_gateway.pyc in __call__(self, *args)
1131 answer = self.gateway_client.send_command(command)
1132 return_value = get_return_value(
-> 1133 answer, self.gateway_client, self.target_id, self.name)
1134
1135 for temp_arg in temp_args:
/home/hjk/Downloads/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/utils.pyc in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
/usr/local/lib/python2.7/dist-packages/py4j/protocol.pyc in get_return_value(answer, gateway_client, target_id, name)
317 raise Py4JJavaError(
318 "An error occurred while calling {0}{1}{2}.\n".
--> 319 format(target_id, ".", name), value)
320 else:
321 raise Py4JError(
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 4 in stage 3.0 failed 4 times, most recent failure: Lost task 4.3 in stage 3.0 (TID 37, executor 2): java.io.FileNotFoundException: File file:/nas/file123.csv does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:157)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:102)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.hasNext(SerDeUtil.scala:117)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:112)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:504)
at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:328)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:453)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: File file:/nas/file123.csv does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:157)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:102)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.hasNext(SerDeUtil.scala:117)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:112)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:504)
at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:328)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269)
```
|
2017/08/02
|
[
"https://Stackoverflow.com/questions/45457324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8236204/"
] |
From the error it looks like Spark is checking for the file on the local filesystem. Just make sure that the file is present at the specified path. Also try the suggestions below.
1. Try an explicit file URI: file:///nas/file123.csv (sketched below)
2. Upload the file to HDFS and read it from an HDFS URI like hdfs:///...
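A minimal sketch of those two options, reusing the `spark` session and path from the question (the HDFS path is made up for illustration):
```
# read the CSV with an explicit local-file URI instead of a bare path;
# the file must still exist at that path on every worker node
df = spark.read.csv("file:///nas/file123.csv")

# or, if the file has been copied to HDFS first, point Spark at the HDFS URI
df = spark.read.csv("hdfs:///data/file123.csv")
```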
Hope this helps.
Regards,
Neeraj
|
If you are loading the data from a local directory, remember to make sure the file exists on all of your worker nodes.
|
62,246,786
|
I would like to run my Scrapy spider from a Python script.
I can call my spider with the following code,
```
subprocess.check_output(['scrapy crawl mySpider'])
```
Up to that point, all is well. But before that, I instantiate my spider class by initializing the start\_urls, and then the call to scrapy crawl doesn't work since it doesn't find the variable start\_urls.
```
from flask import Flask, jsonify, request
import scrapy
import subprocess
class ClassSpider(scrapy.Spider):
name = 'mySpider'
#start_urls = []
#pages = 0
news = []
def __init__(self, url, nbrPage):
self.pages = nbrPage
self.start_urls = url
def parse(self):
...
def run(self):
subprocess.check_output(['scrapy crawl mySpider'])
return self.news
app = Flask(__name__)
data = []
@app.route('/', methods=['POST'])
def getNews():
mySpiderClass = ClassSpider(request.json['url'], 2)
data.append(mySpider.run())
return jsonify({'data': data})
if __name__ == "__main__":
app.run(debug=True)
```
The error I get is:
TypeError: `__init__` missing 1 required positional argument: 'start\_url' and 'pages'
any help please?
|
2020/06/07
|
[
"https://Stackoverflow.com/questions/62246786",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13700256/"
] |
```
stock = {'meat':100,'fish':100,'bread':100, 'milk':100,'chips':100}
total = 0
for v in stock.values():
total += v
```
|
```
>>> from statistics import mean
>>> stock={'meat':100,'fish':100,'bread':100, 'milk':100,'chips':100}
>>> print(f"Total stock level : {mean(stock.values())*len(stock)}")
Total stock level : 500
```
|
62,246,786
|
I would like to run my Scrapy spider from a Python script.
I can call my spider with the following code,
```
subprocess.check_output(['scrapy crawl mySpider'])
```
Up to that point, all is well. But before that, I instantiate my spider class by initializing the start\_urls, and then the call to scrapy crawl doesn't work since it doesn't find the variable start\_urls.
```
from flask import Flask, jsonify, request
import scrapy
import subprocess
class ClassSpider(scrapy.Spider):
name = 'mySpider'
#start_urls = []
#pages = 0
news = []
def __init__(self, url, nbrPage):
self.pages = nbrPage
self.start_urls = url
def parse(self):
...
def run(self):
subprocess.check_output(['scrapy crawl mySpider'])
return self.news
app = Flask(__name__)
data = []
@app.route('/', methods=['POST'])
def getNews():
mySpiderClass = ClassSpider(request.json['url'], 2)
data.append(mySpider.run())
return jsonify({'data': data})
if __name__ == "__main__":
app.run(debug=True)
```
The error I get is:
TypeError: `__init__` missing 1 required positional argument: 'start\_url' and 'pages'
any help please?
|
2020/06/07
|
[
"https://Stackoverflow.com/questions/62246786",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13700256/"
] |
Your current code using `sum()` is the best way to do this, and I would recommend keeping things as-is. However, just for illustrative purposes, here's a for-loop method that does the same thing.
```py
stock = {'meat': 100, 'fish': 100, 'bread': 100, 'milk': 100, 'chips': 100}
totalstock = 0
for item, value in stock.items():
totalstock += value
print(f"Total stock level: {totalstock}")
```
**Note**: I intentionally used [`dict.items()`](https://docs.python.org/3/library/stdtypes.html#dict.items) here as it's almost always more useful when iterating over dictionaries. If you *really* only need the values from a dict, it's more efficient to use [`dict.values()`](https://docs.python.org/3/library/stdtypes.html#dict.values)
|
```
>>> from statistics import mean
>>> stock={'meat':100,'fish':100,'bread':100, 'milk':100,'chips':100}
>>> print(f"Total stock level : {mean(stock.values())*len(stock)}")
Total stock level : 500
```
|
62,246,786
|
I would like to run my Scrapy spider from a Python script.
I can call my spider with the following code,
```
subprocess.check_output(['scrapy crawl mySpider'])
```
Up to that point, all is well. But before that, I instantiate my spider class by initializing the start\_urls, and then the call to scrapy crawl doesn't work since it doesn't find the variable start\_urls.
```
from flask import Flask, jsonify, request
import scrapy
import subprocess
class ClassSpider(scrapy.Spider):
name = 'mySpider'
#start_urls = []
#pages = 0
news = []
def __init__(self, url, nbrPage):
self.pages = nbrPage
self.start_urls = url
def parse(self):
...
def run(self):
subprocess.check_output(['scrapy crawl mySpider'])
return self.news
app = Flask(__name__)
data = []
@app.route('/', methods=['POST'])
def getNews():
mySpiderClass = ClassSpider(request.json['url'], 2)
data.append(mySpider.run())
return jsonify({'data': data})
if __name__ == "__main__":
app.run(debug=True)
```
The error I get is:
TypeError: `__init__` missing 1 required positional argument: 'start\_url' and 'pages'
any help please?
|
2020/06/07
|
[
"https://Stackoverflow.com/questions/62246786",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13700256/"
] |
```
stock = {'meat':100,'fish':100,'bread':100, 'milk':100,'chips':100}
total = 0
for v in stock.values():
total += v
```
|
You can iterate through a dictionary in Python very easily.
```
for key in dict:
print(key)
```
So in your case you should do the following:
```
totalstock = 0
for key in stock:
totalstock += stock[key]
print(totalstock)
```
|
62,246,786
|
I would like to run my Scrapy spider from a Python script.
I can call my spider with the following code,
```
subprocess.check_output(['scrapy crawl mySpider'])
```
Up to that point, all is well. But before that, I instantiate my spider class by initializing the start\_urls, and then the call to scrapy crawl doesn't work since it doesn't find the variable start\_urls.
```
from flask import Flask, jsonify, request
import scrapy
import subprocess
class ClassSpider(scrapy.Spider):
name = 'mySpider'
#start_urls = []
#pages = 0
news = []
def __init__(self, url, nbrPage):
self.pages = nbrPage
self.start_urls = url
def parse(self):
...
def run(self):
subprocess.check_output(['scrapy crawl mySpider'])
return self.news
app = Flask(__name__)
data = []
@app.route('/', methods=['POST'])
def getNews():
mySpiderClass = ClassSpider(request.json['url'], 2)
data.append(mySpider.run())
return jsonify({'data': data})
if __name__ == "__main__":
app.run(debug=True)
```
The error I get is:
TypeError: `__init__` missing 1 required positional argument: 'start\_url' and 'pages'
any help please?
|
2020/06/07
|
[
"https://Stackoverflow.com/questions/62246786",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13700256/"
] |
Your current code using `sum()` is the best way to do this, and I would recommend keeping things as-is. However, just for illustrative purposes, here's a for-loop method that does the same thing.
```py
stock = {'meat': 100, 'fish': 100, 'bread': 100, 'milk': 100, 'chips': 100}
totalstock = 0
for item, value in stock.items():
totalstock += value
print(f"Total stock level: {totalstock}")
```
**Note**: I intentionally used [`dict.items()`](https://docs.python.org/3/library/stdtypes.html#dict.items) here as it's almost always more useful when iterating over dictionaries. If you *really* only need the values from a dict, it's more efficient to use [`dict.values()`](https://docs.python.org/3/library/stdtypes.html#dict.values)
|
You can iterate through a dictionary in Python very easily.
```
for key in dict:
print(key)
```
So in your case you should do the following:
```
totalstock = 0
for key in stock:
totalstock += stock[key]
print(totalstock)
```
|
56,128,397
|
I pulled the official mongo image from the Docker website and started a mongo container named `dataiomongo`.
I now want to connect to the mongodb inside the container using pymongo.
This is the python script I wrote:
```
from pprint import pprint
from pymongo import MongoClient
client = MongoClient('localhost', port=27017)
db = client.admin
server = db.command("serverStatus")
pprint(server)
```
The error that came is:
```
Traceback (most recent call last):
File "D:/dataio/test_mongo.py", line 8, in <module>
server = db.command("serverStatus")
File "D:\dataio\venv\lib\site-packages\pymongo\database.py", line 655, in command
read_preference) as (sock_info, slave_ok):
File "C:\Python27\Lib\contextlib.py", line 17, in __enter__
return self.gen.next()
File "D:\dataio\venv\lib\site-packages\pymongo\mongo_client.py", line 1135, in _socket_for_reads
server = topology.select_server(read_preference)
File "D:\dataio\venv\lib\site-packages\pymongo\topology.py", line 226, in select_server
address))
File "D:\dataio\venv\lib\site-packages\pymongo\topology.py", line 184, in select_servers
selector, server_timeout, address)
File "D:\dataio\venv\lib\site-packages\pymongo\topology.py", line 200, in _select_servers_loop
self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 10061] No connection could be made because the target machine actively refused it
```
How do I go about connecting to the mongodb inside the docker container?
|
2019/05/14
|
[
"https://Stackoverflow.com/questions/56128397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5684940/"
] |
run mongo
=========
First you need to run mongo
```
$ docker run --rm --name my-mongo -it -p 27017:27017 mongo:latest
```
as a daemon
===========
```
$ docker run --name my-mongo -d mongo:latest
```
connect to the previous container.. with another container
==========================================================
```
$ docker run -it --link my-mongo:mongo --rm mongo:latest sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
```
Insert Data into db
===================
```
Insert the data into the db
```
Connect db with python
----------------------
```
client = MongoClient()
client.server_info()
db = client.yourdbname
```
|
Make sure you bind the container's port 27017 to a host port via the `-p 27017:27017` flag.
|
56,128,397
|
I pulled the official mongo image from the Docker website and started a mongo container named `dataiomongo`.
I now want to connect to the mongodb inside the container using pymongo.
This is the python script I wrote:
```
from pprint import pprint
from pymongo import MongoClient
client = MongoClient('localhost', port=27017)
db = client.admin
server = db.command("serverStatus")
pprint(server)
```
The error that came is:
```
Traceback (most recent call last):
File "D:/dataio/test_mongo.py", line 8, in <module>
server = db.command("serverStatus")
File "D:\dataio\venv\lib\site-packages\pymongo\database.py", line 655, in command
read_preference) as (sock_info, slave_ok):
File "C:\Python27\Lib\contextlib.py", line 17, in __enter__
return self.gen.next()
File "D:\dataio\venv\lib\site-packages\pymongo\mongo_client.py", line 1135, in _socket_for_reads
server = topology.select_server(read_preference)
File "D:\dataio\venv\lib\site-packages\pymongo\topology.py", line 226, in select_server
address))
File "D:\dataio\venv\lib\site-packages\pymongo\topology.py", line 184, in select_servers
selector, server_timeout, address)
File "D:\dataio\venv\lib\site-packages\pymongo\topology.py", line 200, in _select_servers_loop
self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 10061] No connection could be made because the target machine actively refused it
```
How do I go about connecting to the mongodb inside the docker container?
|
2019/05/14
|
[
"https://Stackoverflow.com/questions/56128397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5684940/"
] |
I think you missed the `-p 27017:27017` flag.
`docker run -p 27017:27017 --name mymongo -d mongo` .
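Once the port is published, the script from the question should be able to reach the server. A quick sanity check from the host (a minimal sketch, assuming the container from the command above is running):
```
from pymongo import MongoClient

# connect to the mongod published on the host's port 27017
client = MongoClient('localhost', 27017)

# raises ServerSelectionTimeoutError if the port mapping is missing
print(client.server_info()['version'])
```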
|
Make sure you bind the container's port 27017 to a host port via the `-p 27017:27017` flag.
|
56,128,397
|
I pulled the official mongo image from the Docker website and started a mongo container named `dataiomongo`.
I now want to connect to the mongodb inside the container using pymongo.
This is the python script I wrote:
```
from pprint import pprint
from pymongo import MongoClient
client = MongoClient('localhost', port=27017)
db = client.admin
server = db.command("serverStatus")
pprint(server)
```
The error that came is:
```
Traceback (most recent call last):
File "D:/dataio/test_mongo.py", line 8, in <module>
server = db.command("serverStatus")
File "D:\dataio\venv\lib\site-packages\pymongo\database.py", line 655, in command
read_preference) as (sock_info, slave_ok):
File "C:\Python27\Lib\contextlib.py", line 17, in __enter__
return self.gen.next()
File "D:\dataio\venv\lib\site-packages\pymongo\mongo_client.py", line 1135, in _socket_for_reads
server = topology.select_server(read_preference)
File "D:\dataio\venv\lib\site-packages\pymongo\topology.py", line 226, in select_server
address))
File "D:\dataio\venv\lib\site-packages\pymongo\topology.py", line 184, in select_servers
selector, server_timeout, address)
File "D:\dataio\venv\lib\site-packages\pymongo\topology.py", line 200, in _select_servers_loop
self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 10061] No connection could be made because the target machine actively refused it
```
How do I go about connecting to the mongodb inside the docker container?
|
2019/05/14
|
[
"https://Stackoverflow.com/questions/56128397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5684940/"
] |
run mongo
=========
First you need to run mongo
```
$ docker run --rm --name my-mongo -it -p 27017:27017 mongo:latest
```
as a daemon
===========
```
$ docker run --name my-mongo -d mongo:latest
```
connect to the previous container.. with another container
==========================================================
```
$ docker run -it --link my-mongo:mongo --rm mongo:latest sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
```
Insert Data into db
===================
```
Insert the data into the db
```
Connect db with python
----------------------
```
client = MongoClient()
client.server_info()
db = client.yourdbname
```
|
I think you missed the `-p 27017:27017` flag.
`docker run -p 27017:27017 --name mymongo -d mongo` .
|
56,803,812
|
I want to include a cron task in a MariaDB container, based on the latest image `mariadb`, but I'm stuck with this.
I tried many things without success because I can't launch both MariaDB and Cron.
Here is my actual dockerfile:
```
FROM mariadb:10.3
# DB settings
ENV MYSQL_DATABASE=beurre \
MYSQL_ROOT_PASSWORD=beurette
COPY ./data /docker-entrypoint-initdb.d
COPY ./keys/keys.enc home/mdb/
COPY ./config/encryption.cnf /etc/mysql/conf.d/encryption.cnf
# Installations
RUN apt-get update && apt-get -y install python cron
# Cron
RUN touch /etc/cron.d/bp-cron
RUN printf '* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1\n#' >> /etc/cron.d/bp-cron
RUN touch /var/log/cron.log
RUN chmod 0644 /etc/cron.d/bp-cron
RUN cron
```
With its settings, the database starts correctly, but "Cron" is not initialized. To make it work, I have to get into the container and execute the "Cron" command, and everything works perfectly.
So I'm looking for a way to launch both the db and cron from my Dockerfile used in my docker-compose.
If this is not possible, maybe there is another way to run scheduled tasks? The purpose is to execute a script against the db.
|
2019/06/28
|
[
"https://Stackoverflow.com/questions/56803812",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9678258/"
] |
Elaborating on @k0pernikus's comment, I would recommend to use a separate container that runs cron. The cronjobs in that container can then work with your mysql database.
Here's how I would approach it:
1. Create a Cron Docker Container
=================================
You can set up a cron container fairly simply. Here's an example Dockerfile that should do the job:
```
FROM alpine
COPY ./crontab /etc/crontab
RUN crontab /etc/crontab
RUN touch /var/log/cron.log
CMD crond -f
```
Just put your crontab into a `crontab` file next to that Dockerfile and you should have a working cron container.
An example crontab file:
```
* * * * * mysql -h mysql --execute "INSERT INTO database.table VALUES 'v';"
```
2. Add the cron container to your docker-compose.yml as a service
=================================================================
Make sure you add your cron container to the docker-compose.yml, and put it in the same network as your mysql service:
```
networks:
my_network:
services:
mysql:
image: mariadb
networks:
- my_network
cron:
image: my_cron
depends_on:
- mysql
build:
context: ./path/to/my/cron-docker-folder
networks:
- my_network
```
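If you would rather have the scheduled job call a Python script instead of the `mysql` client line shown in the example crontab above, the script could look roughly like this (a sketch only — it assumes the PyMySQL package is installed in the cron image, reuses the credentials/database from the question's ENV, and the table name is made up):
```
import pymysql

# connect to the "mysql" service defined in the docker-compose file
connection = pymysql.connect(host='mysql', user='root',
                             password='beurette', database='beurre')
try:
    with connection.cursor() as cursor:
        # whatever periodic work the cron job is meant to run
        cursor.execute("INSERT INTO some_table (col) VALUES (%s)", ('v',))
    connection.commit()
finally:
    connection.close()
```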
|
I recommend the [solution provided by fjc](https://stackoverflow.com/a/56804227/457268). Treat this as nice-to-know to understand why your approach is not working.
---
Docker's `RUN` commands are only executed during build, not on container startup.
It also has a `CMD` (or ENTRYPOINT) for executing specific scripts.
Since you are using [mariadb](https://github.com/docker-library/mariadb/blob/5b833d3bb53298adea162c555f066233f74ff236/10.3/Dockerfile), your CMD is:
```
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
```
(You can find the link to the actual dockerfiles on [dockerhub](https://hub.docker.com/_/mariadb).)
This tells docker to run:
```
docker-entrypoint.sh mysqld
```
on startup.
You'd have to override its `docker-entrypoint.sh` to allow for the startup of the cron job as well.
---
[See the relevant part of the Dockerfile for the CMD instruction](https://docs.docker.com/engine/reference/builder/):
>
> CMD
> ===
>
> The CMD instruction has three forms:
>
> * CMD ["executable","param1","param2"] (exec form, this is the preferred form)
> * CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
> * CMD command param1 param2 (shell form)
>
> There can only be one CMD
> instruction in a Dockerfile. If you list more than one CMD then only
> the last CMD will take effect.
>
>
> The main purpose of a CMD is to provide defaults for an executing
> container. These defaults can include an executable, or they can omit
> the executable, in which case you must specify an ENTRYPOINT
> instruction as well.
>
>
> Note: If CMD is used to provide default arguments for the ENTRYPOINT
> instruction, both the CMD and ENTRYPOINT instructions should be
> specified with the JSON array format.
>
>
> Note: The exec form is parsed as a JSON array, which means that you
> must use double-quotes (“) around words not single-quotes (‘).
>
>
> Note: Unlike the shell form, the exec form does not invoke a command
> shell. This means that normal shell processing does not happen. For
> example, CMD [ "echo", "$HOME" ] will not do variable substitution on
> $HOME. If you want shell processing then either use the shell form or
> execute a shell directly, for example: CMD [ "sh", "-c", "echo $HOME"
> ]. When using the exec form and executing a shell directly, as in the
> case for the shell form, it is the shell that is doing the environment
> variable expansion, not docker.
>
>
> When used in the shell or exec formats, the CMD instruction sets the
> command to be executed when running the image.
>
>
> If you use the shell form of the CMD, then the `<command>` will execute
> in /bin/sh -c:
>
> FROM ubuntu
> CMD echo "This is a test." | wc -
>
> If you want to run your `<command>`
> without a shell then you must express the command as a JSON
> array and give the full path to the executable. This array form is the
> preferred format of CMD. Any additional parameters must be
> individually expressed as strings in the array:
>
> FROM ubuntu
> CMD ["/usr/bin/wc","--help"]
>
> If you would like your
> container to run the same executable every time, then you should
> consider using ENTRYPOINT in combination with CMD. See ENTRYPOINT.
>
>
> If the user specifies arguments to docker run then they will override
> the default specified in CMD.
>
>
> Note: Don’t confuse RUN with CMD. RUN actually runs a command and
> commits the result; CMD does not execute anything at build time, but
> specifies the intended command for the image.
>
>
>
|
62,827,871
|
I'm looking for a compiler to compile '.py' file to a single '.exe' file.
I've already tried **auto-py-to-exe** but I'm not happy with it.
I've tried **PyInstaller**, but one of its dependencies (PyCrypto, which I need) is not working/maintained anymore and fails to install. <https://pyinstaller.readthedocs.io/en/stable/usage.html#encrypting-python-bytecode>
I've also looked at **nuitka**, but it doesn't seem possible to set an icon for the exe.
Do you have any compiler recommendations that can obfuscate / encrypt the code to limit the reverse engineering?
|
2020/07/10
|
[
"https://Stackoverflow.com/questions/62827871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11943028/"
] |
I had a similar issue to this, needing to run Python code on machines where Python could not be downloaded.
I used py2exe, and it worked quite well. (<https://www.py2exe.org/>)
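For reference, the classic py2exe workflow is driven by a small `setup.py` next to your script; a minimal sketch (the script name here is a placeholder):
```
# setup.py -- minimal py2exe build script
from distutils.core import setup
import py2exe  # registers the "py2exe" command with distutils

setup(console=['myscript.py'])  # use windows=[...] instead for a GUI program
```
Running `python setup.py py2exe` should then produce the executable under the `dist` folder.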
|
You could try these **steps to convert .py to .exe in Python 3.8**
1. Install [Python 3.8](https://www.python.org/downloads/).
2. Install cx\_Freeze (open your command prompt and type `pip install cx_Freeze`).
3. Install idna (open your command prompt and type `pip install idna`).
4. Write a `.py` program named `myfirstprog.py`.
5. Create a new Python file named `setup.py` in the current directory of your script.
6. In the `setup.py` file, copy in the setup code (a sketch of such a file is shown after these steps) and save it.
7. With shift pressed right click on the same directory, so you are able to open a command prompt window.
8. In the prompt, type `python setup.py build`
9. If your script is error-free, then there will be no problem with creating applications.
10. Check the newly created folder `build`. It has another folder in it. Within that folder, you can find your application. Run it. Make yourself happy.
See the original answer [here](https://stackoverflow.com/a/44433442/10250028).
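The snippet referred to in step 6 is not reproduced above; a generic cx\_Freeze `setup.py` would look roughly like this (a sketch — the name, version and description are placeholders to adjust for your program):
```
# setup.py -- minimal cx_Freeze build script
from cx_Freeze import setup, Executable

setup(
    name="myfirstprog",
    version="0.1",
    description="My first standalone program",
    executables=[Executable("myfirstprog.py")],
)
```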
|
64,764,650
|
Say that there are two iterators:
```
def genA():
while True:
yield 1
def genB():
while True:
yield 2
gA = genA()
gB = genB()
```
According to [this SO answer](https://stackoverflow.com/a/8770796/3259896) they can be ***evenly*** interleaved using the [`itertools` recipes](https://docs.python.org/3/library/itertools.html#recipes):
```
def cycle(iterable):
# cycle('ABCD') --> A B C D A B C D A B C D ...
saved = []
for element in iterable:
yield element
saved.append(element)
while saved:
for element in saved:
yield element
def roundrobin(*iterables):
"roundrobin('ABC', 'D', 'EF') --> A D E B F C"
# Recipe credited to George Sakkis
num_active = len(iterables)
nexts = cycle(iter(it).__next__ for it in iterables)
while num_active:
try:
for next in nexts:
yield next()
except StopIteration:
# Remove the iterator we just exhausted from the cycle.
num_active -= 1
nexts = cycle(islice(nexts, num_active))
aa = roundrobin(gA, gB)
next(aa)
```
So `next(aa)` will shift the iterator output each time, so a bunch of `next` calls will result in `1, 2, 1, 2, 1, 2, 1` - `50%` will come from one iterator, and the other `50%` will come from the other.
I am wondering how we can code it so that `x%` will come from one iterator, and `(1-x)%` from the other. For example, `75%` from the first iterator, and `25%` from the other.
So several calls to `next(combinedIterator)` will result in something like this:
```
1 1 1 2 1 1 1 2 1 1 1 2
```
For my purpose, it doesn't matter if the output is strictly ordered like above, or if it is random, with the output determined by probability.
|
2020/11/10
|
[
"https://Stackoverflow.com/questions/64764650",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3259896/"
] |
If you're okay with a deterministic approach (as I understand from your self-answer), you can add an argument which is the percentage of the first iterator and then just calculate each iterator's "part". For example, if you want `.75` from the first iterator - this translates to: *for every **three** elements from `iterator1`, yield **one** element from `iterator2`*.
```py
def interleave(itt1, itt2, itt1_per):
itt1_frac, total = itt1_per.as_integer_ratio()
itt2_frac = total - itt1_frac
while True:
for _ in range(itt1_frac):
yield next(itt1)
for _ in range(itt2_frac):
yield next(itt2)
newGen = interleave(gA, gB, .75)
for _ in range(12):
print(next(newGen), end=' ')
```
This will print:
```
1 1 1 2 1 1 1 2 1 1 1 2
```
---
***Watch out!*** This will only work well for "nice" fractions. For example: using this function with `.6` means that *for every **`5,404,319,552,844,595`** elements from `iterator1`, it will yield **`3,602,879,701,896,397`** elements from `iterator2`*.
***One way to overcome this*** is to use [`decimal.Decimal`](https://docs.python.org/3/library/decimal.html#decimal.Decimal) with ***[string arguments](https://stackoverflow.com/questions/55755444/why-is-the-precision-accurate-when-decimal-takes-in-a-string-instead-of-float)***:
```py
from decimal import Decimal
def interleave(itt1, itt2, itt1_per):
itt1_frac, total = Decimal(str(itt1_per)).as_integer_ratio()
...
```
Using `Decimal` now means that passing `.6` translates to the more **sensible**: *for every **three** elements from `iterator1`, yield **two** elements from `iterator2`*.
Using this revised code with `.6` as an argument, will print:
```
1 1 1 2 2 1 1 1 2 2 1 1
```
|
```
def genA():
while True:
yield 1
def genB():
while True:
yield 2
gA = genA()
gB = genB()
import random
def xyz(itt1, itt2):
while True:
if random.random() < .25:
yield next(itt1)
else:
yield next(itt2)
newGen = xyz(gA, gB)
next(newGen)
```
This picks between the iterators at random with a fixed probability on each call. I won't select this as the accepted answer, to leave open the possibility of someone giving a non-probabilistic answer.
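The same idea with the probability passed in as a parameter (a small sketch built on the `genA`/`genB` generators above):
```
import random

def interleave_random(itt1, itt2, p1):
    # yield from itt1 with probability p1, otherwise from itt2
    while True:
        if random.random() < p1:
            yield next(itt1)
        else:
            yield next(itt2)

newGen = interleave_random(genA(), genB(), 0.75)
print([next(newGen) for _ in range(12)])  # roughly three 1s for every 2
```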
|
19,965,453
|
I'm making a multipart POST using the Python package requests.
I'm using xlrd to change some values in an Excel file, save it, and then send that up in a multipart POST.
This works fine when I run it locally on my Mac, but when I put the code on a remote machine and make the same request, the body content type is blank, whereas locally the body content type is application/vnd.ms-excel. So my question is: is there a way to enforce the content type using Python requests so that in this case the body content type is application/vnd.ms-excel?
Sorry, I can't post any code as I don't have it on this machine.
|
2013/11/13
|
[
"https://Stackoverflow.com/questions/19965453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1946337/"
] |
The `files` parameter accepts a dictionary of keys to tuples, with the following form:
```
files = {'name': (<filename>, <file object>, <content type>, <per-part headers>)}
```
In your specific case, you could write this:
```
files = {'file': ('filename.xls', open('filename.xls', 'rb'), 'application/vnd.ms-excel', {})}
```
That should work fine.
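Putting that together, a full request could look like this (a sketch — the URL and file name are placeholders, and the file is opened in binary mode):
```
import requests

# fourth tuple element (per-part headers) left empty here
files = {'file': ('report.xls',
                  open('report.xls', 'rb'),
                  'application/vnd.ms-excel',
                  {})}
response = requests.post('https://example.com/upload', files=files)
print(response.status_code)
```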
|
I believe you can use the headers parameter, e.g
```
requests.post(url, data=my_data, headers={"Content-type": "application/vnd.ms-excel"})
```
|
29,686,328
|
Edit: Rather than voting me down, can you provide a URL where you would recommend a newbie learn Python? Be part of the solution rather than the problem.
I'm trying to write a basic program (for a class) that displays a specific Roman numeral when specific if/elif/else conditions are met, though I'm a bit confused about why I'm getting my error. The error is included next.
```
Traceback (most recent call last):
File "python", line 42
print"The number is I"
^
SyntaxError: invalid syntax
```
When I remove the else line altogether the program runs, but I never get any of the if/elif print statements to actually print out. I've been going off of the tutorial below for learning if/else in Python, as the community college doesn't teach Python but, frustratingly enough, asks for projects to be coded in it.
tutorial: <http://www.tutorialspoint.com/python/python_if_else.htm>
from which I've learned I need to put whatever I want printed in parentheses
correct: print ("1 - Got a false expression value")
wrong: print "1 - Got a false expression value"
Again I'm lost on this error - any advice or direction to a tutorial where I can understand this would be much appreciated
```
# ////////////////////// ALGORITHIM /////////////////////////
# 1 prompt the user for a number from range of 1-10
# 2 display roman numeral for that number
# 3 if outside range of 1-10 display error message invalid number try reentering
# ////////////////////// ALGORITHIM END /////////////////////////
# ////////////////////// PSEUDOCODE /////////////////////////
# IF <<number >= 0, 11
# Error the number you have entered is not in the range of 1-10
# Elif var = 1
# display I
# Elif var = 2
# display II
# Elif var = 2
# display II
# Elif var = 3
# display III
# Elif var = 4
# display IV
# Elif var = 5
# display V
# Elif var = 6
# display VI
# Elif var = 7
# display VII
# Elif var = 8
# display VIII
# Elif var = 9
# display IX
# Elif var = 10
# display X
#END IF
# MAIN MODULE
number = 0
#getInput Module
number = input(int("Please enter a number within the range of 1-10"))
print("Was your number " + number + "?")
#decisionPiece Module
if(number==1):
print"The number is I"
elif(number==2):
print"The number is II"
elif(number==3):
print"The nubmer is III"
elif(number==4):
print"The nubmer is IV"
elif(number==5):
print"The nubmer is V"
elif(number==6):
print"The nubmer is VI"
elif(number==7):
print"The nubmer is VII"
elif(number==8):
print"The nubmer is VIII"
elif(number==9):
print"The nubmer is IX"
elif(number==10):
print"The nubmer is X"
else:
print"The number you entered is not within the range of 1-10"
print ("Good bye!")
#printNumber Module
```
|
2015/04/16
|
[
"https://Stackoverflow.com/questions/29686328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4259649/"
] |
```
else(number==3):
print("The number is X")
```
is incorrect. You should use only:
```
else :
print("The number is X")
```
|
Just use `else:` instead of `else(number==3)`. `else:` doesn't take a condition. Also, you don't need to put parentheses around the conditions in Python.
|
29,686,328
|
Edit: Rather than voting me down, can you provide a URL where you would recommend a newbie learn Python? Be part of the solution rather than the problem.
I'm trying to write a basic program (for a class) that displays a specific Roman numeral when specific if/elif/else conditions are met, though I'm a bit confused about why I'm getting my error. The error is included next.
```
Traceback (most recent call last):
File "python", line 42
print"The number is I"
^
SyntaxError: invalid syntax
```
When I remove the else line altogether the program runs, but I never get any of the if/elif print statements to actually print out. I've been going off of the tutorial below for learning if/else in Python, as the community college doesn't teach Python but, frustratingly enough, asks for projects to be coded in it.
tutorial: <http://www.tutorialspoint.com/python/python_if_else.htm>
from which I've learned I need to put whatever I want printed in parentheses
correct: print ("1 - Got a false expression value")
wrong: print "1 - Got a false expression value"
Again I'm lost on this error - any advice or direction to a tutorial where I can understand this would be much appreciated
```
# ////////////////////// ALGORITHIM /////////////////////////
# 1 prompt the user for a number from range of 1-10
# 2 display roman numeral for that number
# 3 if outside range of 1-10 display error message invalid number try reentering
# ////////////////////// ALGORITHIM END /////////////////////////
# ////////////////////// PSEUDOCODE /////////////////////////
# IF <<number >= 0, 11
# Error the number you have entered is not in the range of 1-10
# Elif var = 1
# display I
# Elif var = 2
# display II
# Elif var = 2
# display II
# Elif var = 3
# display III
# Elif var = 4
# display IV
# Elif var = 5
# display V
# Elif var = 6
# display VI
# Elif var = 7
# display VII
# Elif var = 8
# display VIII
# Elif var = 9
# display IX
# Elif var = 10
# display X
#END IF
# MAIN MODULE
number = 0
#getInput Module
number = input(int("Please enter a number within the range of 1-10"))
print("Was your number " + number + "?")
#decisionPiece Module
if(number==1):
print"The number is I"
elif(number==2):
print"The number is II"
elif(number==3):
print"The nubmer is III"
elif(number==4):
print"The nubmer is IV"
elif(number==5):
print"The nubmer is V"
elif(number==6):
print"The nubmer is VI"
elif(number==7):
print"The nubmer is VII"
elif(number==8):
print"The nubmer is VIII"
elif(number==9):
print"The nubmer is IX"
elif(number==10):
print"The nubmer is X"
else:
print"The number you entered is not within the range of 1-10"
print ("Good bye!")
#printNumber Module
```
|
2015/04/16
|
[
"https://Stackoverflow.com/questions/29686328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4259649/"
] |
```
else(number==3):
print("The number is X")
```
is incorrect. You should use only:
```
else :
print("The number is X")
```
|
this line:
```
number = input(int("Please enter a number within the range of 1-10"))
```
is now giving you problems. You need to get the input, then convert it to an int:
```
number = int(input("Please enter a number within the range of 1-10"))
```
This code will still throw an error if the user enters anything that it cannot convert to an int. I'm guessing that they are not looking for you to trap that condition for this project.
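If you do want to guard against non-numeric input, the usual pattern is a sketch like this:
```
while True:
    try:
        number = int(input("Please enter a number within the range of 1-10: "))
        break
    except ValueError:
        print("That was not a whole number, please try again.")
```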
|
29,686,328
|
Edit: Rather than voting me down, can you provide a URL where you would recommend a newbie learn Python? Be part of the solution rather than the problem.
I'm trying to write a basic program (for a class) that displays a specific Roman numeral when specific if/elif/else conditions are met, though I'm a bit confused about why I'm getting my error. The error is included next.
```
Traceback (most recent call last):
File "python", line 42
print"The number is I"
^
SyntaxError: invalid syntax
```
When I remove the else line altogether the program runs, but I never get any of the if/elif print statements to actually print out. I've been going off of the tutorial below for learning if/else in Python, as the community college doesn't teach Python but, frustratingly enough, asks for projects to be coded in it.
tutorial: <http://www.tutorialspoint.com/python/python_if_else.htm>
from which I've learned I need to put whatever I want printed in parentheses
correct: print ("1 - Got a false expression value")
wrong: print "1 - Got a false expression value"
Again I'm lost on this error - any advice or direction to a tutorial where I can understand this would be much appreciated
```
# ////////////////////// ALGORITHIM /////////////////////////
# 1 prompt the user for a number from range of 1-10
# 2 display roman numeral for that number
# 3 if outside range of 1-10 display error message invalid number try reentering
# ////////////////////// ALGORITHIM END /////////////////////////
# ////////////////////// PSEUDOCODE /////////////////////////
# IF <<number >= 0, 11
# Error the number you have entered is not in the range of 1-10
# Elif var = 1
# display I
# Elif var = 2
# display II
# Elif var = 2
# display II
# Elif var = 3
# display III
# Elif var = 4
# display IV
# Elif var = 5
# display V
# Elif var = 6
# display VI
# Elif var = 7
# display VII
# Elif var = 8
# display VIII
# Elif var = 9
# display IX
# Elif var = 10
# display X
#END IF
# MAIN MODULE
number = 0
#getInput Module
number = input(int("Please enter a number within the range of 1-10"))
print("Was your number " + number + "?")
#decisionPiece Module
if(number==1):
print"The number is I"
elif(number==2):
print"The number is II"
elif(number==3):
print"The nubmer is III"
elif(number==4):
print"The nubmer is IV"
elif(number==5):
print"The nubmer is V"
elif(number==6):
print"The nubmer is VI"
elif(number==7):
print"The nubmer is VII"
elif(number==8):
print"The nubmer is VIII"
elif(number==9):
print"The nubmer is IX"
elif(number==10):
print"The nubmer is X"
else:
print"The number you entered is not within the range of 1-10"
print ("Good bye!")
#printNumber Module
```
|
2015/04/16
|
[
"https://Stackoverflow.com/questions/29686328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4259649/"
] |
this line:
```
number = input(int("Please enter a number within the range of 1-10"))
```
is now giving you problems. You need to get the input, then convert it to an int:
```
number = int(input("Please enter a number within the range of 1-10"))
```
This code will still throw an error if the user enters anything that it cannot convert to an int. I'm guessing that they are not looking for you to trap that condition for this project.
|
Just use `else:` instead of `else(number==3)`. `else:` doesn't take a condition. Also, you don't need to put parentheses around the conditions in Python.
|
67,017,354
|
**My problem**: starting a threaded function and, **asynchronously**, act upon the returned value
I know how to:
* start a threaded function with `threading`. The problem: no simple way to get the result back
* [get the return value](https://stackoverflow.com/questions/6893968/how-to-get-the-return-value-from-a-thread-in-python) from a threaded function. The problem: it is synchronous
What I would like to achieve is similar to JavaScript's
```
aFunctionThatReturnsAPromise()
.then(r => {// do something with the returned value when it is available})
// the code here runs synchronously right after aFunctionThatReturnsAPromise is started
```
In pseudo-Python, I would think about something like (modifying the example from [the answer](https://stackoverflow.com/a/58829816/903011) to the linked thread)
```
import time
import concurrent.futures
def foo(bar):
print('hello {}'.format(bar))
time.sleep(10)
return 'foo'
def the_callback(something):
print(f"the thread returned {something}")
with concurrent.futures.ThreadPoolExecutor() as executor:
# submit the threaded call ...
future = executor.submit(foo, 'world!')
# ... and set a callback
future.callback(the_callback, future.result()) # ← this is the made up part
# or, all in one: future = executor.submit(foo, 'world!', callback=the_callback) # in which case the parameters probably would need to be passed the JS way
# the threaded call runs at its pace
# the following line is ran right after the call above
print("after submit")
# after some time (~10 seconds) the callback is finished (and has printed out what was passed to it)
# there should probably be some kind of join() so that the scripts waits until the thread is done
```
I want to stay if possible with threads (which do things at their own pace and I do not care when they are done), rather than `asyncio` (where I have to explicitly `await` things in a single thread)
|
2021/04/09
|
[
"https://Stackoverflow.com/questions/67017354",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/903011/"
] |
You can use [`concurrent.futures.add_done_callback`](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Future.add_done_callback) as shown below. The callback must be a callable taking a single argument, the `Future` instance — and it must get the result from that as shown. The example also adds some additional information to it which the callback function uses for printing its messages.
Note that the callback function(s) will be called concurrently, so the usual mutex precautions should be taken if there are shared resources involved. This *wasn't* done in the example below, so sometimes the printed output will be jumbled.
```
from concurrent import futures
import random
import time
def foo(bar, delay):
print(f'hello {bar} - {delay}')
time.sleep(delay)
return bar
def the_callback(fn):
if fn.cancelled():
print(f'args {fn.args}: canceled')
elif fn.done():
error = fn.exception()
if error:
print(f'args {fn.args}: caused error {error}')
else:
print(f'args {fn.args}: returned: {fn.result()}')
with futures.ThreadPoolExecutor(max_workers=2) as executor:
for name in ('foo', 'bar', 'bas'):
delay = random.randint(1, 5)
f = executor.submit(foo, name, delay)
f.args = name, delay
f.add_done_callback(the_callback)
print('fini')
```
Sample output:
```
hello foo - 5
hello bar - 3
args ('bar', 3): returned: bar
hello bas - 4
args ('foo', 5): returned: foo
args ('bas', 4): returned: bas
fini
```
|
You can use [add\_done\_callback](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Future.add_done_callback) of `concurrent.futures` library, so you can modify your example like this:
```py
def the_callback(something):
print(f"the thread returned {something.result()}")
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(foo, 'world!')
future.add_done_callback(the_callback)
```
|
6,969,222
|
Every time I run my code in Python IDLE development environment, I get a Visual C++ runtime error/unhandled exception in pythonw.exe.
```
Figure 1: pythonw.exe - Application Error
The exception unknown software exception (0x40000015) occurred
in the application at location 0x1e0e1379.
```
I am using networkx and matplotlib to display a graph:
```
import matplotlib.pyplot as plt
import networkx as nx
```
I am running Windows XP.
Any ideas how to resolve this? Or should I just quit using IDLE?
|
2011/08/06
|
[
"https://Stackoverflow.com/questions/6969222",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/264970/"
] |
The easiest fix for this is to open IDLE from the start menu and then open your code files from there.
|
The solution to this problem was indeed to quit using IDLE. I got the Python stuff for Eclipse; I'd recommend that setup.
|
45,530,741
|
I'm trying to run my code with a multiprocessing function but mongo keeps returning
>
> "MongoClient opened before fork. Create MongoClient with
> connect=False, or create client after forking."
>
>
>
I really don't understand how I can adapt my code to this.
Basically the structure is:
```
db = MongoClient().database
db.authenticate('user', 'password', mechanism='SCRAM-SHA-1')
collectionW = db['words']
collectionT = db['sinMemo']
collectionL = db['sinLogic']
def findW(word):
rows = collectionw.find({"word": word})
ind = 0
for row in rows:
ind += 1
id = row["_id"]
if ind == 0:
a = ind
else:
a = id
return a
def trainAI(stri):
...
if findW(word) == 0:
_id = db['words'].insert(
{"_id": getNextSequence(db.counters, "nodeid"), "word": word})
story = _id
else:
story = findW(word)
...
def train(index):
# searching progress
progFile = "./train/progress{0}.txt".format(index)
trainFile = "./train/small_file_{0}".format(index)
if os.path.exists(progFile):
f = open(progFile, "r")
ind = f.read().strip()
if ind != "":
pprint(ind)
i = int(ind)
else:
pprint("No progress saved or progress lost!")
i = 0
f.close()
else:
i = 0
#get the number of line of the file
rangeC = rawbigcount(trainFile)
#fix unicode
non_bmp_map = dict.fromkeys(range(0x10000, sys.maxunicode + 1), 0xfffd)
files = io.open(trainFile, "r", encoding="utf8")
str1 = ""
str2 = ""
filex = open(progFile, "w")
with progressbar.ProgressBar(max_value=rangeC) as bar:
for line in files:
line = line.replace("\n", "")
if i % 2 == 0:
str1 = line.translate(non_bmp_map)
else:
str2 = line.translate(non_bmp_map)
bar.update(i)
trainAI(str1 + " " + str2)
filex.seek(0)
filex.truncate()
filex.write(str(i))
i += 1
#multiprocessing function
maxProcess = 3
def f(l, i):
l.acquire()
train(i + 1)
l.release()
if __name__ == '__main__':
lock = Lock()
for num in range(maxProcess):
pprint("start " + str(num))
Process(target=f, args=(lock, num)).start()
```
This code is meant to read 4 different files in 4 different processes and insert the data into the database at the same time.
I copied only part of the code to help you understand its structure.
I've tried to add connect=False to this code but nothing changed...
```
db = MongoClient(connect=False).database
db.authenticate('user', 'password', mechanism='SCRAM-SHA-1')
collectionW = db['words']
collectionT = db['sinMemo']
collectionL = db['sinLogic']
```
Then I've tried to move it into the f function (right before train()), but then the program doesn't find collectionW, collectionT and collectionL.
I'm not very expert in Python or MongoDB, so I hope this is not a silly question.
The code is running under Ubuntu 16.04.2 with Python 2.7.12.
|
2017/08/06
|
[
"https://Stackoverflow.com/questions/45530741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3476027/"
] |
db.authenticate has to connect to the mongo server, so it will try to make a connection. So, even though connect=False is being used, db.authenticate will require a connection to be open.
Why don't you create the mongo client instance after the fork? That looks like the easiest solution.
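In the code from the question, that would mean building the client (and the collection handles) inside each worker process instead of at module level — roughly like this (a sketch reusing the names from the question):
```
from pymongo import MongoClient

def init_db():
    # rebind the module-level handles that train()/trainAI() already use,
    # but only after the process has been forked
    global db, collectionW, collectionT, collectionL
    client = MongoClient('localhost', 27017)
    db = client.database
    db.authenticate('user', 'password', mechanism='SCRAM-SHA-1')
    collectionW = db['words']
    collectionT = db['sinMemo']
    collectionL = db['sinLogic']

def f(l, i):
    init_db()   # each child process builds its own client/connection pool
    l.acquire()
    train(i + 1)
    l.release()
```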
|
Since `db.authenticate` must open the MongoClient and connect to the server, it creates connections which won't work in the forked subprocess. Hence, the error message. Try this instead:
```
db = MongoClient('mongodb://user:password@localhost', connect=False).database
```
Also, delete the Lock `l`. Acquiring a lock in one subprocess has no effect on other subprocesses.
|
45,530,741
|
I'm trying to run my code with a multiprocessing function but mongo keeps returning
>
> "MongoClient opened before fork. Create MongoClient with
> connect=False, or create client after forking."
>
>
>
I really don't understand how I can adapt my code to this.
Basically the structure is:
```
db = MongoClient().database
db.authenticate('user', 'password', mechanism='SCRAM-SHA-1')
collectionW = db['words']
collectionT = db['sinMemo']
collectionL = db['sinLogic']
def findW(word):
rows = collectionw.find({"word": word})
ind = 0
for row in rows:
ind += 1
id = row["_id"]
if ind == 0:
a = ind
else:
a = id
return a
def trainAI(stri):
...
if findW(word) == 0:
_id = db['words'].insert(
{"_id": getNextSequence(db.counters, "nodeid"), "word": word})
story = _id
else:
story = findW(word)
...
def train(index):
# searching progress
progFile = "./train/progress{0}.txt".format(index)
trainFile = "./train/small_file_{0}".format(index)
if os.path.exists(progFile):
f = open(progFile, "r")
ind = f.read().strip()
if ind != "":
pprint(ind)
i = int(ind)
else:
pprint("No progress saved or progress lost!")
i = 0
f.close()
else:
i = 0
#get the number of line of the file
rangeC = rawbigcount(trainFile)
#fix unicode
non_bmp_map = dict.fromkeys(range(0x10000, sys.maxunicode + 1), 0xfffd)
files = io.open(trainFile, "r", encoding="utf8")
str1 = ""
str2 = ""
filex = open(progFile, "w")
with progressbar.ProgressBar(max_value=rangeC) as bar:
for line in files:
line = line.replace("\n", "")
if i % 2 == 0:
str1 = line.translate(non_bmp_map)
else:
str2 = line.translate(non_bmp_map)
bar.update(i)
trainAI(str1 + " " + str2)
filex.seek(0)
filex.truncate()
filex.write(str(i))
i += 1
#multiprocessing function
maxProcess = 3
def f(l, i):
l.acquire()
train(i + 1)
l.release()
if __name__ == '__main__':
lock = Lock()
for num in range(maxProcess):
pprint("start " + str(num))
Process(target=f, args=(lock, num)).start()
```
This code is meant to read 4 different files in 4 different processes and insert the data into the database at the same time.
I copied only part of the code to help you understand its structure.
I've tried to add connect=False to this code but nothing changed...
```
db = MongoClient(connect=False).database
db.authenticate('user', 'password', mechanism='SCRAM-SHA-1')
collectionW = db['words']
collectionT = db['sinMemo']
collectionL = db['sinLogic']
```
Then I've tried to move it into the f function (right before train()), but then the program doesn't find collectionW, collectionT and collectionL.
I'm not very expert in Python or MongoDB, so I hope this is not a silly question.
The code is running under Ubuntu 16.04.2 with Python 2.7.12.
|
2017/08/06
|
[
"https://Stackoverflow.com/questions/45530741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3476027/"
] |
db.authenticate has to connect to the mongo server, so it will try to make a connection. So, even though connect=False is being used, db.authenticate will require a connection to be open.
Why don't you create the mongo client instance after the fork? That looks like the easiest solution.
|
Here is how I did it for my problem:
```
import pathos.pools as pp
import time
import db_access
class MultiprocessingTest(object):
def __init__(self):
pass
def test_mp(self):
data = [[form,'form_number','client_id'] for form in range(5000)]
pool = pp.ProcessPool(4)
pool.map(db_access.insertData, data)
if __name__ == '__main__':
time_i = time.time()
mp = MultiprocessingTest()
mp.test_mp()
time_f = time.time()
print 'Time Taken: ', time_f - time_i
```
Here is db\_access.py:
```
from pymongo import MongoClient
def insertData(form):
client = MongoClient()
db = client['TEST_001']
db.initialization.insert({
"form": form[0],
"form_number": form[1],
"client_id": form[2]
})
```
This is happening in your code because you are initiating MongoClient() once for all the sub-processes. MongoClient is not fork-safe, so initiating it inside each function works. Let me know if there are other solutions.
|
21,687,643
|
I am working on a large scale project that involves giving a python script a first name and getting back a result as to what kind of gender it belongs to. My current program is written in Java and using Jython to interact with a Python script called "sex machine." It works great in most cases and I've tested it with smaller groups of users. However, when I attempt to test it with a large group of users the program gets about halfway in and then gives me the following error:
```
"Exception in thread "main" SyntaxError: No viable alternative to input '\\n'", ('<string>', 1, 22, "result = d.get_gender('Christinewazonek'')\n")
```
I am more accustomed to Java and have limited knowledge of Python so at the moment I don't know how to solve this problem. I tried to trim the string that I'm giving the get\_gender method but that didn't help any. I am not sure what the numbers 1, 22 even mean.
Like I said since I'm using Jython my code would be the following:
```
static PythonInterpreter interp = new PythonInterpreter();
interp.exec("import sys, os.path");
interp.exec("sys.path.append('/Users/myname/Desktop/')");
interp.exec("import sexmachine.detector as gender");
interp.exec("d = gender.Detector()");
interp.exec("result = d.get_gender('"+WordUtils.capitalize(name).trim()
+"')");
PyObject gendAnswer = interp.get("result");
```
And this is pretty much the extent of the Jython/Python interaction in my Java code. If someone sees something that's wrong or not right, I would certainly appreciate it if you could help me. As this is a large project it takes time to run the whole program again only to run into the same issue, so because of this I really need to fix this problem.
|
2014/02/10
|
[
"https://Stackoverflow.com/questions/21687643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1461393/"
] |
I don't know if it helps but this is what I did and it works for me.
```
public static void main(String[] args){
PythonInterpreter pI = new PythonInterpreter();
pI.exec("x = 3");
PyObject result = pI.get("x");
System.out.println(result);
}
```
|
Not sure if you sorted this out, but you have an extra apostrophe in
```
d.get_gender('Christinewazonek'')
```
Just like in Java, everything you open you need to close; in this case the extra apostrophe opens a new string containing `)\n` which is never closed.
Depending on the interpreter you are using, this can be flagged easily. Perhaps you might try a different interpreter.
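For what it's worth, here is a hypothetical Python-side sketch of keeping the generated statement balanced; the same strip-and-escape step can be mirrored in the Java code before the name is interpolated into the `exec` string (the sample name is made up):
```
# Hypothetical example: strip the name and escape any apostrophes before it
# is interpolated into the Python source that Jython has to parse.
name = "Christinewazonek "             # imagine this came from your data
safe_name = name.strip().replace("'", "\\'")
statement = "result = d.get_gender('{}')".format(safe_name)
print(statement)  # result = d.get_gender('Christinewazonek')
```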
|
34,132,484
|
I have a large string like
```
res = ["FAV_VENUE_CITY_NAME == 'Mumbai' & EVENT_GENRE == 'KIDS' & count_EVENT_GENRE >= 1",
"FAV_VENUE_CITY_NAME == 'Mumbai' & EVENT_GENRE == 'FANTASY' & count_EVENT_GENRE >= 1",
"FAV_VENUE_CITY_NAME =='Mumbai' & EVENT_GENRE == 'FESTIVAL' & count_EVENT_GENRE >= 1",
"FAV_VENUE_CITY_NAME == 'New Delhi' & EVENT_GENRE == 'WORKSHOP' & count_EVENT_GENRE >= 1",
"FAV_VENUE_CITY_NAME == 'Mumbai' & EVENT_GENRE == 'EXHIBITION' & count_EVENT_GENRE >= 1",
"FAV_VENUE_CITY_NAME == 'Bangalore' & FAV_GENRE == '|DRAMA|'",
"FAV_VENUE_CITY_NAME = 'Mumbai' & & FAV_GENRE == '|ACTION|ADVENTURE|SCI-FI|'",
"FAV_VENUE_CITY_NAME == 'Bangalore' & FAV_GENRE == '|COMEDY|'",
"FAV_VENUE_CITY_NAME == 'Bangalore' & FAV_GENRE == 'DRAMA' & FAV_LANGUAGE == 'English'",
"FAV_VENUE_CITY_NAME == 'New Delhi' & FAV_LANGUAGE == 'Hindi' & count_EVENT_LANGUAGE >= 1"]
```
Now I am extracting the fields with
```
res = [re.split(r'[(==)(>=)]', x)[0].strip() for x in re.split('[&($#$)]', whereFields)]
res = [x for x in list(set(res)) if x]
o/p:['FAV_GENRE', 'FAV_LANGUAGE', 'FAV_VENUE_CITY_NAME', 'count_EVENT_GENRE', 'EVENT_GENRE','count_EVENT_LANGUAGE']
```
Then, by following this [filter out some items from a list and store in different arrays in python](https://stackoverflow.com/questions/34119386/filter-out-some-items-from-a-list-and-store-in-different-arrays-in-python?noredirect=1#comment55989982_34119386),
I am getting the values
```
FAV_VENUE_CITY_NAME = ['New Delhi', 'Mumbai', 'Bangalore']
FAV_GENRE = ['|DRAMA|', '|COMEDY|', '|ACTION|ADVENTURE|SCI-FI|', 'DRAMA']
EVENT_GENRE = ['FESTIVAL', 'WORKSHOP', 'FANTASY', 'KIDS', 'EXHIBITION']
FAV_LANGUAGE = ['English', 'Hindi']
count_on_field = ['EVENT_GENRE', 'EVENT_LANGUAGE']
```
Now I want to make a dictionary whose keys are the field names in res and whose values are the results from the link above.
Or is there a way to turn the items of the list res into separate lists of their own?
Something like
```
res = ['FAV_GENRE', 'FAV_LANGUAGE', 'FAV_VENUE_CITY_NAME', 'count_EVENT_GENRE', 'EVENT_GENRE','count_EVENT_LANGUAGE']
for i in range(len(res)):
res[i] = list(res[i]) # make each item as an empty list with name as it is
```
so that they become like
```
FAV_VENUE_CITY_NAME = []
EVENT_GENRE = []
FAV_GENRE = []
FAV_LANGUAGE = []
```
then get the values into each individual list in res by following the method in the link above.
Then make a dictionary like the line below, which makes a dict with the index as the key:
```
a = [51,27,13,56]
b = dict(enumerate(a))
#####d = dict{key=each list name from res list, value = value in each ind. lists}
```
#
Or, if possible, suggest something that works directly from the top res list: how to form a dict with the field names as keys and the values from each line as values:
```
o/p: d = {'FAV_VENUE_CITY_NAME':['Mumbai','New Delhi','Bangalore'], 'EVENT_GENRE':['KIDS','FANTASY','FESTIVAL','WORKSHOP','EXHIBITION'], 'FAV_GENRE':['|DRAMA|','|ACTION|ADVENTURE|SCI-FI|','|COMEDY|','DRAMA'], 'FAV_LANGUAGE':['English','Hindi']}
```
count\_EVENT\_GENRE >= 1 and count\_EVENT\_LANGUAGE >= 1 should not be in that dictionary; rather, they should go to a list:
```
count_on_fields = ['EVENT_GENRE','EVENT_LANGUAGE']
```
Please, if anybody has a better idea or suggestion, do help.
|
2015/12/07
|
[
"https://Stackoverflow.com/questions/34132484",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5533254/"
] |
The latest d.ts file now has the component method. Update yours with `tsd update -o`
|
We had a kinda similar issue earlier today and it was to do with using Angular 1.5. Beta 1 which doesn't contain the component function.
To fix it we had to upgrade to Angular 1.5 Beta 2 which does contain the component function.
|
70,921,901
|
I have been trying to fetch the metadata from a KDB+ database using Python. Basically, I installed a library called **`qpython`**, and using this library we connect to and query the KDB+ database.
I want to store the metadata for all the appropriate cols for a table/view in KDB+ Database using python. I am unable to separate the metadata part, despite trying myriad different approaches.
Namely a few to typecast the output to list/tuple, iterating using `for`, et cetera.
```
from qpython import qconnection
def fetch_metadata_from_kdb(params):
try:
kdb_connection_obj = qconnection.QConnection(host=params['host'], port=params['port'], username=params['username'], password=params['password'])
kdb_connection_obj.open()
PREDICATE = "meta[{}]".format(params['table'])
metadata = kdb_connection_obj(PREDICATE)
kdb_connection_obj.close()
return metadata
except Exception as error_msg:
return error_msg
def fetch_tables_from_kdb(params):
try:
kdb_connection_obj = qconnection.QConnection(host=params['host'], port=params['port'], username=params['username'], password=params['password'])
kdb_connection_obj.open()
tables = kdb_connection_obj("tables[]")
views = kdb_connection_obj("views[]")
kdb_connection_obj.close()
return [table.decode() for table in list(tables)], [view.decode() for view in list(views)]
except Exception as error_msg:
return error_msg
parms_q = {'host':'localhost', 'port':5010,
'username':'kdb', 'password':'kdb', 'table':'testing'}
print("fetch_tables_from_kdb:", fetch_tables_from_kdb(parms_q), "\n")
print("fetch_metadata_from_kdb:", fetch_metadata_from_kdb(parms_q), "\n")
```
The output which I am currently getting is as follows;
```
fetch_tables_from_kdb: (['testing'], ['viewname'])
fetch_metadata_from_kdb: [(b'time',) (b'sym',) (b'price',) (b'qty',)]![(b'p', b'', b'') (b's', b'', b'') (b'f', b'', b'') (b'j', b'', b'')]
```
I am not able to separate the columns part and the metadata part. How to store only the metadata for the appropriate column for a table/view in KDB using python?
|
2022/01/31
|
[
"https://Stackoverflow.com/questions/70921901",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12553730/"
] |
The metadata that you have returned from kdb is correct but is being displayed in python as a kdb dictionary format which I agree is not very useful.
If you pass the pandas=True flag into your qconnection call then qPython will parse kdb data structures, such as a table, into pandas data structures or sensible Python types, which in your case looks like it will be more useful.
Please see an example below -
kdb setup (all on localhost)
```
$ q -p 5000
q)testing:([]date:.z.d+0 1 2;`g#sym:`abc`def`ghi;num:`s#10 20 30)
q)testing
date sym num
------------------
2022.01.31 abc 10
2022.02.01 def 20
2022.02.02 ghi 30
q)meta testing
c | t f a
----| -----
date| d
sym | s g
num | j s
```
Python code
```
from qpython import qconnection
#create and open 2 connections to kdb process - 1 without the pandas flag and 1 with it
q = qconnection.QConnection(host="localhost", port=5000)
qpandas = qconnection.QConnection(host="localhost", port=5000, pandas=True)
q.open()
qpandas.open()
#see what is returned with a q table
print(q("testing"))
[(8066, b'abc', 10) (8067, b'def', 20) (8068, b'ghi', 30)]
#the data is a qPython data object
type(q("testing"))
qpython.qcollection.QTable
#whereas using the pandas=True flag a dataframe is returned.
print(qpandas("testing"))
date sym num
0 2022-01-31 b'abc' 10
1 2022-02-01 b'def' 20
2 2022-02-02 b'ghi' 30
#This is the same for the meta of a table
print(q("meta testing"))
[(b'date',) (b'sym',) (b'num',)]![(b'd', b'', b'') (b's', b'', b'g') (b'j', b'', b's')]
print(qpandas("meta testing"))
t f a
c
b'date' d b'' b''
b'sym' s b'' b'g'
b'num' j b'' b's'
```
With the above you can now access the columns and rows using pandas (the b'num' etc. is the qPython way of expressing a kdb symbol, i.e. a backtick-prefixed name like num).
Also, you now have the ability to use `DataFrame.info()` to extract datatypes if you are more interested in the Python data structure rather than the kdb data structure/types. qPython will convert the q types to sensible Python types automatically.
```
qpandas("testing").info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 3 non-null datetime64[ns]
1 sym 3 non-null object
2 num 3 non-null int64
dtypes: datetime64[ns](1), int64(1), object(1)
memory usage: 200.0+ bytes
```
|
In the meantime, I have checked quite a bit of the KDB documentation and found that the metadata provides the following as the output. You can see that here: [kdb metadata](https://code.kx.com/q4m3/8_Tables/)
`c | t f a`
c - column name
t - type
f - foreign key association
a - attribute associated with the column
We can access the metadata object (*<class 'qpython.qcollection.QKeyedTable'>*) by iterating over it with a `for` loop as shown below:
```
from qpython import qconnection
def fetch_metadata_from_kdb(params):
try:
col_list, metadata_list = [], []
kdb_connection_obj = qconnection.QConnection(host=params['host'], port=params['port'], username=params['username'], password=params['password'])
kdb_connection_obj.open()
PREDICATE = "meta[{}]".format(params['table'])
############# FOR LOOP ##############
for i,j in kdb_connection_obj(PREDICATE).items():
col_list.append(i[0].decode())
metadata_list.append(j[0].decode())
kdb_connection_obj.close()
return col_list, metadata_list
except Exception as error_msg:
return error_msg
parms_q = {'host':'localhost', 'port':5010,
'username':'kdb', 'password':'kdb', 'table':'testing'}
print(fetch_metadata_from_kdb(parms_q))
```
`Output: ['time', 'sym', 'price', 'qty'], ['p', 's', 'f', 'j']`
I also got the KDB char types / q data types from the documentation [here](https://code.kx.com/q4m3/2_Basic_Data_Types_Atoms/). Below is the implementation for the same;
```
import pandas as pd
from qpython import qconnection
kdb_type_char_dict = dict()
df = pd.read_html('https://code.kx.com/q4m3/2_Basic_Data_Types_Atoms/')[1].iloc[:17, 0:3][['Type', 'CharType']]
for i, j in zip(df.iloc[:, 0], df.iloc[:, 1]): kdb_type_char_dict[str(j)] = str(i)
####### Q DATA TYPES DICTIONARY #######
print("Chat types/ q data types dictionary:", kdb_type_char_dict)
def fetch_metadata_from_kdb(params):
try:
col_list, metadata_list, temp_list = [], [], []
kdb_connection_obj = qconnection.QConnection(host=params['host'], port=params['port'],
username=params['username'], password=params['password'])
kdb_connection_obj.open()
PREDICATE = "meta[{}]".format(params['table'])
for i, j in kdb_connection_obj(PREDICATE).items():
col_list.append(i[0].decode())
temp_list.append(j[0].decode())
for i in temp_list:
metadata_list.append("{}".format(kdb_type_char_dict[i]))
kdb_connection_obj.close()
return col_list, metadata_list
except Exception as error_msg:
return error_msg
params = {'host': 'localhost', 'port': 5010,
'username': 'kdb', 'password': 'kdb', 'table': 'testing'}
print(fetch_metadata_from_kdb(params))
```
Output:
```
Char types / q data types dictionary: {'b': 'boolean', 'x': 'byte', 'h': 'short', 'i': 'int', 'j': 'long', 'e': 'real', 'f': 'float', 'c': 'char', 's': 'symbol', 'p': 'timestamp', 'm': 'month', 'd': 'date', 'z': '(datetime)', 'n': 'timespan', 'u': 'minute', 'v': 'second', 't': 'time'}
(['time', 'sym', 'price', 'qty'], ['timestamp', 'symbol', 'float', 'long'])
```
|
62,503,638
|
I have a data frame as shown below. which is a sales data of two health care product starting from December 2016 to November 2018.
```
product price sale_date discount
A 50 2016-12-01 5
A 50 2017-01-03 4
B 200 2016-12-24 10
A 50 2017-01-18 3
B 200 2017-01-28 15
A 50 2017-01-18 6
B 200 2017-01-28 20
A 50 2017-04-18 6
B 200 2017-12-08 25
A 50 2017-11-18 6
B 200 2017-08-21 20
B 200 2017-12-28 30
A 50 2018-03-18 10
B 300 2018-06-08 45
B 300 2018-09-20 50
A 50 2018-11-18 8
B 300 2018-11-28 35
```
From the above data I would like to plot the month-wise total sale price and total discount in a bar plot for each product using Python.
So I would like to have two line plots for product A
```
X axis = year and month
Y axis1 = Total sale price
Y axis2 = Total discount price
```
The intention of these plots is to show the impact of discounts on sales.
|
2020/06/21
|
[
"https://Stackoverflow.com/questions/62503638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8901845/"
] |
The problem with the first query is that it returns no rows if there is 1 row (or less) in the table.
It looks like they consider that an empty resultset is not the correct answer in this case. Instead, they always want one row as a result that contains a `null` value (which indicates the absence of the Nth salary in the table).
This is what the second query does. It `SELECT`s the (scalar) result of the initial query - so it always produces one row. If the subquery returns something, you get that value as a result, else you get `null`.
Consider this simple example:
```
select 1 where 0 = 1 -- returns no rows
select (select 1 where 0 = 1) -- returns one row with a "null" value
```
|
This query:
```
SELECT DISTINCT Salary AS SecondHighestSalary -- (DISTINCT is really not needed)
FROM Employee
ORDER BY Salary DESC
LIMIT 1 OFFSET 1
```
in case the table has only 1 row, does not return `null`.
It returns nothing (no rows).
But when it is placed inside another query as a derived column:
```
SELECT
(
SELECT DISTINCT Salary
FROM Employee
ORDER BY Salary DESC
LIMIT 1 OFFSET 1
) AS SecondHighestSalary
```
the value of that column returned will be `null` because every column in a query contains either a value or `null` which has the meaning of *unknown* or *missing*.
|
35,700,781
|
I have a small Python app that produces a form, the user enters some strings in and it collects them as an array and adds (or tries to) that array as a value of a key in Google's Memcache.
This is the script:
```
import webapp2
from google.appengine.api import memcache
MAIN_PAGE_HTML = """\
<html>
<body>
<form action="/get" method="post">
<div><input name="ID"/></div>
<div><input name="param1"/></div>
<div><input name="param2"/></div>
<div><input name="param3"/></div>
<div><input name="param4"/></div>
<div><input name="param5"/></div>
<div><input type="submit" value="Change Status"></div>
</form>
</body>
</html>
"""
class MainPage(webapp2.RequestHandler):
def get(self):
self.response.write(MAIN_PAGE_HTML)
class status(webapp2.RequestHandler):
def post(self):
paramarray= (self.request.get_all('param1'),
self.request.get_all('param2'),
self.request.get_all('param3'),
self.request.get_all('param4'),
self.request.get_all('param5'))
array1 = tuple(paramarray)
memcache.add(key=(self.request.get_all('ID')), value=array1, time=3600)
app = webapp2.WSGIApplication([
('/', MainPage),
('/get', status),
], debug=True)
```
I've tried setting `paramarray` as a tuple, not a list. Still getting the same error:
```
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~gtech-iot/1.391041688070473184/main.py", line 41, in post
memcache.add(key=(self.request.get_all('ID')), value=array1, time=3600)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/memcache/__init__.py", line 785, in add
namespace=namespace)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/memcache/__init__.py", line 868, in _set_with_policy
rpc = self._set_multi_async_with_policy(policy, {key: value},
TypeError: unhashable type: 'list'
```
I tried curly braces, regular and square brackets, with and without the `array1=..` statement.
Please help,
Thanks
|
2016/02/29
|
[
"https://Stackoverflow.com/questions/35700781",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3136727/"
] |
Given that you work with system logs and their format is known and stable, my approach would be something like:
* identify a set of keywords (either common, or one per log)
* for each log, iterate line by line
* once keywords match, add the relevant information from each line in e.g. a dictionary
You could use shell tools (like `grep`, `cut` and/or `awk`) to pre-process the log and extract relevant lines from the log (I assume you only need e.g. error entries).
You can use something like [this](https://stackoverflow.com/a/16017858/515948) as a starting point.
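A minimal sketch of that keyword-and-dictionary idea (the keywords, log path and "relevant information" below are placeholders, adjust them to your actual log format):
```
keywords = ["ERROR", "CRITICAL"]       # hypothetical keywords
entries = {}

with open("system.log") as log:        # hypothetical log file
    for lineno, line in enumerate(log, start=1):
        if any(keyword in line for keyword in keywords):
            # keep whatever is relevant, e.g. the first two fields of the line
            fields = line.split()
            entries[lineno] = (fields[:2], line.strip())

print(len(entries), "matching lines collected")
```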
|
If you want to use a tool then you can use ELK (Elasticsearch, Logstash and Kibana).
If not, then you have to read the log file first and then apply a regex according to your requirements.
|
35,700,781
|
I have a small Python app that produces a form, the user enters some strings in and it collects them as an array and adds (or tries to) that array as a value of a key in Google's Memcache.
This is the script:
```
import webapp2
from google.appengine.api import memcache
MAIN_PAGE_HTML = """\
<html>
<body>
<form action="/get" method="post">
<div><input name="ID"/></div>
<div><input name="param1"/></div>
<div><input name="param2"/></div>
<div><input name="param3"/></div>
<div><input name="param4"/></div>
<div><input name="param5"/></div>
<div><input type="submit" value="Change Status"></div>
</form>
</body>
</html>
"""
class MainPage(webapp2.RequestHandler):
def get(self):
self.response.write(MAIN_PAGE_HTML)
class status(webapp2.RequestHandler):
def post(self):
paramarray= (self.request.get_all('param1'),
self.request.get_all('param2'),
self.request.get_all('param3'),
self.request.get_all('param4'),
self.request.get_all('param5'))
array1 = tuple(paramarray)
memcache.add(key=(self.request.get_all('ID')), value=array1, time=3600)
app = webapp2.WSGIApplication([
('/', MainPage),
('/get', status),
], debug=True)
```
I've tried setting `paramarray` as a tuple, not a list. Still getting the same error:
```
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~gtech-iot/1.391041688070473184/main.py", line 41, in post
memcache.add(key=(self.request.get_all('ID')), value=array1, time=3600)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/memcache/__init__.py", line 785, in add
namespace=namespace)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/memcache/__init__.py", line 868, in _set_with_policy
rpc = self._set_multi_async_with_policy(policy, {key: value},
TypeError: unhashable type: 'list'
```
I tried curly braces, regular and square brackets, with and without the `array1=..` statement.
Please help,
Thanks
|
2016/02/29
|
[
"https://Stackoverflow.com/questions/35700781",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3136727/"
] |
Given that you work with system logs and their format is known and stable, my approach would be something like:
* identify a set of keywords (either common, or one per log)
* for each log, iterate line by line
* once keywords match, add the relevant information from each line in e.g. a dictionary
You could use shell tools (like `grep`, `cut` and/or `awk`) to pre-process the log and extract relevant lines from the log (I assume you only need e.g. error entries).
You can use something like [this](https://stackoverflow.com/a/16017858/515948) as a starting point.
|
In case you might be interested in extracting some data and save it to a `.txt` file, the following sample code might be helpful:
```
import re
import sys
import os.path
expDate = '2018-11-27'
expTime = '11-21-09'
infile = r"/home/xenial/Datasets/CIVIT/Nov_27/rover/NMND17420010S_"+expDate+"_"+expTime+".LOG"
keep_phrases = ["FINESTEERING"]
with open(infile) as f:
f = f.readlines()
with open('/home/xenial/Datasets/CIVIT/Nov_27/rover/GPS_'+expDate+'_'+expTime+'.txt', 'w') as file:
file.write("gpsWeek,gpsSOW\n")
for line in f:
for phrase in keep_phrases:
if phrase in line:
resFind = re.findall('\.*?FINESTEERING,(\d+).*?,(\d+\.\d*)',line)[0]
gpsWeek = re.findall('\.*?FINESTEERING,(\d+)',line)[0]
gpsWeekStr = str(gpsWeek)
gpsSOW = re.findall('\.*?FINESTEERING,'+ gpsWeekStr + ',(\d+\.\d*)',line)[0]
gpsSOWStr = str(gpsSOW)
file.write(gpsWeekStr+','+gpsSOWStr+'\n')
break
print ("------------------------------------")
```
In my case, FINESTEERING was an interesting keyword in my `.log` file to extract numbers, including GPS\_Week and GPS\_Seconds\_of\_Weeks. You may modify this code to suit your own application.
|
36,913,153
|
When I do this calculation
`2*(5+5/(3+3))*3`
I get 30 in Python (2.7).
But it seems that `2*(5+5/(3+3))*3` should be equal to `35`.
Can someone tell me why python gives me the answer of 30 instead of 35?
I've tested with JavaScript, Lua and Mac Calculator and they show me 35.
Why does Python calculate it wrong?
<http://ideone.com/yiFJxS>
|
2016/04/28
|
[
"https://Stackoverflow.com/questions/36913153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4841229/"
] |
This happens because of the piece `5/(3 + 3)`, which evaluates to 0 under Python 2's integer division. You need to make at least one of the operands a float.
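For example (this runs under both 2.7 and 3; nothing beyond the standard library is assumed):
```
from __future__ import division, print_function

# Under plain Python 2, 5/(3+3) is integer division and truncates to 0,
# so the whole expression becomes 2*(5+0)*3 == 30.
print(2*(5+5/(3+3))*3)      # 35.0 with true division (the __future__ import)
print(2*(5+5.0/(3+3))*3)    # 35.0 by making one operand a float explicitly
```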
|
Always assume it's an issue with something you're doing rather than with an entire coding language!
It works fine for me in the Python shell. 35 is the expected answer and 35 is what we get! Most likely something on your end, a mis-type, or you've mis-commented something out. This is from copy-pasting your code above.
edit: Who is more likely to be wrong, an individual or the masses? Occam's razor. In this case I assumed he was using 3 and not 2.7, which led to the correct assumption, as it does work in 3.
|
52,958,847
|
I am trying to calculate a DTW distance matrix which will look at 150,000 time series, each having between 13 and 24 observations - that is, the produced distance matrix will be a list of approximately (150,000 x 150,000)/2 = 11,250,000,000 entries.
I am running this over a big data cluster of size 200GB, but I am getting the memory error.
I am using the dtaidistance library and used the **distance\_matrix\_fast** function so that I could pass the entire set of time series at once in a list, but I was getting a similar memory error coming out of the package. The error was thrown straight away as soon as I ran it, though. I also used the **block function** in the package, but it seems like it is not able to take all the time series at once to start with.
So I decided to go through a loop, calculate the distance between every pair of time series, and append it to a list. However, I get the same
memory error again, as follows, after running for a long while:
File "/root/anaconda2/test/final\_clustering\_2.py", line 93, in
distance\_matrix\_scaled.append(dtw.distance\_fast(Series\_scaled[i], Series\_scaled[j]))
MemoryError
this is my code below:
```
distance_matrix_scaled = []
m=len(Series_scaled)
#m=100000
for i in range(0, m - 1):
for j in range(i + 1, m):
distance_matrix_scaled.append(dtw.distance_fast(Series_scaled[i], Series_scaled[j]))
# save it to the disk
np.save('distance_entire', distance_matrix_scaled)
```
Could you please help me understand why I am getting this memory error? Is it the Python list limit or my cluster size causing this? Is there a clever way or format in numpy I could use to navigate this problem?
|
2018/10/23
|
[
"https://Stackoverflow.com/questions/52958847",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6235045/"
] |
Thanks to @KacperMadej for this solution on [github](https://github.com/highcharts/highcharts-angular/issues/89).
To load a theme simply add the following somewhere in the project:
```js
import * as Highcharts from 'highcharts';
require('highcharts/themes/dark-blue')(Highcharts);
```
|
The theme factory is now the default export of `highcharts/themes/<theme-name>` so this will work:
```
import * as Highcharts from 'highcharts';
import theme from 'highcharts/themes/dark-unica';
theme(Highcharts);
```
|
52,958,847
|
I am trying to calculate a DTW distance matrix which will look at 150,000 time series, each having between 13 and 24 observations - that is, the produced distance matrix will be a list of approximately (150,000 x 150,000)/2 = 11,250,000,000 entries.
I am running this over a big data cluster of size 200GB, but I am getting the memory error.
I am using the dtaidistance library and used the **distance\_matrix\_fast** function so that I could pass the entire set of time series at once in a list, but I was getting a similar memory error coming out of the package. The error was thrown straight away as soon as I ran it, though. I also used the **block function** in the package, but it seems like it is not able to take all the time series at once to start with.
So I decided to go through a loop, calculate the distance between every pair of time series, and append it to a list. However, I get the same
memory error again, as follows, after running for a long while:
File "/root/anaconda2/test/final\_clustering\_2.py", line 93, in
distance\_matrix\_scaled.append(dtw.distance\_fast(Series\_scaled[i], Series\_scaled[j]))
MemoryError
this is my code below:
```
distance_matrix_scaled = []
m=len(Series_scaled)
#m=100000
for i in range(0, m - 1):
for j in range(i + 1, m):
distance_matrix_scaled.append(dtw.distance_fast(Series_scaled[i], Series_scaled[j]))
# save it to the disk
np.save('distance_entire', distance_matrix_scaled)
```
Could you please help me understand why I am getting this memory error? Is it the Python list limit or my cluster size causing this? Is there a clever way or format in numpy I could use to navigate this problem?
|
2018/10/23
|
[
"https://Stackoverflow.com/questions/52958847",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6235045/"
] |
Thanks to @KacperMadej for this solution on [github](https://github.com/highcharts/highcharts-angular/issues/89).
To load a theme simply add the following somewhere in the project:
```js
import * as Highcharts from 'highcharts';
require('highcharts/themes/dark-blue')(Highcharts);
```
|
I found another way to do this. The versions I am using are
```
"angular-highcharts": "^12.0.0",
"highcharts": "^9.3.0",
```
In your module imports you have
```
import {ChartModule, HIGHCHARTS_MODULES} from "angular-highcharts";
import theme from 'highcharts/themes/dark-blue';
```
And in the providers section you have
```
{
provide: HIGHCHARTS_MODULES,
useFactory: () => [ theme ]
}
```
|
52,958,847
|
I am trying to calculate a DTW distance matrix which will look at 150,000 time series, each having between 13 and 24 observations - that is, the produced distance matrix will be a list of approximately (150,000 x 150,000)/2 = 11,250,000,000 entries.
I am running this over a big data cluster of size 200GB, but I am getting the memory error.
I am using the dtaidistance library and used the **distance\_matrix\_fast** function so that I could pass the entire set of time series at once in a list, but I was getting a similar memory error coming out of the package. The error was thrown straight away as soon as I ran it, though. I also used the **block function** in the package, but it seems like it is not able to take all the time series at once to start with.
So I decided to go through a loop, calculate the distance between every pair of time series, and append it to a list. However, I get the same
memory error again, as follows, after running for a long while:
File "/root/anaconda2/test/final\_clustering\_2.py", line 93, in
distance\_matrix\_scaled.append(dtw.distance\_fast(Series\_scaled[i], Series\_scaled[j]))
MemoryError
this is my code below:
```
distance_matrix_scaled = []
m=len(Series_scaled)
#m=100000
for i in range(0, m - 1):
for j in range(i + 1, m):
distance_matrix_scaled.append(dtw.distance_fast(Series_scaled[i], Series_scaled[j]))
# save it to the disk
np.save('distance_entire', distance_matrix_scaled)
```
Could you please help me understand why I am getting this memory error? Is it the Python list limit or my cluster size causing this? Is there a clever way or format in numpy I could use to navigate this problem?
|
2018/10/23
|
[
"https://Stackoverflow.com/questions/52958847",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6235045/"
] |
The theme factory is now the default export of `highcharts/themes/<theme-name>` so this will work:
```
import * as Highcharts from 'highcharts';
import theme from 'highcharts/themes/dark-unica';
theme(Highcharts);
```
|
I found another way to do this. The versions I am using are
```
"angular-highcharts": "^12.0.0",
"highcharts": "^9.3.0",
```
In your module imports you have
```
import {ChartModule, HIGHCHARTS_MODULES} from "angular-highcharts";
import theme from 'highcharts/themes/dark-blue';
```
And in the providers section you have
```
{
provide: HIGHCHARTS_MODULES,
useFactory: () => [ theme ]
}
```
|
32,492,183
|
When I run `python manage.py runserver`, everything starts out fine, but then I get a `SystemCheckError` stating that Pillow is not installed; however, Pillow is definitely installed on this machine.
This is the error I receive:
>
> Performing system checks...
>
>
> Unhandled exception in thread started by Traceback (most recent call last): File
> "/usr/local/lib/python2.7/dist-packages/django/utils/autoreload.py",
> line 225, in wrapper
> fn(\*args, \*\*kwargs) File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/runserver.py",
> line 110, in inner\_run
> self.validate(display\_num\_errors=True) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py",
> line 468, in validate
> return self.check(app\_configs=app\_configs, display\_num\_errors=display\_num\_errors) File
> "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py",
> line 527, in check
> raise SystemCheckError(msg) django.core.management.base.SystemCheckError: SystemCheckError: System
> check identified some issues:
>
>
> ERRORS: recipes.Recipes.primary\_image: (fields.E210) Cannot use
> ImageField because Pillow is not installed.
> HINT: Get Pillow at <https://pypi.python.org/pypi/Pillow> or run command "pip install Pillow". recipes.Recipes.thumbnail\_image:
> (fields.E210) Cannot use ImageField because Pillow is not installed.
> HINT: Get Pillow at <https://pypi.python.org/pypi/Pillow> or run command "pip install Pillow".
>
>
>
I'm running this on an Ubuntu machine. Any ideas what's up?
|
2015/09/10
|
[
"https://Stackoverflow.com/questions/32492183",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4501444/"
] |
This should do the trick:
`/(?| (")((?:\\"|[^"])+)\1 | (')((?:\\'|[^'])+)\1 )/xg`
[Demo](https://regex101.com/r/cG3qR3/2)
---------------------------------------
BTW: [regex101.com](https://regex101.com/r/rX4rL7/1) is a great resource to use (which is where I got the regex above)
Update
------
The first one I posted works for PHP, [here is one](https://regex101.com/r/jB6hM0/1) for JS
`/"([^"\\]*(?:\\.[^"\\]*)*)"|\w+|'([^'\\]*(?:\\.[^'\\]*)*)'/g`
|
Maybe I read your question incorrectly but this is working for me
`/\".+\"/gm`
<https://regex101.com/r/wF0yN4/1>
|
32,492,183
|
When I run `python manage.py runserver`, everything starts out fine, but then I get a `SystemCheckError` stating that Pillow is not installed; however, Pillow is definitely installed on this machine.
This is the error I receive:
>
> Performing system checks...
>
>
> Unhandled exception in thread started by Traceback (most recent call last): File
> "/usr/local/lib/python2.7/dist-packages/django/utils/autoreload.py",
> line 225, in wrapper
> fn(\*args, \*\*kwargs) File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/runserver.py",
> line 110, in inner\_run
> self.validate(display\_num\_errors=True) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py",
> line 468, in validate
> return self.check(app\_configs=app\_configs, display\_num\_errors=display\_num\_errors) File
> "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py",
> line 527, in check
> raise SystemCheckError(msg) django.core.management.base.SystemCheckError: SystemCheckError: System
> check identified some issues:
>
>
> ERRORS: recipes.Recipes.primary\_image: (fields.E210) Cannot use
> ImageField because Pillow is not installed.
> HINT: Get Pillow at <https://pypi.python.org/pypi/Pillow> or run command "pip install Pillow". recipes.Recipes.thumbnail\_image:
> (fields.E210) Cannot use ImageField because Pillow is not installed.
> HINT: Get Pillow at <https://pypi.python.org/pypi/Pillow> or run command "pip install Pillow".
>
>
>
I'm running this on an Ubuntu machine. Any ideas what's up?
|
2015/09/10
|
[
"https://Stackoverflow.com/questions/32492183",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4501444/"
] |
Simplest way would be to do:
```
/"[^"]*?"/g
```
This will return an array with `"hi"`, `"abc\nddk"` and `"gh"` and you can do something like `piece.replace(/"/g, "")` on individual pieces to get rid of the `"`. If you don't like that then rather than do a `match` you can do a [search and don't replace](http://ejohn.org/blog/search-and-dont-replace/)
```
var matches = [];
str.replace(/"([^"]*?)"/g, function (_, match) {
matches.push(match);
});
```
|
This should do the trick:
`/(?| (")((?:\\"|[^"])+)\1 | (')((?:\\'|[^'])+)\1 )/xg`
[Demo](https://regex101.com/r/cG3qR3/2)
---------------------------------------
BTW: [regex101.com](https://regex101.com/r/rX4rL7/1) is a great resource to use (which is where I got the regex above)
Update
------
The first one I posted works for PHP, [here is one](https://regex101.com/r/jB6hM0/1) for JS
`/"([^"\\]*(?:\\.[^"\\]*)*)"|\w+|'([^'\\]*(?:\\.[^'\\]*)*)'/g`
|
32,492,183
|
When I run `python manage.py runserver`, everything starts out fine, but then I get a `SystemCheckError` stating that Pillow is not installed; however, Pillow is definitely installed on this machine.
This is the error I receive:
>
> Performing system checks...
>
>
> Unhandled exception in thread started by Traceback (most recent call last): File
> "/usr/local/lib/python2.7/dist-packages/django/utils/autoreload.py",
> line 225, in wrapper
> fn(\*args, \*\*kwargs) File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/runserver.py",
> line 110, in inner\_run
> self.validate(display\_num\_errors=True) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py",
> line 468, in validate
> return self.check(app\_configs=app\_configs, display\_num\_errors=display\_num\_errors) File
> "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py",
> line 527, in check
> raise SystemCheckError(msg) django.core.management.base.SystemCheckError: SystemCheckError: System
> check identified some issues:
>
>
> ERRORS: recipes.Recipes.primary\_image: (fields.E210) Cannot use
> ImageField because Pillow is not installed.
> HINT: Get Pillow at <https://pypi.python.org/pypi/Pillow> or run command "pip install Pillow". recipes.Recipes.thumbnail\_image:
> (fields.E210) Cannot use ImageField because Pillow is not installed.
> HINT: Get Pillow at <https://pypi.python.org/pypi/Pillow> or run command "pip install Pillow".
>
>
>
I'm running this on an Ubuntu machine. Any ideas what's up?
|
2015/09/10
|
[
"https://Stackoverflow.com/questions/32492183",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4501444/"
] |
Simplest way would be to do:
```
/"[^"]*?"/g
```
This will return an array with `"hi"`, `"abc\nddk"` and `"gh"` and you can do something like `piece.replace(/"/g, "")` on individual pieces to get rid of the `"`. If you don't like that then rather than do a `match` you can do a [search and don't replace](http://ejohn.org/blog/search-and-dont-replace/)
```
var matches = [];
str.replace(/"([^"]*?)"/g, function (_, match) {
matches.push(match);
});
```
|
Maybe I read your question incorrectly but this is working for me
`/\".+\"/gm`
<https://regex101.com/r/wF0yN4/1>
|
18,802,563
|
**Background**:
My Python program handles relatively large quantities of data, which can be generated in-program, or imported. The data is then processed, and during one of these processes, the data is deliberately copied and then manipulated, cleaned for duplicates and then returned to the program for further use. The data I'm handling is very precise (up to 16 decimal places), and maintaining this accuracy to at least 14dp is vital. However, mathematical operations of course can return slight variations in my floats, such that two values are identical to 14dp, but may vary ever so slightly to 16dp, therefore meaning the built in `set()` function doesn't correctly remove such 'duplicates' (I used this method to prototype the idea, but it's not satisfactory for the finished program). I should also point out I may well be overlooking something simple! I am just interested to see what others come up with :)
**Question:**
What is the most efficient way to remove very-near-duplicates from a potentially very large data set?
**My Attempts**:
I have tried rounding the values themselves to 14dp, but this is of course not satisfactory as this leads to larger errors down the line. I have a potential solution to this problem, but I am not convinced it is as efficient or 'pythonic' as possible. My attempt involves finding the indices of list entries that match to x dp, and then removing one of the matching entries.
Thank you in advance for any advice! Please let me know if there's anything you wish to be clarified, or of course if I'm overlooking something very simple (I may be at a point where I'm over-thinking it).
**Clarification on 'Duplicates'**:
Example of one of my 'duplicate' entries: 603.73066958946424, 603.73066958946460, the solution would remove one of these values.
**Note on decimal.Decimal:**
This could work if it was guaranteed that all *imported* data did not already have some near-duplicates (which it often does).
|
2013/09/14
|
[
"https://Stackoverflow.com/questions/18802563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1644700/"
] |
You really want to use NumPy if you're handling large quantities of data. Here's how I would do it :
Import NumPy :
```
import numpy as np
```
Generate 8000 high-precision floats (128-bits will be enough for your purposes, but note that I'm converting the 64-bits output of `random` to 128 just to fake it. Use your real data here.) :
```
a = np.float128(np.random.random((8000,)))
```
Find the indexes of the unique elements in the rounded array :
```
_, unique = np.unique(a.round(decimals=14), return_index=True)
```
And take those indexes from the original (non-rounded) array :
```
no_duplicates = a[unique]
```
|
Why don't you create a dict that maps the 14dp values to the corresponding full 16dp values:
```
d = collections.defaultdict(list)
for x in l:
d[round(x, 14)].append(x)
```
Now if you just want "unique" (by your definition) values, you can do
```
unique = [v[0] for v in d.values()]
```
|
67,541,366
|
I have a set of filter objects, which inherit the properties of a `Filter` base class
```
class Filter():
def __init__(self):
self.filterList = []
def __add__(self,f):
self.filterList += f.filterList
def match(self, entry):
for f in self.filterList:
if not f(entry):
return False
return True
class thisFilter(Filter):
def __init__(self, args):
super().__init__()
....
def thisFilterFunc(entry)->bool:
return True
self.filterList.append(thisFilterFunc)
```
These filter classes are used by various functions to filter entries
```
def myfunc(myfilter, arg1, ...):
...
for entry in entries:
if myfilter.match(entry):
... do something
```
Multiple filters can be added (logical and) by adding instances of these filters:
```
bigFilter = filter1 + filter2 + ...
```
This is all coming together quite well, but I would love to generalize this in a way that handles more complex logical constraints, e.g.
```
bigFilter = (filter1 and filter2) or (filter3 and not filter4)
```
It feels like this should somehow be possible by overriding `__bool__` on the class instead of using `__add__`, but the boolean value of the class is only known for a given entry and not during assembly of the filter.
Any ideas on how to make this possible? Or is there maybe a more pythonic way to do this?
|
2021/05/14
|
[
"https://Stackoverflow.com/questions/67541366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4177926/"
] |
I would go for something like this:
```py
from __future__ import annotations
from typing import Any, Callable

class Filter:
def __init__(self, filter: Callable[[Any], bool]):
self.filter = filter
def __add__(self, added: Filter):
return OrFilter(self, added)
def __mul__(self, mult: Filter):
return AndFilter(self, mult)
def __invert__(self):
return Filter(lambda x: not self(x))
def __call__(self, entry):
return self.filter(entry)
class AndFilter(Filter):
def __init__(self, left: Filter, right: Filter):
self.left = left
self.right = right
def __call__(self, entry):
return self.left(entry) and self.right(entry)
class OrFilter(Filter):
def __init__(self, left: Filter, right: Filter):
self.left = left
self.right = right
def __call__(self, entry):
return self.left(entry) or self.right(entry)
```
Then you can create filters, and use them as `(filterA + ~filterB) * filterC`
You'll probably want to replace that `Any` with a generic type, so that your filter knows what it's dealing with.
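A quick usage sketch of the classes above (the predicates here are made up, just to show how the operators compose):
```
is_even = Filter(lambda x: x % 2 == 0)
is_small = Filter(lambda x: x < 10)
is_pos = Filter(lambda x: x > 0)

combined = (is_even + ~is_small) * is_pos    # (even OR not small) AND positive
print(combined(4), combined(15), combined(-2))   # True True False
```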
|
Thanks, @njzk2 for the solution. In my code I used `|` and `&`. To be backwards compatible I also kept the `.match()` instead of using `__call__()` and also added the `__add__` again.
```
from __future__ import annotations
from typing import Any, Callable

class Filter:
def __init__(self, filter: Callable[[Any], bool]):
self.filter = filter
def __or__(self, ored: Filter):
return OrFilter(self, ored)
def __and__(self, anded: Filter):
return AndFilter(self, anded)
def __add__(self, added: Filter):
# self as __and__
return self.__and__(added)
def __invert__(self):
return Filter(lambda x: not self.filter(x))
def match(self, entry):
return self.filter(entry)
class AndFilter(Filter):
def __init__(self, left: Filter, right: Filter):
self.left = left
self.right = right
def filter(self, entry):
return self.left.filter(entry) and self.right.filter(entry)
class OrFilter(Filter):
def __init__(self, left: Filter, right: Filter):
self.left = left
self.right = right
def filter(self, entry):
return self.left.filter(entry) or self.right.filter(entry)
class MyFilter(Filter):
def __init__(self, args):
...
def ffunc(entry) -> bool:
...
super().__init__(ffunc)
```
|
48,490,272
|
I'm trying to launch Safari with Selenium in Python with all my sessions logged in (e.g. Gmail) so I don't have to log in manually.
The easy solution would be to launch Safari with the default user profile, but I can't find documentation on how to do this.
```
from selenium import webdriver
driver = webdriver.Safari()
url = 'https://www.gmail.com/'
driver.get(url)
```
Just for reference, the code below is the code for Chrome. What is the Safari equivalent?
```
options.add_argument("user-data-dir=/Users/alexiseggermont/Library/Application Support/Google/Chrome/Default/") #Path to your chrome profile
driver = webdriver.Chrome(chrome_options=options)
driver.get(url)
```
|
2018/01/28
|
[
"https://Stackoverflow.com/questions/48490272",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2827060/"
] |
In the "[Creating Builds](https://dojotoolkit.org/documentation/tutorials/1.10/build/)" tutorial, it says:
>
> You might be asking yourself "if we build everything we need into a
> layer, why do we worry about the rest of the modules?" If you were to
> only keep the layer files and not have the rest of the modules
> available, you would lose that as an option to keep your application
> working without having to do a whole build again to access those
> modules.
>
>
>
That said, I agree with you, it would be nice to be able to build a minimal distribution containing only the layers - because even though the browser may only download the dojo/dojo.js layer, it's annoying to have to distribute the big 100MB directory.
However, even if the build script only copied the layer files, the layers may need various resource files which are not declared in the AMD dependency graph (e.g. images or fonts).
In my dojo projects, I've usually resorted to manually specifying and copying the required to a "minimal build" directory at the end of my build script. As long as it's a small application, this is usually manageable. It is certainly a bit annoying and error-prone though, so if anyone knows of a better way to do what you're asking, I'd love to hear about it.
```bash
node ../../dojo/dojo.js load=build --profile "$PROFILE" --releaseDir "$DISTDIR" $@
# ...
FILES=(
index.html
myapp/resources/myapp.css
myapp/resources/logo.svg
dojo/dojo.js
dojo/resources/blank.gif
dijit/themes/claro/form/images/buttonArrows.png
)
for file in ${FILES[*]}; do
mkdir -p $MINIMAL_DIST_DIR/`dirname $file`
cp $DISTDIR/myapp/$file $MINIMAL_DIST_DIR/$file
done
```
(The file myapp.css `@imports` dojo.css etc, so all the CSS is built into that single file.)
|
I don't know if this is useful, but I have a situation where I am creating a layer which loads from a completely different location to the core dojo app.
This means that I actually don't need `dojo`, `dijit` and `dojox` to be in my build. I was having the issue of all files being bundled into my location whether I wanted them or not.
What I have opted for is to change the destination of these files to a folder I can just ignore which is outside of my application folder by using `destLocation`.
So I have this in my build script
```
packages: [
{
name: "dojo",
location: "./dtk/dojo",
destLocation: '../directory/outside/of/codebase/dojo'
},
{
name: "dijit", location: "./dtk/dijit",
destLocation: '../directory/outside/of/codebase/dijit' },
{
name: "dojox",
location: "./dtk/dojox",
destLocation: '../directory/outside/of/codebase/dojox'
},
{
name: "applayer",
location: "./location/of/my/release/folder/",
destLocation: './'
}
],
```
It's not perfect but it at least keeps the unnecessary packages out of my directory. I think the idea of bundling all the files into the directory is for the case where you run a require outside of your core layers.
If you did a hotfix in the client, for example, and you require a dojo module which isn't already in `require.cache`, then this request would fail. If you know that won't happen then you don't need the packages (hopefully).
|
57,978,333
|
While training a job on a SageMaker instance using H2O AutoML, the message "This H2OFrame is empty" came up after running the code. What should I do to fix the problem?
```
/opt/ml/input/config/hyperparameters.json
All Parameters:
{'nfolds': '5', 'training': "{'classification': 'true', 'target': 'y'}", 'max_runtime_secs': '3600'}
/opt/ml/input/config/resourceconfig.json
All Resources:
{'current_host': 'algo-1', 'hosts': ['algo-1'], 'network_interface_name': 'eth0'}
Waiting until DNS resolves: 1
10.0.182.83
Starting up H2O-3
Creating Connection to H2O-3
Attempt 0: H2O-3 not running yet...
Connecting to H2O server at http://127.0.0.1:54321... successful.
-------------------------- ----------------------------------------
-------------------------- ----------------------------------------
Beginning Model Training
Parse progress: |█████████████████████████████████████████████████████████| 100%
Classification - If you want to do a regression instead, set "classification":"false" in "training" params, inhyperparamters.json
Converting specified columns to categorical values:
[]
AutoML progress: |████████████████████████████████████████████████████████| 100%
This H2OFrame is empty.
Exception during training: Argument `model` should be a ModelBase, got NoneType None
Traceback (most recent call last):
File "/opt/program/train", line 138, in _train_model
h2o.save_model(aml.leader, path=model_path)
File "/root/.local/lib/python3.7/site-packages/h2o/h2o.py", line 969, in save_model
assert_is_type(model, ModelBase)
File "/root/.local/lib/python3.7/site-packages/h2o/utils/typechecks.py", line 457, in assert_is_type
skip_frames=skip_frames)
h2o.exceptions.H2OTypeError: Argument `model` should be a ModelBase, got NoneType None
H2O session _sid_8aba closed.
```
I'm wondering if it's a problem because of max\_runtime\_secs; my data has around 500 rows and 250,000 columns.
|
2019/09/17
|
[
"https://Stackoverflow.com/questions/57978333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8836963/"
] |
thanks @Marcel Mendes Reis for following up on your solution in the comments. I will repost here for others to easily find:
*I realized the issue was due to the max\_runtime. When I trained the model with more time I didn't have the problem.*
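For reference, a sketch of how the time budget can be widened in the H2O Python API (the file name and target column below are placeholders, not the asker's actual data):
```
import h2o
from h2o.automl import H2OAutoML

h2o.init()
train = h2o.import_file("train.csv")   # placeholder path

# Give AutoML a larger time budget (and optionally cap the number of models)
# so at least one leader model finishes training before it stops.
aml = H2OAutoML(max_runtime_secs=7200, max_models=20, nfolds=5, seed=1)
aml.train(y="y", training_frame=train)

print(aml.leaderboard)
```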
|
Doing some tests, I realized that the problem was because of the max\_runtime; I believe I didn't allow the model enough time to train.
|
72,122,475
|
I have a custom field in the employee module in Odoo to display the age.
That field is calculated from the birthday field
```
for record in self:
today = datetime.date.today()
record['x_studio_age_2'] = today.year - record['birthday'].year - ((today.month, today.day) < (record['birthday'].month, record['birthday'].day))
```
The age field works, but I get an error when I try to import a CSV:
```
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/odoo/tools/safe_eval.py", line 330, in safe_eval
return unsafe_eval(c, globals_dict, locals_dict)
File "", line 3, in <module>
AttributeError: 'bool' object has no attribute 'year'
```
So, I have to remove the code, but now I have to update the age of all employees.
Is anything wrong with the code?
|
2022/05/05
|
[
"https://Stackoverflow.com/questions/72122475",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7259851/"
] |
I think you should do someting like that.
`False` means value is not set.
```py
today = datetime.date.today()
for record in self:
if record['birthday']:
record['x_studio_age_2'] = today.year - record['birthday'].year - ((today.month, today.day) < (record['birthday'].month, record['birthday'].day))
else:
record['x_studio_age_2'] = False
```
|
You have to check the birthday first, because if it's not set it will return a False value (a boolean).
|
64,583,022
|
I imported a csv file with the variable “HEIGHT” which has 10 values.
```
HEIGHT
62
58
72
63
66
62
63
62
62
67
```
I want to use numpy and numpy only to count the number of times the value ‘62’ does not occur. The answer should be 6.
```
import numpy
import csv
with open('measurements.csv','r') as f:
rows=f.readline()
rows=rows.split(',')
rows=numpy.array([rows[2:4]])
print(rows)
```
I’m a beginner python learner practicing numpy, so I am not quite sure how to approach this problem.
|
2020/10/28
|
[
"https://Stackoverflow.com/questions/64583022",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14458035/"
] |
Using numpy you can do:
```
data = np.array([62, 58, 72, 63, 66, 62, 63, 62, 62, 67])
(data != 62).sum()
```
That is, `data != 62` will make a numpy Boolean array, and `sum` will add these up, with `True` as `1`, giving the total count.
|
If you want to use *numpy and numpy only*,
Load the file using numpy:
```
dataset = np.loadtxt('measurements.csv', delimiter=',')
```
Seems like the height variable is in the 3rd column (index *2*). When you use `loadtxt`, you'll get a 2D array that looks like a table. You need the column with index 2, and you can then use @tom10's solution:
```
(dataset[:, 2] != 62).sum()
```
And you have a complete numpy workflow.
**Note:** Read docs to understand functions used better.
* [numpy.loadtxt](https://numpy.org/doc/stable/reference/generated/numpy.loadtxt.html)
* [Comparisons in numpy arrays](https://www.python-course.eu/numpy_masking.php) (Tutorial - opinionated!)
* [Official docs on indexing](https://numpy.org/doc/stable/user/basics.indexing.html)
|
2,896,179
|
Can anyone help me out in fitting a gamma distribution in Python? Well, I've got some data: X and Y coordinates, and I want to find the gamma parameters that fit this distribution... In the [Scipy doc](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma), it turns out that a fit method actually exists, but I don't know how to use it. First, in which format must the argument "data" be, and how can I provide the second argument (the parameters) since that's what I'm looking for?
|
2010/05/24
|
[
"https://Stackoverflow.com/questions/2896179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348838/"
] |
Generate some gamma data:
```
import scipy.stats as stats
alpha = 5
loc = 100.5
beta = 22
data = stats.gamma.rvs(alpha, loc=loc, scale=beta, size=10000)
print(data)
# [ 202.36035683 297.23906376 249.53831795 ..., 271.85204096 180.75026301
# 364.60240242]
```
Here we fit the data to the gamma distribution:
```
fit_alpha, fit_loc, fit_beta=stats.gamma.fit(data)
print(fit_alpha, fit_loc, fit_beta)
# (5.0833692504230008, 100.08697963283467, 21.739518937816108)
print(alpha, loc, beta)
# (5, 100.5, 22)
```
|
If you want a long example including a discussion about estimating or fixing the support of the distribution, then you can find it in <https://github.com/scipy/scipy/issues/1359> and the linked mailing list message.
Preliminary support to fix parameters, such as location, during fit has been added to the trunk version of scipy.
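That support has since landed in released scipy. A minimal sketch (the generated sample here is just for illustration): passing `floc=0` fixes the location at 0 during the fit, so only the shape and scale are estimated.
```
import scipy.stats as stats

data = stats.gamma.rvs(5, loc=0, scale=22, size=10000)
# floc=0 pins loc at 0; only alpha and scale are estimated
fit_alpha, fit_loc, fit_beta = stats.gamma.fit(data, floc=0)
print(fit_alpha, fit_loc, fit_beta)  # fit_loc is exactly 0
```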
|
2,896,179
|
Can anyone help me out in fitting a gamma distribution in python? Well, I've got some data : X and Y coordinates, and I want to find the gamma parameters that fit this distribution... In the [Scipy doc](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma), it turns out that a fit method actually exists but I don't know how to use it :s.. First, in which format the argument "data" must be, and how can I provide the second argument (the parameters) since that's what I'm looking for?
|
2010/05/24
|
[
"https://Stackoverflow.com/questions/2896179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348838/"
] |
Generate some gamma data:
```
import scipy.stats as stats
alpha = 5
loc = 100.5
beta = 22
data = stats.gamma.rvs(alpha, loc=loc, scale=beta, size=10000)
print(data)
# [ 202.36035683 297.23906376 249.53831795 ..., 271.85204096 180.75026301
# 364.60240242]
```
Here we fit the data to the gamma distribution:
```
fit_alpha, fit_loc, fit_beta=stats.gamma.fit(data)
print(fit_alpha, fit_loc, fit_beta)
# (5.0833692504230008, 100.08697963283467, 21.739518937816108)
print(alpha, loc, beta)
# (5, 100.5, 22)
```
|
I was unsatisfied with the ss.gamma.rvs-function as it can generate negative numbers, something the gamma-distribution is supposed not to have. So I fitted the sample through expected value = mean(data) and variance = var(data) (see wikipedia for details) and wrote a function that can yield random samples of a gamma distribution without scipy (which I found hard to install properly, on a sidenote):
```
import random
import numpy
data = [6176, 11046, 670, 6146, 7945, 6864, 767, 7623, 7212, 9040, 3213, 6302, 10044, 10195, 9386, 7230, 4602, 6282, 8619, 7903, 6318, 13294, 6990, 5515, 9157]
# Fit gamma distribution through mean and variance
mean_of_distribution = numpy.mean(data)
variance_of_distribution = numpy.var(data)
def gamma_random_sample(mean, variance, size):
"""Yields a list of random numbers following a gamma distribution defined by mean and variance"""
g_alpha = mean*mean/variance
g_beta = mean/variance
for i in range(size):
yield random.gammavariate(g_alpha,1/g_beta)
# force integer values to get integer sample
grs = [int(i) for i in gamma_random_sample(mean_of_distribution,variance_of_distribution,len(data))]
print("Original data: ", sorted(data))
print("Random sample: ", sorted(grs))
# Original data: [670, 767, 3213, 4602, 5515, 6146, 6176, 6282, 6302, 6318, 6864, 6990, 7212, 7230, 7623, 7903, 7945, 8619, 9040, 9157, 9386, 10044, 10195, 11046, 13294]
# Random sample: [1646, 2237, 3178, 3227, 3649, 4049, 4171, 5071, 5118, 5139, 5456, 6139, 6468, 6726, 6944, 7050, 7135, 7588, 7597, 7971, 10269, 10563, 12283, 12339, 13066]
```
|
2,896,179
|
Can anyone help me out in fitting a gamma distribution in python? Well, I've got some data : X and Y coordinates, and I want to find the gamma parameters that fit this distribution... In the [Scipy doc](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma), it turns out that a fit method actually exists but I don't know how to use it :s.. First, in which format the argument "data" must be, and how can I provide the second argument (the parameters) since that's what I'm looking for?
|
2010/05/24
|
[
"https://Stackoverflow.com/questions/2896179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348838/"
] |
Generate some gamma data:
```
import scipy.stats as stats
alpha = 5
loc = 100.5
beta = 22
data = stats.gamma.rvs(alpha, loc=loc, scale=beta, size=10000)
print(data)
# [ 202.36035683 297.23906376 249.53831795 ..., 271.85204096 180.75026301
# 364.60240242]
```
Here we fit the data to the gamma distribution:
```
fit_alpha, fit_loc, fit_beta=stats.gamma.fit(data)
print(fit_alpha, fit_loc, fit_beta)
# (5.0833692504230008, 100.08697963283467, 21.739518937816108)
print(alpha, loc, beta)
# (5, 100.5, 22)
```
|
1): the "data" variable could be in the format of a python list or tuple, or a numpy.ndarray, which could be obtained by using:
```
data=numpy.array(data)
```
where the 2nd data in the above line should be a list or a tuple, containing your data.
2: the "parameter" variable is a first guess you could optionally provide to the fitting function as a starting point for the fitting process, so it could be omitted.
3: a note on @mondano's answer. The usage of moments (mean and variances) to work out the gamma parameters are reasonably good for large shape parameters (alpha>10), but could yield poor results for small values of alpha (See *Statistical methods in the atmospheric scineces* by Wilks, and THOM, H. C. S., 1958: A note on the gamma distribution. Mon. Wea. Rev., 86, 117–122.
Using Maximum Likelihood Estimators, as that implemented in the scipy module, is regarded a better choice in such cases.
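A short illustration of the optional first guess from point 2), as a sketch only, assuming scipy's generic `fit` interface: positional values after the data are starting guesses for the shape parameter(s), and the `loc`/`scale` keywords are starting values (not fixed values) for the remaining parameters.
```
import numpy
import scipy.stats as stats

data = numpy.asarray(stats.gamma.rvs(5, loc=100.5, scale=22, size=10000))
# 1.0 is an optional starting guess for the shape parameter alpha;
# loc and scale below are rough starting values, not fixed parameters
fit_alpha, fit_loc, fit_beta = stats.gamma.fit(data, 1.0, loc=data.min(), scale=data.std())
print(fit_alpha, fit_loc, fit_beta)
```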
|
2,896,179
|
Can anyone help me out in fitting a gamma distribution in python? Well, I've got some data : X and Y coordinates, and I want to find the gamma parameters that fit this distribution... In the [Scipy doc](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma), it turns out that a fit method actually exists but I don't know how to use it :s.. First, in which format the argument "data" must be, and how can I provide the second argument (the parameters) since that's what I'm looking for?
|
2010/05/24
|
[
"https://Stackoverflow.com/questions/2896179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348838/"
] |
Generate some gamma data:
```
import scipy.stats as stats
alpha = 5
loc = 100.5
beta = 22
data = stats.gamma.rvs(alpha, loc=loc, scale=beta, size=10000)
print(data)
# [ 202.36035683 297.23906376 249.53831795 ..., 271.85204096 180.75026301
# 364.60240242]
```
Here we fit the data to the gamma distribution:
```
fit_alpha, fit_loc, fit_beta=stats.gamma.fit(data)
print(fit_alpha, fit_loc, fit_beta)
# (5.0833692504230008, 100.08697963283467, 21.739518937816108)
print(alpha, loc, beta)
# (5, 100.5, 22)
```
|
[OpenTURNS](http://www.openturns.org/) has a simple way to do this with the `GammaFactory` class.
First, let's generate a sample:
```
import openturns as ot
gammaDistribution = ot.Gamma()
sample = gammaDistribution.getSample(100)
```
Then fit a Gamma to it:
```
distribution = ot.GammaFactory().build(sample)
```
Then we can draw the PDF of the Gamma:
```
import openturns.viewer as otv
otv.View(distribution.drawPDF())
```
which produces:
[](https://i.stack.imgur.com/sxvyt.png)
More details on this topic at: <http://openturns.github.io/openturns/latest/user_manual/_generated/openturns.GammaFactory.html>
|
2,896,179
|
Can anyone help me out in fitting a gamma distribution in python? Well, I've got some data : X and Y coordinates, and I want to find the gamma parameters that fit this distribution... In the [Scipy doc](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma), it turns out that a fit method actually exists but I don't know how to use it :s.. First, in which format the argument "data" must be, and how can I provide the second argument (the parameters) since that's what I'm looking for?
|
2010/05/24
|
[
"https://Stackoverflow.com/questions/2896179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348838/"
] |
I was unsatisfied with the ss.gamma.rvs-function as it can generate negative numbers, something the gamma-distribution is supposed not to have. So I fitted the sample through expected value = mean(data) and variance = var(data) (see wikipedia for details) and wrote a function that can yield random samples of a gamma distribution without scipy (which I found hard to install properly, on a sidenote):
```
import random
import numpy
data = [6176, 11046, 670, 6146, 7945, 6864, 767, 7623, 7212, 9040, 3213, 6302, 10044, 10195, 9386, 7230, 4602, 6282, 8619, 7903, 6318, 13294, 6990, 5515, 9157]
# Fit gamma distribution through mean and variance
mean_of_distribution = numpy.mean(data)
variance_of_distribution = numpy.var(data)
def gamma_random_sample(mean, variance, size):
"""Yields a list of random numbers following a gamma distribution defined by mean and variance"""
g_alpha = mean*mean/variance
g_beta = mean/variance
for i in range(size):
yield random.gammavariate(g_alpha,1/g_beta)
# force integer values to get integer sample
grs = [int(i) for i in gamma_random_sample(mean_of_distribution,variance_of_distribution,len(data))]
print("Original data: ", sorted(data))
print("Random sample: ", sorted(grs))
# Original data: [670, 767, 3213, 4602, 5515, 6146, 6176, 6282, 6302, 6318, 6864, 6990, 7212, 7230, 7623, 7903, 7945, 8619, 9040, 9157, 9386, 10044, 10195, 11046, 13294]
# Random sample: [1646, 2237, 3178, 3227, 3649, 4049, 4171, 5071, 5118, 5139, 5456, 6139, 6468, 6726, 6944, 7050, 7135, 7588, 7597, 7971, 10269, 10563, 12283, 12339, 13066]
```
|
If you want a long example including a discussion about estimating or fixing the support of the distribution, then you can find it in <https://github.com/scipy/scipy/issues/1359> and the linked mailing list message.
Preliminary support to fix parameters, such as location, during fit has been added to the trunk version of scipy.
|
2,896,179
|
Can anyone help me out in fitting a gamma distribution in python? Well, I've got some data : X and Y coordinates, and I want to find the gamma parameters that fit this distribution... In the [Scipy doc](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma), it turns out that a fit method actually exists but I don't know how to use it :s.. First, in which format the argument "data" must be, and how can I provide the second argument (the parameters) since that's what I'm looking for?
|
2010/05/24
|
[
"https://Stackoverflow.com/questions/2896179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348838/"
] |
If you want a long example including a discussion about estimating or fixing the support of the distribution, then you can find it in <https://github.com/scipy/scipy/issues/1359> and the linked mailing list message.
Preliminary support to fix parameters, such as location, during fit has been added to the trunk version of scipy.
|
1): the "data" variable could be in the format of a python list or tuple, or a numpy.ndarray, which could be obtained by using:
```
data=numpy.array(data)
```
where the 2nd data in the above line should be a list or a tuple, containing your data.
2: the "parameter" variable is a first guess you could optionally provide to the fitting function as a starting point for the fitting process, so it could be omitted.
3: a note on @mondano's answer. The usage of moments (mean and variances) to work out the gamma parameters are reasonably good for large shape parameters (alpha>10), but could yield poor results for small values of alpha (See *Statistical methods in the atmospheric scineces* by Wilks, and THOM, H. C. S., 1958: A note on the gamma distribution. Mon. Wea. Rev., 86, 117–122.
Using Maximum Likelihood Estimators, as that implemented in the scipy module, is regarded a better choice in such cases.
|
2,896,179
|
Can anyone help me out in fitting a gamma distribution in python? Well, I've got some data : X and Y coordinates, and I want to find the gamma parameters that fit this distribution... In the [Scipy doc](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma), it turns out that a fit method actually exists but I don't know how to use it :s.. First, in which format the argument "data" must be, and how can I provide the second argument (the parameters) since that's what I'm looking for?
|
2010/05/24
|
[
"https://Stackoverflow.com/questions/2896179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348838/"
] |
If you want a long example including a discussion about estimating or fixing the support of the distribution, then you can find it in <https://github.com/scipy/scipy/issues/1359> and the linked mailing list message.
Preliminary support to fix parameters, such as location, during fit has been added to the trunk version of scipy.
|
[OpenTURNS](http://www.openturns.org/) has a simple way to do this with the `GammaFactory` class.
First, let's generate a sample:
```
import openturns as ot
gammaDistribution = ot.Gamma()
sample = gammaDistribution.getSample(100)
```
Then fit a Gamma to it:
```
distribution = ot.GammaFactory().build(sample)
```
Then we can draw the PDF of the Gamma:
```
import openturns.viewer as otv
otv.View(distribution.drawPDF())
```
which produces:
[](https://i.stack.imgur.com/sxvyt.png)
More details on this topic at: <http://openturns.github.io/openturns/latest/user_manual/_generated/openturns.GammaFactory.html>
|
2,896,179
|
Can anyone help me out in fitting a gamma distribution in python? Well, I've got some data : X and Y coordinates, and I want to find the gamma parameters that fit this distribution... In the [Scipy doc](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma), it turns out that a fit method actually exists but I don't know how to use it :s.. First, in which format the argument "data" must be, and how can I provide the second argument (the parameters) since that's what I'm looking for?
|
2010/05/24
|
[
"https://Stackoverflow.com/questions/2896179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348838/"
] |
I was unsatisfied with the ss.gamma.rvs-function as it can generate negative numbers, something the gamma-distribution is supposed not to have. So I fitted the sample through expected value = mean(data) and variance = var(data) (see wikipedia for details) and wrote a function that can yield random samples of a gamma distribution without scipy (which I found hard to install properly, on a sidenote):
```
import random
import numpy
data = [6176, 11046, 670, 6146, 7945, 6864, 767, 7623, 7212, 9040, 3213, 6302, 10044, 10195, 9386, 7230, 4602, 6282, 8619, 7903, 6318, 13294, 6990, 5515, 9157]
# Fit gamma distribution through mean and variance
mean_of_distribution = numpy.mean(data)
variance_of_distribution = numpy.var(data)
def gamma_random_sample(mean, variance, size):
"""Yields a list of random numbers following a gamma distribution defined by mean and variance"""
g_alpha = mean*mean/variance
g_beta = mean/variance
for i in range(size):
yield random.gammavariate(g_alpha,1/g_beta)
# force integer values to get integer sample
grs = [int(i) for i in gamma_random_sample(mean_of_distribution,variance_of_distribution,len(data))]
print("Original data: ", sorted(data))
print("Random sample: ", sorted(grs))
# Original data: [670, 767, 3213, 4602, 5515, 6146, 6176, 6282, 6302, 6318, 6864, 6990, 7212, 7230, 7623, 7903, 7945, 8619, 9040, 9157, 9386, 10044, 10195, 11046, 13294]
# Random sample: [1646, 2237, 3178, 3227, 3649, 4049, 4171, 5071, 5118, 5139, 5456, 6139, 6468, 6726, 6944, 7050, 7135, 7588, 7597, 7971, 10269, 10563, 12283, 12339, 13066]
```
|
1): the "data" variable could be in the format of a python list or tuple, or a numpy.ndarray, which could be obtained by using:
```
data=numpy.array(data)
```
where the 2nd data in the above line should be a list or a tuple, containing your data.
2: the "parameter" variable is a first guess you could optionally provide to the fitting function as a starting point for the fitting process, so it could be omitted.
3: a note on @mondano's answer. The usage of moments (mean and variances) to work out the gamma parameters are reasonably good for large shape parameters (alpha>10), but could yield poor results for small values of alpha (See *Statistical methods in the atmospheric scineces* by Wilks, and THOM, H. C. S., 1958: A note on the gamma distribution. Mon. Wea. Rev., 86, 117–122.
Using Maximum Likelihood Estimators, as that implemented in the scipy module, is regarded a better choice in such cases.
|
2,896,179
|
Can anyone help me out in fitting a gamma distribution in python? Well, I've got some data : X and Y coordinates, and I want to find the gamma parameters that fit this distribution... In the [Scipy doc](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma), it turns out that a fit method actually exists but I don't know how to use it :s.. First, in which format the argument "data" must be, and how can I provide the second argument (the parameters) since that's what I'm looking for?
|
2010/05/24
|
[
"https://Stackoverflow.com/questions/2896179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348838/"
] |
I was unsatisfied with the ss.gamma.rvs-function as it can generate negative numbers, something the gamma-distribution is supposed not to have. So I fitted the sample through expected value = mean(data) and variance = var(data) (see wikipedia for details) and wrote a function that can yield random samples of a gamma distribution without scipy (which I found hard to install properly, on a sidenote):
```
import random
import numpy
data = [6176, 11046, 670, 6146, 7945, 6864, 767, 7623, 7212, 9040, 3213, 6302, 10044, 10195, 9386, 7230, 4602, 6282, 8619, 7903, 6318, 13294, 6990, 5515, 9157]
# Fit gamma distribution through mean and variance
mean_of_distribution = numpy.mean(data)
variance_of_distribution = numpy.var(data)
def gamma_random_sample(mean, variance, size):
"""Yields a list of random numbers following a gamma distribution defined by mean and variance"""
g_alpha = mean*mean/variance
g_beta = mean/variance
for i in range(size):
yield random.gammavariate(g_alpha,1/g_beta)
# force integer values to get integer sample
grs = [int(i) for i in gamma_random_sample(mean_of_distribution,variance_of_distribution,len(data))]
print("Original data: ", sorted(data))
print("Random sample: ", sorted(grs))
# Original data: [670, 767, 3213, 4602, 5515, 6146, 6176, 6282, 6302, 6318, 6864, 6990, 7212, 7230, 7623, 7903, 7945, 8619, 9040, 9157, 9386, 10044, 10195, 11046, 13294]
# Random sample: [1646, 2237, 3178, 3227, 3649, 4049, 4171, 5071, 5118, 5139, 5456, 6139, 6468, 6726, 6944, 7050, 7135, 7588, 7597, 7971, 10269, 10563, 12283, 12339, 13066]
```
|
[OpenTURNS](http://www.openturns.org/) has a simple way to do this with the `GammaFactory` class.
First, let's generate a sample:
```
import openturns as ot
gammaDistribution = ot.Gamma()
sample = gammaDistribution.getSample(100)
```
Then fit a Gamma to it:
```
distribution = ot.GammaFactory().build(sample)
```
Then we can draw the PDF of the Gamma:
```
import openturns.viewer as otv
otv.View(distribution.drawPDF())
```
which produces:
[](https://i.stack.imgur.com/sxvyt.png)
More details on this topic at: <http://openturns.github.io/openturns/latest/user_manual/_generated/openturns.GammaFactory.html>
|
2,896,179
|
Can anyone help me out in fitting a gamma distribution in python? Well, I've got some data : X and Y coordinates, and I want to find the gamma parameters that fit this distribution... In the [Scipy doc](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma), it turns out that a fit method actually exists but I don't know how to use it :s.. First, in which format the argument "data" must be, and how can I provide the second argument (the parameters) since that's what I'm looking for?
|
2010/05/24
|
[
"https://Stackoverflow.com/questions/2896179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348838/"
] |
[OpenTURNS](http://www.openturns.org/) has a simple way to do this with the `GammaFactory` class.
First, let's generate a sample:
```
import openturns as ot
gammaDistribution = ot.Gamma()
sample = gammaDistribution.getSample(100)
```
Then fit a Gamma to it:
```
distribution = ot.GammaFactory().build(sample)
```
Then we can draw the PDF of the Gamma:
```
import openturns.viewer as otv
otv.View(distribution.drawPDF())
```
which produces:
[](https://i.stack.imgur.com/sxvyt.png)
More details on this topic at: <http://openturns.github.io/openturns/latest/user_manual/_generated/openturns.GammaFactory.html>
|
1): the "data" variable could be in the format of a python list or tuple, or a numpy.ndarray, which could be obtained by using:
```
data=numpy.array(data)
```
where the 2nd data in the above line should be a list or a tuple, containing your data.
2: the "parameter" variable is a first guess you could optionally provide to the fitting function as a starting point for the fitting process, so it could be omitted.
3: a note on @mondano's answer. The usage of moments (mean and variances) to work out the gamma parameters are reasonably good for large shape parameters (alpha>10), but could yield poor results for small values of alpha (See *Statistical methods in the atmospheric scineces* by Wilks, and THOM, H. C. S., 1958: A note on the gamma distribution. Mon. Wea. Rev., 86, 117–122.
Using Maximum Likelihood Estimators, as that implemented in the scipy module, is regarded a better choice in such cases.
|
59,273,273
|
I am attempting to open a serial connection to a usb device using PySerial, and with the following code I am getting the following error:
```
import serial
ser = serial.Serial('/dev/tty.usbserial-EN270425')
```
```
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/serial/serialposix.py", line 265, in open
self.fd = os.open(self.portstr, os.O_RDWR | os.O_NOCTTY | os.O_NONBLOCK)
OSError: [Errno 16] Resource busy: '/dev/tty.usbserial-EN270425'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
ser = serial.Serial('/dev/tty.usbserial-EN270425')
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/serial/serialutil.py", line 240, in __init__
self.open()
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/serial/serialposix.py", line 268, in open
raise SerialException(msg.errno, "could not open port {}: {}".format(self._port, msg))
serial.serialutil.SerialException: [Errno 16] could not open port /dev/tty.usbserial-EN270425: [Errno 16] Resource busy: '/dev/tty.usbserial-EN270425'
```
I have checked to see if there was a process using the resource via
`lsof | grep "/dev/tty.usbserial-EN270425"` and got no return value.
I was able to connect to the port on a different machine, the only difference being operating system and python version. The machine that CAN connect is running Mac OS Mojave and Python 3.6, the machine that CANNOT connect is running Mac OS Catalina and Python 3.8. Does anyone have any idea on where I can move forward from here?
|
2019/12/10
|
[
"https://Stackoverflow.com/questions/59273273",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11313162/"
] |
This is an interesting problem, for which I believe there is no standard function yet. This is not a huge problem because the hash itself contains an identifier telling us which hash algorithm was used. The important thing to note here is that `PASSWORD_DEFAULT` is a **constant**. Constants do not change.
To figure out which algorithm is used when using the default constant (which was and still is bcrypt), you need to generate some dummy hash and look at the beginning of it. We can even use a nice helper function [`password_get_info()`](https://www.php.net/manual/en/function.password-get-info.php)
```
$hashInfo = password_get_info(password_hash('pass', PASSWORD_DEFAULT, [ 'cost' => 4 ] ));
echo $hashInfo['algo']; // should return either 1 or 2y
if($hashInfo['algo'] === PASSWORD_BCRYPT) {
// will be true for PHP <= 7.4
}
```
|
***Edit***
As of PHP 7.4.3 you can continue using `PASSWORD_DEFAULT === PASSWORD_BCRYPT`
*<https://3v4l.org/nN4Qi>*
---
You don't actually have to use `password_hash` twice. A better and faster way is to provide an already hashed value with `Bcrypt` and check it against `PASSWORD_DEFAULT` with
[password\_needs\_rehash](https://www.php.net/manual/en/function.password-needs-rehash.php) function to see if the default algo has changed or not.
>
> bcrypt algorithm is the default as of PHP 5.5.0
>
>
>
So for example:
```
$hash = '$2y$10$ra4VedcLU8bv3jR0AlpEau3AZevkQz4Utm7F8EqUNE0Jqx0s772NG'; // Bcrypt hash
// if it doesn't need rehash then the default algo is absolutely Bcrypt
if (! password_needs_rehash($hash, PASSWORD_DEFAULT)) {
// do some clean up
}
```
>
> **Note**: make sure that the hash value($hash) has the same cost provided in `password_needs_rehash`'s third parameter, otherwise it will consider the hash outdated and need rehash since the cost has changed.
>
>
>
|
48,238,171
|
I'm getting string data into my Python code. Sometimes the data comes with an extra "and" or "or", for example
```
Tom and Mark and
```
in this case I need to remove the last "and" & final outcome will look like
```
Tom and Mark
```
But when the data comes like this
```
Harry and John
```
then I keep the data as it is, without removing the "and".
Can you suggest how to do that?
|
2018/01/13
|
[
"https://Stackoverflow.com/questions/48238171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9184713/"
] |
```
[2, 4] [4, -3, -4, 4]
```
Sorted: `[2, 4] [-4, -3, 4, 4]`
```
qs are negative, p is positive:
2 + 4 + 2 + 3 = sum(4, 3) + 2*2
qs are larger than p:
4 - 2 + 4 - 2 = sum(4, 4) - 2*2
qs are negative, p is positive:
4 + 4 + 4 + 3 = sum(4, 3) + 2*4
qs are equal to p:
4 - 4 + 4 - 4 = 0
qs are smaller than p (not in our example):
5 - 3 + 5 - 2 = -sum(3, 2) + 2*5
qs and p are both negative (not in our example):
p is larger:
|-5 - (-7)| + |-5 - (-6)| = 7 - 5 + 6 - 5 = sum(7, 6) - 2*5
q is larger:
|-5 - (-3)| + |-5 - (-2)| = 5 - 3 + 5 - 2 = -sum(3, 2) + 2*5
p is negative, qs are positive (not in our example):
|7 - (-5)| + |6 - (-5)| = 7 + 5 + 6 + 5 = sum(7, 6) + 2*5
```
There are a few cases here, all of which can reuse already computed sums, as well as multiplication, if the arrays are ascending.
Sort and compute a prefix-sum array for `Q`, also recording the index where `q`s turn to positive if there is one. For each `p` find the start and end section of each case described above in either `O(log n)` or `O(1)` time. Add to the total by using the prefix-sum and a multiple of `p`. Complexity: `O(n log n)` time, `O(n)` space.
|
It may or may not be more efficient to add or subtract the (unmodified) difference depending on sign instead of ("unconditionally") "adding `abs()`".
I'd expect a contemporary compiler, even JIT, to detect the equivalence, though.
```
! Sum of Absolute Differences between every pair of elements of two arrays;
INTEGER PROCEDURE SAD2(a, b);
INTEGER ARRAY a, b;
BEGIN
INTEGER sum, i, j; ! by the book, declare j locally just like diff;
sum := 0;
FOR i := LOWERBOUND(a, 1) STEP 1 UNTIL UPPERBOUND(a, 1) DO
FOR j := LOWERBOUND(b, 1) STEP 1 UNTIL UPPERBOUND(b, 1) DO BEGIN
INTEGER diff;
diff := a(i) - b(j);
sum := if diff < 0 then sum - diff
else sum + diff;
END;
SAD2 := sum;
END SAD2;
```
For a sub-quadratic algorithm, see [גלעד ברקן's answer](https://stackoverflow.com/a/48239655/3789665).
This may well be code for what גלעד ברקן intended, not following to PEP8 to the dot:
```python
''' Given sequences A and B, SAD2 computes the sum of absolute differences
for every element b from B subtracted from every element a from A.
The usual SAD sums up absolute differences of pairs with like index, only.
'''
from bisect import bisect_right
class state(list):
''' Hold state for one sequence: sorted elements & processing state. '''
def __init__(self, a):
self.extend(sorted(a))
self.total = 0
''' sum(self[:self.todo]) '''
self.todo = 0
''' next index to do/#elements done '''
def __str__(self):
return list.__str__(self) + str(self.todo) + ', ' + str(self.total)
def SAD2(a, b):
''' return Sum of Absolute Differences of all pairs (a[x], b[y]). '''
nPairs = len(a) * len(b)
if nPairs < 2:
return abs(a[0] - b[0]) if 0 < nPairs else None
a = state(a)
b = state(b)
sad = 0
while True:
key = a[a.todo]
identical = bisect_right(a, key, a.todo)
local = 0
# iterate 'til not lower
# going to need i: no takewhile(lambda x: x < key, b[todo:])
i = b.todo
while i < len(b):
val = b[i]
if key <= val:
break
local += val
i += 1
# update SAD
# account for elements in a[:a.todo] paired with b[b.todo:i]
sad += local*a.todo - a.total*(i - b.todo)
b.todo = i
n_key = identical - a.todo
local += b.total
b.total = local
# account for elements in a[a.todo:identical] paired with b[:i]
sad += (key*i - local)*n_key
if len(b) <= b.todo:
rest = len(a) - identical
if 0 < rest:
sad += sum(a[identical:])*len(b) - b.total*rest
return sad
a.todo = identical
a.total += key * n_key
a, b = b, a
```
|
12,797,274
|
I've just installed Python 2.7 on windows along with IPython.
I'm used to running IPython from within Emacs on Linux, e.g.
```
M-x shell
```
Then type '`ipython`' at the prompt.
This works fine under Linux, but under Windows it hangs after printing the IPython banner text, i.e. it looks like it's working, but then you never get an IPython prompt.
I can load IPython (and Python) under Windows no problem from a standard cmd terminal, just not from within Emacs.
Anyone else experienced or hopefully solved this issue?
I get the same problem when trying to start plain old Python as well.
|
2012/10/09
|
[
"https://Stackoverflow.com/questions/12797274",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1041868/"
] |
In Windows it won't work as easily, and it's very annoying. First, make sure you have installed pyreadline.
Next, I have got it working with a bat file that is in my system path containing:
```
[ipython.bat]
@python.exe -i C:\devel\Python\2.7-bin\Scripts\ipython.py --pylab %*
```
Next get python-mode.el and delete ipython.el. The latter is deprecated and python-mode includes its functionality.
Change the path of the ipython.py file accordingly and save it. Now in your Emacs do the following in your init.el file.
```
(custom-set-variables
'(py-shell-name "ipython.bat"))
```
Alternatively, to achieve the last step, do C-h v <RET> py-shell-name and customize it and change it to ipython.bat or the full path of your ipython.bat if the scripts directory is not in your system path. Save for future sessions.
That should get your IPython shell working. There is one more caveat if you want multiple interactive matplotlib figures without hanging your IPython console. The only way I could get around this issue was to use IPython 0.10 instead of the current version.
|
I wish I had seen this post a while ago. My experiences with running python, ipython from within Emacs are the following:
I tried a number of options (using python(x,y)-2.7.3.0 on Windows 7).
Using ipython.el:
* It still works, provided that you change ipython-command. I do not recommend this, since you lose some functionality from ipython.el by changing ipython-command.
* You can show figures, but the prompt is frozen until you close the figure.
* Bare python doesn't work.
Changing py-shell-name (as above):
* My figures are always hanging when I add the --pylab option.
* I always get an IPython terminal.
Introducing ipython.bat containing
```
@C:\\Python27\\python.exe -i -u C:\\Python27\\Scripts\\ipython-script.py %*
```
and changing the path such that ipython.bat is found before ipython.exe:
* IPython works, but still only one thread (you have to close the figure to return to the shell).
* Python also works, and so do all the remaining functions from python-mode.el.
I still have to figure out how to return to the shell after opening a figure (within Emacs). I got it to work using Python 2.6 and an older version of IPython.
|
12,797,274
|
I've just installed Python 2.7 on windows along with IPython.
I'm used to running IPython from within Emacs on Linux, e.g.
```
M-x shell
```
Then type '`ipython`' at the prompt.
This works fine under Linux, but under Windows it hangs after printing the IPython banner text, i.e. it looks like it's working, but then you never get an IPython prompt.
I can load IPython (and Python) under Windows no problem from a standard cmd terminal, just not from within Emacs.
Anyone else experienced or hopefully solved this issue?
I get the same problem when trying to start plain old Python as well.
|
2012/10/09
|
[
"https://Stackoverflow.com/questions/12797274",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1041868/"
] |
In Windows it won't work as easily, and it's very annoying. First, make sure you have installed pyreadline.
Next, I have got it working with a bat file that is in my system path containing:
```
[ipython.bat]
@python.exe -i C:\devel\Python\2.7-bin\Scripts\ipython.py --pylab %*
```
Next get python-mode.el and delete ipython.el. The latter is deprecated and python-mode includes its functionality.
Change the path of the ipython.py file accordingly and save it. Now in your Emacs do the following in your init.el file.
```
(custom-set-variables
'(py-shell-name "ipython.bat"))
```
Alternatively, to achieve the last step, do C-h v <RET> py-shell-name and customize it and change it to ipython.bat or the full path of your ipython.bat if the scripts directory is not in your system path. Save for future sessions.
That should get your IPython shell working. There is one more caveat if you want multiple interactive matplotlib figures without hanging your IPython console. The only way I could get around this issue was to use IPython 0.10 instead of the current version.
|
Using `python -i` or `ipython -i` instead of `python` or `ipython` worked for me (it gets me the prompt in Emacs, instead of hanging indefinitely).
From python --help :
>
> -i : inspect interactively after running script; forces a prompt even
> if stdin does not appear to be a terminal; also PYTHONINSPECT=x
>
>
>
---
However, the resulting Python interpreter will probably not be fully functional. You have 2 options:
* if you are using Python from a conda environment: I advise that you switch to a proper Python install. That does not mean deleting Anaconda; it only means downloading Python from <https://www.python.org/downloads/windows/> and removing Anaconda's Python from your Windows %PATH% variable. If you don't, you'll run into many problems with the display of pictures, with the activation of conda (conda activate base), etc., and I never succeeded in resolving them all.
* if you are using Python from python.org/downloads: you're on the right track! Check that Python is accessible from the command line (Win+R, cmd, python), and if not, add its path to the Windows Path variable.
Now you need to make the image display work. Out of the box, matplotlib.pyplot lags in Emacs. Here's what you need to do:
First, check that you have downloaded the ipython package (`pip install ipython`) along with matplotlib and numpy (if you use them, but who doesn't : `pip install matplotlib`, `pip install numpy`).
Then find the location of ipython-script.py on your computer (type "ipython-script.py" in the file explorer to find it) and put it as an argument in the variable python-shell-interpreter-args, after the "-i ". Also check that python-shell-interpreter is set to `python`. You should have something like this:
```lisp
(setq python-shell-interpreter "python")
(setq python-shell-interpreter-args "-i c:/Users/YOUR-USER-NAME/AppData/Local/Continuum/anaconda3/Scripts/ipython-script.py")
```
That way, you can run a python shell directly from emacs with `M-x run-python` (or, from a python file, C-c C-p and then C-c C-c).
Of course, you can do the same from a shell terminal too if you prefer, with the command :
`python -i "c:/Users/YOUR-USER-NAME/AppData/Local/Continuum/anaconda3/Scripts/ipython-script.py"`
or you can create a bat file with the same command. But that's not necessary.
(Tested with Python 3.8)
|
44,123,641
|
I am using the python libraries from the Assistant SDK for speech recognition via gRPC. I have the speech recognized and returned as a string calling the method `resp.result.spoken_request_text` from `\googlesamples\assistant\__main__.py` and I have the answer as an audio stream from the assistant API with the method `resp.audio_out.audio_data` also from `\googlesamples\assistant\__main__.py`
I would like to know if it is possible to have the answer from the service as a string as well (hoping it is available in the service definition or that it could be included), and how I could access/request the answer as string.
Thanks in advance.
|
2017/05/22
|
[
"https://Stackoverflow.com/questions/44123641",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3415195/"
] |
Currently (Assistant SDK Developer Preview 1), there is no direct way to do this. You can probably feed the audio stream into a Speech-to-Text system, but that really starts getting silly.
Speaking to the engineers on this subject while at Google I/O, they indicated that there are some technical complications on their end to doing this, but they understand the use cases. They need to see questions like this to know that people want the feature.
Hopefully it will make it into an upcoming Developer Preview.
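If you do want to try the Speech-to-Text route mentioned above, here is a rough sketch using the Google Cloud Speech-to-Text client library; the encoding, sample rate and the `audio_bytes` variable (the concatenated `resp.audio_out.audio_data` chunks) are assumptions you would need to match to your converse config:
```
from google.cloud import speech

def transcribe_assistant_audio(audio_bytes):
    # audio_bytes: raw bytes accumulated from resp.audio_out.audio_data
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,  # assumed encoding
        sample_rate_hertz=16000,                                   # assumed sample rate
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    return " ".join(result.alternatives[0].transcript for result in response.results)
```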
|
Update: for
>
> google.assistant.embedded.v1alpha2
>
>
>
the assistant SDK includes the field `supplemental_display_text`
>
> which is meant to extract the assistant response as text which aids
> the user's understanding
>
>
>
or to be displayed on screens, which also makes the text available to the developer. See the [Google Assistant documentation](https://developers.google.com/assistant/sdk/guides/service/integrate#text-response).
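A minimal sketch of reading that field from the streaming gRPC responses; the `assistant` stub, `iter_assist_requests()`, `deadline` and `handle_audio()` are placeholders standing in for the setup and playback code from the SDK samples:
```
# inside the response loop of the Assist() streaming call
for resp in assistant.Assist(iter_assist_requests(), deadline):
    if resp.dialog_state_out.supplemental_display_text:
        # the assistant's answer as plain text
        print(resp.dialog_state_out.supplemental_display_text)
    if len(resp.audio_out.audio_data) > 0:
        handle_audio(resp.audio_out.audio_data)  # hypothetical playback helper
```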
|
36,911,421
|
If a class contains two constructors that take in different types of arguments as shown here:
```
public class Planet {
public double xxPos; //its current x position
public double yyPos; //its current y position
public double xxVel; //its current veolicity in the x direction
public double yyVel; //its current veolicity in the y direction
public double mass; //its mass
public String imgFileName; //The name of an image in the images directory that depicts the planet
// constructor is like __init__ from python, this sets up the object when called like: Planet(arguments)
public Planet(double xP, double yP, double xV, double yV, double m, String img) {
xxPos = xP;
yyPos = yP;
xxVel = xV;
yyVel = yV;
mass = m;
imgFileName = img;
}
// second constructor
// how come testplanetconstructor knows to use this second one?
// does it know based on the argument type its being passed?
public Planet(Planet p) {
xxPos = p.xxPos;
yyPos = p.yyPos;
xxVel = p.xxVel;
yyVel = p.yyVel;
mass = p.mass;
imgFileName = p.imgFileName;
}
}
```
My primary question is:
1) How does another class with a main that calls this class determine which constructor to use?
If that is the case, what would happen if you have two constructors with:
2) the same type and number of arguments?
3) the same type but different number of arguments?
I realize that follow up questions are something that you should probably never do (aka something messy). I am just curious.
|
2016/04/28
|
[
"https://Stackoverflow.com/questions/36911421",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5249753/"
] |
>
> 1) How does another class with a main that calls this class determine
> which constructor to use?
>
>
>
The compiler follows the same process as for overloaded methods (static binding): it checks the unique method signature against the arguments you pass. To learn about method signatures, [see this](https://docs.oracle.com/javase/tutorial/java/javaOO/methods.html).
```
public static void main(String args[]) {
double d1 = 0;
double d2 = 0;
double d3 = 0;
double d4 = 0;
double d5 = 0;
String img = "";
Planet p = new Planet(d1, d2, d3, d4, d5, img);// constructor with valid argument set
}
```
>
> 2) the same type and number of arguments?
>
>
>
It is actually not possible to write two methods/constructors with the same signature in a single class, e.g. the following code never compiles:
```
Planet(int i) { // compilation error: duplicate constructor signature
}
Planet(int j) { // compilation error: duplicate constructor signature
}
```
>
> 3) the same type but different number of arguments?
>
>
>
This is possible, just like creating/calling methods with different signatures.
e.g.
```
Planet p1 = new Planet(d1, d2, d3, d4, d5, img);
Planet p2 = new Planet(p1);
```
|
### 1) How does another class with a main that calls this class determine which constructor to use?
Classes don't determine anything; the programmer does, by passing the appropriate arguments. For example, if you have 2 constructors `public Test(int i)` and `public Test()`, then `new Test(5)` will call the first, and `new Test()` will call the second.
### What if..? 2) the same type and number of arguments?
You can't. It will not compile.
### What if..? 3) the same type but different number of arguments?
The same as in 1).
|