| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
71,637,890
|
I use Python and pandas to analyze a big data set. I have several arrays of different lengths. I need to insert values into a specific column. If some values are not present for a column, it should be 'not defined'. The input data looks like a row in a dataframe with different positions.
Expected output:

Examples of input data:
```
# Example 1
{'Water Solubility': 'Insoluble ', 'Melting Point': '135-138 °C', 'logP': '4.68'}
# Example 2
{'Melting Point': '71 °C (whole mAb)', 'Hydrophobicity': '-0.529', 'Isoelectric Point': '7.89', 'Molecular Weight': '51234.9', 'Molecular Formula': 'C2224H3475N621O698S36'}
# Example 3
{'Water Solubility': '1E+006 mg/L (at 25 °C)', 'Melting Point': '204-205 °C', 'logP': '1.1', 'pKa': '6.78'}
```
I have tried to add 'Not defined' to the array, but I couldn't find the right approach.
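A minimal sketch of the kind of approach being described, assuming the per-row dicts are first collected in a list (the names here are illustrative, not from the original code); missing keys become NaN and are then replaced with 'Not defined':
```py
import pandas as pd

# Each dict is one row; keys that a row lacks simply become NaN.
rows = [
    {'Water Solubility': 'Insoluble', 'Melting Point': '135-138 °C', 'logP': '4.68'},
    {'Melting Point': '204-205 °C', 'logP': '1.1', 'pKa': '6.78'},
]
df = pd.DataFrame(rows).fillna('Not defined')
print(df)
```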
|
2022/03/27
|
[
"https://Stackoverflow.com/questions/71637890",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18598283/"
] |
I was facing the same issue too, but using `window` worked for me:
```
const lineHeight= window
.getComputedStyle(descriptionRef.current, null)
.getPropertyValue("line-height");
```
|
You need to set the `line-height` via the `style` prop in order to read it from a script. But bear in mind that `15` and `15px` are different things for the `line-height` attribute.
If we remove the style attribute, then even if we specify the `line-height` in a CSS class, we cannot get its value as `12px`; it will be empty, the same as in your case.
```css
/* It doesn't matter you specify or not in CSS */
.div-class {
line-height: 12px;
}
```
```
(
<div ref={divRef} className="div-class" style={{ lineHeight: '15px' }}>
</div>
)
```
```
useEffect(() => {
console.log({ lineHeight: divRef.current.style?.lineHeight }); // {lineHeight: '15px'}
}, []);
```
|
35,092,571
|
I am trying to create a dashboard where I can analyse my model's data (Article) using the library [plotly](https://plot.ly/python/).
The Plotly bar chart is not showing on my template, and I am wondering if I am doing something wrong, since there's no error with the code below:
**models.py**
```
from django.db import models
from django.contrib.auth.models import User
import plotly.plotly as py
import plotly.graph_objs as go


class Article(models.Model):
    user = models.ForeignKey(User, default='1')
    titre = models.CharField(max_length=100, unique=True)
    slug = models.SlugField(max_length=40)
    likes = models.ManyToManyField(User, related_name="likes")

    def __str__(self):
        return self.titre

    @property
    def article_chart(self):
        data = [
            go.Bar(
                x=[self.titre],  # title of the article
                y=[self.likes.count()]  # number of likes on an article
            )
        ]
        plot_url = py.plot(data, filename='basic-bar')
        return plot_url
```
**dashboard.html**
```
<div>{{ article.article_chart }}</div>
```
Why is the bar chart not visible? Any suggestions?
|
2016/01/29
|
[
"https://Stackoverflow.com/questions/35092571",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4859971/"
] |
The result of
```
py.plot(data, filename='basic-bar')
```
is to generate an offline HTML file and return a local URL to this file,
e.g. file:///your_project_pwd/temp-plot.html
If you want to render it in the Django framework, you need to
* use `<iframe>` and restructure your folders in the Django settings
OR
* use the plotly.offline method to generate the HTML code from your input `data`
There is an example which I had tried:
```
# In older Plotly releases these helpers lived here (private API):
from plotly.graph_objs import Scatter
from plotly.offline.offline import _plot_html
from django.shortcuts import render

# ... inside a Django view function:
figure_or_data = [Scatter(x=[1, 2, 3], y=[3, 1, 6])]
plot_html, plotdivid, width, height = _plot_html(
    figure_or_data, True, 'test', True,
    '100%', '100%')
resize_script = ''
if width == '100%' or height == '100%':
    resize_script = (
        ''
        '<script type="text/javascript">'
        'window.removeEventListener("resize");'
        'window.addEventListener("resize", function(){{'
        'Plotly.Plots.resize(document.getElementById("{id}"));}});'
        '</script>'
    ).format(id=plotdivid)
html = ''.join([
    plot_html,
    resize_script])
return render(request, 'dashboard.html', {'html': html})
```
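A variant of the same idea using the public `plotly.offline.plot` helper is sketched below (the `dashboard` view name is an assumption for illustration, not part of the original answer); `output_type='div'` returns an HTML snippet instead of writing a file:
```py
from django.shortcuts import render
from plotly.offline import plot
from plotly.graph_objs import Scatter


def dashboard(request):
    # Returns a <div> containing the chart rather than writing temp-plot.html
    html = plot([Scatter(x=[1, 2, 3], y=[3, 1, 6])],
                output_type='div', include_plotlyjs=True)
    return render(request, 'dashboard.html', {'html': html})
```
In the template the snippet would then be rendered with something like `{{ html|safe }}` so Django does not escape the markup.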
|
The above answer was very useful. I am having a similar problem: I am working in Angular and watching for the parent resize, and I used the code below to achieve the resize.
```
<div class="col-lg-12" ng-if="showME" style="padding:0px">
<div id="graphPlot" ng-bind-html="myHTML"></div>
</div>
```
The graph is inserted through the variable `myHTML`.
All I did was watch for the parent resize, get the inner div's id using jQuery, and pass it to Plotly, and it worked.
```
$scope.$on("angular-resizable.resizeEnd", function (event, args){
Plotly.Plots.resize(document.getElementById($('#graphPlot').children().eq(0).attr("id")));
});
```
|
1,718,251
|
I am using the macports version of python on a Snow Leopard computer, and using cmake to build a cross-platform extension to it. I search for the python interpreter and libraries on the system using the following commands in CMakeLists.txt
```
include(FindPythonInterp)
include(FindPythonLibs )
```
However, while cmake identifies the correct interpreter in `/opt/local/bin`, it tries to link against the wrong framework - namely the system Python framework.
```
-- Found PythonInterp: /opt/local/bin/python2.6
-- Found PythonLibs: -framework Python
```
And this causes the following runtime error
```
Fatal Python error: Interpreter not initialized (version mismatch?)
```
As soon as I replace `-framework Python` with `/opt/local/Library/Frameworks/Python.framework/Python` things seem to work as expected.
How can I make cmake link against the correct Python framework found in
```
/opt/local/Library/Frameworks/Python.framework/Python
```
rather than the system one in
```
/System/Library/Frameworks/Python.framework/Python
```
?
|
2009/11/11
|
[
"https://Stackoverflow.com/questions/1718251",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/134397/"
] |
Adding the following in `~/.bash_profile`
```
export DYLD_FRAMEWORK_PATH=/opt/local/Library/Frameworks
```
fixes the problem, at least temporarily. Apparently, this inconsistency between the Python interpreter and the Python framework used by cmake is a bug that will hopefully be fixed in a future version.
|
I am not intimately familiar with CMake, but with the Apple version of gcc/ld, you can pass the `-F` flag to specify a new framework search path. For example, `-F/opt/local/Library/Frameworks` will search in MacPorts' frameworks directory. If you can specify such a flag using CMake, it may solve your problem.
|
69,102,892
|
I have a class object which has the task of running a file. When I was developing, my class object was in the same file as the code I used to run the file.
Now I am refactoring and making this a real package so I moved the code to a file called `class_objects.py`.
I have installed this package locally, but now when I use the class object `Naive` it looks for the file in the directory I am currently working in as opposed to looking for the file which is part of the package. I have read up on absolute paths, relative paths, and verifying that `__init__.py` exists. I am stumped on this one.
How can I make sure my package looks for the `file.ext` within its own directories as opposed to looking for `file.ext` where I am running from?
Here is how I call my package:
```py
# Trying to use my package installed locally
from my-package.class_objects import Naive
a = Naive()
a.find_and_run()
```
>
> Error
>
>
>
```
ValueError: no such file /home/user/tutorial/dir/file.ext
```
This is my directory
```py
My-Package
drwxrwxr-x - user 8 Sep 1:27 .
drwxrwxr-x - user 8 Sep 1:22 ├── python
drwxrwxr-x - user 8 Sep 1:25 ├── dist
.rw-r--r-- 1.6M user 8 Sep 1:25 │ ├── my-package-0.1.0-py3-none-any.whl
.rw-rw-r-- 1.6M user 8 Sep 1:25 │ └── my-package-0.1.0.tar.gz
.rw-rw-r-- 480 user 8 Sep 1:22 ├── pyproject.toml
drwxrwxr-x - user 8 Sep 1:24 └── my-package
.rw-rw-r-- 0 user 8 Sep 1:14 ├── __init__.py
.rw-rw-r-- 2.6k user 8 Sep 1:13 ├── class_objects.py
drwxrwxr-x - user 8 Sep 1:19 └── dir
.rw-rw-r-- 0 user 8 Sep 1:19 │ ├── __init__.py
.rwxrwxr-x 1.5M user 7 Sep 22:39 │ └── file.exe
```
This is what is inside of `class_objects.py`
```py
import os

class Naive(object):
    ...
    def find_and_run(self):
        out_dir = os.path.join("dir", "file.ext")
        naive_model = RunThing(stan_file=out_dir)
```
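For illustration, a minimal sketch of one common way to make the lookup independent of the working directory (hypothetical code, not the asker's): anchor the path to the module's own location via `__file__`:
```py
import os

# Directory containing class_objects.py inside the installed package
PACKAGE_DIR = os.path.dirname(os.path.abspath(__file__))
data_path = os.path.join(PACKAGE_DIR, "dir", "file.ext")
print(data_path)  # resolves inside the package no matter where you run from
```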
|
2021/09/08
|
[
"https://Stackoverflow.com/questions/69102892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2444023/"
] |
Just use boolean logic:
```
WHERE (:IP_TYPE = 'HIGH' AND (TYPE = 'HIGH' OR TYPE = '' OR TYPE IS NULL)
) OR
(:IP_TYPE = 'LOW' AND TYPE = 'LOW')
```
Or more succinctly:
```
WHERE :IP_TYPE = TYPE OR
(:IP_TYPE = 'HIGH' AND (TYPE = '' OR TYPE IS NULL))
```
|
In Oracle, an empty string `''` is the same as `NULL`; so your filter can simply be:
```sql
SELECT *
FROM PAYRECORDS
WHERE :ip_type = type
OR (:ip_type = 'HIGH' AND type IS NULL);
```
|
69,102,892
|
I have a class object which has the task of running a file. When I was developing, my class object was in the same file as the code I used to run the file.
Now I am refactoring and making this a real package so I moved the code to a file called `class_objects.py`.
I have installed this package locally, but now when I use the class object `Naive` it looks for the file in the directory I am currently working in as opposed to looking for the file which is part of the package. I have read up on absolute paths, relative paths, and verifying that `__init__.py` exists. I am stumped on this one.
How can I make sure my package looks for the `file.ext` within its own directories as opposed to looking for `file.ext` where I am running from?
Here is how I call my package:
```py
# Trying to use my package installed locally
from my-package.class_objects import Naive
a = Naive()
a.find_and_run()
```
>
> Error
>
>
>
```
ValueError: no such file /home/user/tutorial/dir/file.ext
```
This is my directory
```py
My-Package
drwxrwxr-x - user 8 Sep 1:27 .
drwxrwxr-x - user 8 Sep 1:22 ├── python
drwxrwxr-x - user 8 Sep 1:25 ├── dist
.rw-r--r-- 1.6M user 8 Sep 1:25 │ ├── my-package-0.1.0-py3-none-any.whl
.rw-rw-r-- 1.6M user 8 Sep 1:25 │ └── my-package-0.1.0.tar.gz
.rw-rw-r-- 480 user 8 Sep 1:22 ├── pyproject.toml
drwxrwxr-x - user 8 Sep 1:24 └── my-package
.rw-rw-r-- 0 user 8 Sep 1:14 ├── __init__.py
.rw-rw-r-- 2.6k user 8 Sep 1:13 ├── class_objects.py
drwxrwxr-x - user 8 Sep 1:19 └── dir
.rw-rw-r-- 0 user 8 Sep 1:19 │ ├── __init__.py
.rwxrwxr-x 1.5M user 7 Sep 22:39 │ └── file.exe
```
This is what is inside of `class_objects.py`
```py
import os

class Naive(object):
    ...
    def find_and_run(self):
        out_dir = os.path.join("dir", "file.ext")
        naive_model = RunThing(stan_file=out_dir)
```
|
2021/09/08
|
[
"https://Stackoverflow.com/questions/69102892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2444023/"
] |
Just use boolean logic:
```
WHERE (:IP_TYPE = 'HIGH' AND (TYPE = 'HIGH' OR TYPE = '' OR TYPE IS NULL)
) OR
(:IP_TYPE = 'LOW' AND TYPE = 'LOW')
```
Or more succinctly:
```
WHERE :IP_TYPE = TYPE OR
(:IP_TYPE = 'HIGH' AND (TYPE = '' OR TYPE IS NULL))
```
|
You can also use `NVL()` so that when the column `type` isn't populated it is treated as `HIGH`:
```
SELECT *
FROM PAYRECORDS
WHERE NVL(type, 'HIGH') = :ip_type;
```
|
69,102,892
|
I have a class object which has the task of running a file. When I was developing, my class object was in the same file as the code I used to run the file.
Now I am refactoring and making this a real package so I moved the code to a file called `class_objects.py`.
I have installed this package locally, but now when I use the class object `Naive` it looks for the file in the directory I am currently working in as opposed to looking for the file which is part of the package. I have read up on absolute paths, relative paths, and verifying that `__init__.py` exists. I am stumped on this one.
How can I make sure my package looks for the `file.ext` within its own directories as opposed to looking for `file.ext` where I am running from?
Here is how I call my package:
```py
# Trying to use my package installed locally
from my-package.class_objects import Naive
a = Naive()
a.find_and_run()
```
>
> Error
>
>
>
```
ValueError: no such file /home/user/tutorial/dir/file.ext
```
This is my directory
```py
My-Package
drwxrwxr-x - user 8 Sep 1:27 .
drwxrwxr-x - user 8 Sep 1:22 ├── python
drwxrwxr-x - user 8 Sep 1:25 ├── dist
.rw-r--r-- 1.6M user 8 Sep 1:25 │ ├── my-package-0.1.0-py3-none-any.whl
.rw-rw-r-- 1.6M user 8 Sep 1:25 │ └── my-package-0.1.0.tar.gz
.rw-rw-r-- 480 user 8 Sep 1:22 ├── pyproject.toml
drwxrwxr-x - user 8 Sep 1:24 └── my-package
.rw-rw-r-- 0 user 8 Sep 1:14 ├── __init__.py
.rw-rw-r-- 2.6k user 8 Sep 1:13 ├── class_objects.py
drwxrwxr-x - user 8 Sep 1:19 └── dir
.rw-rw-r-- 0 user 8 Sep 1:19 │ ├── __init__.py
.rwxrwxr-x 1.5M user 7 Sep 22:39 │ └── file.exe
```
This is what is inside of `class_objects.py`
```py
import os

class Naive(object):
    ...
    def find_and_run(self):
        out_dir = os.path.join("dir", "file.ext")
        naive_model = RunThing(stan_file=out_dir)
```
|
2021/09/08
|
[
"https://Stackoverflow.com/questions/69102892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2444023/"
] |
In Oracle, an empty string `''` is the same as `NULL`; so your filter can simply be:
```sql
SELECT *
FROM PAYRECORDS
WHERE :ip_type = type
OR (:ip_type = 'HIGH' AND type IS NULL);
```
|
You can also use `NVL()` so that when the column `type` isn't populated it is treated as `HIGH`:
```
SELECT *
FROM PAYRECORDS
WHERE NVL(type, 'HIGH') = :ip_type;
```
|
21,529,118
|
I'm trying to use flask-migrate to version my database locally and then reflect the changes in production (Heroku). So far I managed to successfully version the local database and upgrade it, so now I wanted to reflect this on Heroku. To do this I pushed the latest code state to Heroku together with the newly created **migrations** folder and updated requirements.txt. I saw the dependencies were successfully installed:
```
Successfully installed Flask-Migrate alembic Flask-Script Mako
```
Then, I tried:
```
$ heroku run python app/hello.py db upgrade
```
Unfortunately I got this in response:
```
Running `python app/hello.py db upgrade` attached to terminal... up, run.4322
Traceback (most recent call last):
File "app/hello.py", line 37, in <module>
manager.run()
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/__init__.py", line 405, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/__init__.py", line 384, in handle
return handle(app, *positional_args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/commands.py", line 145, in handle
return self.run(*args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_migrate/__init__.py", line 97, in upgrade
config = _get_config(directory)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_migrate/__init__.py", line 37, in _get_config
config.set_main_option('script_location', directory)
File "/app/.heroku/python/lib/python2.7/site-packages/alembic/config.py", line 142, in set_main_option
self.file_config.set(self.config_ini_section, name, value)
File "/app/.heroku/python/lib/python2.7/ConfigParser.py", line 753, in set
ConfigParser.set(self, section, option, value)
File "/app/.heroku/python/lib/python2.7/ConfigParser.py", line 396, in set
raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'alembic'
```
I googled to find out what this might be, and it looks like the config file can't be opened; however, I have no idea what can be done to fix that. How come this works locally but not on Heroku?
|
2014/02/03
|
[
"https://Stackoverflow.com/questions/21529118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/703809/"
] |
I was struggling with this for some time and even posted on the Heroku python forums, but no replies so far. To solve the issue I decided not to run the migration remotely on Heroku, but to run the migration on my development machine and pass the production database address instead. So I do this:
1. Sync the development db with production (when using Heroku you can easily do this with *heroku pg:pull*, you have to drop your local db prior to calling this method though)
2. Assuming your models are already updated, run *python app.py db migrate*. **Important**: I started getting the original error locally too; I figured out I have to be in the exact same directory as my app.py, otherwise I get the error.
3. Review your auto-generated migration scripts
4. Upgrade your local db with *python app.py db upgrade*
5. Change the settings for your app to use the production db instead of your local development db and then run *python app.py db upgrade* again
After some thinking it struck me that this might have been the way this tool was designed to work. Although it still would be nice to be able to run the migrations remotely from Heroku, I'll settle for my solution as it is quicker and does the job.
|
I haven't tried this with Heroku, but ran into the same error and symptoms. The issue for me was that when running locally, my current working directory was set to the project root directory, and when running remotely, it was set to the user's home directory.
Try either cd'ing to the right starting directory first, or passing the --directory parameter to the flask-migrate command with the absolute path to your migrations folder.
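For illustration, a minimal sketch of the second option expressed in code rather than on the command line (assuming a typical Flask-SQLAlchemy setup; the URI and file layout here are placeholders): Flask-Migrate's `Migrate` object accepts a `directory` argument, and an absolute path makes the migrations folder independent of the working directory:
```py
import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///example.db"  # placeholder URI
db = SQLAlchemy(app)

# Absolute path so `db upgrade` finds the migrations regardless of the CWD
MIGRATIONS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "migrations")
migrate = Migrate(app, db, directory=MIGRATIONS_DIR)
```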
|
21,529,118
|
I'm trying to use flask-migrate to version my database locally and then reflect the changes in production (Heroku). So far I managed to successfully version the local database and upgrade it, so now I wanted to reflect this on Heroku. To do this I pushed the latest code state to Heroku together with the newly created **migrations** folder and updated requirements.txt. I saw the dependencies were successfully installed:
```
Successfully installed Flask-Migrate alembic Flask-Script Mako
```
Then, I tried:
```
$ heroku run python app/hello.py db upgrade
```
Unfortunately I got this in response:
```
Running `python app/hello.py db upgrade` attached to terminal... up, run.4322
Traceback (most recent call last):
File "app/hello.py", line 37, in <module>
manager.run()
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/__init__.py", line 405, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/__init__.py", line 384, in handle
return handle(app, *positional_args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/commands.py", line 145, in handle
return self.run(*args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_migrate/__init__.py", line 97, in upgrade
config = _get_config(directory)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_migrate/__init__.py", line 37, in _get_config
config.set_main_option('script_location', directory)
File "/app/.heroku/python/lib/python2.7/site-packages/alembic/config.py", line 142, in set_main_option
self.file_config.set(self.config_ini_section, name, value)
File "/app/.heroku/python/lib/python2.7/ConfigParser.py", line 753, in set
ConfigParser.set(self, section, option, value)
File "/app/.heroku/python/lib/python2.7/ConfigParser.py", line 396, in set
raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'alembic'
```
I googled to find out what this might be, and it looks like the config file can't be opened; however, I have no idea what can be done to fix that. How come this works locally but not on Heroku?
|
2014/02/03
|
[
"https://Stackoverflow.com/questions/21529118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/703809/"
] |
I was struggling with this for some time and even posted on the Heroku python forums, but no replies so far. To solve the issue I decided not to run the migration remotely on Heroku, but to run the migration on my development machine and pass the production database address instead. So I do this:
1. Sync the development db with production (when using Heroku you can easily do this with *heroku pg:pull*, you have to drop your local db prior to calling this method though)
2. Assuming your models are already updated, run *python app.py db migrate*. **Important**: I started getting the original error locally too; I figured out I have to be in the exact same directory as my app.py, otherwise I get the error.
3. Review your auto-generated migration scripts
4. Upgrade your local db with *python app.py db upgrade*
5. Change the settings for your app to use the production db instead of your local development db and then run *python app.py db upgrade* again
After some thinking it struck me that this might have been the way this tool was designed to work. Although it still would be nice to be able to run the migrations remotely from Heroku, I'll settle for my solution as it is quicker and does the job.
|
Had the same issue as you. I then tried to commit `migrations/alembic.ini` and then things started to work. Just make sure there's no sensitive information inside that file before committing it. Hope this solves your issue too.
|
21,529,118
|
I'm trying to use flask-migrate to version my database locally and then reflect the changes in production (Heroku). So far I managed to successfully version the local database and upgrade it, so now I wanted to reflect this on Heroku. To do this I pushed the latest code state to Heroku together with the newly created **migrations** folder and updated requirements.txt. I saw the dependencies were successfully installed:
```
Successfully installed Flask-Migrate alembic Flask-Script Mako
```
Then, I tried:
```
$ heroku run python app/hello.py db upgrade
```
Unfortunately I got this in response:
```
Running `python app/hello.py db upgrade` attached to terminal... up, run.4322
Traceback (most recent call last):
File "app/hello.py", line 37, in <module>
manager.run()
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/__init__.py", line 405, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/__init__.py", line 384, in handle
return handle(app, *positional_args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/commands.py", line 145, in handle
return self.run(*args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_migrate/__init__.py", line 97, in upgrade
config = _get_config(directory)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_migrate/__init__.py", line 37, in _get_config
config.set_main_option('script_location', directory)
File "/app/.heroku/python/lib/python2.7/site-packages/alembic/config.py", line 142, in set_main_option
self.file_config.set(self.config_ini_section, name, value)
File "/app/.heroku/python/lib/python2.7/ConfigParser.py", line 753, in set
ConfigParser.set(self, section, option, value)
File "/app/.heroku/python/lib/python2.7/ConfigParser.py", line 396, in set
raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'alembic'
```
I googled to find out what this might be, and it looks like the config file can't be opened; however, I have no idea what can be done to fix that. How come this works locally but not on Heroku?
|
2014/02/03
|
[
"https://Stackoverflow.com/questions/21529118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/703809/"
] |
I was struggling with this for some time and even posted on the Heroku python forums, but no replies so far. To solve the issue I decided not to run the migration remotely on Heroku, but to run the migration on my development machine and pass the production database address instead. So I do this:
1. Sync the development db with production (when using Heroku you can easily do this with *heroku pg:pull*, you have to drop your local db prior to calling this method though)
2. Assuming your models are already updated, run *python app.py db migrate*. **Important**: I started getting the original error locally too; I figured out I have to be in the exact same directory as my app.py, otherwise I get the error.
3. Review your auto-generated migration scripts
4. Upgrade your local db with *python app.py db upgrade*
5. Change the settings for your app to use the production db instead of your local development db and then run *python app.py db upgrade* again
After some thinking it struck me that this might have been the way this tool was designed to work. Although it still would be nice to be able to run the migrations remotely from Heroku, I'll settle for my solution as it is quicker and does the job.
|
To elaborate on lawicko's answer, when using Flask-Migrate and Heroku, a good way to conduct database migrations for a production database is to download the production database, generate the migration script locally, and run the migration script on Heroku. The alternative is to use a local development database to generate migrations and run the migration script on Heroku for the production database, but if the local and production migrations become out of sync, errors will arise when performing database upgrades.
Here are the steps to download the production database and generate migrations locally in more detail:
1. Use the command `heroku pg:pull DATABASE_URL new_db_name -a heroku_app_name`, which creates a new local Postgres database named `new_db_name` with the same database schema and content as your Postgres database on Heroku. A database with the same name must not already exist. You can use a tool like Postico to view and manage your local Postgres databases with a GUI.
2. Configure your Flask app to use the new Postgres database. Assuming Flask-SQLAlchemy is being used, in Flask's configuration set `SQLALCHEMY_DATABASE_URI = "postgresql://localhost/new_db_name"` (see the sketch after this list).
3. Now that Flask recognizes the new local Postgres database mirroring the production database, use Flask-Migrate's `flask db init` and `flask db migrate` commands to generate a migration script.
4. Push the migrations folder generated by Flask-Migrate to Heroku.
5. Use Flask-Migrate to upgrade the production database on Heroku with `heroku run flask db upgrade -a heroku_app_name`.
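As a small illustration of step 2 (a sketch under the stated Flask-SQLAlchemy assumption; `new_db_name` is the placeholder from step 1), the configuration change is a single setting pointing at the pulled database:
```py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Point the app at the local copy created by `heroku pg:pull` in step 1
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://localhost/new_db_name"
db = SQLAlchemy(app)
```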
|
21,529,118
|
I'm trying to use flask-migrate to version my database locally and then reflect the changes in production (Heroku). So far I managed to successfully version the local database and upgrade it, so now I wanted to reflect this on Heroku. To do this I pushed the latest code state to Heroku together with the newly created **migrations** folder and updated requirements.txt. I saw the dependencies were successfully installed:
```
Successfully installed Flask-Migrate alembic Flask-Script Mako
```
Then, I tried:
```
$ heroku run python app/hello.py db upgrade
```
Unfortunately I got this in response:
```
Running `python app/hello.py db upgrade` attached to terminal... up, run.4322
Traceback (most recent call last):
File "app/hello.py", line 37, in <module>
manager.run()
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/__init__.py", line 405, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/__init__.py", line 384, in handle
return handle(app, *positional_args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/commands.py", line 145, in handle
return self.run(*args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_migrate/__init__.py", line 97, in upgrade
config = _get_config(directory)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_migrate/__init__.py", line 37, in _get_config
config.set_main_option('script_location', directory)
File "/app/.heroku/python/lib/python2.7/site-packages/alembic/config.py", line 142, in set_main_option
self.file_config.set(self.config_ini_section, name, value)
File "/app/.heroku/python/lib/python2.7/ConfigParser.py", line 753, in set
ConfigParser.set(self, section, option, value)
File "/app/.heroku/python/lib/python2.7/ConfigParser.py", line 396, in set
raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'alembic'
```
I googled to find out what this might be, and it looks like the config file can't be opened; however, I have no idea what can be done to fix that. How come this works locally but not on Heroku?
|
2014/02/03
|
[
"https://Stackoverflow.com/questions/21529118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/703809/"
] |
I haven't tried this with Heroku, but ran into the same error and symptoms. The issue for me was that when running locally, my current working directory was set to the project root directory, and when running remotely, it was set to the user's home directory.
Try either cd'ing to the right starting directory first, or passing the --directory parameter to the flask-migrate command with the absolute path to your migrations folder.
|
Had the same issue as you. I then tried to commit `migrations/alembic.ini` and then things started to work. Just make sure there's no sensitive information inside that file before committing it. Hope this solves your issue too.
|
21,529,118
|
I'm trying to use flask-migrate to version my database locally and then reflect the changes in production (Heroku). So far I managed to successfully version the local database and upgrade it, so now I wanted to reflect this on Heroku. To do this I pushed the latest code state to Heroku together with the newly created **migrations** folder and updated requirements.txt. I saw the dependencies were successfully installed:
```
Successfully installed Flask-Migrate alembic Flask-Script Mako
```
Then, I tried:
```
$ heroku run python app/hello.py db upgrade
```
Unfortunately I got this in response:
```
Running `python app/hello.py db upgrade` attached to terminal... up, run.4322
Traceback (most recent call last):
File "app/hello.py", line 37, in <module>
manager.run()
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/__init__.py", line 405, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/__init__.py", line 384, in handle
return handle(app, *positional_args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/commands.py", line 145, in handle
return self.run(*args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_migrate/__init__.py", line 97, in upgrade
config = _get_config(directory)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_migrate/__init__.py", line 37, in _get_config
config.set_main_option('script_location', directory)
File "/app/.heroku/python/lib/python2.7/site-packages/alembic/config.py", line 142, in set_main_option
self.file_config.set(self.config_ini_section, name, value)
File "/app/.heroku/python/lib/python2.7/ConfigParser.py", line 753, in set
ConfigParser.set(self, section, option, value)
File "/app/.heroku/python/lib/python2.7/ConfigParser.py", line 396, in set
raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'alembic'
```
I googled to find out what this might be, and it looks like the config file can't be opened; however, I have no idea what can be done to fix that. How come this works locally but not on Heroku?
|
2014/02/03
|
[
"https://Stackoverflow.com/questions/21529118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/703809/"
] |
To elaborate on lawicko's answer, when using Flask-Migrate and Heroku, a good way to conduct database migrations for a production database is to download the production database, generate the migration script locally, and run the migration script on Heroku. The alternative is to use a local development database to generate migrations and run the migration script on Heroku for the production database, but if the local and production migrations become out of sync, errors will arise when performing database upgrades.
Here are the steps to download the production database and generate migrations locally in more detail:
1. Use the command `heroku pg:pull DATABASE_URL new_db_name -a heroku_app_name`, which creates a new local Postgres database named `new_db_name` with the same database schema and content as your Postgres database on Heroku. A database with the same name must not already exist. You can use a tool like Postico to view and manage your local Postgres databases with a GUI.
2. Configure your Flask app to use the new Postgres database. Assuming Flask-SQLAlchemy is being used, in Flask's configuration set `SQLALCHEMY_DATABASE_URI = "postgresql://localhost/new_db_name"`.
3. Now that Flask recognizes the new local Postgres database mirroring the production database, use Flask-Migrate's `flask db init` and `flask db migrate` commands to generate a migration script.
4. Push the migrations folder generated by Flask-Migrate to Heroku.
5. Use Flask-Migrate to upgrade the production database on Heroku with `heroku run flask db upgrade -a heroku_app_name`.
|
Had the same issue as you. I then tried to commit `migrations/alembic.ini` and then things started to work. Just make sure there's no sensitive information inside that file before committing it. Hope this solves your issue too.
|
10,524,842
|
I have a multithreaded mergesorting program in C, and a program for benchmark testing it with 0, 1, 2, or 4 threads. I also wrote a program in Python to do multiple tests and aggregate the results.
The weird thing is that when I run the Python script, the tests always run in about half the time compared to when I run them directly in the shell.
For example, when I run the testing program by itself with 4 million integers to sort (the last two arguments are the seed and modulus for generating integers):
```
$ ./mergetest 4000000 4194819 140810581084
0 threads: 1.483485s wall; 1.476092s user; 0.004001s sys
1 threads: 1.489206s wall; 1.488093s user; 0.000000s sys
2 threads: 0.854119s wall; 1.608100s user; 0.008000s sys
4 threads: 0.673286s wall; 2.224139s user; 0.024002s sys
```
Using the python script:
```
$ ./mergedata.py 1 4000000
Average runtime for 1 runs with 4000000 items each:
0 threads: 0.677512s wall; 0.664041s user; 0.016001s sys
1 threads: 0.709118s wall; 0.704044s user; 0.004001s sys
2 threads: 0.414058s wall; 0.752047s user; 0.028001s sys
4 threads: 0.373708s wall; 1.24008s user; 0.024002s sys
```
This happens no matter how many I'm sorting, or how many times I run it. The Python program calls the tester with the subprocess module, then parses and aggregates the output. Any ideas why this would happen? Is Python somehow optimizing the execution, or is there something slowing it down when I run it directly that I'm not aware of?
Code: <https://gist.github.com/2650009>
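For context, a minimal sketch of the kind of wrapper described (the real script lives in the linked gist; the command-line arguments here just repeat the example above):
```py
import subprocess

# Run the C tester once and capture its timing lines for later parsing
output = subprocess.check_output(
    ["./mergetest", "4000000", "4194819", "140810581084"]
).decode()
for line in output.splitlines():
    print(line)
```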
|
2012/05/09
|
[
"https://Stackoverflow.com/questions/10524842",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1325447/"
] |
Turns out I was passing sys.maxint to the subprocess as the modulus for generating random numbers. C was truncating the 64-bit integer and interpreting it as signed, i.e., -1 in two's complement, so every random number was being mod'd by that and becoming 0. So, sorting all the same values seems to take about half as much time as random data.
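For illustration, a small sketch of the truncation involved (assuming a 64-bit build, where `sys.maxint` was `2**63 - 1`; the values are not taken from the linked code):
```py
# A C program reading the value into a 32-bit signed int keeps only the
# low 32 bits, which read back as -1 in two's complement -- and n % -1 == 0
# for every n, so the "random" data was all zeros.
maxint = 2**63 - 1
low32 = maxint & 0xFFFFFFFF
signed = low32 - 2**32 if low32 >= 2**31 else low32
print(signed)  # -1
```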
|
Wrapping this in a shell script will probably have the same effect. If so, it's the console operations.
|
20,694,338
|
I am trying to play around with some more of the functional programming parts of Python, and as a test I thought I would print out the sum of the first n integers for all numbers between 1 and 100.
```
for i in map(lambda n: (n*(n+1))/2, range(1,101)):
print "sum of the first %d integers: %d" % (i,i)
```
The last line prints out as:
```
sum of the first 5050 integers: 5050
```
It should read "sum of the first 100 integers: 5050" (I may have an off-by-one error, but I'll fix that).
My question is: what is the variable that holds the index?
|
2013/12/20
|
[
"https://Stackoverflow.com/questions/20694338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1761521/"
] |
You can return a tuple of (index, value) from your lambda, like this:
```
for i,s in map(lambda n: (n,(n*(n+1))/2), range(1,101)):
print "sum of the first %d integers: %d" % (i,s)
```
|
Your code doesn't define a variable that holds an index. In the outermost scope, there is just the variable (sometimes called a "name" when talking about Python) "i".
If you'd like an index, you can use the built-in function enumerate()
```
for i,x in enumerate([5,10,15]):
print i, x
```
|
20,694,338
|
I am trying to play around with some more of the functional programming parts of Python, and as a test I thought I would print out the sum of the first n integers for all numbers between 1 and 100.
```
for i in map(lambda n: (n*(n+1))/2, range(1,101)):
print "sum of the first %d integers: %d" % (i,i)
```
The last line prints out as:
```
sum of the first 5050 integers: 5050
```
It should read "sum of the first 100 integers: 5050" (I may have an off-by-one error, but I'll fix that).
My question is: what is the variable that holds the index?
|
2013/12/20
|
[
"https://Stackoverflow.com/questions/20694338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1761521/"
] |
Your code doesn't define a variable that holds an index. In the outermost scope, there is just the variable (sometimes called a "name" when talking about Python) "i".
If you'd like an index, you can use the built-in function enumerate()
```
for i,x in enumerate([5,10,15]):
print i, x
```
|
Or you could try this:
```
for ind, val in enumerate(map(lambda n: (n*(n+1))/2, range(1,101)), 1):
print "sum of the first %d integers: %d" % (ind, val)
```
|
20,694,338
|
I am trying to play around with some more of the functional programming parts of Python, and as a test I thought I would print out the sum of the first n integers for all numbers between 1 and 100.
```
for i in map(lambda n: (n*(n+1))/2, range(1,101)):
print "sum of the first %d integers: %d" % (i,i)
```
The last line prints out as:
```
sum of the first 5050 integers: 5050
```
It should read "sum of the first 100 integers: 5050" (I may have an off-by-one error, but I'll fix that).
My question is: what is the variable that holds the index?
|
2013/12/20
|
[
"https://Stackoverflow.com/questions/20694338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1761521/"
] |
Your code doesn't define a variable that holds an index. In the outermost scope, there is just the variable (sometimes called a "name" when talking about Python) "i".
If you'd like an index, you can use the built-in function enumerate()
```
for i,x in enumerate([5,10,15]):
print i, x
```
|
Maybe I don't understand what you are going for, but couldn't you just use print:
```
print "sum of the first %d integers: %d" %(100,sum(xrange(1,101)))
```
if you wanted user input...
```
i = input("Enter the upper range to sum: ")
print "sum of the first %d integers: %d" %(i,sum(xrange(1,i+1)))
```
I'm also using xrange() because it is faster in python v2 than range().
|
20,694,338
|
I am trying to play around with some more of the functional programming parts of Python, and as a test I thought I would print out the sum of the first n integers for all numbers between 1 and 100.
```
for i in map(lambda n: (n*(n+1))/2, range(1,101)):
print "sum of the first %d integers: %d" % (i,i)
```
The last line prints out as:
```
sum of the first 5050 integers: 5050
```
It should read "sum of the first 100 integers: 5050" (I may have an off-by-one error, but I'll fix that).
My question is: what is the variable that holds the index?
|
2013/12/20
|
[
"https://Stackoverflow.com/questions/20694338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1761521/"
] |
You can return a tuple of (index, value) from your lambda, like this:
```
for i,s in map(lambda n: (n,(n*(n+1))/2), range(1,101)):
print "sum of the first %d integers: %d" % (i,s)
```
|
Or you could try this:
```
for ind, val in enumerate(map(lambda n: (n*(n+1))/2, range(1,101)), 1):
print "sum of the first %d integers: %d" % (ind, val)
```
|
20,694,338
|
I am trying to play around with some more of the functional programming parts of Python, and as a test I thought I would print out the sum of the first n integers for all numbers between 1 and 100.
```
for i in map(lambda n: (n*(n+1))/2, range(1,101)):
print "sum of the first %d integers: %d" % (i,i)
```
The last line prints out as:
```
sum of the first 5050 integers: 5050
```
It should read "sum of the first 100 integers: 5050" (I may have an off-by-one error, but I'll fix that).
My question is: what is the variable that holds the index?
|
2013/12/20
|
[
"https://Stackoverflow.com/questions/20694338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1761521/"
] |
You can return a tuple of (index, value) from your lambda, like this:
```
for i,s in map(lambda n: (n,(n*(n+1))/2), range(1,101)):
print "sum of the first %d integers: %d" % (i,s)
```
|
Maybe I don't understand what you are going for, but couldn't you just use print:
```
print "sum of the first %d integers: %d" %(100,sum(xrange(1,101)))
```
if you wanted user input...
```
i = input("Enter the upper range to sum: ")
print "sum of the first %d integers: %d" %(i,sum(xrange(1,i+1)))
```
I'm also using xrange() because it is faster in python v2 than range().
|
20,694,338
|
I am trying to play around with some more of the functional programming parts of Python, and as a test I thought I would print out the sum of the first n integers for all numbers between 1 and 100.
```
for i in map(lambda n: (n*(n+1))/2, range(1,101)):
print "sum of the first %d integers: %d" % (i,i)
```
The last line prints out as:
```
sum of the first 5050 integers: 5050
```
It should read "sum of the first 100 integers: 5050" (I may have an off-by-one error, but I'll fix that).
My question is: what is the variable that holds the index?
|
2013/12/20
|
[
"https://Stackoverflow.com/questions/20694338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1761521/"
] |
Or you could try this:
```
for ind, val in enumerate(map(lambda n: (n*(n+1))/2, range(1,101)), 1):
print "sum of the first %d integers: %d" % (ind, val)
```
|
Maybe I don't understand what you are going for, but couldn't you just use print:
```
print "sum of the first %d integers: %d" %(100,sum(xrange(1,101)))
```
if you wanted user input...
```
i = input("Enter the upper range to sum: ")
print "sum of the first %d integers: %d" %(i,sum(xrange(1,i+1)))
```
I'm also using xrange() because it is faster in python v2 than range().
|
62,670,991
|
I'm trying to read multiple CSV files from blob storage using python.
The code that I'm using is:
```
blob_service_client = BlobServiceClient.from_connection_string(connection_str)
container_client = blob_service_client.get_container_client(container)
blobs_list = container_client.list_blobs(folder_root)
for blob in blobs_list:
blob_client = blob_service_client.get_blob_client(container=container, blob="blob.name")
stream = blob_client.download_blob().content_as_text()
```
I'm not sure what is the correct way to store the CSV files read in a pandas dataframe.
I tried to use:
```
df = df.append(pd.read_csv(StringIO(stream)))
```
But this shows me an error.
Any idea how I can do this?
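A minimal sketch of one way to assemble the frames, reusing the names from the snippet above (placeholders such as `connection_str`, `container` and `folder_root` are assumed to be defined as in the question); each blob is read by its actual name rather than the literal string `"blob.name"`, and the pieces are concatenated once at the end:
```py
from io import StringIO
import pandas as pd
from azure.storage.blob import BlobServiceClient

blob_service_client = BlobServiceClient.from_connection_string(connection_str)
container_client = blob_service_client.get_container_client(container)

frames = []
for blob in container_client.list_blobs(name_starts_with=folder_root):
    text = container_client.get_blob_client(blob.name).download_blob().content_as_text()
    frames.append(pd.read_csv(StringIO(text)))

df = pd.concat(frames, ignore_index=True)
```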
|
2020/07/01
|
[
"https://Stackoverflow.com/questions/62670991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6153466/"
] |
You could download the file from blob storage, then read the data into a pandas DataFrame from the downloaded file.
```
from azure.storage.blob import BlockBlobService
import pandas as pd
import time

STORAGEACCOUNTNAME = <storage_account_name>
STORAGEACCOUNTKEY = <storage_account_key>
LOCALFILENAME = <local_file_name>
CONTAINERNAME = <container_name>
BLOBNAME = <blob_name>

# download from blob
t1 = time.time()
blob_service = BlockBlobService(account_name=STORAGEACCOUNTNAME, account_key=STORAGEACCOUNTKEY)
blob_service.get_blob_to_path(CONTAINERNAME, BLOBNAME, LOCALFILENAME)
t2 = time.time()
print(("It takes %s seconds to download " + BLOBNAME) % (t2 - t1))

# LOCALFILENAME is the file path
dataframe_blobdata = pd.read_csv(LOCALFILENAME)
```
For more details, see [here](https://learn.microsoft.com/en-us/azure/machine-learning/team-data-science-process/explore-data-blob).
---
If you want to do the conversion directly, the following code will help. You need to get the content from the blob object, and with `get_blob_to_text` there's no need for a local file name.
```
from io import StringIO
blobstring = blob_service.get_blob_to_text(CONTAINERNAME,BLOBNAME).content
df = pd.read_csv(StringIO(blobstring))
```
|
```
import pandas as pd
data = pd.read_csv('blob_sas_url')
```
The Blob SAS URL can be found in the Azure portal by right-clicking on the blob file that you want to import and selecting Generate SAS. Then click the Generate SAS token and URL button and copy the SAS URL into the above code in place of `blob_sas_url`.
|
62,670,991
|
I'm trying to read multiple CSV files from blob storage using python.
The code that I'm using is:
```
blob_service_client = BlobServiceClient.from_connection_string(connection_str)
container_client = blob_service_client.get_container_client(container)
blobs_list = container_client.list_blobs(folder_root)
for blob in blobs_list:
blob_client = blob_service_client.get_blob_client(container=container, blob="blob.name")
stream = blob_client.download_blob().content_as_text()
```
I'm not sure what is the correct way to store the CSV files read in a pandas dataframe.
I tried to use:
```
df = df.append(pd.read_csv(StringIO(stream)))
```
But this shows me an error.
Any idea how I can do this?
|
2020/07/01
|
[
"https://Stackoverflow.com/questions/62670991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6153466/"
] |
You could download the file from blob storage, then read the data into a pandas DataFrame from the downloaded file.
```
from azure.storage.blob import BlockBlobService
import pandas as pd
import time

STORAGEACCOUNTNAME = <storage_account_name>
STORAGEACCOUNTKEY = <storage_account_key>
LOCALFILENAME = <local_file_name>
CONTAINERNAME = <container_name>
BLOBNAME = <blob_name>

# download from blob
t1 = time.time()
blob_service = BlockBlobService(account_name=STORAGEACCOUNTNAME, account_key=STORAGEACCOUNTKEY)
blob_service.get_blob_to_path(CONTAINERNAME, BLOBNAME, LOCALFILENAME)
t2 = time.time()
print(("It takes %s seconds to download " + BLOBNAME) % (t2 - t1))

# LOCALFILENAME is the file path
dataframe_blobdata = pd.read_csv(LOCALFILENAME)
```
For more details, see [here](https://learn.microsoft.com/en-us/azure/machine-learning/team-data-science-process/explore-data-blob).
---
If you want to do the conversion directly, the following code will help. You need to get the content from the blob object, and with `get_blob_to_text` there's no need for a local file name.
```
from io import StringIO
blobstring = blob_service.get_blob_to_text(CONTAINERNAME,BLOBNAME).content
df = pd.read_csv(StringIO(blobstring))
```
|
You can now directly read from BlobStorage into a Pandas DataFrame:
```
import os
import pandas as pd

mydata = pd.read_csv(
    f"abfs://{blob_path}",
    storage_options={
        "connection_string": os.environ["STORAGE_CONNECTION"]
    })
```
where `blob_path` is the path to your file, given as `{container-name}/{blob-prefix.csv}`
|
62,670,991
|
I'm trying to read multiple CSV files from blob storage using python.
The code that I'm using is:
```
blob_service_client = BlobServiceClient.from_connection_string(connection_str)
container_client = blob_service_client.get_container_client(container)
blobs_list = container_client.list_blobs(folder_root)
for blob in blobs_list:
blob_client = blob_service_client.get_blob_client(container=container, blob="blob.name")
stream = blob_client.download_blob().content_as_text()
```
I'm not sure what is the correct way to store the CSV files read in a pandas dataframe.
I tried to use:
```
df = df.append(pd.read_csv(StringIO(stream)))
```
But this shows me an error.
Any idea how I can do this?
|
2020/07/01
|
[
"https://Stackoverflow.com/questions/62670991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6153466/"
] |
You could download the file from blob storage, then read the data into a pandas DataFrame from the downloaded file.
```
from azure.storage.blob import BlockBlobService
import pandas as pd
import time

STORAGEACCOUNTNAME = <storage_account_name>
STORAGEACCOUNTKEY = <storage_account_key>
LOCALFILENAME = <local_file_name>
CONTAINERNAME = <container_name>
BLOBNAME = <blob_name>

# download from blob
t1 = time.time()
blob_service = BlockBlobService(account_name=STORAGEACCOUNTNAME, account_key=STORAGEACCOUNTKEY)
blob_service.get_blob_to_path(CONTAINERNAME, BLOBNAME, LOCALFILENAME)
t2 = time.time()
print(("It takes %s seconds to download " + BLOBNAME) % (t2 - t1))

# LOCALFILENAME is the file path
dataframe_blobdata = pd.read_csv(LOCALFILENAME)
```
For more details, see [here](https://learn.microsoft.com/en-us/azure/machine-learning/team-data-science-process/explore-data-blob).
---
If you want to do the conversion directly, the following code will help. You need to get the content from the blob object, and with `get_blob_to_text` there's no need for a local file name.
```
from io import StringIO
blobstring = blob_service.get_blob_to_text(CONTAINERNAME,BLOBNAME).content
df = pd.read_csv(StringIO(blobstring))
```
|
The BlockBlobService that is part of azure-storage is deprecated. Use the following instead:
```
!pip install azure-storage-blob

from azure.storage.blob import BlobServiceClient
import pandas as pd

STORAGEACCOUNTURL = <storage_account_url>
STORAGEACCOUNTKEY = <storage_account_key>
LOCALFILENAME = <local_file_name>
CONTAINERNAME = <container_name>
BLOBNAME = <blob_name>

# download from blob
blob_service_client_instance = BlobServiceClient(account_url=STORAGEACCOUNTURL, credential=STORAGEACCOUNTKEY)
blob_client_instance = blob_service_client_instance.get_blob_client(CONTAINERNAME, BLOBNAME, snapshot=None)
with open(LOCALFILENAME, "wb") as my_blob:
    blob_data = blob_client_instance.download_blob()
    blob_data.readinto(my_blob)

# import blob to dataframe
df = pd.read_csv(LOCALFILENAME)
```
LOCALFILENAME is the same as BLOBNAME
|
62,670,991
|
I'm trying to read multiple CSV files from blob storage using python.
The code that I'm using is:
```
blob_service_client = BlobServiceClient.from_connection_string(connection_str)
container_client = blob_service_client.get_container_client(container)
blobs_list = container_client.list_blobs(folder_root)
for blob in blobs_list:
blob_client = blob_service_client.get_blob_client(container=container, blob="blob.name")
stream = blob_client.download_blob().content_as_text()
```
I'm not sure what is the correct way to store the CSV files read in a pandas dataframe.
I tried to use:
```
df = df.append(pd.read_csv(StringIO(stream)))
```
But this shows me an error.
Any idea how can I to do this?
|
2020/07/01
|
[
"https://Stackoverflow.com/questions/62670991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6153466/"
] |
You could download the file from blob storage, then read the data into a pandas DataFrame from the downloaded file.
```
from azure.storage.blob import BlockBlobService
import pandas as pd
import time

STORAGEACCOUNTNAME = <storage_account_name>
STORAGEACCOUNTKEY = <storage_account_key>
LOCALFILENAME = <local_file_name>
CONTAINERNAME = <container_name>
BLOBNAME = <blob_name>

# download from blob
t1 = time.time()
blob_service = BlockBlobService(account_name=STORAGEACCOUNTNAME, account_key=STORAGEACCOUNTKEY)
blob_service.get_blob_to_path(CONTAINERNAME, BLOBNAME, LOCALFILENAME)
t2 = time.time()
print(("It takes %s seconds to download " + BLOBNAME) % (t2 - t1))

# LOCALFILENAME is the file path
dataframe_blobdata = pd.read_csv(LOCALFILENAME)
```
For more details, see [here](https://learn.microsoft.com/en-us/azure/machine-learning/team-data-science-process/explore-data-blob).
---
If you want to do the conversion directly, the following code will help. You need to get the content from the blob object, and with `get_blob_to_text` there's no need for a local file name.
```
from io import StringIO
blobstring = blob_service.get_blob_to_text(CONTAINERNAME,BLOBNAME).content
df = pd.read_csv(StringIO(blobstring))
```
|
BlockBlobService is indeed deprecated. However, @Deepak's answer doesn't work for me. Below works:
```
import pandas as pd
from io import BytesIO
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = <connection_string>
CONTAINERNAME = <container_name>
BLOBNAME = <blob_name>

blob_service_client = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container_client = blob_service_client.get_container_client(CONTAINERNAME)
blob_client = container_client.get_blob_client(BLOBNAME)

with BytesIO() as input_blob:
    blob_client.download_blob().download_to_stream(input_blob)
    input_blob.seek(0)
    df = pd.read_csv(input_blob)
```
|
62,670,991
|
I'm trying to read multiple CSV files from blob storage using python.
The code that I'm using is:
```
blob_service_client = BlobServiceClient.from_connection_string(connection_str)
container_client = blob_service_client.get_container_client(container)
blobs_list = container_client.list_blobs(folder_root)
for blob in blobs_list:
blob_client = blob_service_client.get_blob_client(container=container, blob="blob.name")
stream = blob_client.download_blob().content_as_text()
```
I'm not sure what is the correct way to store the CSV files read in a pandas dataframe.
I tried to use:
```
df = df.append(pd.read_csv(StringIO(stream)))
```
But this shows me an error.
Any idea how I can do this?
|
2020/07/01
|
[
"https://Stackoverflow.com/questions/62670991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6153466/"
] |
```
import pandas as pd
data = pd.read_csv('blob_sas_url')
```
The Blob SAS URL can be found in the Azure portal by right-clicking on the blob file that you want to import and selecting Generate SAS. Then click the Generate SAS token and URL button and copy the SAS URL into the above code in place of `blob_sas_url`.
|
You can now directly read from BlobStorage into a Pandas DataFrame:
```
import os
import pandas as pd

mydata = pd.read_csv(
    f"abfs://{blob_path}",
    storage_options={
        "connection_string": os.environ["STORAGE_CONNECTION"]
    })
```
where `blob_path` is the path to your file, given as `{container-name}/{blob-prefix.csv}`
|
62,670,991
|
I'm trying to read multiple CSV files from blob storage using python.
The code that I'm using is:
```
blob_service_client = BlobServiceClient.from_connection_string(connection_str)
container_client = blob_service_client.get_container_client(container)
blobs_list = container_client.list_blobs(folder_root)
for blob in blobs_list:
blob_client = blob_service_client.get_blob_client(container=container, blob="blob.name")
stream = blob_client.download_blob().content_as_text()
```
I'm not sure what is the correct way to store the CSV files read in a pandas dataframe.
I tried to use:
```
df = df.append(pd.read_csv(StringIO(stream)))
```
But this shows me an error.
Any idea how I can do this?
|
2020/07/01
|
[
"https://Stackoverflow.com/questions/62670991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6153466/"
] |
```
import pandas as pd
data = pd.read_csv('blob_sas_url')
```
The Blob SAS URL can be found in the Azure portal by right-clicking on the blob file that you want to import and selecting Generate SAS. Then click the Generate SAS token and URL button and copy the SAS URL into the above code in place of `blob_sas_url`.
|
The BlockBlobService that is part of azure-storage is deprecated. Use the following instead:
```
!pip install azure-storage-blob

from azure.storage.blob import BlobServiceClient
import pandas as pd

STORAGEACCOUNTURL = <storage_account_url>
STORAGEACCOUNTKEY = <storage_account_key>
LOCALFILENAME = <local_file_name>
CONTAINERNAME = <container_name>
BLOBNAME = <blob_name>

# download from blob
blob_service_client_instance = BlobServiceClient(account_url=STORAGEACCOUNTURL, credential=STORAGEACCOUNTKEY)
blob_client_instance = blob_service_client_instance.get_blob_client(CONTAINERNAME, BLOBNAME, snapshot=None)
with open(LOCALFILENAME, "wb") as my_blob:
    blob_data = blob_client_instance.download_blob()
    blob_data.readinto(my_blob)

# import blob to dataframe
df = pd.read_csv(LOCALFILENAME)
```
LOCALFILENAME is the same as BLOBNAME
|
62,670,991
|
I'm trying to read multiple CSV files from blob storage using python.
The code that I'm using is:
```
blob_service_client = BlobServiceClient.from_connection_string(connection_str)
container_client = blob_service_client.get_container_client(container)
blobs_list = container_client.list_blobs(folder_root)
for blob in blobs_list:
blob_client = blob_service_client.get_blob_client(container=container, blob="blob.name")
stream = blob_client.download_blob().content_as_text()
```
I'm not sure what is the correct way to store the CSV files read in a pandas dataframe.
I tried to use:
```
df = df.append(pd.read_csv(StringIO(stream)))
```
But this shows me an error.
Any idea how I can do this?
|
2020/07/01
|
[
"https://Stackoverflow.com/questions/62670991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6153466/"
] |
```
import pandas as pd
data = pd.read_csv('blob_sas_url')
```
The Blob SAS URL can be found by right-clicking the blob file you want to import in the Azure portal and selecting Generate SAS. Then click the Generate SAS token and URL button and copy the SAS URL into the code above in place of blob\_sas\_url.
|
BlockBlobService is indeed deprecated. However, @Deepak's answer doesn't work for me. Below works:
```
import pandas as pd
from io import BytesIO
from azure.storage.blob import BlobServiceClient
CONNECTION_STRING= <connection_string>
CONTAINERNAME= <container_name>
BLOBNAME= <blob_name>
blob_service_client = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container_client = blob_service_client.get_container_client(CONTAINERNAME)
blob_client = container_client.get_blob_client(BLOBNAME)
with BytesIO() as input_blob:
blob_client.download_blob().download_to_stream(input_blob)
input_blob.seek(0)
df = pd.read_csv(input_blob)
```
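On recent versions of `azure-storage-blob` (an assumption about the installed package), the same download can be written a little more compactly with `readall()`:
```
from io import BytesIO
import pandas as pd

downloader = blob_client.download_blob()          # blob_client as defined above
df = pd.read_csv(BytesIO(downloader.readall()))   # readall() returns the blob contents as bytes
```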
|
936,933
|
If you raise a KeyboardInterrupt while trying to acquire a semaphore, the threads that also try to release the same semaphore object hang indefinitely.
Code:
```
import threading
import time
def worker(i, sema):
time.sleep(2)
print i, "finished"
sema.release()
sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
sema.acquire()
t = threading.Thread(target=worker, args=(x, sema))
t.start()
threads.append(t)
```
Start this up and then ^C as it is running. It will hang and never exit.
```
0 finished
3 finished
1 finished
2 finished
4 finished
^C5 finished
Traceback (most recent call last):
File "/tmp/proof.py", line 15, in <module>
sema.acquire()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 290, in acquire
self.__cond.wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 214, in wait
waiter.acquire()
KeyboardInterrupt
6 finished
7 finished
8 finished
9 finished
```
How can I get it to let the last few threads die natural deaths and then exit normally? (which it does if you don't try to interrupt it)
|
2009/06/01
|
[
"https://Stackoverflow.com/questions/936933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/41613/"
] |
You can use the signal module to set a flag that tells the main thread to stop processing:
```
import threading
import time
import signal
import sys
sigint = False
def sighandler(num, frame):
global sigint
sigint = True
def worker(i, sema):
time.sleep(2)
print i, "finished"
sema.release()
signal.signal(signal.SIGINT, sighandler)
sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
sema.acquire()
if sigint:
sys.exit()
t = threading.Thread(target=worker, args=(x, sema))
t.start()
t.join()
threads.append(t)
```
|
In this case, it looks like you might just want to use a thread pool to control the starting and stopping of your threads. You could use [Chris Arndt's threadpool library](http://www.chrisarndt.de/projects/threadpool/) in a manner something like this:
```
pool = ThreadPool(5)
try:
# enqueue 100 worker threads
pool.wait()
except KeyboardInterrupt, k:
pool.dismiss(5)
# the program will exit after all running threads are complete
```
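If you would rather stay in the standard library, a rough equivalent with `concurrent.futures` (a different API than the threadpool library above; sketched for modern Python 3, and `cancel_futures` needs 3.9+):
```
from concurrent.futures import ThreadPoolExecutor
import time

def worker(i):
    time.sleep(2)
    print(i, "finished")

pool = ThreadPoolExecutor(max_workers=5)       # at most 5 workers run at once
futures = [pool.submit(worker, x) for x in range(100)]
try:
    pool.shutdown(wait=True)                   # block until all queued work is done
except KeyboardInterrupt:
    # drop work that has not started yet; workers already running finish on their own
    pool.shutdown(wait=False, cancel_futures=True)
```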
|
936,933
|
If you raise a KeyboardInterrupt while trying to acquire a semaphore, the threads that also try to release the same semaphore object hang indefinitely.
Code:
```
import threading
import time
def worker(i, sema):
time.sleep(2)
print i, "finished"
sema.release()
sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
sema.acquire()
t = threading.Thread(target=worker, args=(x, sema))
t.start()
threads.append(t)
```
Start this up and then ^C as it is running. It will hang and never exit.
```
0 finished
3 finished
1 finished
2 finished
4 finished
^C5 finished
Traceback (most recent call last):
File "/tmp/proof.py", line 15, in <module>
sema.acquire()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 290, in acquire
self.__cond.wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 214, in wait
waiter.acquire()
KeyboardInterrupt
6 finished
7 finished
8 finished
9 finished
```
How can I get it to let the last few threads die natural deaths and then exit normally? (which it does if you don't try to interrupt it)
|
2009/06/01
|
[
"https://Stackoverflow.com/questions/936933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/41613/"
] |
You can use the signal module to set a flag that tells the main thread to stop processing:
```
import threading
import time
import signal
import sys
sigint = False
def sighandler(num, frame):
global sigint
sigint = True
def worker(i, sema):
time.sleep(2)
print i, "finished"
sema.release()
signal.signal(signal.SIGINT, sighandler)
sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
sema.acquire()
if sigint:
sys.exit()
t = threading.Thread(target=worker, args=(x, sema))
t.start()
t.join()
threads.append(t)
```
|
In your original code you could also make the threads daemon threads. When you interrupt the script, the daemon threads all die as you expected.
```
t = ...
t.setDaemon(True)
t.start()
```
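A minimal sketch of how that fits into the original loop (Python 2, as in the question); the try/except around `acquire()` is an addition so that Ctrl-C stops spawning new workers:
```
import threading
import time

def worker(i, sema):
    time.sleep(2)
    print i, "finished"
    sema.release()

sema = threading.BoundedSemaphore(value=5)
for x in xrange(100):
    try:
        sema.acquire()
    except KeyboardInterrupt:
        break                      # stop creating new workers
    t = threading.Thread(target=worker, args=(x, sema))
    t.setDaemon(True)              # daemon threads die with the main thread
    t.start()
```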
|
936,933
|
If you raise a KeyboardInterrupt while trying to acquire a semaphore, the threads that also try to release the same semaphore object hang indefinitely.
Code:
```
import threading
import time
def worker(i, sema):
time.sleep(2)
print i, "finished"
sema.release()
sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
sema.acquire()
t = threading.Thread(target=worker, args=(x, sema))
t.start()
threads.append(t)
```
Start this up and then ^C as it is running. It will hang and never exit.
```
0 finished
3 finished
1 finished
2 finished
4 finished
^C5 finished
Traceback (most recent call last):
File "/tmp/proof.py", line 15, in <module>
sema.acquire()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 290, in acquire
self.__cond.wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 214, in wait
waiter.acquire()
KeyboardInterrupt
6 finished
7 finished
8 finished
9 finished
```
How can I get it to let the last few threads die natural deaths and then exit normally? (which it does if you don't try to interrupt it)
|
2009/06/01
|
[
"https://Stackoverflow.com/questions/936933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/41613/"
] |
You can use the signal module to set a flag that tells the main thread to stop processing:
```
import threading
import time
import signal
import sys
sigint = False
def sighandler(num, frame):
global sigint
sigint = True
def worker(i, sema):
time.sleep(2)
print i, "finished"
sema.release()
signal.signal(signal.SIGINT, sighandler)
sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
sema.acquire()
if sigint:
sys.exit()
t = threading.Thread(target=worker, args=(x, sema))
t.start()
t.join()
threads.append(t)
```
|
This is bug [#11714](http://bugs.python.org/issue11714), and it has been [patched](http://hg.python.org/cpython/rev/2253b8a18bbf) in newer versions of Python.
If you are using an older Python, you could copy the version of `Semaphore` found in that patch into your project and use it instead of relying on the buggy version in `threading`.
|
936,933
|
If you raise a KeyboardInterrupt while trying to acquire a semaphore, the threads that also try to release the same semaphore object hang indefinitely.
Code:
```
import threading
import time
def worker(i, sema):
time.sleep(2)
print i, "finished"
sema.release()
sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
sema.acquire()
t = threading.Thread(target=worker, args=(x, sema))
t.start()
threads.append(t)
```
Start this up and then ^C as it is running. It will hang and never exit.
```
0 finished
3 finished
1 finished
2 finished
4 finished
^C5 finished
Traceback (most recent call last):
File "/tmp/proof.py", line 15, in <module>
sema.acquire()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 290, in acquire
self.__cond.wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 214, in wait
waiter.acquire()
KeyboardInterrupt
6 finished
7 finished
8 finished
9 finished
```
How can I get it to let the last few threads die natural deaths and then exit normally? (which it does if you don't try to interrupt it)
|
2009/06/01
|
[
"https://Stackoverflow.com/questions/936933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/41613/"
] |
You can use the signal module to set a flag that tells the main thread to stop processing:
```
import threading
import time
import signal
import sys
sigint = False
def sighandler(num, frame):
global sigint
sigint = True
def worker(i, sema):
time.sleep(2)
print i, "finished"
sema.release()
signal.signal(signal.SIGINT, sighandler)
sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
sema.acquire()
if sigint:
sys.exit()
t = threading.Thread(target=worker, args=(x, sema))
t.start()
t.join()
threads.append(t)
```
|
```
# importing modules
import threading
import time
# defining our worker and pass a counter and the semaphore to it
def worker(i, sema):
time.sleep(2)
print i, "finished"
# releasing the thread increments the sema value
sema.release()
# creating the semaphore object
sema = threading.BoundedSemaphore(value=5)
# a list to store the created threads
threads = []
for x in xrange(100):
try:
sema.acquire()
t = threading.Thread(target=worker, args=(x, sema))
t.start()
threads.append(t)
# exit once the user hits CTRL+C
# or you can make the thread a daemon: t.setDaemon(True)
except KeyboardInterrupt:
exit()
```
|
936,933
|
If you raise a KeyboardInterrupt while trying to acquire a semaphore, the threads that also try to release the same semaphore object hang indefinitely.
Code:
```
import threading
import time
def worker(i, sema):
time.sleep(2)
print i, "finished"
sema.release()
sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
sema.acquire()
t = threading.Thread(target=worker, args=(x, sema))
t.start()
threads.append(t)
```
Start this up and then ^C as it is running. It will hang and never exit.
```
0 finished
3 finished
1 finished
2 finished
4 finished
^C5 finished
Traceback (most recent call last):
File "/tmp/proof.py", line 15, in <module>
sema.acquire()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 290, in acquire
self.__cond.wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 214, in wait
waiter.acquire()
KeyboardInterrupt
6 finished
7 finished
8 finished
9 finished
```
How can I get it to let the last few threads die natural deaths and then exit normally? (which it does if you don't try to interrupt it)
|
2009/06/01
|
[
"https://Stackoverflow.com/questions/936933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/41613/"
] |
In your original code you could also make the threads daemon threads. When you interrupt the script, the daemon threads all die as you expected.
```
t = ...
t.setDaemon(True)
t.start()
```
|
In this case, it looks like you might just want to use a thread pool to control the starting and stopping of your threads. You could use [Chris Arndt's threadpool library](http://www.chrisarndt.de/projects/threadpool/) in a manner something like this:
```
pool = ThreadPool(5)
try:
# enqueue 100 worker threads
pool.wait()
except KeyboardInterrupt, k:
pool.dismiss(5)
# the program will exit after all running threads are complete
```
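If you would rather stay in the standard library, a rough equivalent with `concurrent.futures` (a different API than the threadpool library above; sketched for modern Python 3, and `cancel_futures` needs 3.9+):
```
from concurrent.futures import ThreadPoolExecutor
import time

def worker(i):
    time.sleep(2)
    print(i, "finished")

pool = ThreadPoolExecutor(max_workers=5)       # at most 5 workers run at once
futures = [pool.submit(worker, x) for x in range(100)]
try:
    pool.shutdown(wait=True)                   # block until all queued work is done
except KeyboardInterrupt:
    # drop work that has not started yet; workers already running finish on their own
    pool.shutdown(wait=False, cancel_futures=True)
```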
|
936,933
|
If you raise a KeyboardInterrupt while trying to acquire a semaphore, the threads that also try to release the same semaphore object hang indefinitely.
Code:
```
import threading
import time
def worker(i, sema):
time.sleep(2)
print i, "finished"
sema.release()
sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
sema.acquire()
t = threading.Thread(target=worker, args=(x, sema))
t.start()
threads.append(t)
```
Start this up and then ^C as it is running. It will hang and never exit.
```
0 finished
3 finished
1 finished
2 finished
4 finished
^C5 finished
Traceback (most recent call last):
File "/tmp/proof.py", line 15, in <module>
sema.acquire()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 290, in acquire
self.__cond.wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 214, in wait
waiter.acquire()
KeyboardInterrupt
6 finished
7 finished
8 finished
9 finished
```
How can I get it to let the last few threads die natural deaths and then exit normally? (which it does if you don't try to interrupt it)
|
2009/06/01
|
[
"https://Stackoverflow.com/questions/936933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/41613/"
] |
In your original code you could also make the threads daemon threads. When you interrupt the script, the daemon threads all die as you expected.
```
t = ...
t.setDaemon(True)
t.start()
```
|
This is bug [#11714](http://bugs.python.org/issue11714), and it has been [patched](http://hg.python.org/cpython/rev/2253b8a18bbf) in newer versions of Python.
If you are using an older Python, you could copy the version of `Semaphore` found in that patch into your project and use it instead of relying on the buggy version in `threading`.
|
936,933
|
If you raise a KeyboardInterrupt while trying to acquire a semaphore, the threads that also try to release the same semaphore object hang indefinitely.
Code:
```
import threading
import time
def worker(i, sema):
time.sleep(2)
print i, "finished"
sema.release()
sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
sema.acquire()
t = threading.Thread(target=worker, args=(x, sema))
t.start()
threads.append(t)
```
Start this up and then ^C as it is running. It will hang and never exit.
```
0 finished
3 finished
1 finished
2 finished
4 finished
^C5 finished
Traceback (most recent call last):
File "/tmp/proof.py", line 15, in <module>
sema.acquire()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 290, in acquire
self.__cond.wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 214, in wait
waiter.acquire()
KeyboardInterrupt
6 finished
7 finished
8 finished
9 finished
```
How can I get it to let the last few threads die natural deaths and then exit normally? (which it does if you don't try to interrupt it)
|
2009/06/01
|
[
"https://Stackoverflow.com/questions/936933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/41613/"
] |
In your original code you could also make the threads daemon threads. When you interrupt the script, the daemon threads all die as you expected.
```
t = ...
t.setDaemon(True)
t.start()
```
|
```
# importing modules
import threading
import time
# defining our worker and pass a counter and the semaphore to it
def worker(i, sema):
time.sleep(2)
print i, "finished"
# releasing the thread increments the sema value
sema.release()
# creating the semaphore object
sema = threading.BoundedSemaphore(value=5)
# a list to store the created threads
threads = []
for x in xrange(100):
try:
sema.acquire()
t = threading.Thread(target=worker, args=(x, sema))
t.start()
threads.append(t)
# exit once the user hits CTRL+C
# or you can make the thread a daemon: t.setDaemon(True)
except KeyboardInterrupt:
exit()
```
|
48,561,126
|
I installed opencv on my Ubuntu 14.04 system with
```
pip install python-opencv
```
my Python version is 2.7.14
```
import cv2
cv2.__version__
```
tells me that I have the OpenCV version 3.4.0.
After that I wanted to follow the tutorial on the OpenCV website
```
import numpy as np
import cv2 as cv
img = cv.imread('messi5.jpg',0)
print img
```
It works fine until this point, but then I am supposed to enter
```
cv.imshow('image',img)
```
and I get the following error:
```
QObject::moveToThread: Current thread (0x233cdb0) is not the object's thread (0x2458430).
Cannot move to target thread (0x233cdb0)
QObject::moveToThread: Current thread (0x233cdb0) is not the object's thread (0x2458430).
Cannot move to target thread (0x233cdb0)
QPixmap: Must construct a QApplication before a QPaintDevice
```
Does anyone know what the problem is?
|
2018/02/01
|
[
"https://Stackoverflow.com/questions/48561126",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3977420/"
] |
Apparently
```
pip install python-opencv
```
does not work at all and should not be used. After I installed OpenCV from their website, it worked.
|
Try checking if the image you are reading is loading
```
image = cv2.imread(filepath,0) #0 for gray scale
if image is None:
print "Cant Load Image"
else:
cv2.imshow("Image", image)
cv2.waitKey(0)
```
|
48,561,126
|
I installed opencv on my Ubuntu 14.04 system with
```
pip install python-opencv
```
my Python version is 2.7.14
```
import cv2
cv2.__version__
```
tells me that I have the OpenCV version 3.4.0.
After that I wanted to follow the tutorial on the OpenCV website
```
import numpy as np
import cv2 as cv
img = cv.imread('messi5.jpg',0)
print img
```
It works fine until this point, but then I am supposed to enter
```
cv.imshow('image',img)
```
and I get the following error:
```
QObject::moveToThread: Current thread (0x233cdb0) is not the object's thread (0x2458430).
Cannot move to target thread (0x233cdb0)
QObject::moveToThread: Current thread (0x233cdb0) is not the object's thread (0x2458430).
Cannot move to target thread (0x233cdb0)
QPixmap: Must construct a QApplication before a QPaintDevice
```
Does anyone know what the problem is?
|
2018/02/01
|
[
"https://Stackoverflow.com/questions/48561126",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3977420/"
] |
Try checking if the image you are reading is loading
```
image = cv2.imread(filepath,0) #0 for gray scale
if image is None:
print "Cant Load Image"
else:
cv2.imshow("Image", image)
cv2.waitKey(0)
```
|
It seems hard to install OpenCV on Ubuntu; I finally got it working with a docker image:
<https://hub.docker.com/r/jjanzic/docker-python3-opencv/>
or you can download the sources and build/install them as described on
<https://milq.github.io/install-opencv-ubuntu-debian/> using a bash script.
|
48,561,126
|
I installed opencv on my Ubuntu 14.04 system with
```
pip install python-opencv
```
my Python version is 2.7.14
```
import cv2
cv2.__version__
```
tells me that I have the OpenCV version 3.4.0.
After that I wanted to follow the tutorial on the OpenCV website
```
import numpy as np
import cv2 as cv
img = cv.imread('messi5.jpg',0)
print img
```
It works fine until this point, but then I am supposed to enter
```
cv.imshow('image',img)
```
and I get the following error:
```
QObject::moveToThread: Current thread (0x233cdb0) is not the object's thread (0x2458430).
Cannot move to target thread (0x233cdb0)
QObject::moveToThread: Current thread (0x233cdb0) is not the object's thread (0x2458430).
Cannot move to target thread (0x233cdb0)
QPixmap: Must construct a QApplication before a QPaintDevice
```
Does anyone know what the problem is?
|
2018/02/01
|
[
"https://Stackoverflow.com/questions/48561126",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3977420/"
] |
Apparently
```
pip install python-opencv
```
does not work at all and should not be used. After I installed OpenCV from their website, it worked.
|
It seems hard to install OpenCV on Ubuntu; I finally got it working with a docker image:
<https://hub.docker.com/r/jjanzic/docker-python3-opencv/>
or you can download the sources and build/install them as described on
<https://milq.github.io/install-opencv-ubuntu-debian/> using a bash script.
|
30,324,474
|
**Using the "re" module, I compile the data of a handshake like this:**
```
piece_request_handshake = re.compile('13426974546f7272656e742070726f746f636f6c(?P<reserved>\w{16})(?P<info_hash>\w{40})(?P<peer_id>\w{40})')
handshake = piece_request_handshake.findall(hex_data)
```
*Then I print it.*
**I'm unable to add an image because I'm new, so this is the output:**
```
root@debian:/home/florian/TΓ©lΓ©chargements# python script.py
[('0000000000100005', '606d4759c464c8fd0d4a5d8fc7a223ed70d31d7b', '2d5452323532302d746d6e6a657a307a6d687932')]
```
**My question is: how can I take only the second piece of this data, that is, the "info\_hash" (the "606d47...")?**
**I already tried with re's named groups, using the following line:**
```
print handshake.group('info_hash')
```
**But the result is an error (sorry, again I can't show the screen...):**
```
*root@debian:/home/florian/TΓ©lΓ©chargements# python script.py
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
self.run()
File "script.py", line 122, in run
self.p.dispatch(0, PieceRequestSniffer.cb)
File "script.py", line 82, in cb
print handshake.group('info_hash')
AttributeError: 'list' object has no attribute 'group'*
```
This is the start of my full code for the curious:
```
import pcapy
import dpkt
from threading import Thread
import re
import binascii
import socket
import time
liste=[]
prefix = '13426974546f7272656e742070726f746f636f6c'
hash_code = re.compile('%s(?P<reserved>\w{16})(?P<info_hash>\w{40})(?P<peer_id>\w{40})' % prefix)
match = hash_code.match()
piece_request_handshake = re.compile('13426974546f7272656e742070726f746f636f6c(?P<aaa>\w{16})(?P<bbb>\w{40})(?P<ccc>\w{40})')
piece_request_tcpclose = re.compile('(?P<start>\w{12})5011')
#-----------------------------------------------------------------INIT------------------------------------------------------------
class PieceRequestSniffer(Thread):
def __init__(self, dev='eth0'):
Thread.__init__(self)
self.expr = 'udp or tcp'
self.maxlen = 65535 # max size of packet to capture
self.promiscuous = 1 # promiscuous mode?
self.read_timeout = 100 # in milliseconds
self.max_pkts = -1 # number of packets to capture; -1 => no limit
self.active = True
self.p = pcapy.open_live(dev, self.maxlen, self.promiscuous, self.read_timeout)
self.p.setfilter(self.expr)
@staticmethod
def cb(hdr, data):
eth = dpkt.ethernet.Ethernet(str(data))
ip = eth.data
#------------------------------------------------------IPV4 AND TCP PACKETS ONLY---------------------------------------------------
#Select Ipv4 packets because of problem with the .p in Ipv6
if eth.type == dpkt.ethernet.ETH_TYPE_IP6:
return
else:
#Select only TCP protocols
if ip.p == dpkt.ip.IP_PROTO_TCP:
tcp = ip.data
src_ip = socket.inet_ntoa(ip.src)
dst_ip = socket.inet_ntoa(ip.dst)
fin_flag = ( tcp.flags & dpkt.tcp.TH_FIN ) != 0
#if fin_flag:
#print "TH_FIN src:%s dst:%s" % (src_ip,dst_ip)
try:
#Return hexadecimal representation
hex_data = binascii.hexlify(tcp.data)
except:
return
#-----------------------------------------------------------HANDSHAKE-------------------------------------------------------------
handshake = piece_request_handshake.findall(hex_data)
if handshake and (src_ip+" "+dst_ip) not in liste and (dst_ip+" "+src_ip) not in liste and handshake != '':
liste.append(src_ip+" "+dst_ip)
print match.group('info_hash')
```
|
2015/05/19
|
[
"https://Stackoverflow.com/questions/30324474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4872475/"
] |
`re.findall()` returns a list of tuples, each containing the matching strings that correspond to the named groups in the re pattern. This example (using a simplified pattern) demonstrates that you can access the required item with indexing:
```
import re
prefix = 'prefix'
pattern = re.compile('%s(?P<reserved>\w{4})(?P<info_hash>\w{10})(?P<peer_id>\w{10})' % prefix)
handshake = 'prefix12341234567890ABCDEF1234' # sniffed data
match = pattern.findall(handshake)
>>> print match
[('1234', '1234567890', 'ABCDEF1234')]
>>> info_hash = match[0][1]
>>> print info_hash
1234567890
```
But the point of named groups is to provide a way to access the matched values for a named group by name. You can use `re.match()` instead:
```
import re
prefix = 'prefix'
pattern = re.compile('%s(?P<reserved>\w{4})(?P<info_hash>\w{10})(?P<peer_id>\w{10})' % prefix)
handshake = 'prefix12341234567890ABCDEF1234' # sniffed data
match = pattern.match(handshake)
>>> print match
<_sre.SRE_Match object at 0x7fc201efe918>
>>> print match.group('reserved')
1234
>>> print match.group('info_hash')
1234567890
>>> print match.group('peer_id')
ABCDEF1234
```
The values are also available using dictionary access:
```
>>> d = match.groupdict()
>>> d
{'peer_id': 'ABCDEF1234', 'reserved': '1234', 'info_hash': '1234567890'}
>>> d['info_hash']
'1234567890'
```
Finally, if there are multiple handshake sequences in the input data, you can use `re.finditer()`:
```
import re
prefix = 'prefix'
pattern = re.compile('%s(?P<reserved>\w{4})(?P<info_hash>\w{10})(?P<peer_id>\w{10})' % prefix)
handshake = 'blahprefix12341234567890ABCDEF1234|randomjunkprefix12349876543210ABCDEF1234,more random junkprefix1234hellothereABCDEF1234...' # sniffed data
for match in pattern.finditer(handshake):
print match.group('info_hash')
```
Output:
```
1234567890
9876543210
hellothere
```
|
`re.findall` will return a list of tuples. The `group()` call works on `Match` objects, returned by some other functions in `re`:
```
for match in re.finditer(needle, haystack):
print match.group('info_hash')
```
Also, you might not need `findall` if you're just matching a single handshake.
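For the single-handshake case, a short sketch using the question's own compiled pattern (assuming `hex_data` holds the hex-encoded payload as in the question):
```
match = piece_request_handshake.search(hex_data)
if match is not None:
    print match.group('info_hash')
```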
|
30,542,336
|
I am new to Python and trying to learn recursion.
I'm trying to display all possible outcomes by changing each 'a' to either the number 7 or 8.
For example,
```
user_type = 40aa
```
so it will display:
```
4077
4078
4087
4088
```
Thank you.
It doesn't have to be 40aa; it can be a4a0, aaa0, etc.
This code only replaces with 7; how can I fix that?
```
user_type = '40aa'
def replace(string, a, b):
if not string:
return ""
elif string[:len(b)] == b:
return a + replace(string[len(b):], a, b)
else:
return string[0] + replace(string[1:], a, b)
print(replace(user_type, '7', 'a'))
```
|
2015/05/30
|
[
"https://Stackoverflow.com/questions/30542336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4955371/"
] |
```
pattern = "40aa"
options = [7, 8]
def replace(left, right):
if len(right) > 0:
if right[0] == "a":
results = []
for i in options:
results.extend(replace(left + str(i), right[1:]))
return results
else:
return replace(left + right[0], right[1:])
else:
return [left]
print replace("", pattern)
```
In other words, the function is called with the already processed part of the pattern, and remaining part of the pattern. If the next pattern's character is a digit, it's passed from the pattern to the result. If it's "a" it's replaced with all options step by step and the remaining pattern is processed recursively.
|
I don't know Python very well, but I can help with the recursion.
The basic idea is that you will loop through each character in the string, and each time you hit an 'a', you will replace it with a 7 and an 8, and pass both of those values to your recursive method.
Here is an example:
Suppose you have the string "Basttaa".
Loop until you hit an a, so you are at the second character. Replace it with a '7' and an '8'. Now you have two separate strings, and you can pass both to your recursive method.
We now have "B7sttaa" and "B8sttaa". We pass both to our function.
In the first string, we get to the 6th character and replace it with a '7' and an '8' and repeat the process. After that replacement, we have "B7stt7a", "B7stt8a", and "B8sttaa".
Now with the second string that was passed, we get to the 6th character again and do the process of replacing. Now we have four strings: "B7stt7a", "B7stt8a", "B8stt7a", and "B8stt8a".
Those four strings are again passed to the recursive method and we get our final 8 strings, after the last character on each is replaced with both a '7' and an '8'.
Our four strings: "B7stt7a", "B7stt8a", "B8stt7a", and "B8stt8a" are again passed to our recursive method. The method gets to the last character of each, and replaces the a of each with a '7' and '8'. Then, because it is at the end of each string, it adds each to the list.
"B7stt7a" becomes "B7stt77" and "B7stt78" and both are added to the list.
"B7stt8a" becomes "B7stt87" and "B7stt88" and both are added to the list.
"B8stt7a" becomes "B8stt77" and "B8stt78" and both are added to the list.
"B8stt8a" becomes "B8stt87" and "B8stt88" and both are added to the list.
Now the list has ["B7stt77", "B7stt78", "B7stt87", "B7stt88", "B8stt77", "B8stt78", "B8stt87", "B8stt88"]
The pseudo-code looks something like this:
```
list[];
recursion(string str)
for each char
if char is 'a'
return recursion(str replace char with 7)
return recursion(str replace char with 8)
if at end
add str to list
return;
```
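A minimal runnable Python version of that pseudo-code (assuming, as in the question, that only the character 'a' is replaced and the candidates are '7' and '8'):
```
results = []

def recursion(s, i=0):
    # look for the next 'a' starting at position i
    for j in range(i, len(s)):
        if s[j] == 'a':
            recursion(s[:j] + '7' + s[j+1:], j + 1)
            recursion(s[:j] + '8' + s[j+1:], j + 1)
            return
    # no 'a' left: this is a finished combination
    results.append(s)

recursion("40aa")
print(results)   # ['4077', '4078', '4087', '4088']
```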
|
30,542,336
|
I am new to Python and trying to learn recursion.
I'm trying to display all possible outcomes by changing each 'a' to either the number 7 or 8.
For example,
```
user_type = 40aa
```
so it will display:
```
4077
4078
4087
4088
```
Thank you.
It doesn't have to be 40aa; it can be a4a0, aaa0, etc.
This code only replaces with 7; how can I fix that?
```
user_type = '40aa'
def replace(string, a, b):
if not string:
return ""
elif string[:len(b)] == b:
return a + replace(string[len(b):], a, b)
else:
return string[0] + replace(string[1:], a, b)
print(replace(user_type, '7', 'a'))
```
|
2015/05/30
|
[
"https://Stackoverflow.com/questions/30542336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4955371/"
] |
```
pattern = "40aa"
options = [7, 8]
def replace(left, right):
if len(right) > 0:
if right[0] == "a":
results = []
for i in options:
results.extend(replace(left + str(i), right[1:]))
return results
else:
return replace(left + right[0], right[1:])
else:
return [left]
print replace("", pattern)
```
In other words, the function is called with the already processed part of the pattern, and remaining part of the pattern. If the next pattern's character is a digit, it's passed from the pattern to the result. If it's "a" it's replaced with all options step by step and the remaining pattern is processed recursively.
|
So, we can do this in a fairly straightforward way. We need to create two lists:
```
user_type.split('a') == ['40', '', '']
itertools.product('78', repeat=user_type.count('a')) == [('7', '7'), ('7', '8'), ('8', '7'), ('8', '8')]
```
Now, for each of our pairs, we need to interleave them. The `itertools` documentation provides a nice recipe named `roundrobin()` to do this:
```
import itertools
# Round Robin recipe from the itertools documentation
def roundrobin(*iterables):
"roundrobin('ABC', 'D', 'EF') --> A D E B F C"
# Recipe credited to George Sakkis
pending = len(iterables)
nexts = itertools.cycle(iter(it).next for it in iterables)
while pending:
try:
for next in nexts:
yield next()
except StopIteration:
pending -= 1
nexts = itertools.cycle(itertools.islice(nexts, pending))
```
Let's put all of this together:
```
user_type = "40aa"
user_count = user_type.count('a')
for replacement in itertools.product('78', repeat=user_count):
print ''.join(roundrobin(user_type.split('a'), replacement))
```
Which outputs:
```
4077
4078
4087
4088
```
|
22,770,352
|
I am trying to predict weekly sales using ARMA/ARIMA models. I could not find a function for tuning the order (p,d,q) in `statsmodels`. Currently R has a function `forecast::auto.arima()` which will tune the (p,d,q) parameters.
How do I go about choosing the right order for my model? Are there any libraries available in python for this purpose?
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22770352",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1483927/"
] |
```
# missing imports, assuming the legacy statsmodels ARIMA API that fit(disp=0) implies
from statsmodels.tsa.arima_model import ARIMA
from sklearn.metrics import mean_squared_error
import warnings

def evaluate_arima_model(X, arima_order):
# prepare training dataset
train_size = int(len(X) * 0.90)
train, test = X[0:train_size], X[train_size:]
history = [x for x in train]
# make predictions
predictions = list()
for t in range(len(test)):
model = ARIMA(history, order=arima_order)
model_fit = model.fit(disp=0)
yhat = model_fit.forecast()[0]
predictions.append(yhat)
history.append(test[t])
# calculate out of sample error
error = mean_squared_error(test, predictions)
return error
# evaluate combinations of p, d and q values for an ARIMA model
def evaluate_models(dataset, p_values, d_values, q_values):
dataset = dataset.astype('float32')
best_score, best_cfg = float("inf"), None
for p in p_values:
for d in d_values:
for q in q_values:
order = (p,d,q)
try:
mse = evaluate_arima_model(dataset, order)
if mse < best_score:
best_score, best_cfg = mse, order
print('ARIMA%s MSE=%.3f' % (order,mse))
except:
continue
print('Best ARIMA%s MSE=%.3f' % (best_cfg, best_score))
# load dataset
def parser(x):
return datetime.strptime('190'+x, '%Y-%m')
import datetime
p_values = [4,5,6,7,8]
d_values = [0,1,2]
q_values = [2,3,4,5,6]
warnings.filterwarnings("ignore")
evaluate_models(train, p_values, d_values, q_values)
```
This will give you the p,d,q values, then use the values for your ARIMA model
|
I wrote these utility functions to directly calculate pdq values
*get\_PDQ\_parallel* takes `data`, a series with a timestamp (datetime) index; n\_jobs sets the number of parallel processes. The output is a DataFrame with the AIC and BIC values, indexed by order=(P,D,Q).
p and q range over [0,12] while d ranges over [0,1].
```
import numpy as np   # needed for np.nan below
import pandas as pd  # needed for pd.DataFrame below
import statsmodels
from statsmodels import api as sm
from sklearn.metrics import r2_score,mean_squared_error
from sklearn.utils import check_array
from functools import partial
from multiprocessing import Pool
def get_aic_bic(order,series):
aic=np.nan
bic=np.nan
#print(series.shape,order)
try:
arima_mod=statsmodels.tsa.arima_model.ARIMA(series,order=order,freq='H').fit(transparams=True,method='css')
aic=arima_mod.aic
bic=arima_mod.bic
print(order,aic,bic)
except:
pass
return aic,bic
def get_PDQ_parallel(data,n_jobs=7):
p_val=13
q_val=13
d_vals=2
pdq_vals=[ (p,d,q) for p in range(p_val) for d in range(d_vals) for q in range(q_val)]
get_aic_bic_partial=partial(get_aic_bic,series=data)
p = Pool(n_jobs)
res=p.map(get_aic_bic_partial, pdq_vals)
p.close()
return pd.DataFrame(res,index=pdq_vals,columns=['aic','bic'])
```
|
22,770,352
|
I am trying to predict weekly sales using ARMA/ARIMA models. I could not find a function for tuning the order (p,d,q) in `statsmodels`. Currently R has a function `forecast::auto.arima()` which will tune the (p,d,q) parameters.
How do I go about choosing the right order for my model? Are there any libraries available in python for this purpose?
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22770352",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1483927/"
] |
As of now, we can directly use [pyramid-arima](https://pypi.org/project/pyramid-arima/) package from PyPI.
|
In conda, use `conda install -c saravji pmdarima` to install.
The user `saravji` has put it in anaconda cloud.
then to use,
```
from pmdarima.arima import auto_arima
```
(Note that the name `pyramid-arima` is changed to `pmdarima`).
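A small usage sketch (assuming `series` holds the weekly sales as a pandas Series; the keyword choices are illustrative, not prescriptive):
```
from pmdarima.arima import auto_arima

model = auto_arima(series, seasonal=False, stepwise=True,
                   suppress_warnings=True, trace=True)
print(model.order)                      # the (p, d, q) that was selected
forecast = model.predict(n_periods=12)  # forecast the next 12 periods
```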
|
22,770,352
|
I am trying to predict weekly sales using ARMA/ARIMA models. I could not find a function for tuning the order (p,d,q) in `statsmodels`. Currently R has a function `forecast::auto.arima()` which will tune the (p,d,q) parameters.
How do I go about choosing the right order for my model? Are there any libraries available in python for this purpose?
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22770352",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1483927/"
] |
You can implement a number of approaches:
1. [`ARIMAResults`](http://statsmodels.sourceforge.net/stable/generated/statsmodels.tsa.arima_model.ARIMAResults.html#statsmodels.tsa.arima_model.ARIMAResults) include `aic` and `bic`. By their definition, (see [here](http://en.wikipedia.org/wiki/Akaike_information_criterion) and [here](http://en.wikipedia.org/wiki/Bayesian_information_criterion)), these criteria penalize for the number of parameters in the model. So you may use these numbers to compare the models. Also scipy has [`optimize.brute`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brute.html#scipy.optimize.brute) which does grid search on the specified parameters space. So a workflow like this should work:
```
def objfunc(order, exog, endog):
from statsmodels.tsa.arima.model import ARIMA
fit = ARIMA(endog, exog=exog, order=order).fit()
return fit.aic  # aic is an attribute, not a method
from scipy.optimize import brute
grid = (slice(1, 3, 1), slice(1, 3, 1), slice(1, 3, 1))
brute(objfunc, grid, args=(exog, endog), finish=None)
```
Make sure you call `brute` with `finish=None`.
2. You may obtain `pvalues` from `ARIMAResults`. So a sort of step-forward algorithm is easy to implement where the degree of the model is increased across the dimension which obtains lowest p-value for the added parameter.
3. Use [`ARIMAResults.predict`](http://statsmodels.sourceforge.net/stable/generated/statsmodels.tsa.arima_model.ARIMAResults.predict.html#statsmodels.tsa.arima_model.ARIMAResults.predict) to cross-validate alternative models. The best approach would be to keep the tail of the time series (say most recent 5% of data) out of sample, and use these points to obtain the *test error* of the fitted models. A sketch of this hold-out evaluation is shown below.
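For approach 3, a minimal hold-out sketch (assuming `series` is your data and `(p, d, q)` is one candidate order; this uses the current `statsmodels.tsa.arima.model.ARIMA` API):
```
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# keep the most recent 5% of the series out of sample
split = int(len(series) * 0.95)
train, test = series[:split], series[split:]

fit = ARIMA(train, order=(p, d, q)).fit()
pred = fit.predict(start=len(train), end=len(series) - 1)
test_error = np.mean((np.asarray(test) - np.asarray(pred)) ** 2)  # out-of-sample MSE
```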
|
A possible solution:
```
import pandas as pd          # needed for pd.read_csv below
import statsmodels.api as sm # needed for sm.tsa.statespace.SARIMAX below

df=pd.read_csv("http://vincentarelbundock.github.io/Rdatasets/csv/datasets/AirPassengers.csv")
# Define the p, d and q parameters to take any value between 0 and 2
p = d = q = range(0, 2)
print(p)
import itertools
import warnings
# Generate all different combinations of p, q and q triplets
pdq = list(itertools.product(p, d, q))
print(pdq)
# Generate all different combinations of seasonal p, q and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
print('Examples of parameter combinations for Seasonal ARIMA...')
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[1]))
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[2]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[3]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[4]))
Examples of parameter combinations for Seasonal ARIMA...
SARIMAX: (0, 0, 1) x (0, 0, 1, 12)
SARIMAX: (0, 0, 1) x (0, 1, 0, 12)
SARIMAX: (0, 1, 0) x (0, 1, 1, 12)
SARIMAX: (0, 1, 0) x (1, 0, 0, 12)
y=df
#warnings.filterwarnings("ignore") # specify to ignore warning messages
for param in pdq:
for param_seasonal in seasonal_pdq:
try:
mod = sm.tsa.statespace.SARIMAX(y,
order=param,
seasonal_order=param_seasonal,
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
print('ARIMA{}x{}12 - AIC:{}'.format(param, param_seasonal, results.aic))
except:
continue
ARIMA(0, 0, 0)x(0, 0, 1, 12)12 - AIC:3618.0303991426763
ARIMA(0, 0, 0)x(0, 1, 1, 12)12 - AIC:2824.7439963684233
ARIMA(0, 0, 0)x(1, 0, 0, 12)12 - AIC:2942.2733127230185
ARIMA(0, 0, 0)x(1, 0, 1, 12)12 - AIC:2922.178151133141
ARIMA(0, 0, 0)x(1, 1, 0, 12)12 - AIC:2767.105066400224
ARIMA(0, 0, 0)x(1, 1, 1, 12)12 - AIC:2691.233398643673
ARIMA(0, 0, 1)x(0, 0, 0, 12)12 - AIC:3890.816777796087
ARIMA(0, 0, 1)x(0, 0, 1, 12)12 - AIC:3541.1171286722
ARIMA(0, 0, 1)x(0, 1, 0, 12)12 - AIC:3028.8377323188824
ARIMA(0, 0, 1)x(0, 1, 1, 12)12 - AIC:2746.77973129136
ARIMA(0, 0, 1)x(1, 0, 0, 12)12 - AIC:3583.523640623017
ARIMA(0, 0, 1)x(1, 0, 1, 12)12 - AIC:3531.2937768990187
ARIMA(0, 0, 1)x(1, 1, 0, 12)12 - AIC:2781.198675746594
ARIMA(0, 0, 1)x(1, 1, 1, 12)12 - AIC:2720.7023088205974
ARIMA(0, 1, 0)x(0, 0, 1, 12)12 - AIC:3029.089945668332
ARIMA(0, 1, 0)x(0, 1, 1, 12)12 - AIC:2568.2832251221016
ARIMA(0, 1, 0)x(1, 0, 0, 12)12 - AIC:2841.315781459511
ARIMA(0, 1, 0)x(1, 0, 1, 12)12 - AIC:2815.4011044132576
ARIMA(0, 1, 0)x(1, 1, 0, 12)12 - AIC:2588.533386513587
ARIMA(0, 1, 0)x(1, 1, 1, 12)12 - AIC:2569.9453272483315
ARIMA(0, 1, 1)x(0, 0, 0, 12)12 - AIC:3327.5177587522303
ARIMA(0, 1, 1)x(0, 0, 1, 12)12 - AIC:2984.716706112334
ARIMA(0, 1, 1)x(0, 1, 0, 12)12 - AIC:2789.128542154043
ARIMA(0, 1, 1)x(0, 1, 1, 12)12 - AIC:2537.0293659293943
ARIMA(0, 1, 1)x(1, 0, 0, 12)12 - AIC:2984.4555708516436
ARIMA(0, 1, 1)x(1, 0, 1, 12)12 - AIC:2939.460958374472
ARIMA(0, 1, 1)x(1, 1, 0, 12)12 - AIC:2578.7862352774437
ARIMA(0, 1, 1)x(1, 1, 1, 12)12 - AIC:2537.771484229265
ARIMA(1, 0, 0)x(0, 0, 0, 12)12 - AIC:3391.5248913820797
ARIMA(1, 0, 0)x(0, 0, 1, 12)12 - AIC:3038.142074281268
C:\Users\Dell\Anaconda3\lib\site-packages\statsmodels\base\model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
"Check mle_retvals", ConvergenceWarning)
ARIMA(1, 0, 0)x(0, 1, 0, 12)12 - AIC:2839.809192263449
ARIMA(1, 0, 0)x(0, 1, 1, 12)12 - AIC:2588.50367175184
ARIMA(1, 0, 0)x(1, 0, 0, 12)12 - AIC:2993.4630440139595
ARIMA(1, 0, 0)x(1, 0, 1, 12)12 - AIC:2995.049216326931
ARIMA(1, 0, 0)x(1, 1, 0, 12)12 - AIC:2588.2463284315304
ARIMA(1, 0, 0)x(1, 1, 1, 12)12 - AIC:2592.80110502723
ARIMA(1, 0, 1)x(0, 0, 0, 12)12 - AIC:3352.0350133621478
ARIMA(1, 0, 1)x(0, 0, 1, 12)12 - AIC:3006.5493366627807
ARIMA(1, 0, 1)x(0, 1, 0, 12)12 - AIC:2810.6423724894516
ARIMA(1, 0, 1)x(0, 1, 1, 12)12 - AIC:2559.584031948852
ARIMA(1, 0, 1)x(1, 0, 0, 12)12 - AIC:2981.2250436794675
ARIMA(1, 0, 1)x(1, 0, 1, 12)12 - AIC:2959.3142304724834
ARIMA(1, 0, 1)x(1, 1, 0, 12)12 - AIC:2579.8245645892207
ARIMA(1, 0, 1)x(1, 1, 1, 12)12 - AIC:2563.13922589258
ARIMA(1, 1, 0)x(0, 0, 0, 12)12 - AIC:3354.7462930846423
ARIMA(1, 1, 0)x(0, 0, 1, 12)12 - AIC:3006.702997636003
ARIMA(1, 1, 0)x(0, 1, 0, 12)12 - AIC:2809.3844175191666
ARIMA(1, 1, 0)x(0, 1, 1, 12)12 - AIC:2558.484602766447
ARIMA(1, 1, 0)x(1, 0, 0, 12)12 - AIC:2959.885810636943
ARIMA(1, 1, 0)x(1, 0, 1, 12)12 - AIC:2960.712709764296
ARIMA(1, 1, 0)x(1, 1, 0, 12)12 - AIC:2557.945907092698
ARIMA(1, 1, 0)x(1, 1, 1, 12)12 - AIC:2559.274166458508
ARIMA(1, 1, 1)x(0, 0, 0, 12)12 - AIC:3326.3285511700374
ARIMA(1, 1, 1)x(0, 0, 1, 12)12 - AIC:2985.868532151721
ARIMA(1, 1, 1)x(0, 1, 0, 12)12 - AIC:2790.7677149967103
ARIMA(1, 1, 1)x(0, 1, 1, 12)12 - AIC:2538.820635541546
ARIMA(1, 1, 1)x(1, 0, 0, 12)12 - AIC:2963.2789505804294
ARIMA(1, 1, 1)x(1, 0, 1, 12)12 - AIC:2941.2436984747465
ARIMA(1, 1, 1)x(1, 1, 0, 12)12 - AIC:2559.8258191422606
ARIMA(1, 1, 1)x(1, 1, 1, 12)12 - AIC:2539.712354465328
```
from <https://www.digitalocean.com/community/tutorials/a-guide-to-time-series-forecasting-with-arima-in-python-3>
also see [https://github.com/decisionstats/pythonfordatascience/blob/master/time%2Bseries%20(1).ipynb](https://github.com/decisionstats/pythonfordatascience/blob/master/time%2Bseries.ipynb)
|
22,770,352
|
I am trying to predict weekly sales using ARMA/ARIMA models. I could not find a function for tuning the order (p,d,q) in `statsmodels`. Currently R has a function `forecast::auto.arima()` which will tune the (p,d,q) parameters.
How do I go about choosing the right order for my model? Are there any libraries available in python for this purpose?
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22770352",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1483927/"
] |
I wrote these utility functions to directly calculate pdq values
*get\_PDQ\_parallel* takes `data`, a series with a timestamp (datetime) index; n\_jobs sets the number of parallel processes. The output is a DataFrame with the AIC and BIC values, indexed by order=(P,D,Q).
p and q range over [0,12] while d ranges over [0,1].
```
import numpy as np   # needed for np.nan below
import pandas as pd  # needed for pd.DataFrame below
import statsmodels
from statsmodels import api as sm
from sklearn.metrics import r2_score,mean_squared_error
from sklearn.utils import check_array
from functools import partial
from multiprocessing import Pool
def get_aic_bic(order,series):
aic=np.nan
bic=np.nan
#print(series.shape,order)
try:
arima_mod=statsmodels.tsa.arima_model.ARIMA(series,order=order,freq='H').fit(transparams=True,method='css')
aic=arima_mod.aic
bic=arima_mod.bic
print(order,aic,bic)
except:
pass
return aic,bic
def get_PDQ_parallel(data,n_jobs=7):
p_val=13
q_val=13
d_vals=2
pdq_vals=[ (p,d,q) for p in range(p_val) for d in range(d_vals) for q in range(q_val)]
get_aic_bic_partial=partial(get_aic_bic,series=data)
p = Pool(n_jobs)
res=p.map(get_aic_bic_partial, pdq_vals)
p.close()
return pd.DataFrame(res,index=pdq_vals,columns=['aic','bic'])
```
|
In conda, use `conda install -c saravji pmdarima` to install.
The user `saravji` has put it in anaconda cloud.
then to use,
```
from pmdarima.arima import auto_arima
```
(Note that the name `pyramid-arima` is changed to `pmdarima`).
|
22,770,352
|
I am trying to predict weekly sales using ARMA/ARIMA models. I could not find a function for tuning the order (p,d,q) in `statsmodels`. Currently R has a function `forecast::auto.arima()` which will tune the (p,d,q) parameters.
How do I go about choosing the right order for my model? Are there any libraries available in python for this purpose?
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22770352",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1483927/"
] |
A possible solution:
```
import pandas as pd          # needed for pd.read_csv below
import statsmodels.api as sm # needed for sm.tsa.statespace.SARIMAX below

df=pd.read_csv("http://vincentarelbundock.github.io/Rdatasets/csv/datasets/AirPassengers.csv")
# Define the p, d and q parameters to take any value between 0 and 2
p = d = q = range(0, 2)
print(p)
import itertools
import warnings
# Generate all different combinations of p, q and q triplets
pdq = list(itertools.product(p, d, q))
print(pdq)
# Generate all different combinations of seasonal p, q and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
print('Examples of parameter combinations for Seasonal ARIMA...')
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[1]))
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[2]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[3]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[4]))
Examples of parameter combinations for Seasonal ARIMA...
SARIMAX: (0, 0, 1) x (0, 0, 1, 12)
SARIMAX: (0, 0, 1) x (0, 1, 0, 12)
SARIMAX: (0, 1, 0) x (0, 1, 1, 12)
SARIMAX: (0, 1, 0) x (1, 0, 0, 12)
y=df
#warnings.filterwarnings("ignore") # specify to ignore warning messages
for param in pdq:
for param_seasonal in seasonal_pdq:
try:
mod = sm.tsa.statespace.SARIMAX(y,
order=param,
seasonal_order=param_seasonal,
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
print('ARIMA{}x{}12 - AIC:{}'.format(param, param_seasonal, results.aic))
except:
continue
ARIMA(0, 0, 0)x(0, 0, 1, 12)12 - AIC:3618.0303991426763
ARIMA(0, 0, 0)x(0, 1, 1, 12)12 - AIC:2824.7439963684233
ARIMA(0, 0, 0)x(1, 0, 0, 12)12 - AIC:2942.2733127230185
ARIMA(0, 0, 0)x(1, 0, 1, 12)12 - AIC:2922.178151133141
ARIMA(0, 0, 0)x(1, 1, 0, 12)12 - AIC:2767.105066400224
ARIMA(0, 0, 0)x(1, 1, 1, 12)12 - AIC:2691.233398643673
ARIMA(0, 0, 1)x(0, 0, 0, 12)12 - AIC:3890.816777796087
ARIMA(0, 0, 1)x(0, 0, 1, 12)12 - AIC:3541.1171286722
ARIMA(0, 0, 1)x(0, 1, 0, 12)12 - AIC:3028.8377323188824
ARIMA(0, 0, 1)x(0, 1, 1, 12)12 - AIC:2746.77973129136
ARIMA(0, 0, 1)x(1, 0, 0, 12)12 - AIC:3583.523640623017
ARIMA(0, 0, 1)x(1, 0, 1, 12)12 - AIC:3531.2937768990187
ARIMA(0, 0, 1)x(1, 1, 0, 12)12 - AIC:2781.198675746594
ARIMA(0, 0, 1)x(1, 1, 1, 12)12 - AIC:2720.7023088205974
ARIMA(0, 1, 0)x(0, 0, 1, 12)12 - AIC:3029.089945668332
ARIMA(0, 1, 0)x(0, 1, 1, 12)12 - AIC:2568.2832251221016
ARIMA(0, 1, 0)x(1, 0, 0, 12)12 - AIC:2841.315781459511
ARIMA(0, 1, 0)x(1, 0, 1, 12)12 - AIC:2815.4011044132576
ARIMA(0, 1, 0)x(1, 1, 0, 12)12 - AIC:2588.533386513587
ARIMA(0, 1, 0)x(1, 1, 1, 12)12 - AIC:2569.9453272483315
ARIMA(0, 1, 1)x(0, 0, 0, 12)12 - AIC:3327.5177587522303
ARIMA(0, 1, 1)x(0, 0, 1, 12)12 - AIC:2984.716706112334
ARIMA(0, 1, 1)x(0, 1, 0, 12)12 - AIC:2789.128542154043
ARIMA(0, 1, 1)x(0, 1, 1, 12)12 - AIC:2537.0293659293943
ARIMA(0, 1, 1)x(1, 0, 0, 12)12 - AIC:2984.4555708516436
ARIMA(0, 1, 1)x(1, 0, 1, 12)12 - AIC:2939.460958374472
ARIMA(0, 1, 1)x(1, 1, 0, 12)12 - AIC:2578.7862352774437
ARIMA(0, 1, 1)x(1, 1, 1, 12)12 - AIC:2537.771484229265
ARIMA(1, 0, 0)x(0, 0, 0, 12)12 - AIC:3391.5248913820797
ARIMA(1, 0, 0)x(0, 0, 1, 12)12 - AIC:3038.142074281268
C:\Users\Dell\Anaconda3\lib\site-packages\statsmodels\base\model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
"Check mle_retvals", ConvergenceWarning)
ARIMA(1, 0, 0)x(0, 1, 0, 12)12 - AIC:2839.809192263449
ARIMA(1, 0, 0)x(0, 1, 1, 12)12 - AIC:2588.50367175184
ARIMA(1, 0, 0)x(1, 0, 0, 12)12 - AIC:2993.4630440139595
ARIMA(1, 0, 0)x(1, 0, 1, 12)12 - AIC:2995.049216326931
ARIMA(1, 0, 0)x(1, 1, 0, 12)12 - AIC:2588.2463284315304
ARIMA(1, 0, 0)x(1, 1, 1, 12)12 - AIC:2592.80110502723
ARIMA(1, 0, 1)x(0, 0, 0, 12)12 - AIC:3352.0350133621478
ARIMA(1, 0, 1)x(0, 0, 1, 12)12 - AIC:3006.5493366627807
ARIMA(1, 0, 1)x(0, 1, 0, 12)12 - AIC:2810.6423724894516
ARIMA(1, 0, 1)x(0, 1, 1, 12)12 - AIC:2559.584031948852
ARIMA(1, 0, 1)x(1, 0, 0, 12)12 - AIC:2981.2250436794675
ARIMA(1, 0, 1)x(1, 0, 1, 12)12 - AIC:2959.3142304724834
ARIMA(1, 0, 1)x(1, 1, 0, 12)12 - AIC:2579.8245645892207
ARIMA(1, 0, 1)x(1, 1, 1, 12)12 - AIC:2563.13922589258
ARIMA(1, 1, 0)x(0, 0, 0, 12)12 - AIC:3354.7462930846423
ARIMA(1, 1, 0)x(0, 0, 1, 12)12 - AIC:3006.702997636003
ARIMA(1, 1, 0)x(0, 1, 0, 12)12 - AIC:2809.3844175191666
ARIMA(1, 1, 0)x(0, 1, 1, 12)12 - AIC:2558.484602766447
ARIMA(1, 1, 0)x(1, 0, 0, 12)12 - AIC:2959.885810636943
ARIMA(1, 1, 0)x(1, 0, 1, 12)12 - AIC:2960.712709764296
ARIMA(1, 1, 0)x(1, 1, 0, 12)12 - AIC:2557.945907092698
ARIMA(1, 1, 0)x(1, 1, 1, 12)12 - AIC:2559.274166458508
ARIMA(1, 1, 1)x(0, 0, 0, 12)12 - AIC:3326.3285511700374
ARIMA(1, 1, 1)x(0, 0, 1, 12)12 - AIC:2985.868532151721
ARIMA(1, 1, 1)x(0, 1, 0, 12)12 - AIC:2790.7677149967103
ARIMA(1, 1, 1)x(0, 1, 1, 12)12 - AIC:2538.820635541546
ARIMA(1, 1, 1)x(1, 0, 0, 12)12 - AIC:2963.2789505804294
ARIMA(1, 1, 1)x(1, 0, 1, 12)12 - AIC:2941.2436984747465
ARIMA(1, 1, 1)x(1, 1, 0, 12)12 - AIC:2559.8258191422606
ARIMA(1, 1, 1)x(1, 1, 1, 12)12 - AIC:2539.712354465328
```
from <https://www.digitalocean.com/community/tutorials/a-guide-to-time-series-forecasting-with-arima-in-python-3>
also see [https://github.com/decisionstats/pythonfordatascience/blob/master/time%2Bseries%20(1).ipynb](https://github.com/decisionstats/pythonfordatascience/blob/master/time%2Bseries.ipynb)
|
Actually:
```
def objfunc(order, *params):
    from statsmodels.tsa.arima_model import ARIMA
    endog, exog = params           # unpack the extra arguments passed via brute(..., args=...)
    p, d, q = order
    fit = ARIMA(endog, order, exog).fit()
    return fit.aic                 # aic is an attribute, not a method

from scipy.optimize import brute
grid = (slice(1, 3, 1), slice(1, 3, 1), slice(1, 3, 1))
brute(objfunc, grid, args=(endog, exog), finish=None)   # endog/exog are your data
```
|
22,770,352
|
I am trying to predict weekly sales using ARMA/ARIMA models. I could not find a function for tuning the order (p,d,q) in `statsmodels`. Currently R has a function `forecast::auto.arima()` which will tune the (p,d,q) parameters.
How do I go about choosing the right order for my model? Are there any libraries available in python for this purpose?
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22770352",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1483927/"
] |
You can implement a number of approaches:
1. [`ARIMAResults`](http://statsmodels.sourceforge.net/stable/generated/statsmodels.tsa.arima_model.ARIMAResults.html#statsmodels.tsa.arima_model.ARIMAResults) include `aic` and `bic`. By their definition, (see [here](http://en.wikipedia.org/wiki/Akaike_information_criterion) and [here](http://en.wikipedia.org/wiki/Bayesian_information_criterion)), these criteria penalize for the number of parameters in the model. So you may use these numbers to compare the models. Also scipy has [`optimize.brute`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brute.html#scipy.optimize.brute) which does grid search on the specified parameters space. So a workflow like this should work:
```
def objfunc(order, exog, endog):
from statsmodels.tsa.arima.model import ARIMA
fit = ARIMA(endog, exog=exog, order=order).fit()
return fit.aic  # aic is an attribute, not a method
from scipy.optimize import brute
grid = (slice(1, 3, 1), slice(1, 3, 1), slice(1, 3, 1))
brute(objfunc, grid, args=(exog, endog), finish=None)
```
Make sure you call `brute` with `finish=None`.
2. You may obtain `pvalues` from `ARIMAResults`. So a sort of step-forward algorithm is easy to implement where the degree of the model is increased across the dimension which obtains lowest p-value for the added parameter.
3. Use [`ARIMAResults.predict`](http://statsmodels.sourceforge.net/stable/generated/statsmodels.tsa.arima_model.ARIMAResults.predict.html#statsmodels.tsa.arima_model.ARIMAResults.predict) to cross-validate alternative models. The best approach would be to keep the tail of the time series (say most recent 5% of data) out of sample, and use these points to obtain the *test error* of the fitted models.
|
There is now a proper python package to do auto-arima. <https://github.com/tgsmith61591/pmdarima>
Docs:
<http://alkaline-ml.com/pmdarima>
Example usage: <https://github.com/tgsmith61591/pmdarima/blob/master/examples/quick_start_example.ipynb>
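A tiny usage sketch (the series name `y` and the weekly seasonal period `m=52` are assumptions about your data):
```
import pmdarima as pm

model = pm.auto_arima(y, seasonal=True, m=52, stepwise=True,
                      error_action="ignore", suppress_warnings=True)
print(model.summary())
forecast = model.predict(n_periods=8)
```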
|
22,770,352
|
I am trying to predict weekly sales using ARMA/ARIMA models. I could not find a function for tuning the order (p,d,q) in `statsmodels`. Currently R has a function `forecast::auto.arima()` which will tune the (p,d,q) parameters.
How do I go about choosing the right order for my model? Are there any libraries available in python for this purpose?
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22770352",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1483927/"
] |
```
# missing imports, assuming the legacy statsmodels ARIMA API that fit(disp=0) implies
from statsmodels.tsa.arima_model import ARIMA
from sklearn.metrics import mean_squared_error
import warnings

def evaluate_arima_model(X, arima_order):
# prepare training dataset
train_size = int(len(X) * 0.90)
train, test = X[0:train_size], X[train_size:]
history = [x for x in train]
# make predictions
predictions = list()
for t in range(len(test)):
model = ARIMA(history, order=arima_order)
model_fit = model.fit(disp=0)
yhat = model_fit.forecast()[0]
predictions.append(yhat)
history.append(test[t])
# calculate out of sample error
error = mean_squared_error(test, predictions)
return error
# evaluate combinations of p, d and q values for an ARIMA model
def evaluate_models(dataset, p_values, d_values, q_values):
dataset = dataset.astype('float32')
best_score, best_cfg = float("inf"), None
for p in p_values:
for d in d_values:
for q in q_values:
order = (p,d,q)
try:
mse = evaluate_arima_model(dataset, order)
if mse < best_score:
best_score, best_cfg = mse, order
print('ARIMA%s MSE=%.3f' % (order,mse))
except:
continue
print('Best ARIMA%s MSE=%.3f' % (best_cfg, best_score))
# load dataset
def parser(x):
return datetime.strptime('190'+x, '%Y-%m')
import datetime
p_values = [4,5,6,7,8]
d_values = [0,1,2]
q_values = [2,3,4,5,6]
warnings.filterwarnings("ignore")
evaluate_models(train, p_values, d_values, q_values)
```
This will give you the p,d,q values, then use the values for your ARIMA model
|
Actually:
```
def objfunc(order, *params):
    from statsmodels.tsa.arima_model import ARIMA
    endog, exog = params           # unpack the extra arguments passed via brute(..., args=...)
    p, d, q = order
    fit = ARIMA(endog, order, exog).fit()
    return fit.aic                 # aic is an attribute, not a method

from scipy.optimize import brute
grid = (slice(1, 3, 1), slice(1, 3, 1), slice(1, 3, 1))
brute(objfunc, grid, args=(endog, exog), finish=None)   # endog/exog are your data
```
|
22,770,352
|
I am trying to predict weekly sales using ARMA/ARIMA models. I could not find a function for tuning the order (p,d,q) in `statsmodels`. Currently R has a function `forecast::auto.arima()` which will tune the (p,d,q) parameters.
How do I go about choosing the right order for my model? Are there any libraries available in python for this purpose?
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22770352",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1483927/"
] |
As of now, we can directly use [pyramid-arima](https://pypi.org/project/pyramid-arima/) package from PyPI.
|
Actually:
```
def objfunc(order, *params):
    from statsmodels.tsa.arima_model import ARIMA
    endog, exog = params           # unpack the extra arguments passed via brute(..., args=...)
    p, d, q = order
    fit = ARIMA(endog, order, exog).fit()
    return fit.aic                 # aic is an attribute, not a method

from scipy.optimize import brute
grid = (slice(1, 3, 1), slice(1, 3, 1), slice(1, 3, 1))
brute(objfunc, grid, args=(endog, exog), finish=None)   # endog/exog are your data
```
|
22,770,352
|
I am trying to predict weekly sales using ARMA/ARIMA models. I could not find a function for tuning the order (p,d,q) in `statsmodels`. Currently R has a function `forecast::auto.arima()` which will tune the (p,d,q) parameters.
How do I go about choosing the right order for my model? Are there any libraries available in python for this purpose?
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22770352",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1483927/"
] |
There is now a proper python package to do auto-arima. <https://github.com/tgsmith61591/pmdarima>
Docs:
<http://alkaline-ml.com/pmdarima>
Example usage: <https://github.com/tgsmith61591/pmdarima/blob/master/examples/quick_start_example.ipynb>
|
I wrote these utility functions to directly calculate the (p, d, q) values.
*get\_PDQ\_parallel* takes the input data as a series with a timestamp (datetime) index; n\_jobs sets the number of parallel processes. The output is a dataframe with the AIC and BIC values, with order=(P,D,Q) as the index.
The p and q ranges are [0,12] while d is [0,1].
```
import numpy as np
import pandas as pd
import statsmodels
from statsmodels import api as sm
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.utils import check_array
from functools import partial
from multiprocessing import Pool

def get_aic_bic(order, series):
    aic = np.nan
    bic = np.nan
    # print(series.shape, order)
    try:
        arima_mod = statsmodels.tsa.arima_model.ARIMA(series, order=order, freq='H').fit(transparams=True, method='css')
        aic = arima_mod.aic
        bic = arima_mod.bic
        print(order, aic, bic)
    except:
        pass
    return aic, bic

def get_PDQ_parallel(data, n_jobs=7):
    p_val = 13
    q_val = 13
    d_vals = 2
    pdq_vals = [(p, d, q) for p in range(p_val) for d in range(d_vals) for q in range(q_val)]
    get_aic_bic_partial = partial(get_aic_bic, series=data)
    p = Pool(n_jobs)
    res = p.map(get_aic_bic_partial, pdq_vals)
    p.close()
    return pd.DataFrame(res, index=pdq_vals, columns=['aic', 'bic'])
```
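A short usage sketch (with a hypothetical series name; note the function above assumes an hourly datetime index because of `freq='H'`):
```
results = get_PDQ_parallel(my_hourly_series, n_jobs=4)
best_order = results['aic'].idxmin()  # the (p, d, q) with the lowest AIC
print(best_order)
```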
|
22,770,352
|
I am trying to predict weekly sales using ARMA ARIMA models. I could not find a function for tuning the order(p,d,q) in `statsmodels`. Currently R has a function `forecast::auto.arima()` which will tune the (p,d,q) parameters.
How do I go about choosing the right order for my model? Are there any libraries available in python for this purpose?
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22770352",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1483927/"
] |
There is now a proper python package to do auto-arima. <https://github.com/tgsmith61591/pmdarima>
Docs:
<http://alkaline-ml.com/pmdarima>
Example usage: <https://github.com/tgsmith61591/pmdarima/blob/master/examples/quick_start_example.ipynb>
|
Actually, you can brute-force the (p, d, q) search by minimizing the AIC with `scipy.optimize.brute`:
```
from scipy.optimize import brute
from statsmodels.tsa.arima_model import ARIMA

def objfunc(order, endog, exog):
    fit = ARIMA(endog, order, exog).fit()
    return fit.aic  # aic is an attribute, not a method

grid = (slice(1, 3, 1), slice(1, 3, 1), slice(1, 3, 1))
brute(objfunc, grid, args=(endog, exog), finish=None)
```
|
68,562,020
|
This is my code
```
import pandas as pd
keys = ['phone match', 'account match']
d = {k: [] for k in keys}
df = pd.DataFrame(data=[[1,2,3],[4,5,6]],columns=['A','B','C'])
df['D'] = [d for _ in range(df.shape[0])]
df.at[0, 'D']['phone match'].append(4)
```
But instead of appending only to the dictionary at index 0, it appends to all the dictionaries, and therefore the output is:
```
A B C D
0 1 2 3 {'phone match': [4], 'account match': []}
1 4 5 6 {'phone match': [4], 'account match': []}
```
While the desired output is:
```
A B C D
0 1 2 3 {'phone match': [4], 'account match': []}
1 4 5 6 {'phone match': [], 'account match': []}
```
I think this is because python is linking to the same dictionary, but how can I avoid that?
|
2021/07/28
|
[
"https://Stackoverflow.com/questions/68562020",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16546771/"
] |
You need to create multiple `dict` in order to make each of them have different object ID
```
keys = ['phone match', 'account match']
df = pd.DataFrame(data=[[1,2,3],[4,5,6]],columns=['A','B','C'])
df['D'] = [{k: [] for k in keys} for _ in range(df.shape[0])] # Change here
df.at[0, 'D']['phone match'].append(4)
df
Out[65]:
A B C D
0 1 2 3 {'phone match': [4], 'account match': []}
1 4 5 6 {'phone match': [], 'account match': []}
```
|
`dict` objects are passed by reference in Python.
In order to achieve what you want, you can create an independent copy of `d` for every row. Note that a shallow `d.copy()` is not enough here, because the inner lists would still be shared between rows; use `copy.deepcopy` instead:
```
import copy

df['D'] = [copy.deepcopy(d) for _ in range(df.shape[0])]
```
|
31,518,864
|
I am currently generating 8 random values each time I run a program on Python. These 8 values are different each time I run the program, and I would like to be able to now save these 8 values each time I run the program to a text file in 8 separate columns. When saving these values for future runs, though, I would like to still be able to keep previous values. For example: after run 1, the text file will be 8x1, after run 2, the text file will be 8x2, and after run n, the text file will be 8xn.
I have been looking at solutions like this: [save output values in txt file in columns python](https://stackoverflow.com/questions/25967479/save-output-values-in-txt-file-in-columns-python)
And it seems using 'a' instead 'w' will append my new values instead of overwriting previous values. I've been trying to follow the documentation on the method .write but just don't quite see how I can write to a particular column using this method. I have been able to simply write each column in its own text file, but I'd rather be able to write the columns together in the same text file for future runs I do with this program.
Edit: my outputs will be 8 floating point numbers and to reiterate, they will be random each time.
So after 1 run, I will create 8 floating point values: Run11, Run12, Run13, Run14, Run15, Run16, Run17, Run18. After my second run, I will create another set of values (8 entries long): Run21, Run22, Run23, Run24, Run25, Run26, Run27, Run28.
In the text file, I would like these values to be placed in specific columns like this: [http://imgur.com/zxoxaKM](https://imgur.com/zxoxaKM) (this is what it would look like after 2 runs).
The "Value n:" titles are the headers for each column.
|
2015/07/20
|
[
"https://Stackoverflow.com/questions/31518864",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5135338/"
] |
```
import csv
from tempfile import NamedTemporaryFile
from shutil import move
from itertools import chain
with open("in.csv") as f, NamedTemporaryFile(dir=".", delete=False) as temp:
r = csv.reader(f)
new = [9, 10, 11, 12, 13, 14, 15, 16]
wr = csv.writer(temp)
wr.writerows(zip(chain.from_iterable(r), new))
move(temp.name, "in.csv")
```
Input:
```
1
2
3
4
5
6
7
8
```
Output:
```
1,9
2,10
3,11
4,12
5,13
6,14
7,15
8,16
```
To take the header into account:
```
with open("in.csv") as f, NamedTemporaryFile(dir=".", delete=False) as temp:
r = csv.reader(f)
header = next(r)
new = [9, 10, 11, 12, 13, 14, 15, 16]
wr = csv.writer(temp)
wr.writerow(header+["Value {}:".format(len(header)+1)])
wr.writerows(zip(chain.from_iterable(r), new))
move(temp.name, "in.csv")
```
Input:
```
Value 1:
1
2
3
4
5
6
7
8
```
Output:
```
Value 1:,Value 2:
1,9
2,10
3,11
4,12
5,13
6,14
7,15
8,16
```
If you are adding an actual row each time and not a column, then just append:
```
with open("in.csv","a") as f:
new = [9, 10, 11, 12, 13, 14, 15, 16]
wr = csv.writer(f)
wr.writerow(new)
```
Input:
```
value 1:,value 2:,value 3:,value 4:,value 5:,value 6:,value 7:,value 8:
1,2,3,4,5,6,7,8
```
Output:
```
value 1:,value 2:,value 3:,value 4:,value 5:,value 6:,value 7:,value 8:
1,2,3,4,5,6,7,8
9,10,11,12,13,14,15,16
```
|
what about
```
a = [1, 2, 3, 4, 5, 6, 7, 8]
f = open('myFile.txt', 'a')
for n in a:
f.write('%d\t'%n)
f.write('\n')
f.close()
```
and after running it 4 times the file content is
```
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
```
======= EDIT =========
Try this, it's ugly but works ;-)
```
import os.path
h = ['Header 1', 'Hea 2', 'Header 3', 'Header 4', 'H 5', 'Header', 'Header 7', 'Header 8']
a = [1, 2, 3, 4, 5, 6, 7, 8]
fileName = 'myFile.txt'
#write header
withHeader = not os.path.isfile(fileName)
f = open(fileName, 'a')
if withHeader:
print 'Writing header'
for s in h:
f.write('%s\t'%s)
f.write('\n')
#write numbers
for i in range(0, len(a)):
space = len(h[i])/2;
n = a[i]
for c in range(0, space):
f.write(' ')
print 'Writing %d'%n
f.write('%d'%n)
for c in range(0, space):
f.write(' ')
f.write('\t')
f.write('\n')
f.close()
```
result:
```
Header 1 Hea 2 Header 3 Header 4 H 5 Header Header 7 Header 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
```
|
66,602,674
|
I have two e2e automation frameworks, one is python based, the other is protractor based. I need to write a docker-compose file to run these two projects in different containers and fetch the reports and their console output to my local system.
below are the contents of my docker-compose.yml file
```
version: '3'
services:
e2e-Tests:
build: ./c8y/DockerFile
image: e2etests
command: npm run e2e
container_name: cn-e2eTests
py-Tests:
build: ./py/DockerFile
image: pytests
command: npm run e2e
container_name: cn-pyTests
```
When I run docker-compose up , I get the below error :
```
Building e2e-Tests
failed to get console mode for stdout: The handle is invalid.
[+] Building 0.0s (0/1)
[+] Building 0.0s (1/2) om sender: walk \\?\C:\Users\xxx\e2e\Dockerfile: The system cannot find the path specified.
=> ERROR [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 105B 0.0s
------
> [internal] load build definition from Dockerfile:
------
ERROR: Service 'e2e-Tests' failed to build
```
|
2021/03/12
|
[
"https://Stackoverflow.com/questions/66602674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7074479/"
] |
Try to rename: `Dockerfile` instead of `DockerFile`.
**Edit:**
When the docker file name is not provided explicitly, it will look for the file `Dockerfile` with a small `f`. My issue was as simple as that.
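Alternatively (a sketch, not tested against your project), docker-compose can keep a custom file name if you split `build` into a context directory plus an explicit `dockerfile` key:
```
services:
  e2e-Tests:
    build:
      context: ./c8y
      dockerfile: DockerFile
```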
|
My issue was related to registry access issues. It works fine now.
|
53,402,349
|
I built a regex expression that matches
2 letters, or 2 letters followed by '/' and another 2 letters, for example:
```
rt bl/ws se gn/wd wk bl/rt
/^(((\s+)?[a-zA-Z]{2}(\/[a-zA-Z]{2})?)(\s+|$))+$/i
```
and that works without problems.
The next problem I have is to match every "word" not containing the '/' character,
and replace each match with its value duplicated and separated by '/'. For the above example the expected output should be:
```
rt/rt bl/ws se/se gn/wd wk/wk bl/rt
```
I tried for some time but without success. Could you help me with that?
**Update 1:**
I've started with a regex that matches words not containing 'at'
```
(\b((?!(at))\w)+\b)
```
At the end I want to replace the matched elements with Python, like
```
re.sub(r'(\b((?!(at))\w)+\b)', r'\1/\1', text)
```
but first I have to find the right elements...
|
2018/11/20
|
[
"https://Stackoverflow.com/questions/53402349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1743688/"
] |
A list comprehension should do the trick:
```
>>> NUM_ITEMS = 5
>>> my_array = [[0, 1] for _ in range(NUM_ITEMS)]
>>> my_array
[[0, 1], [0, 1], [0, 1], [0, 1], [0, 1]]
```
|
Since you tagged arrays, here's an alternative `numpy` solution using [`numpy.tile`](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.tile.html).
```
>>> import numpy as np
>>> NUM_ITEMS = 10
>>> np.tile([0, 1], (NUM_ITEMS, 1))
array([[0, 1],
[0, 1],
[0, 1],
[0, 1],
[0, 1],
[0, 1],
[0, 1],
[0, 1],
[0, 1],
[0, 1]])
```
|
49,695,050
|
I'm trying to write a csv file into an S3 bucket using AWS Lambda, and for this I used the following code:
```
data=[[1,2,3],[23,56,98]]
with open("s3://my_bucket/my_file.csv", "w") as f:
f.write(data)
```
And this raises the following error:
```
[Errno 2] No such file or directory: u's3://my_bucket/my_file.csv': IOError
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 51, in lambda_handler
with open("s3://my_bucket/my_file.csv", "w") as f:
IOError: [Errno 2] No such file or directory: u's3://my_bucket/my_file.csv'
```
Can I have some help with this please ?
**PS: I'm using python 2.7**
Thanking you in advance
|
2018/04/06
|
[
"https://Stackoverflow.com/questions/49695050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6598781/"
] |
Better to answer late than never. There are four steps to get your data in S3:
* Call the S3 bucket
* Load the data into Lambda using the requests library (if you don't have it installed, you are gonna have to load it as a layer)
* Write the data into the Lambda '/tmp' file
* Upload the file into s3
Something like this:
```
import csv
import requests
# all other appropriate libs should already be loaded in Lambda
#properly call your s3 bucket
s3 = boto3.resource('s3')
bucket = s3.Bucket('your-bucket-name')
key = 'yourfilename.txt'
#you would need to grab the file from somewhere. Use this incomplete line below to get started:
with requests.Session() as s:
getfile = s.get('yourfilelocation')
#Only then you can write the data into the '/tmp' folder.
with open('/tmp/yourfilename.txt', 'w', newline='') as f:
w = csv.writer(f)
w.writerows(filelist)
#upload the data into s3
bucket.upload_file('/tmp/yourfilename.txt', key)
```
Hope it helps.
|
```
with open("s3://my_bucket/my_file.csv", "w+") as f:
```
instead of
```
with open("s3://my_bucket/my_file.csv", "w") as f:
```
notice the "w" has changed to "w+" this means that it will write to the file, and if it does not exist it will create it.
|
49,695,050
|
I'm trying to write a csv file into an S3 bucket using AWS Lambda, and for this I used the following code:
```
data=[[1,2,3],[23,56,98]]
with open("s3://my_bucket/my_file.csv", "w") as f:
f.write(data)
```
And this raises the following error:
```
[Errno 2] No such file or directory: u's3://my_bucket/my_file.csv': IOError
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 51, in lambda_handler
with open("s3://my_bucket/my_file.csv", "w") as f:
IOError: [Errno 2] No such file or directory: u's3://my_bucket/my_file.csv'
```
Can I have some help with this please ?
**PS: I'm using python 2.7**
Thanking you in advance
|
2018/04/06
|
[
"https://Stackoverflow.com/questions/49695050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6598781/"
] |
Better to answer late than never. There are four steps to get your data in S3:
* Call the S3 bucket
* Load the data into Lambda using the requests library (if you don't have it installed, you are gonna have to load it as a layer)
* Write the data into the Lambda '/tmp' file
* Upload the file into s3
Something like this:
```
import csv
import requests
# all other appropriate libs should already be loaded in Lambda
#properly call your s3 bucket
s3 = boto3.resource('s3')
bucket = s3.Bucket('your-bucket-name')
key = 'yourfilename.txt'
#you would need to grab the file from somewhere. Use this incomplete line below to get started:
with requests.Session() as s:
getfile = s.get('yourfilelocation')
#Only then you can write the data into the '/tmp' folder.
with open('/tmp/yourfilename.txt', 'w', newline='') as f:
w = csv.writer(f)
w.writerows(filelist)
#upload the data into s3
bucket.upload_file('/tmp/yourfilename.txt', key)
```
Hope it helps.
|
I am not familiar with AWS Lambda, but I have been using Boto3 to do the same.
It is a simple few lines of code.
```
#Your file path will be something like this:
#s3://<your_s3_bucket_name>/<Directory_name>/<File_name>.csv
import boto3
BUCKET_NAME = '<your_s3_bucket_name>'
PREFIX = '<Directory_name>/'
s3 = boto3.resource('s3')
obj = s3.Object(BUCKET_NAME, PREFIX + '<File_name>.csv')
obj.put(Body=content)  # 'content' is the CSV text (a string) to upload
```
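If the data starts out as Python lists, as in the question, here is a hedged sketch for turning it into the CSV text that gets passed as `content` (standard library only):
```
import csv
import io

data = [[1, 2, 3], [23, 56, 98]]
buf = io.StringIO()              # on Python 2.7 use StringIO.StringIO() instead
csv.writer(buf).writerows(data)
content = buf.getvalue()         # the string handed to obj.put(Body=content)
```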
|
51,400,332
|
I want the insertion query to do nothing if there is nothing new in the CSV file. In case there is, I want to insert only that row and not the whole CSV again. Any suggestion would be great!
PS: it's not a duplicate of other questions, because here we have "%s" placeholders rather than fixed values, and the syntax is different in Python!
```
cursorobject=connection.cursor()
sql2="CREATE DATABASE IF NOT EXISTS mydb"
cursorobject.execute(sql2)
sql1="CREATE TABLE IF NOT EXISTS users(id int(11) NOT NULL AUTO_INCREMENT,first_name varchar(255),last_name varchar(255),company_name varchar(255),address varchar(255),city varchar(255),country varchar(255),postal varchar(255),phone1 varchar(255),phone2 varchar(255),email varchar(255),web varchar(255),PRIMARY KEY(id))"
cursorobject.execute(sql1)
csvfile=open('Entries.csv','r')
reader = csv.reader(csvfile,delimiter=',')
for row in reader:
cursorobject.execute("INSERT INTO users(first_name,last_name,company_name,address,city,country,postal,phone1,phone2,email,web) VALUES (%s ,%s, %s,%s,%s,%s,%s,%s,%s,%s,%s)",row)
```
|
2018/07/18
|
[
"https://Stackoverflow.com/questions/51400332",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9855183/"
] |
You should do something like this:
```
class eventCell: UICollectionViewCell {
@IBOutlet private weak var eventTitle: UILabel!
@IBOutlet private weak var descriptionLabel:UILabel!
@IBOutlet private weak var eventImage: UIImageView!
typealias Event = (title:String, location:String, lat:CLLocationDegrees, long:CLLocationDegrees)
var eventArray = [Event]()
override func prepareForReuse() {
eventImage.image = nil
}
func lool() {
var event = Event(title: "a", location:"b", lat:5, long:4)
eventArray.append(event)
eventTitle.text = eventArray[0].title
}
}
```
|
Why not creating some `Struct`?
Simple like this:
```
struct Event {
var title: String
var location: String
var lat: CLLocationDegrees
var long: CLLocationDegrees
}
```
Then just do that:
```
var eventArray = [Event]()
```
And call it like that:
```
for event in eventArray{
event.title = eventTitle.text
}
```
|
68,714,450
|
I have 2 dataframes:
**users**
```
user_id position
0 201 Senior Engineer
1 207 Senior System Architect
2 223 Senior account manage
3 212 Junior Manager
4 112 junior Engineer
5 311 junior python developer
```
```
df1 = pd.DataFrame({'user_id': ['201', '207', '223', '212', '112', '311'],
'position': ['Senior Engineer', 'Senior System Architect', 'Senior account manage', 'Junior Manager', 'junior Engineer', 'junior python developer']})
```
**roles**
```
role_id role_position
0 10 %senior%
1 20 %junior%
```
```
df2 = pd.DataFrame({'role_id': ['10', '20'],
'role_position': ['%senior%', '%junior%']})
```
I want to join them to get role\_id for each row in df1 using condition something like this:
```
lower(df1.position) LIKE df2.role_position
```
I want to use operator LIKE (like in SQL).
So it would look like this (or without role\_position - it would be even better):
```
user_id position role_id role_position
0 201 Senior Engineer 10 %senior%
1 207 Senior System Architect 10 %senior%
2 223 Senior account manage 10 %senior%
3 212 Junior Manager 20 %junior%
4 112 junior Engineer 20 %junior%
5 311 junior python developer 20 %junior%
```
How can I do this?
Thank you for your help!
|
2021/08/09
|
[
"https://Stackoverflow.com/questions/68714450",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16580145/"
] |
You can use `str.extract()`+`merge()`:
```
pat='('+'|'.join(df2['role_position'].str.strip('%').unique())+')'
df1['role_position']='%'+df1['position'].str.lower().str.extract(pat,expand=False)+'%'
df1=df1.merge(df2,on='role_position',how='left')
```
output of `df1`:
```
user_id position role_id role_position
0 201 Senior Engineer 10 %senior%
1 207 Senior System Architect 10 %senior%
2 223 Senior account manage 10 %senior%
3 212 Junior Manager 20 %junior%
4 112 junior Engineer 20 %junior%
5 311 junior python developer 20 %junior%
```
|
Possibilities:
* [fuzzy words](https://www.google.com/search?q=fuzzy%20in%20pandas&rlz=1C5CHFA_enPL889PL889&oq=fuzzy%20in%20pandas&aqs=chrome..69i57j0i10i22i30j0i22i30.2529j0j7&sourceid=chrome&ie=UTF-8)
* [Sequence Matcher](https://towardsdatascience.com/sequencematcher-in-python-6b1e6f3915fc)
* [.extract](https://www.geeksforgeeks.org/python-pandas-series-str-extract/#:%7E:text=extract()%20function%20is%20used,match%20of%20regular%20expression%20pat.)
---
```
df1['Similarity'] = 0
df1['Role'] = 0
from difflib import SequenceMatcher
def similar(a, b):
return SequenceMatcher(None, a, b).ratio()
for index, row in df1.iterrows():
for x in df2['role_position']:
z = similar(row['position'],x)
if z >= 0.20:
df1.loc[index, "Similarity"] = z
df1.loc[index, "Role"] = x
```
[](https://i.stack.imgur.com/S7FFY.png)
|
68,714,450
|
I have 2 dataframes:
**users**
```
user_id position
0 201 Senior Engineer
1 207 Senior System Architect
2 223 Senior account manage
3 212 Junior Manager
4 112 junior Engineer
5 311 junior python developer
```
```
df1 = pd.DataFrame({'user_id': ['201', '207', '223', '212', '112', '311'],
'position': ['Senior Engineer', 'Senior System Architect', 'Senior account manage', 'Junior Manager', 'junior Engineer', 'junior python developer']})
```
**roles**
```
role_id role_position
0 10 %senior%
1 20 %junior%
```
```
df2 = pd.DataFrame({'role_id': ['10', '20'],
'role_position': ['%senior%', '%junior%']})
```
I want to join them to get role\_id for each row in df1 using condition something like this:
```
lower(df1.position) LIKE df2.role_position
```
I want to use operator LIKE (like in SQL).
So it would look like this (or without role\_position - it would be even better):
```
user_id position role_id role_position
0 201 Senior Engineer 10 %senior%
1 207 Senior System Architect 10 %senior%
2 223 Senior account manage 10 %senior%
3 212 Junior Manager 20 %junior%
4 112 junior Engineer 20 %junior%
5 311 junior python developer 20 %junior%
```
How can I do this?
Thank you for your help!
|
2021/08/09
|
[
"https://Stackoverflow.com/questions/68714450",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16580145/"
] |
You can use `str.extract()`+`merge()`:
```
pat='('+'|'.join(df2['role_position'].str.strip('%').unique())+')'
df1['role_position']='%'+df1['position'].str.lower().str.extract(pat,expand=False)+'%'
df1=df1.merge(df2,on='role_position',how='left')
```
output of `df1`:
```
user_id position role_id role_position
0 201 Senior Engineer 10 %senior%
1 207 Senior System Architect 10 %senior%
2 223 Senior account manage 10 %senior%
3 212 Junior Manager 20 %junior%
4 112 junior Engineer 20 %junior%
5 311 junior python developer 20 %junior%
```
|
You can generate a dict of mappings and then map the values:
```
df2['role_position'] = df2['role_position'].str.strip('%')
mappings = df2.set_index('role_position').to_dict('dict')['role_id']
>> mappings
{'senior': '10', 'junior': '20'}
```
Using a regular expression we can extract the roles for each position:
```
re_roles = f"({df2['role_position'].str.cat(sep='|')})"
position = df1['position'].str.extract(re_roles, flags=re.I).iloc[:, 0].str.lower()
>> position
0 senior
1 senior
2 senior
3 junior
4 junior
5 junior
Name: 0, dtype: object
```
And finally map the `role_id` using the mappings dictionary:
```
df1['role_id'] = position.map(mappings)
>> df1
user_id position role_id
0 201 Senior Engineer 10
1 207 Senior System Architect 10
2 223 Senior account manage 10
3 212 Junior Manager 20
4 112 junior Engineer 20
5 311 junior python developer 20
```
|
68,714,450
|
I have 2 dataframes:
**users**
```
user_id position
0 201 Senior Engineer
1 207 Senior System Architect
2 223 Senior account manage
3 212 Junior Manager
4 112 junior Engineer
5 311 junior python developer
```
```
df1 = pd.DataFrame({'user_id': ['201', '207', '223', '212', '112', '311'],
'position': ['Senior Engineer', 'Senior System Architect', 'Senior account manage', 'Junior Manager', 'junior Engineer', 'junior python developer']})
```
**roles**
```
role_id role_position
0 10 %senior%
1 20 %junior%
```
```
df2 = pd.DataFrame({'role_id': ['10', '20'],
'role_position': ['%senior%', '%junior%']})
```
I want to join them to get role\_id for each row in df1 using condition something like this:
```
lower(df1.position) LIKE df2.role_position
```
I want to use operator LIKE (like in SQL).
So it would look like this (or without role\_position - it would be even better):
```
user_id position role_id role_position
0 201 Senior Engineer 10 %senior%
1 207 Senior System Architect 10 %senior%
2 223 Senior account manage 10 %senior%
3 212 Junior Manager 20 %junior%
4 112 junior Engineer 20 %junior%
5 311 junior python developer 20 %junior%
```
How can I do this?
Thank you for your help!
|
2021/08/09
|
[
"https://Stackoverflow.com/questions/68714450",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16580145/"
] |
You can save some trouble by doing a `merge` directly if the seniority level always start at front:
```
print (pd.merge(df, df2,
left_on=df["position"].str.split().str[0].str.lower(),
right_on=df2["role_position"].str.strip("%")).drop("key_0", axis=1))
```
Else you can do a `pd.Series.str.extract` during a `merge`:
```
pat = f'({"|".join(df2["role_position"].str.strip("%"))})'
print (pd.merge(df, df2,
left_on=df["position"].str.extract(pat, flags=re.IGNORECASE, expand=False).str.lower(),
right_on=df2["role_position"].str.strip("%")).drop("key_0", axis=1))
```
Both yields the same result:
```
user_id position role_id role_position
0 201 Senior Engineer 10 %senior%
1 207 Senior System Architect 10 %senior%
2 223 Senior account manage 10 %senior%
3 212 Junior Manager 20 %junior%
4 112 junior Engineer 20 %junior%
5 311 junior python developer 20 %junior%
```
|
Possibilities:
* [fuzzy words](https://www.google.com/search?q=fuzzy%20in%20pandas&rlz=1C5CHFA_enPL889PL889&oq=fuzzy%20in%20pandas&aqs=chrome..69i57j0i10i22i30j0i22i30.2529j0j7&sourceid=chrome&ie=UTF-8)
* [Sequence Matcher](https://towardsdatascience.com/sequencematcher-in-python-6b1e6f3915fc)
* [.extract](https://www.geeksforgeeks.org/python-pandas-series-str-extract/#:%7E:text=extract()%20function%20is%20used,match%20of%20regular%20expression%20pat.)
---
```
df1['Similarity'] = 0
df1['Role'] = 0
from difflib import SequenceMatcher
def similar(a, b):
return SequenceMatcher(None, a, b).ratio()
for index, row in df1.iterrows():
for x in df2['role_position']:
z = similar(row['position'],x)
if z >= 0.20:
df1.loc[index, "Similarity"] = z
df1.loc[index, "Role"] = x
```
[](https://i.stack.imgur.com/S7FFY.png)
|
68,714,450
|
I have 2 dataframes:
**users**
```
user_id position
0 201 Senior Engineer
1 207 Senior System Architect
2 223 Senior account manage
3 212 Junior Manager
4 112 junior Engineer
5 311 junior python developer
```
```
df1 = pd.DataFrame({'user_id': ['201', '207', '223', '212', '112', '311'],
'position': ['Senior Engineer', 'Senior System Architect', 'Senior account manage', 'Junior Manager', 'junior Engineer', 'junior python developer']})
```
**roles**
```
role_id role_position
0 10 %senior%
1 20 %junior%
```
```
df2 = pd.DataFrame({'role_id': ['10', '20'],
'role_position': ['%senior%', '%junior%']})
```
I want to join them to get role\_id for each row in df1 using condition something like this:
```
lower(df1.position) LIKE df2.role_position
```
I want to use operator LIKE (like in SQL).
So it would look like this (or without role\_position - it would be even better):
```
user_id position role_id role_position
0 201 Senior Engineer 10 %senior%
1 207 Senior System Architect 10 %senior%
2 223 Senior account manage 10 %senior%
3 212 Junior Manager 20 %junior%
4 112 junior Engineer 20 %junior%
5 311 junior python developer 20 %junior%
```
How can I do this?
Thank you for your help!
|
2021/08/09
|
[
"https://Stackoverflow.com/questions/68714450",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16580145/"
] |
You can save some trouble by doing a `merge` directly if the seniority level always start at front:
```
print (pd.merge(df, df2,
left_on=df["position"].str.split().str[0].str.lower(),
right_on=df2["role_position"].str.strip("%")).drop("key_0", axis=1))
```
Else you can do a `pd.Series.str.extract` during a `merge`:
```
pat = f'({"|".join(df2["role_position"].str.strip("%"))})'
print (pd.merge(df, df2,
left_on=df["position"].str.extract(pat, flags=re.IGNORECASE, expand=False).str.lower(),
right_on=df2["role_position"].str.strip("%")).drop("key_0", axis=1))
```
Both yields the same result:
```
user_id position role_id role_position
0 201 Senior Engineer 10 %senior%
1 207 Senior System Architect 10 %senior%
2 223 Senior account manage 10 %senior%
3 212 Junior Manager 20 %junior%
4 112 junior Engineer 20 %junior%
5 311 junior python developer 20 %junior%
```
|
You can generate a dict of mappings and then map the values:
```
df2['role_position'] = df2['role_position'].str.strip('%')
mappings = df2.set_index('role_position').to_dict('dict')['role_id']
>> mappings
{'senior': '10', 'junior': '20'}
```
Using a regular expression we can extract the roles for each position:
```
re_roles = f"({df2['role_position'].str.cat(sep='|')})"
position = df1['position'].str.extract(re_roles, flags=re.I).iloc[:, 0].str.lower()
>> position
0 senior
1 senior
2 senior
3 junior
4 junior
5 junior
Name: 0, dtype: object
```
And finally map the `role_id` using the mappings dictionary:
```
df1['role_id'] = position.map(mappings)
>> df1
user_id position role_id
0 201 Senior Engineer 10
1 207 Senior System Architect 10
2 223 Senior account manage 10
3 212 Junior Manager 20
4 112 junior Engineer 20
5 311 junior python developer 20
```
|
61,253,507
|
I am parsing a JSON file that has the following data subset.
```
"title": "Revert \"testcase for check\""
```
In my python script I do the following:
```
with open('%s/staging_area/pr_info.json' % cwd) as data_file:
pr_info = json.load(data_file)
pr_title=pr_info["title"]
```
pr\_title will contain the following information after getting the title from json object.
```
Revert "testcase for check"
```
It seems that escape characters \ are not part of the string assignment. Is there any way to retain the entire string including escape characters? Thank you so much!
|
2020/04/16
|
[
"https://Stackoverflow.com/questions/61253507",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9828901/"
] |
If you really need it, you should escape it again with json and remove first and last quote:
```py
pr_title = json.dumps(pr_title)[1:-1]
```
but escape characters are only for escaping; the raw value of the string is still `Revert "testcase for check"`. So the escaping function will depend on where your data is applied (DB, HTML, XML, etc).
To explain `[1:-1]`: `dumps` escapes the raw string to be JSON-valid, which adds `\` and surrounds the string with quotation marks `"`. You have to remove these quotes from the resulting string. Since Python can slice a string like a list, you can take every character from the second to the penultimate with `[1:-1]`, which literally removes the first and last quotes:
```
print(pr_title)
>>> "Revert \"testcase for check\""
print(pr_title[1:-1])
>>> Revert \"testcase for check\"
```
|
If your goal is to print pr\_title, then you can probably use json.dumps() to print the original text.
```
>>> import json
>>> j = '{"name": "\"Bob\""}'
>>> print(j)
{"name": ""Bob""}
>>> json.dumps(j)
'"{\\"name\\": \\"\\"Bob\\"\\"}"'
```
|
61,253,507
|
I am parsing a JSON file that has the following data subset.
```
"title": "Revert \"testcase for check\""
```
In my python script I do the following:
```
with open('%s/staging_area/pr_info.json' % cwd) as data_file:
pr_info = json.load(data_file)
pr_title=pr_info["title"]
```
pr\_title will contain the following information after getting the title from json object.
```
Revert "testcase for check"
```
It seems that escape characters \ are not part of the string assignment. Is there any way to retain the entire string including escape characters? Thank you so much!
|
2020/04/16
|
[
"https://Stackoverflow.com/questions/61253507",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9828901/"
] |
In case you really need to maintain the escape characters, you will have to escape the *escape characters* right after reading the file and before parsing the JSON.
```
with open('%s/staging_area/pr_info.json' % cwd) as data_file:
raw_data_file = data_file.read().replace("\\", "\\\\\\")
pr_info = json.JSONDecoder().decode(raw_data_file)
```
Then `pr_title` should still have the original escaped characters.
---
What is happening is:
1. Replace each single backslash for three backslashes: original escape character (\) + an escaped *escape character* (\\).
2. `raw_data_file` is now a string instead of a file pointer, so we cannot use json.load(). But the method [decode](https://docs.python.org/3.7/library/json.html#json.JSONDecoder.decode) from json.JSONDecoder admits a string input.
3. The decoder will parse the JSON string and remove the escaped *escape character*, while maintaining the original one from your file.
|
If your goal is to print pr\_title, then you can probably use json.dumps() to print the original text.
```
>>> import json
>>> j = '{"name": "\"Bob\""}'
>>> print(j)
{"name": ""Bob""}
>>> json.dumps(j)
'"{\\"name\\": \\"\\"Bob\\"\\"}"'
```
|
62,618,261
|
I have 4 figures (y1, y2, y3, y4) that I want to plot on a common x axis (yr1, yr2, yr3, m1, m2, m3, m4, m5). In this code, however, I have kept the x axis separate since I am trying to get the basics right first.
```
import matplotlib.pyplot as plt
import numpy as np
plt.figure(1)
xaxis = ['y1','y2','y3','m1','m2','m3', 'm4', 'm5']
y1 = np.array([.73,.74,.71,.75,.72,.75,.74,.74])
y2 = np.array([.82,.80,.77,.81,.72,.81,.77,.77])
y3 = np.array([.35,.36,.45,.43,.44,.45,.48,.45])
y4 = np.array([.49,.52,.59,.58,.61,.65,.61,.58])
plt.subplot(221)
plt.plot(xaxis,y1)
plt.subplot(222)
plt.plot(xaxis,y2)
plt.subplot(223)
plt.subplot(xaxis,y3)
plt.subplot(224)
plt.subplot(xaxis,y4)
plt.show()
```
Getting this error
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-dfe04cc8c6c4> in <module>
14 plt.plot(xaxis,y2)
15 plt.subplot(223)
---> 16 plt.subplot(xaxis,y3)
17 plt.subplot(224)
18 plt.subplot(xaxis,y4)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\pyplot.py in subplot(*args, **kwargs)
1074
1075 fig = gcf()
-> 1076 a = fig.add_subplot(*args, **kwargs)
1077 bbox = a.bbox
1078 byebye = []
~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\figure.py in add_subplot(self, *args, **kwargs)
1412 self._axstack.remove(ax)
1413
-> 1414 a = subplot_class_factory(projection_class)(self, *args, **kwargs)
1415
1416 return self._add_axes_internal(key, a)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\axes\_subplots.py in __init__(self, fig, *args, **kwargs)
62 # num - 1 for converting from MATLAB to python indexing
63 else:
---> 64 raise ValueError(f'Illegal argument(s) to subplot: {args}')
65
66 self.update_params()
ValueError: Illegal argument(s) to subplot: (['y1', 'y2', 'y3', 'm1', 'm2', 'm3', 'm4', 'm5'], array([0.35, 0.36, 0.45, 0.43, 0.44, 0.45, 0.48, 0.45]))
```
Please help me understand the issue here!
|
2020/06/28
|
[
"https://Stackoverflow.com/questions/62618261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5866905/"
] |
Small mistake. You have put `plt.subplot` instead of `plt.plot`. This should work now:
```
import matplotlib.pyplot as plt
import numpy as np
plt.figure(1)
xaxis = ['y1','y2','y3','m1','m2','m3', 'm4', 'm5']
y1 = np.array([.73,.74,.71,.75,.72,.75,.74,.74])
y2 = np.array([.82,.80,.77,.81,.72,.81,.77,.77])
y3 = np.array([.35,.36,.45,.43,.44,.45,.48,.45])
y4 = np.array([.49,.52,.59,.58,.61,.65,.61,.58])
plt.subplot(221)
plt.plot(xaxis,y1)
plt.subplot(222)
plt.plot(xaxis,y2)
plt.subplot(223)
plt.plot(xaxis,y3)
plt.subplot(224)
plt.plot(xaxis,y4)
plt.show()
```
Hope this helps :)
|
Try this:
```
fig, ax = plt.subplots(4, 1, sharex=True, gridspec_kw={'height_ratios': [3, 1, 1, 1]})
ax[0].plot(xaxis, y1)
ax[1].plot(xaxis, y2)
ax[2].plot(xaxis, y3)
ax[3].plot(xaxis, y4)
```
for 4 figures stacked on top of each other with shared x-axis.
for 2x2:
```
fig, ax = plt.subplots(2,2)
ax[0, 0].plot(xaxis, y1)
ax[0, 1].plot(xaxis, y2)
ax[1, 0].plot(xaxis, y3)
ax[1, 1].plot(xaxis, y4)
```
and then plt.show() to see results.
Look here for more:
[For more here is an example](https://matplotlib.org/gallery/lines_bars_and_markers/spectrum_demo.html#sphx-glr-gallery-lines-bars-and-markers-spectrum-demo-py)
|
62,618,261
|
I have 4 figures (y1, y2, y3, y4) that I want to plot on a common x axis (yr1, yr2, yr3, m1, m2, m3, m4, m5). In this code, however, I have kept the x axis separate since I am trying to get the basics right first.
```
import matplotlib.pyplot as plt
import numpy as np
plt.figure(1)
xaxis = ['y1','y2','y3','m1','m2','m3', 'm4', 'm5']
y1 = np.array([.73,.74,.71,.75,.72,.75,.74,.74])
y2 = np.array([.82,.80,.77,.81,.72,.81,.77,.77])
y3 = np.array([.35,.36,.45,.43,.44,.45,.48,.45])
y4 = np.array([.49,.52,.59,.58,.61,.65,.61,.58])
plt.subplot(221)
plt.plot(xaxis,y1)
plt.subplot(222)
plt.plot(xaxis,y2)
plt.subplot(223)
plt.subplot(xaxis,y3)
plt.subplot(224)
plt.subplot(xaxis,y4)
plt.show()
```
Getting this error
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-dfe04cc8c6c4> in <module>
14 plt.plot(xaxis,y2)
15 plt.subplot(223)
---> 16 plt.subplot(xaxis,y3)
17 plt.subplot(224)
18 plt.subplot(xaxis,y4)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\pyplot.py in subplot(*args, **kwargs)
1074
1075 fig = gcf()
-> 1076 a = fig.add_subplot(*args, **kwargs)
1077 bbox = a.bbox
1078 byebye = []
~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\figure.py in add_subplot(self, *args, **kwargs)
1412 self._axstack.remove(ax)
1413
-> 1414 a = subplot_class_factory(projection_class)(self, *args, **kwargs)
1415
1416 return self._add_axes_internal(key, a)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\axes\_subplots.py in __init__(self, fig, *args, **kwargs)
62 # num - 1 for converting from MATLAB to python indexing
63 else:
---> 64 raise ValueError(f'Illegal argument(s) to subplot: {args}')
65
66 self.update_params()
ValueError: Illegal argument(s) to subplot: (['y1', 'y2', 'y3', 'm1', 'm2', 'm3', 'm4', 'm5'], array([0.35, 0.36, 0.45, 0.43, 0.44, 0.45, 0.48, 0.45]))
```
Please help me understand the issue here!
|
2020/06/28
|
[
"https://Stackoverflow.com/questions/62618261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5866905/"
] |
Reusing Sahith Kurapati's code just to provide a cleaner solution. This way you can share axes and only configure line and chart styles once if they're all supposed to have the same style.
```py
import matplotlib.pyplot as plt
import numpy as np
plt.figure(1)
xaxis = ['y1','y2','y3','m1','m2','m3', 'm4', 'm5']
yy = np.array([[.73,.74,.71,.75,.72,.75,.74,.74],
[.82,.80,.77,.81,.72,.81,.77,.77],
[.35,.36,.45,.43,.44,.45,.48,.45],
[.49,.52,.59,.58,.61,.65,.61,.58]])
fig, axes = plt.subplots(2, 2)
for y, ax in zip(yy, axes.ravel()):
ax.plot(y)
plt.show()
```
|
Try this:
```
fig, ax = plt.subplots(4, 1, sharex=True, gridspec_kw={'height_ratios': [3, 1, 1, 1]})
ax[0].plot(xaxis, y1)
ax[1].plot(xaxis, y2)
ax[2].plot(xaxis, y3)
ax[3].plot(xaxis, y4)
```
for 4 figures stacked on top of each other with shared x-axis.
for 2x2:
```
fig, ax = plt.subplots(2,2)
ax[0, 0].plot(xaxis, y1)
ax[0, 1].plot(xaxis, y2)
ax[1, 0].plot(xaxis, y3)
ax[1, 1].plot(xaxis, y4)
```
and then plt.show() to see results.
Look here for more:
[For more here is an example](https://matplotlib.org/gallery/lines_bars_and_markers/spectrum_demo.html#sphx-glr-gallery-lines-bars-and-markers-spectrum-demo-py)
|
45,063,974
|
I have a sqlite table with 3 columns named ID (integer), N (integer) and V (real). The pair (ID, N) is unique.
Using the python module sqlite3, I would like to perform a recursive selection with the form
```
select ID from TABLE where N = 0 and V between ? and ? and ID in
(select ID from TABLE where N = 7 and V between ? and ? and ID in
(select ID from TABLE where N = 8 and V between ? and ? and ID in
(...)
)
)
```
I get the following error, probably because the maximum recursion depth was exceeded (?). I need about 20 to 50 recursion levels
```
sqlite3.OperationalError: parser stack overflow
```
I also tried to join the subselections like
```
select ID from
(select ID from TABLE where N = 0 and V between ? and ?)
join (select ID from TABLE where N = 7 and V between ? and ?) using (ID)
join (select ID from TABLE where N = 8 and V between ? and ?) using (ID)
join ...
```
but this approach is surprisingly slow, even with few subselections
Is there a better way to perform the same selection?
Note : the table is indexed on (N, V)
Below is an example to show how the selection works
```
ID N V
0 0 0,1
0 1 0,2
0 2 0,3
1 0 0,5
1 1 0,6
1 2 0,7
2 0 0,8
2 1 0,9
2 2 1,2
```
Step 0
```
select ID from TABLE where N = 0 and V between 0 and 0,6
```
ID is in (0,1)
Step 1
```
select ID from TABLE where N = 1 and V between 0 and 1 and ID in (0, 1)
```
ID is still in (0,1)
Step 2
```
select ID from TABLE where N = 2 and V between 0,5 and 1 and ID in (0, 1)
```
ID is 1
|
2017/07/12
|
[
"https://Stackoverflow.com/questions/45063974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2660966/"
] |
Unwrap the recursion, do it in reverse order and do it in Python. For this I created a table consisting of 100 records, each with an Id between 0 and 99, N=3 and V=5. Arbitrarily I selected the entire collection of records as the innermost.
You need to imagine having a list of values for N and V indexed so that the values at the head of the list are selected for the last SQL SELECT. What the loop does is simply to take the list of IDs resulting from an inner SELECT to feed it as part of the IN clause to the next SELECT.
Without any indexes this is all over in the blink of an eye.
```
>>> import sqlite3
>>> conn = sqlite3.connect('recur.db')
>>> c = conn.cursor()
>>> previous_ids = str(tuple(range(0,100)))
>>> for it in range(50):
... rows = c.execute('''SELECT ID FROM the_table WHERE N=3 AND V BETWEEN 2 AND 7 AND ID IN %s''' % previous_ids)
... previous_ids = str(tuple([int(_[0]) for _ in rows.fetchall()]))
...
>>> previous_ids
'(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99)'
```
Edit: This avoids the use of long strings, but takes longer than the blink of an eye. It's essentially the same idea implemented using tables.
```
>>> import sqlite3
>>> conn = sqlite3.connect('recur.db')
>>> c = conn.cursor()
>>> N_V = [
... (0, (0,6)),
... (0, (0, 1)),
... (1, (0, 2)),
... (2, (0, 3)),
... (0, (0, 5)),
... (1, (0, 6)),
... (2, (0, 7)),
... (0, (0, 8)),
... (1, (0, 9)),
... (2, (1, 2))
... ]
>>> r = c.execute('''CREATE TABLE essentials AS SELECT ID, N, V FROM the_table WHERE N=0 AND V BETWEEN 0 AND 6''')
>>> for n_v in N_V[1:]:
... r = c.execute('''CREATE TABLE next AS SELECT * FROM essentials WHERE essentials.ID IN (SELECT ID FROM the_table WHERE N=%s AND V BETWEEN %s AND %s)''' % (n_v[0], n_v[1][0], n_v[1][1]))
... r = c.execute('''DROP TABLE essentials''')
... r = c.execute('''ALTER TABLE next RENAME TO essentials''')
...
```
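For reference, the `IN (...)` list in the first loop can also be bound as parameters instead of being pasted into the SQL string (a sketch over the same hypothetical table, one `(N, V-range)` per level; note SQLite caps the number of bound parameters, 999 by default):
```
ids = list(range(100))                    # the outermost candidate IDs
for n, lo, hi in [(3, 2, 7)] * 50:        # replace with your per-level values
    placeholders = ",".join("?" * len(ids))
    sql = ("SELECT ID FROM the_table WHERE N=? AND V BETWEEN ? AND ? "
           "AND ID IN (%s)" % placeholders)
    ids = [row[0] for row in c.execute(sql, [n, lo, hi] + ids)]
```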
|
Indexing the triplet (ID, N, V) instead of only the (N, V) doublet made the join approach fast enough for being considered
```
create index I on TABLE(ID, N, V)
```
and then
```
select ID from
(select ID from TABLE where N = 0 and V between ? and ?)
join (select ID from TABLE where N = 7 and V between ? and ?) using (ID)
join (select ID from TABLE where N = 8 and V between ? and ?) using (ID)
join ...
```
|
45,063,974
|
I have a sqlite table with 3 columns named ID (integer), N (integer) and V (real). The pair (ID, N) is unique.
Using the python module sqlite3, I would like to perform a recursive selection with the form
```
select ID from TABLE where N = 0 and V between ? and ? and ID in
(select ID from TABLE where N = 7 and V between ? and ? and ID in
(select ID from TABLE where N = 8 and V between ? and ? and ID in
(...)
)
)
```
I get the following error, probably because the maximum recursion depth was exceeded (?). I need about 20 to 50 recursion levels
```
sqlite3.OperationalError: parser stack overflow
```
I also tried to join the subselections like
```
select ID from
(select ID from TABLE where N = 0 and V between ? and ?)
join (select ID from TABLE where N = 7 and V between ? and ?) using (ID)
join (select ID from TABLE where N = 8 and V between ? and ?) using (ID)
join ...
```
but this approach is surprisingly slow, even with few subselections
Is there a better way to perform the same selection?
Note : the table is indexed on (N, V)
Below is an example to show how the selection works
```
ID N V
0 0 0,1
0 1 0,2
0 2 0,3
1 0 0,5
1 1 0,6
1 2 0,7
2 0 0,8
2 1 0,9
2 2 1,2
```
Step 0
```
select ID from TABLE where N = 0 and V between 0 and 0,6
```
ID is in (0,1)
Step 1
```
select ID from TABLE where N = 1 and V between 0 and 1 and ID in (0, 1)
```
ID is still in (0,1)
Step 2
```
select ID from TABLE where N = 2 and V between 0,5 and 1 and ID in (0, 1)
```
ID is 1
|
2017/07/12
|
[
"https://Stackoverflow.com/questions/45063974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2660966/"
] |
Unwrap the recursion, do it in reverse order and do it in Python. For this I created a table consisting of 100 records, each with an Id between 0 and 99, N=3 and V=5. Arbitrarily I selected the entire collection of records as the innermost.
You need to imagine having a list of values for N and V indexed so that the values at the head of the list are selected for the last SQL SELECT. What the loop does is simply to take the list of IDs resulting from an inner SELECT to feed it as part of the IN clause to the next SELECT.
Without any indexes this is all over in the blink of an eye.
```
>>> import sqlite3
>>> conn = sqlite3.connect('recur.db')
>>> c = conn.cursor()
>>> previous_ids = str(tuple(range(0,100)))
>>> for it in range(50):
... rows = c.execute('''SELECT ID FROM the_table WHERE N=3 AND V BETWEEN 2 AND 7 AND ID IN %s''' % previous_ids)
... previous_ids = str(tuple([int(_[0]) for _ in rows.fetchall()]))
...
>>> previous_ids
'(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99)'
```
Edit: This avoids the use of long strings, but takes longer than the blink of an eye. It's essentially the same idea implemented using tables.
```
>>> import sqlite3
>>> conn = sqlite3.connect('recur.db')
>>> c = conn.cursor()
>>> N_V = [
... (0, (0,6)),
... (0, (0, 1)),
... (1, (0, 2)),
... (2, (0, 3)),
... (0, (0, 5)),
... (1, (0, 6)),
... (2, (0, 7)),
... (0, (0, 8)),
... (1, (0, 9)),
... (2, (1, 2))
... ]
>>> r = c.execute('''CREATE TABLE essentials AS SELECT ID, N, V FROM the_table WHERE N=0 AND V BETWEEN 0 AND 6''')
>>> for n_v in N_V[1:]:
... r = c.execute('''CREATE TABLE next AS SELECT * FROM essentials WHERE essentials.ID IN (SELECT ID FROM the_table WHERE N=%s AND V BETWEEN %s AND %s)''' % (n_v[0], n_v[1][0], n_v[1][1]))
... r = c.execute('''DROP TABLE essentials''')
... r = c.execute('''ALTER TABLE next RENAME TO essentials''')
...
```
|
This query requires only the (N, V) index for the subqueries. A separate index on ID might help for the outer query:
```
select ID from t
where ID in (select ID from TABLE where N = 0 and V between ? and ?)
and ID in (select ID from TABLE where N = 7 and V between ? and ?)
and ID in (select ID from TABLE where N = 8 and V between ? and ?)
...
```
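The separate index mentioned above would be created like this (TABLE is the placeholder name from the question; a real table would need a non-reserved name or quoting):
```
create index idx_id on TABLE(ID);
```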
|
45,063,974
|
I have a sqlite table with 3 columns named ID (integer), N (integer) and V (real). The pair (ID, N) is unique.
Using the python module sqlite3, I would like to perform a recursive selection with the form
```
select ID from TABLE where N = 0 and V between ? and ? and ID in
(select ID from TABLE where N = 7 and V between ? and ? and ID in
(select ID from TABLE where N = 8 and V between ? and ? and ID in
(...)
)
)
```
I get the following error, probably because the maximum recursion depth was exceeded (?). I need about 20 to 50 recursion levels
```
sqlite3.OperationalError: parser stack overflow
```
I also tried to join the subselections like
```
select ID from
(select ID from TABLE where N = 0 and V between ? and ?)
join (select ID from TABLE where N = 7 and V between ? and ?) using (ID)
join (select ID from TABLE where N = 8 and V between ? and ?) using (ID)
join ...
```
but this approach is surprisingly slow, even with few subselections
Is there a better way to perform the same selection?
Note : the table is indexed on (N, V)
Below is an example to show how the selection works
```
ID N V
0 0 0,1
0 1 0,2
0 2 0,3
1 0 0,5
1 1 0,6
1 2 0,7
2 0 0,8
2 1 0,9
2 2 1,2
```
Step 0
```
select ID from TABLE where N = 0 and V between 0 and 0,6
```
ID is in (0,1)
Step 1
```
select ID from TABLE where N = 1 and V between 0 and 1 and ID in (0, 1)
```
ID is still in (0,1)
Step 2
```
select ID from TABLE where N = 2 and V between 0,5 and 1 and ID in (0, 1)
```
ID is 1
|
2017/07/12
|
[
"https://Stackoverflow.com/questions/45063974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2660966/"
] |
This query requires only the (N, V) index for the subqueries. A separate index on ID might help for the outer query:
```
select ID from t
where ID in (select ID from TABLE where N = 0 and V between ? and ?)
and ID in (select ID from TABLE where N = 7 and V between ? and ?)
and ID in (select ID from TABLE where N = 8 and V between ? and ?)
...
```
|
Indexing the triplet (ID, N, V) instead of only the (N, V) doublet made the join approach fast enough for being considered
```
create index I on TABLE(ID, N, V)
```
and then
```
select ID from
(select ID from TABLE where N = 0 and V between ? and ?)
join (select ID from TABLE where N = 7 and V between ? and ?) using (ID)
join (select ID from TABLE where N = 8 and V between ? and ?) using (ID)
join ...
```
|
28,506,726
|
I am new to the `subprocess` module in python.
The documentation provided this example:
```
>>> subprocess.check_output(["echo", "Hello World!"])
b'Hello World!\n'
```
What I tried is:
```
>>> import subprocess
>>> subprocess.check_output(["cd", "../tests", "ls"])
/usr/bin/cd: line 4: cd: ../tests: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests', 'ls']' returned non-zero exit status 1
```
I am confused because this is my file structure:
```
/proj
/cron
test_scheduler.py
/tests
printy.py
test1.py
test2.py
...
```
These are my other attempts as well:
```
>>> subprocess.check_output(["cd", "../tests", "python", "printy.py"])
/usr/bin/cd: line 4: cd: ../tests: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests', 'python', 'printy.py']' returned non-zero exit status 1
>>> subprocess.check_output(["cd", "../tests;", "ls"])
/usr/bin/cd: line 4: cd: ../tests;: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests;', 'ls']' returned non-zero exit status 1
```
|
2015/02/13
|
[
"https://Stackoverflow.com/questions/28506726",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1815710/"
] |
The relative path to the `tests` directory depends on where the script is being run from. I would suggest calling `subprocess.check_output(["pwd"])` to check where you are.
Also you can't combine two commands in the same call like in your attempt with `["cd", "../tests", "python", "printy.py"]`. You'll need to make two separate calls with `["cd", "../tests"]` and `["python", "printy.py"]` respectively.
|
You are missing an argument here, I think.
Here a snippet from the only python script I ever wrote:
```
#!/usr/local/bin/python
from subprocess import call
...
call( "rm " + backupFolder + "*.bz2", shell=True )
```
Please note the `shell=True` in the end of that call.
|
28,506,726
|
I am new to the `subprocess` module in python.
The documentation provided this example:
```
>>> subprocess.check_output(["echo", "Hello World!"])
b'Hello World!\n'
```
What I tried is:
```
>>> import subprocess
>>> subprocess.check_output(["cd", "../tests", "ls"])
/usr/bin/cd: line 4: cd: ../tests: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests', 'ls']' returned non-zero exit status 1
```
I am confused because this is my file structure:
```
/proj
/cron
test_scheduler.py
/tests
printy.py
test1.py
test2.py
...
```
These are my other attempts as well:
```
>>> subprocess.check_output(["cd", "../tests", "python", "printy.py"])
/usr/bin/cd: line 4: cd: ../tests: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests', 'python', 'printy.py']' returned non-zero exit status 1
>>> subprocess.check_output(["cd", "../tests;", "ls"])
/usr/bin/cd: line 4: cd: ../tests;: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests;', 'ls']' returned non-zero exit status 1
```
|
2015/02/13
|
[
"https://Stackoverflow.com/questions/28506726",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1815710/"
] |
**Try to avoid `shell=True` if possible.**
In this case, you certainly can avoid it. The problem you are facing is that `cd` is a shell builtin. It's not a command/program/utility that can be called from outside. You need to be within a shell for `cd` to work. What you can do instead is change your current directory, execute the command, and then go back to your original directory.
You'll need to do something like the below:
```
pathBefore = os.getcwd()
os.chdir("/path/to/your/directory")
subprocess.check_output(["ls"])
os.chdir(pathBefore) # get back to the path we were in before
```
---
**UPDATE**: [A better approach pointed out by @JFSebastian is to use the additional `cwd` argument to `check_output` call](https://stackoverflow.com/a/28508621/227884).
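In this case that boils down to something like the sketch below (the relative path still depends on where the script is started from):
```
import subprocess

out = subprocess.check_output(["python", "printy.py"], cwd="../tests")
print(out)
```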
|
The relative path to the `tests` directory depends on where the script is being run from. I would suggest calling `subprocess.check_output(["pwd"])` to check where you are.
Also you can't combine two commands in the same call like in your attempt with `["cd", "../tests", "python", "printy.py"]`. You'll need to make two separate calls with `["cd", "../tests"]` and `["python", "printy.py"]` respectively.
|
28,506,726
|
I am new to the `subprocess` module in python.
The documentation provided this example:
```
>>> subprocess.check_output(["echo", "Hello World!"])
b'Hello World!\n'
```
What I tried is:
```
>>> import subprocess
>>> subprocess.check_output(["cd", "../tests", "ls"])
/usr/bin/cd: line 4: cd: ../tests: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests', 'ls']' returned non-zero exit status 1
```
I am confused because this is my file structure:
```
/proj
/cron
test_scheduler.py
/tests
printy.py
test1.py
test2.py
...
```
These are my other attempts as well:
```
>>> subprocess.check_output(["cd", "../tests", "python", "printy.py"])
/usr/bin/cd: line 4: cd: ../tests: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests', 'python', 'printy.py']' returned non-zero exit status 1
>>> subprocess.check_output(["cd", "../tests;", "ls"])
/usr/bin/cd: line 4: cd: ../tests;: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests;', 'ls']' returned non-zero exit status 1
```
|
2015/02/13
|
[
"https://Stackoverflow.com/questions/28506726",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1815710/"
] |
**Try to avoid `shell=True` if possible.**
In this case, you can certainly avoid it. The problem you are facing is that `cd` is a shell builtin: it is not a command/program/utility that can be called on its own. You need to be inside a shell for `cd` to work. What you can do instead is change your current directory, execute the command, and then go back to your original directory.
You'll need to do something like the below:
```
pathBefore = os.getcwd()
os.chdir("/path/to/your/directory")
subprocess.check_output(["ls"])
os.chdir(pathBefore) # get back to the path we were in before
```
---
**UPDATE**: [A better approach pointed out by @JFSebastian is to use the additional `cwd` argument to `check_output` call](https://stackoverflow.com/a/28508621/227884).
|
You are missing an argument here, I think.
Here is a snippet from the only Python script I ever wrote:
```
#!/usr/local/bin/python
from subprocess import call
...
call( "rm " + backupFolder + "*.bz2", shell=True )
```
Please note the `shell=True` at the end of that call.
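Applied to the question's case, a minimal sketch with `shell=True` (assuming the script is run from the `cron` directory, so `../tests` is a valid relative path) could be:

```
import subprocess

# shell=True hands the whole string to the shell, so the builtin `cd`
# and the `&&` chaining both work as they would at an interactive prompt.
output = subprocess.check_output("cd ../tests && ls", shell=True)
print(output.decode())
```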
|
28,506,726
|
I am new to the `subprocess` module in python.
The documentation provided this example:
```
>>> subprocess.check_output(["echo", "Hello World!"])
b'Hello World!\n'
```
What I tried is:
```
>>> import subprocess
>>> subprocess.check_output(["cd", "../tests", "ls"])
/usr/bin/cd: line 4: cd: ../tests: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests', 'ls']' returned non-zero exit status 1
```
I am confused because this is my file structure:
```
/proj
/cron
test_scheduler.py
/tests
printy.py
test1.py
test2.py
...
```
These are my other attempts as well:
```
>>> subprocess.check_output(["cd", "../tests", "python", "printy.py"])
/usr/bin/cd: line 4: cd: ../tests: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests', 'python', 'printy.py']' returned non-zero exit status 1
>>> subprocess.check_output(["cd", "../tests;", "ls"])
/usr/bin/cd: line 4: cd: ../tests;: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests;', 'ls']' returned non-zero exit status 1
```
|
2015/02/13
|
[
"https://Stackoverflow.com/questions/28506726",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1815710/"
] |
The error message is clear:
>
> /usr/bin/cd: line 4: cd: ../tests: No such file or directory
>
>
>
that is, you have successfully started the `/usr/bin/cd` program, which failed and printed the error message.
If you wanted to run the `ls` command from the `../tests` directory instead:
```
import os
import subprocess
cwd = os.path.join(get_script_dir(), '../tests')
output = subprocess.check_output(['ls'], cwd=cwd)
```
where [`get_script_dir()` is defined as in this answer](https://stackoverflow.com/a/22881871/4279).
Note: do not use a relative path for the directory -- your script can be run from a different directory -- relative paths fail in this case.
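The linked helper is not reproduced in the answer; a minimal stand-in, assuming the script is an ordinary file on disk (not frozen or run from a zip), could be:

```
import os

def get_script_dir():
    # Directory containing the currently running script file.
    return os.path.dirname(os.path.abspath(__file__))
```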
|
You are missing an argument here, I think.
Here is a snippet from the only Python script I ever wrote:
```
#!/usr/local/bin/python
from subprocess import call
...
call( "rm " + backupFolder + "*.bz2", shell=True )
```
Please note the `shell=True` at the end of that call.
|
28,506,726
|
I am new to the `subprocess` module in python.
The documentation provided this example:
```
>>> subprocess.check_output(["echo", "Hello World!"])
b'Hello World!\n'
```
What I tried is:
```
>>> import subprocess
>>> subprocess.check_output(["cd", "../tests", "ls"])
/usr/bin/cd: line 4: cd: ../tests: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests', 'ls']' returned non-zero exit status 1
```
I am confused because this is my file structure:
```
/proj
/cron
test_scheduler.py
/tests
printy.py
test1.py
test2.py
...
```
These are my other attempts as well:
```
>>> subprocess.check_output(["cd", "../tests", "python", "printy.py"])
/usr/bin/cd: line 4: cd: ../tests: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests', 'python', 'printy.py']' returned non-zero exit status 1
>>> subprocess.check_output(["cd", "../tests;", "ls"])
/usr/bin/cd: line 4: cd: ../tests;: No such file or directory
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output
raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command '['cd', '../tests;', 'ls']' returned non-zero exit status 1
```
|
2015/02/13
|
[
"https://Stackoverflow.com/questions/28506726",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1815710/"
] |
**Try to avoid `shell=True` if possible.**
In this case, you can certainly avoid it. The problem you are facing is that `cd` is a shell builtin: it is not a command/program/utility that can be called on its own. You need to be inside a shell for `cd` to work. What you can do instead is change your current directory, execute the command, and then go back to your original directory.
You'll need to do something like the below:
```
pathBefore = os.getcwd()
os.chdir("/path/to/your/directory")
subprocess.check_output(["ls"])
os.chdir(pathBefore) # get back to the path we were in before
```
---
**UPDATE**: [A better approach pointed out by @JFSebastian is to use the additional `cwd` argument to `check_output` call](https://stackoverflow.com/a/28508621/227884).
|
The error message is clear:
>
> /usr/bin/cd: line 4: cd: ../tests: No such file or directory
>
>
>
that is, you have successfully started the `/usr/bin/cd` program, which failed and printed the error message.
If you wanted to run the `ls` command from the `../tests` directory instead:
```
import os
import subprocess
cwd = os.path.join(get_script_dir(), '../tests')
output = subprocess.check_output(['ls'], cwd=cwd)
```
where [`get_script_dir()` is defined as in this answer](https://stackoverflow.com/a/22881871/4279).
Note: do not use a relative path for the directory -- your script can be run from a different directory -- relative paths fail in this case.
|
63,030,306
|
I have the below python snippet
```py
@click.argument('file',type=click.Path(exists=True))
```
The above command reads from a file in the below format
```sh
python3 code.py file.txt
```
The same file is processed using a function
```py
def get_domains(domain_names_file):
with open(domain_names_file) as f:
domains = f.readlines()
return domains
domains = get_domains(file)
```
I don't want to read it from a file; I want to provide a domain as an argument while executing in the terminal. The command will be:
```sh
python3 code.py example.com
```
How should I rewrite the code?
Python Version : 3.8.2
|
2020/07/22
|
[
"https://Stackoverflow.com/questions/63030306",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8474328/"
] |
`click.argument` by default creates arguments that are read from the command line:
```
@click.argument('file')
```
This should create an argument that is read from the command line and made available in the `file` argument.
See the docs & examples [here](https://pocoo-click.readthedocs.io/en/latest/arguments/)
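A minimal, hypothetical sketch of a command that reads one positional argument this way (the script and argument names are made up for illustration):

```
import click

@click.command()
@click.argument('domain')  # positional argument read straight from the command line
def main(domain):
    click.echo("Got domain: " + domain)

if __name__ == '__main__':
    main()
```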
|
You may use `argparse`:
```
import argparse
# set up the different arguments
parser = argparse.ArgumentParser(description='Some nasty description here.')
parser.add_argument("--domain", help="Domain: www.some-domain.com", required=True)
args = parser.parse_args()
print(args.domain)
```
And you invoke it via
```
python your-python-file.py --domain www.google.com
```
|
63,030,306
|
I have the below python snippet
```py
@click.argument('file',type=click.Path(exists=True))
```
The above command reads from a file in the below format
```sh
python3 code.py file.txt
```
The same file is processed using a function
```py
def get_domains(domain_names_file):
with open(domain_names_file) as f:
domains = f.readlines()
return domains
domains = get_domains(file)
```
I don't want to read it from a file; I want to provide a domain as an argument while executing in the terminal. The command will be:
```sh
python3 code.py example.com
```
How should I rewrite the code?
Python Version : 3.8.2
|
2020/07/22
|
[
"https://Stackoverflow.com/questions/63030306",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8474328/"
] |
I realised you're using the [click](https://click.palletsprojects.com/en/7.x/arguments) library.
Since you want to pass the 'domain'/website as an argument, you can just input it as a string. If you remove the 'type' parameter from your decorator, the type will default to STRING.
>
> The most basic option is a simple string argument of one value. If no type is provided, the type of the default value is used, and if no default value is provided, the type is assumed to be STRING.
>
>
>
Solution:
`@click.argument('domain')`
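Putting that together with the question's code, a minimal sketch of the rewrite (keeping the `get_domains` name from the question and wrapping the single domain in a list so the old return shape is preserved) could look like:

```
import click

def get_domains(domain):
    # No file reading any more: wrap the single domain in a list so code
    # that expected a list of domains keeps working unchanged.
    return [domain]

@click.command()
@click.argument('domain')
def main(domain):
    domains = get_domains(domain)
    click.echo(domains)

if __name__ == '__main__':
    main()
```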
|
You may use `argparse`:
```
import argparse
# set up the different arguments
parser = argparse.ArgumentParser(description='Some nasty description here.')
parser.add_argument("--domain", help="Domain: www.some-domain.com", required=True)
args = parser.parse_args()
print(args.domain)
```
And you invoke it via
```
python your-python-file.py --domain www.google.com
```
|
60,527,883
|
I have a dataset of 284 features I am trying to impute using scikit-learn, however I get an error where the number of features changes to 283:
```
imputer = SimpleImputer(missing_values = np.nan, strategy = "mean")
imputer = imputer.fit(data.iloc[:,0:284])
df[:,0:284] = imputer.transform(df[:,0:284])
X = MinMaxScaler().fit_transform(df)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-150-849be5be8fcb> in <module>
1 imputer = SimpleImputer(missing_values = np.nan, strategy = "mean")
2 imputer = imputer.fit(data.iloc[:,0:284])
----> 3 df[:,0:284] = imputer.transform(df[:,0:284])
4 X = MinMaxScaler().fit_transform(df)
~\Anaconda3\envs\environment\lib\site-packages\sklearn\impute\_base.py in transform(self, X)
411 if X.shape[1] != statistics.shape[0]:
412 raise ValueError("X has %d features per sample, expected %d"
--> 413 % (X.shape[1], self.statistics_.shape[0]))
414
415 # Delete the invalid columns if strategy is not constant
ValueError: X has 283 features per sample, expected 284
```
I don't understand how this is reaching 283 features, I assume on fitting it's finding features that have all 0s or something and deciding to drop that, but I can't find documentation which tells me how to make sure those features are still kept. I am not a programmer so not sure if I am missing something else that's obvious or if I'm better looking into another method?
|
2020/03/04
|
[
"https://Stackoverflow.com/questions/60527883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8831033/"
] |
This could happen if you have a feature without any values. From <https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html>:
'Columns which only contained missing values at fit are discarded upon transform if strategy is not "constant"'.
You can tell if this is indeed the problem by using a high 'verbose' value when constructing the imputer:
sklearn.impute.SimpleImputer(..., verbose=100, ...)
It will print something like:
UserWarning: Deleting features without observed values: [ ... ]
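A minimal sketch that reproduces the column-dropping behaviour on a toy array (the exact warning text may vary between scikit-learn versions):

```
import numpy as np
from sklearn.impute import SimpleImputer

# The second column is all NaN, so a mean imputer has no statistic for it
# and silently drops it when transform is called.
X = np.array([[1.0, np.nan, 3.0],
              [4.0, np.nan, 6.0]])

imputer = SimpleImputer(missing_values=np.nan, strategy="mean")
print(imputer.fit_transform(X).shape)  # (2, 2) -- one column has been dropped
```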
|
I was dealing with the same situation, and I got my solution by adding this transformation before the SimpleImputer mean strategy:
```
imputer = SimpleImputer(strategy = 'constant', fill_value = 0)
df_prepared_to_mean_or_anything_else = imputer.fit_transform(previous_df)
```
What does it do? It fills every missing value with the value specified in the `fill_value` parameter.
|
69,011,571
|
Which function was used for the following plot in R?
At least it looks like a predefined function to me.
Edit: Okay, it seems to be Stata, according to Claudio.
New question:
Is there anything comparable in python/R to get this output?
How to calculate Coef.? What kind of coefficient is this?
[](https://i.stack.imgur.com/cZ1Tg.png)
|
2021/09/01
|
[
"https://Stackoverflow.com/questions/69011571",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15870842/"
] |
You can simply switch over `status` inside the body of your view and assign the correct `String` and `foregroundColor` to your `Text` inside each `case`.
```
struct StatusView: View {
let status: Status
var body: some View {
switch status {
case .accepted:
Text("accepted")
.foregroundColor(.green)
case .standby:
Text("standby")
.foregroundColor(.yellow)
case .notAllowed:
Text("not allowed")
.foregroundColor(.red)
}
}
}
```
Or if you can modify `Status`, you can simply assign a `String` `rawValue` to it, then displaying the appropriate text based on its value is even easier.
```
enum Status: String {
case accepted
case standby
case notAllowed
}
struct StatusView: View {
let status: Status
var body: some View {
Text(status.rawValue)
.foregroundColor(statusColor(status: status))
}
private func statusColor(status: Status) -> Color {
switch status {
case .accepted:
return .green
case .standby:
return .yellow
case .notAllowed:
return .red
}
}
}
```
|
Here is an updated and refactored answer based on **David**'s answer. This way you do not need the `statusColor` function anymore, and you can access **colorValue** everywhere in your project, whereas in the previous answer it was accessible only inside `StatusView`.
```
struct StatusView: View {
let status: Status
var body: some View {
Text(status.rawValue)
.foregroundColor(status.colorValue)
}
}
enum Status: String {
case accepted
case standby
case notAllowed
var colorValue: Color {
switch self {
case .accepted:
return .green
case .standby:
return .yellow
case .notAllowed:
return .red
}
}
}
```
|
5,373,474
|
I'm trying to use argparse to parse the command line arguments for a program I'm working on. Essentially, I need to support multiple positional arguments spread within the optional arguments, but cannot get argparse to work in this situation. In the actual program, I'm using a custom action (I need to store a snapshot of the namespace each time a positional argument is found), but the problem I'm having can be replicated with the `append` action:
```
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('-a', action='store_true')
>>> parser.add_argument('-b', action='store_true')
>>> parser.add_argument('input', action='append')
>>> parser.parse_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
usage: ipython [-h] [-a] [-b] input
ipython: error: unrecognized arguments: filetwo filethree
```
I'd like this to result in the namespace `(a=True, b=True, input=['fileone', 'filetwo', 'filethree'])`, but cannot see how to do this - if indeed it can. I can't see anything in the docs or Google which says one way or the other if this is possible, although its quite possible (likely?) I've overlooked something. Does anyone have any suggestions?
|
2011/03/21
|
[
"https://Stackoverflow.com/questions/5373474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/668807/"
] |
You can't interleave the switches (i.e. `-a` and `-b`) with the positional arguments (i.e. fileone, filetwo and filethree) in this way. The switches must appear before or after the positional arguments, not in-between.
Also, in order to have multiple positional arguments, you need to specify the `nargs` parameter to `add_argument`. For example:
```
parser.add_argument('input', nargs='+')
```
This tells `argparse` to consume one or more positional arguments and append them to a list. See the [argparse documentation](http://docs.python.org/library/argparse.html) for more information. With this line, the code:
```
parser.parse_args(['-a', '-b', 'fileone', 'filetwo', 'filethree'])
```
results in:
```
Namespace(a=True, b=True, input=['fileone', 'filetwo', 'filethree'])
```
|
The 'append' action makes more sense with an optional:
```
parser.add_argument('-i', '--input',action='append')
parser.parse_args(['-i','fileone', '-a', '-i','filetwo', '-b', '-i','filethree'])
```
You can interleave optionals with separate positionals ('input1 -a input2 -b input3'), but you cannot interleave optionals within one multi-item positional. You can, however, accomplish this with a two-step parse.
```
import argparse
parser1 = argparse.ArgumentParser()
parser1.add_argument('-a', action='store_true')
parser1.add_argument('-b', action='store_true')
parser2 = argparse.ArgumentParser()
parser2.add_argument('input', nargs='*')
ns, rest = parser1.parse_known_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
# Namespace(a=True, b=True), ['fileone', 'filetwo', 'filethree']
ns = parser2.parse_args(rest, ns)
# Namespace(a=True, b=True, input=['fileone', 'filetwo', 'filethree'])
```
<http://bugs.python.org/issue14191> is a proposed patch that will do this with a single call to:
```
parser.parse_intermixed_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
```
|
5,373,474
|
I'm trying to use argparse to parse the command line arguments for a program I'm working on. Essentially, I need to support multiple positional arguments spread within the optional arguments, but cannot get argparse to work in this situation. In the actual program, I'm using a custom action (I need to store a snapshot of the namespace each time a positional argument is found), but the problem I'm having can be replicated with the `append` action:
```
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('-a', action='store_true')
>>> parser.add_argument('-b', action='store_true')
>>> parser.add_argument('input', action='append')
>>> parser.parse_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
usage: ipython [-h] [-a] [-b] input
ipython: error: unrecognized arguments: filetwo filethree
```
I'd like this to result in the namespace `(a=True, b=True, input=['fileone', 'filetwo', 'filethree'])`, but cannot see how to do this - if indeed it can. I can't see anything in the docs or Google which says one way or the other if this is possible, although its quite possible (likely?) I've overlooked something. Does anyone have any suggestions?
|
2011/03/21
|
[
"https://Stackoverflow.com/questions/5373474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/668807/"
] |
You can't interleave the switches (i.e. `-a` and `-b`) with the positional arguments (i.e. fileone, filetwo and filethree) in this way. The switches must appear before or after the positional arguments, not in-between.
Also, in order to have multiple positional arguments, you need to specify the `nargs` parameter to `add_argument`. For example:
```
parser.add_argument('input', nargs='+')
```
This tells `argparse` to consume one or more positional arguments and append them to a list. See the [argparse documentation](http://docs.python.org/library/argparse.html) for more information. With this line, the code:
```
parser.parse_args(['-a', '-b', 'fileone', 'filetwo', 'filethree'])
```
results in:
```
Namespace(a=True, b=True, input=['fileone', 'filetwo', 'filethree'])
```
|
It seems to me that hpaulj is on the right track but making things a bit more complicated than necessary. I suspect that Blair is looking for something akin to the behavior of the old optparse module and doesn't really need the list of input arguments in the inputs field of the args object. He just wants
```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('-a', action='store_true')
parser.add_argument('-b', action='store_true')
opts, args = parser.parse_known_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
# Namespace(a=True, b=True), ['fileone', 'filetwo', 'filethree']
```
In the vernacular of optparse, the "options" are available in opts, and the list of possibly interspersed other "arguments" are in args.
|
5,373,474
|
I'm trying to use argparse to parse the command line arguments for a program I'm working on. Essentially, I need to support multiple positional arguments spread within the optional arguments, but cannot get argparse to work in this situation. In the actual program, I'm using a custom action (I need to store a snapshot of the namespace each time a positional argument is found), but the problem I'm having can be replicated with the `append` action:
```
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('-a', action='store_true')
>>> parser.add_argument('-b', action='store_true')
>>> parser.add_argument('input', action='append')
>>> parser.parse_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
usage: ipython [-h] [-a] [-b] input
ipython: error: unrecognized arguments: filetwo filethree
```
I'd like this to result in the namespace `(a=True, b=True, input=['fileone', 'filetwo', 'filethree'])`, but cannot see how to do this - if indeed it can. I can't see anything in the docs or Google which says one way or the other if this is possible, although its quite possible (likely?) I've overlooked something. Does anyone have any suggestions?
|
2011/03/21
|
[
"https://Stackoverflow.com/questions/5373474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/668807/"
] |
srgerg was right about the definition of positional arguments. In order to get the result you want, you have to accept them as optional arguments and modify the resulting namespace according to your needs.
You can use a custom action:
```
class MyAction(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
# Set optional arguments to True or False
if option_string:
attr = True if values else False
setattr(namespace, self.dest, attr)
# Modify value of "input" in the namespace
if hasattr(namespace, 'input'):
current_values = getattr(namespace, 'input')
try:
current_values.extend(values)
except AttributeError:
current_values = values
finally:
setattr(namespace, 'input', current_values)
else:
setattr(namespace, 'input', values)
parser = argparse.ArgumentParser()
parser.add_argument('-a', nargs='+', action=MyAction)
parser.add_argument('-b', nargs='+', action=MyAction)
parser.add_argument('input', nargs='+', action=MyAction)
```
And this is what you get:
```
>>> parser.parse_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
Namespace(a=True, b=True, input=['fileone', 'filetwo', 'filethree'])
```
Or you can modify the resulting namespace like this:
```
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('-a', nargs='+')
>>> parser.add_argument('-b', nargs='+')
>>> parser.add_argument('input', nargs='+')
>>> result = parser.parse_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
>>> inputs = []
>>> inputs.extend(result.a)
>>> inputs.extend(result.b)
>>> inputs.extend(result.input)
>>> modified = argparse.Namespace(
a = result.a != [],
b = result.b != [],
input = inputs)
```
And this is what you get:
```
>>> modified
Namespace(a=True, b=True, input=['filetwo', 'filethree', 'fileone'])
```
However, both methods result in less readable and less maintainable code. Maybe it's better to change the program logic and do it in a different way.
|
The 'append' action makes more sense with an optional:
```
parser.add_argument('-i', '--input',action='append')
parser.parse_args(['-i','fileone', '-a', '-i','filetwo', '-b', '-i','filethree'])
```
You can interleave optionals with separate positionals ('input1 -a input2 -b input3'), but you cannot interleave optionals within one multi-item positional. You can, however, accomplish this with a two-step parse.
```
import argparse
parser1 = argparse.ArgumentParser()
parser1.add_argument('-a', action='store_true')
parser1.add_argument('-b', action='store_true')
parser2 = argparse.ArgumentParser()
parser2.add_argument('input', nargs='*')
ns, rest = parser1.parse_known_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
# Namespace(a=True, b=True), ['fileone', 'filetwo', 'filethree']
ns = parser2.parse_args(rest, ns)
# Namespace(a=True, b=True, input=['fileone', 'filetwo', 'filethree'])
```
<http://bugs.python.org/issue14191> is a proposed patch that will do this with a single call to:
```
parser.parse_intermixed_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
```
|
5,373,474
|
I'm trying to use argparse to parse the command line arguments for a program I'm working on. Essentially, I need to support multiple positional arguments spread within the optional arguments, but cannot get argparse to work in this situation. In the actual program, I'm using a custom action (I need to store a snapshot of the namespace each time a positional argument is found), but the problem I'm having can be replicated with the `append` action:
```
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('-a', action='store_true')
>>> parser.add_argument('-b', action='store_true')
>>> parser.add_argument('input', action='append')
>>> parser.parse_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
usage: ipython [-h] [-a] [-b] input
ipython: error: unrecognized arguments: filetwo filethree
```
I'd like this to result in the namespace `(a=True, b=True, input=['fileone', 'filetwo', 'filethree'])`, but cannot see how to do this - if indeed it can. I can't see anything in the docs or Google which says one way or the other if this is possible, although its quite possible (likely?) I've overlooked something. Does anyone have any suggestions?
|
2011/03/21
|
[
"https://Stackoverflow.com/questions/5373474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/668807/"
] |
srgerg was right about the definition of positional arguments. In order to get the result you want, you have to accept them as optional arguments and modify the resulting namespace according to your needs.
You can use a custom action:
```
class MyAction(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
# Set optional arguments to True or False
if option_string:
attr = True if values else False
setattr(namespace, self.dest, attr)
# Modify value of "input" in the namespace
if hasattr(namespace, 'input'):
current_values = getattr(namespace, 'input')
try:
current_values.extend(values)
except AttributeError:
current_values = values
finally:
setattr(namespace, 'input', current_values)
else:
setattr(namespace, 'input', values)
parser = argparse.ArgumentParser()
parser.add_argument('-a', nargs='+', action=MyAction)
parser.add_argument('-b', nargs='+', action=MyAction)
parser.add_argument('input', nargs='+', action=MyAction)
```
And this is what you get:
```
>>> parser.parse_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
Namespace(a=True, b=True, input=['fileone', 'filetwo', 'filethree'])
```
Or you can modify the resulting namespace like this:
```
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('-a', nargs='+')
>>> parser.add_argument('-b', nargs='+')
>>> parser.add_argument('input', nargs='+')
>>> result = parser.parse_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
>>> inputs = []
>>> inputs.extend(result.a)
>>> inputs.extend(result.b)
>>> inputs.extend(result.input)
>>> modified = argparse.Namespace(
a = result.a != [],
b = result.b != [],
input = inputs)
```
And this is what you get:
```
>>> modified
Namespace(a=True, b=True, input=['filetwo', 'filethree', 'fileone'])
```
However, both methods result in less readable and less maintainable code. Maybe it's better to change the program logic and do it in a different way.
|
It seems to me that hpaulj is on the right track but making things a bit more complicated than necessary. I suspect that Blair is looking for something akin to the behavior of the old optparse module and doesn't really need the list of input arguments in the inputs field of the args object. He just wants
```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('-a', action='store_true')
parser.add_argument('-b', action='store_true')
opts, args = parser.parse_known_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
# Namespace(a=True, b=True), ['fileone', 'filetwo', 'filethree']
```
In the vernacular of optparse, the "options" are available in opts, and the list of possibly interspersed other "arguments" are in args.
|
5,373,474
|
I'm trying to use argparse to parse the command line arguments for a program I'm working on. Essentially, I need to support multiple positional arguments spread within the optional arguments, but cannot get argparse to work in this situation. In the actual program, I'm using a custom action (I need to store a snapshot of the namespace each time a positional argument is found), but the problem I'm having can be replicated with the `append` action:
```
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('-a', action='store_true')
>>> parser.add_argument('-b', action='store_true')
>>> parser.add_argument('input', action='append')
>>> parser.parse_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
usage: ipython [-h] [-a] [-b] input
ipython: error: unrecognized arguments: filetwo filethree
```
I'd like this to result in the namespace `(a=True, b=True, input=['fileone', 'filetwo', 'filethree'])`, but cannot see how to do this - if indeed it can. I can't see anything in the docs or Google which says one way or the other if this is possible, although its quite possible (likely?) I've overlooked something. Does anyone have any suggestions?
|
2011/03/21
|
[
"https://Stackoverflow.com/questions/5373474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/668807/"
] |
The 'append' action makes more sense with an optional:
```
parser.add_argument('-i', '--input',action='append')
parser.parse_args(['-i','fileone', '-a', '-i','filetwo', '-b', '-i','filethree'])
```
You can interleave optionals with separate positionals ('input1 -a input2 -b input3'), but you cannot interleave optionals within one multi-item positional. You can, however, accomplish this with a two-step parse.
```
import argparse
parser1 = argparse.ArgumentParser()
parser1.add_argument('-a', action='store_true')
parser1.add_argument('-b', action='store_true')
parser2 = argparse.ArgumentParser()
parser2.add_argument('input', nargs='*')
ns, rest = parser1.parse_known_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
# Namespace(a=True, b=True), ['fileone', 'filetwo', 'filethree']
ns = parser2.parse_args(rest, ns)
# Namespace(a=True, b=True, input=['fileone', 'filetwo', 'filethree'])
```
<http://bugs.python.org/issue14191> is a proposed patch that will do this with a single call to:
```
parser.parse_intermixed_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
```
|
It seems to me that hpaulj is on the right track but making things a bit more complicated than necessary. I suspect that Blair is looking for something akin to the behavior of the old optparse module and doesn't really need the list of input arguments in the inputs field of the args object. He just wants
```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('-a', action='store_true')
parser.add_argument('-b', action='store_true')
opts, args = parser.parse_known_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
# Namespace(a=True, b=True), ['fileone', 'filetwo', 'filethree']
```
In the vernacular of optparse, the "options" are available in opts, and the list of possibly interspersed other "arguments" are in args.
|
2,604,917
|
I'm trying to make a program so that I can run it through the command line with the following format:
```
./myProgram
```
I made it executable and put `#!/usr/bin/env python` in the header, but it's giving me the following error.
```
env: python\r: No such file or directory
```
However, when I run "python myProgram", it runs fine. Can someone tell me what I'm doing wrong?
|
2010/04/09
|
[
"https://Stackoverflow.com/questions/2604917",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/305105/"
] |
Your line endings are wrong. Use `dos2unix` to fix them.
|
`dos2unix filename.py` or inside vim issue the command `:set fileformat=unix` and save.
|
2,604,917
|
I'm trying to make a program so that I can run it through the command line with the following format:
```
./myProgram
```
I made it executable and put `#!/usr/bin/env python` in the header, but it's giving me the following error.
```
env: python\r: No such file or directory
```
However, when I run "python myProgram", it runs fine. Can someone tell me what I'm doing wrong?
|
2010/04/09
|
[
"https://Stackoverflow.com/questions/2604917",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/305105/"
] |
Your line endings are wrong. Use `dos2unix` to fix them.
|
+1 on ignacio's suggestion.
however, to answer the 1st part of your question more directly, each OS/system uses a different line termination character:
POSIX (any Unix-flavor like Linux, \*BSD, Mac OS X, etc.) uses **`\n`** (NEWLINE) while
DOS/Win uses the combo **`\r\n`** (CR/carriage return + NEWLINE) and old Mac OS 8 or 9 uses just CR or **`\r`**.
to solve this problem, you can run a utility like ignacio has suggested, or you should be able to do it from your text editor (may not be very apparent however).
to answer the other part of your question, the reason why `$ python myProgram` works is because Python treats all three different line endings the same... the shebang line at the top is ignored because you told Python to load and run that script, and "`#`" means the first line is a comment and thus ignored.
when you tell your OS shell to run it, it needs to parse that line and execute whatever interpreter you requested, but if it can't, it pukes on you like it did.
hope this helps!
ps. on a side note, you can find out what line-termination character is being used on your operating system, just check out the `os.linesep` (data) attribute. for example, on my Mac (OS X), i get this:
```
>>> import os
>>> os.linesep
'\n'
```
here's a quick summary of the other related attributes that i plagiarized from my [hardcore Python intro course notes](http://cyberwebconsulting.com):

|
2,604,917
|
I'm trying to make a program so that I can run it through the command line with the following format:
```
./myProgram
```
I made it executable and put `#!/usr/bin/env python` in the header, but it's giving me the following error.
```
env: python\r: No such file or directory
```
However, when I run "python myProgram", it runs fine. Can someone tell me what I'm doing wrong?
|
2010/04/09
|
[
"https://Stackoverflow.com/questions/2604917",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/305105/"
] |
+1 on ignacio's suggestion.
however, to answer the 1st part of your question more directly, each OS/system uses a different line termination character:
POSIX (any Unix-flavor like Linux, \*BSD, Mac OS X, etc.) uses **`\n`** (NEWLINE) while
DOS/Win uses the combo **`\r\n`** (CR/carriage return + NEWLINE) and old Mac OS 8 or 9 uses just CR or **`\r`**.
to solve this problem, you can run a utility like ignacio has suggested, or you should be able to do it from your text editor (may not be very apparent however).
to answer the other part of your question, the reason why `$ python myProgram` works is because Python treats all three different line endings the same... the shebang line at the top is ignored because you told Python to load and run that script, and "`#`" means the first line is a comment and thus ignored.
when you tell your OS shell to run it, it needs to parse that line and execute whatever interpreter you requested, but if it can't, it pukes on you like it did.
hope this helps!
ps. on a side note, you can find out what line-termination character is being used on your operating system, just check out the `os.linesep` (data) attribute. for example, on my Mac (OS X), i get this:
```
>>> import os
>>> os.linesep
'\n'
```
here's a quick summary of the other related attributes that i plagiarized from my [hardcore Python intro course notes](http://cyberwebconsulting.com):

|
`dos2unix filename.py` or inside vim issue the command `:set fileformat=unix` and save.
|
16,903,416
|
I've used:
```
data = DataReader("yhoo", "yahoo", datetime.datetime(2000, 1, 1),
datetime.datetime.today())
```
in pandas (python) to get history data of yahoo, but it cannot show today's price (the market has not yet closed) how can I resolve such problem, thanks in advance.
|
2013/06/03
|
[
"https://Stackoverflow.com/questions/16903416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/857130/"
] |
```
import pandas
import pandas.io.data
import datetime
import urllib2
import csv
YAHOO_TODAY="http://download.finance.yahoo.com/d/quotes.csv?s=%s&f=sd1ohgl1vl1"
def get_quote_today(symbol):
response = urllib2.urlopen(YAHOO_TODAY % symbol)
reader = csv.reader(response, delimiter=",", quotechar='"')
for row in reader:
if row[0] == symbol:
return row
## main ##
symbol = "TSLA"
history = pandas.io.data.DataReader(symbol, "yahoo", start="2014/1/1")
print history.tail(2)
today = datetime.date.today()
df = pandas.DataFrame(index=pandas.DatetimeIndex(start=today, end=today, freq="D"),
columns=["Open", "High", "Low", "Close", "Volume", "Adj Close"],
dtype=float)
row = get_quote_today(symbol)
df.ix[0] = map(float, row[2:])
history = history.append(df)
print "today is %s" % today
print history.tail(2)
```
Just to complete perigee's answer: it took me quite some time to find a way to append the data.
```
Open High Low Close Volume Adj Close
Date
2014-02-04 180.7 181.60 176.20 178.73 4686300 178.73
2014-02-05 178.3 180.59 169.36 174.42 7268000 174.42
today is 2014-02-06
Open High Low Close Volume Adj Close
2014-02-05 178.30 180.59 169.36 174.420 7268000 174.420
2014-02-06 176.36 180.11 176.00 178.793 5199297 178.793
```
|
The simplest way to extract Indian stock price data into Python is to use the nsepy library.
In case you do not have the nsepy library, do the following:
```
pip install nsepy
```
The following code extracts the HDFC stock price for the date range given below (May 2015 to May 2020).
```
from nsepy import get_history
from datetime import date
dfc=get_history(symbol="HDFCBANK",start=date(2015,5,12),end=date(2020,5,18))
```
This is so far the easiest code I have found.
|
16,903,416
|
I've used:
```
data = DataReader("yhoo", "yahoo", datetime.datetime(2000, 1, 1),
datetime.datetime.today())
```
in pandas (python) to get history data of yahoo, but it cannot show today's price (the market has not yet closed) how can I resolve such problem, thanks in advance.
|
2013/06/03
|
[
"https://Stackoverflow.com/questions/16903416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/857130/"
] |
One way to work around this is to use urllib to fetch the data with:
```
http://download.finance.yahoo.com/d/quotes.csv?s=yhoo&f=sd1ohgl1l1v
```
and then add it to the dataframe.
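The 'add it to the dataframe' step is left implicit; a self-contained sketch of just that step, with made-up numbers standing in for the parsed CSV row, could be:

```
import pandas as pd

# Pretend `history` came from DataReader and `row` was parsed out of the
# downloaded quotes.csv line for today.
history = pd.DataFrame(
    {"Open": [178.3], "High": [180.59], "Low": [169.36],
     "Close": [174.42], "Volume": [7268000], "Adj Close": [174.42]},
    index=pd.to_datetime(["2014-02-05"]))

row = pd.DataFrame(
    {"Open": [176.36], "High": [180.11], "Low": [176.00],
     "Close": [178.79], "Volume": [5199297], "Adj Close": [178.79]},
    index=pd.to_datetime(["2014-02-06"]))

history = pd.concat([history, row])
print(history.tail(2))
```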
|
The simplest way to extract Indian stock price data into Python is to use the nsepy library.
In case you do not have the nsepy library, do the following:
```
pip install nsepy
```
The following code allows you to extract HDFC stock price for 10 years.
```
from nsepy import get_history
from datetime import date
dfc=get_history(symbol="HDFCBANK",start=date(2015,5,12),end=date(2020,5,18))
```
This is so far the easiest code I have found.
|
16,903,416
|
I've used:
```
data = DataReader("yhoo", "yahoo", datetime.datetime(2000, 1, 1),
datetime.datetime.today())
```
in pandas (python) to get history data of yahoo, but it cannot show today's price (the market has not yet closed) how can I resolve such problem, thanks in advance.
|
2013/06/03
|
[
"https://Stackoverflow.com/questions/16903416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/857130/"
] |
So from trying this out and looking at the dataframe, it doesn't look too possible. You tell it to go from a specific day until today, yet the dataframe stops at May 31st, 2013. This tells me that Yahoo probably has not made it available for you to use in the past couple of days, or somehow pandas is just not picking it up. It is not just missing 1 day, it is missing 3.
If I do the following:
```
>>> df = DataReader("yhoo", "yahoo", datetime.datetime(2013, 6, 1),datetime.datetime.today())
>>> len(df)
0
```
it shows me that there simply is no data to pick up in those days so far. If there is some way around this then I cannot figure it out, but it just seems that the data is not available for you yet, which is hard to believe.
|
The module from pandas doesn't work anymore, because Google and Yahoo no longer provide support. Instead, you can create a function to take the data directly from Google Finance using the URL. Here is part of the code to do this:
```
import csv
import datetime
import re
import codecs
import requests
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
You can write a function to get data from Google Finance using the URL; note that you have to keep the indentation in the part below.
```
#You have to indent this part
def get_google_finance_intraday(ticker, period=60, days=1, exchange='NASD'):
"""
Retrieve intraday stock data from Google Finance.
Parameters
----------------
ticker : str
Company ticker symbol.
period : int
Interval between stock values in seconds.
i = 60 corresponds to one minute tick data
i = 86400 corresponds to daily data
days : int
Number of days of data to retrieve.
exchange : str
Exchange from which the quotes should be fetched
Returns
---------------
df : pandas.DataFrame
DataFrame containing the opening price, high price, low price,
closing price, and volume. The index contains the times associated with
the retrieved price values.
"""
# build url
url = 'https://finance.google.com/finance/getprices?p={days}d&f=d,o,h,l,c,v&q={ticker}&i={period}&x={exchange}'.format(ticker=ticker, period=period, days=days, exchange=exchange)
page = requests.get(url)
reader = csv.reader(codecs.iterdecode(page.content.splitlines(), "utf-8"))
columns = ['Open', 'High', 'Low', 'Close', 'Volume']
rows = []
times = []
for row in reader:
if re.match('^[a\d]', row[0]):
if row[0].startswith('a'):
start = datetime.datetime.fromtimestamp(int(row[0][1:]))
times.append(start)
else:
times.append(start+datetime.timedelta(seconds=period*int(row[0])))
rows.append(map(float, row[1:]))
if len(rows):
return pd.DataFrame(rows, index=pd.DatetimeIndex(times, name='Date'), columns=columns)
else:
return pd.DataFrame(rows, index=pd.DatetimeIndex(times, name='Date'))
```
Now you can just call the function with the ticker that you want, in my case AAPL, and the result is a pandas DataFrame containing the opening price, high price, low price, closing price, and volume.
```
ticker = 'AAPL'
period = 60
days = 1
exchange = 'NASD'
df = get_google_finance_intraday(ticker, period=period, days=days)
df
```
|
16,903,416
|
I've used:
```
data = DataReader("yhoo", "yahoo", datetime.datetime(2000, 1, 1),
datetime.datetime.today())
```
in pandas (python) to get history data of yahoo, but it cannot show today's price (the market has not yet closed) how can I resolve such problem, thanks in advance.
|
2013/06/03
|
[
"https://Stackoverflow.com/questions/16903416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/857130/"
] |
```
import pandas
import pandas.io.data
import datetime
import urllib2
import csv
YAHOO_TODAY="http://download.finance.yahoo.com/d/quotes.csv?s=%s&f=sd1ohgl1vl1"
def get_quote_today(symbol):
response = urllib2.urlopen(YAHOO_TODAY % symbol)
reader = csv.reader(response, delimiter=",", quotechar='"')
for row in reader:
if row[0] == symbol:
return row
## main ##
symbol = "TSLA"
history = pandas.io.data.DataReader(symbol, "yahoo", start="2014/1/1")
print history.tail(2)
today = datetime.date.today()
df = pandas.DataFrame(index=pandas.DatetimeIndex(start=today, end=today, freq="D"),
columns=["Open", "High", "Low", "Close", "Volume", "Adj Close"],
dtype=float)
row = get_quote_today(symbol)
df.ix[0] = map(float, row[2:])
history = history.append(df)
print "today is %s" % today
print history.tail(2)
```
Just to complete perigee's answer: it took me quite some time to find a way to append the data.
```
Open High Low Close Volume Adj Close
Date
2014-02-04 180.7 181.60 176.20 178.73 4686300 178.73
2014-02-05 178.3 180.59 169.36 174.42 7268000 174.42
today is 2014-02-06
Open High Low Close Volume Adj Close
2014-02-05 178.30 180.59 169.36 174.420 7268000 174.420
2014-02-06 176.36 180.11 176.00 178.793 5199297 178.793
```
|
One way to work around this is to use urllib to fetch the data with:
```
http://download.finance.yahoo.com/d/quotes.csv?s=yhoo&f=sd1ohgl1l1v
```
and then add it to the dataframe.
|
16,903,416
|
I've used:
```
data = DataReader("yhoo", "yahoo", datetime.datetime(2000, 1, 1),
datetime.datetime.today())
```
in pandas (python) to get history data of yahoo, but it cannot show today's price (the market has not yet closed) how can I resolve such problem, thanks in advance.
|
2013/06/03
|
[
"https://Stackoverflow.com/questions/16903416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/857130/"
] |
One way to work around this is to use urllib to fetch the data with:
```
http://download.finance.yahoo.com/d/quotes.csv?s=yhoo&f=sd1ohgl1l1v
```
and then add it to the dataframe.
|
So from trying this out and looking at the dataframe, it doesn't look too possible. You tell it to go from a specific day until today, yet the dataframe stops at May 31st, 2013. This tells me that Yahoo probably has not made it available for you to use in the past couple of days, or somehow pandas is just not picking it up. It is not just missing 1 day, it is missing 3.
If I do the following:
```
>>> df = DataReader("yhoo", "yahoo", datetime.datetime(2013, 6, 1),datetime.datetime.today())
>>> len(df)
0
```
it shows me that there simply is no data to pick up in those days so far. If there is some way around this then I cannot figure it out, but it just seems that the data is not available for you yet, which is hard to believe.
|
16,903,416
|
I've used:
```
data = DataReader("yhoo", "yahoo", datetime.datetime(2000, 1, 1),
datetime.datetime.today())
```
in pandas (python) to get history data of yahoo, but it cannot show today's price (the market has not yet closed) how can I resolve such problem, thanks in advance.
|
2013/06/03
|
[
"https://Stackoverflow.com/questions/16903416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/857130/"
] |
This code uses the pandas read\_csv method to get the new quote from Yahoo, and it checks whether the new quote falls on the current date or on a new date, in order to either update the last record in the history or append a new record.
If you add a while(True) loop and a sleep around the new\_quote section, you can have the code refresh the quote during the day.
It also duplicates the last trade price to fill in both the Close and the Adjusted Close, given that the intraday close and adjusted close are always the same value.
```
import pandas as pd
import pandas.io.data as web
def get_quote_today(symbol):
url="http://download.finance.yahoo.com/d/quotes.csv?s=%s&f=d1t1ohgl1vl1"
new_quote= pd.read_csv(url%symbol,
names=[u'Date',u'time',u'Open', u'High', u'Low',
u'Close', u'Volume', u'Adj Close'])
# generate timestamp:
stamp = pd.to_datetime(new_quote.Date+" "+new_quote.time)
new_quote.index= stamp
return new_quote.iloc[:, 2:]
if __name__ == "__main__":
symbol = "TSLA"
history = web.DataReader(symbol, "yahoo", start="2014/1/1")
print history.tail()
new_quote = get_quote_today(symbol)
if new_quote.index > history.index[-1]:
if new_quote.index[-1].date() == history.index[-1].date():
# if both quotes are for the first date, update history's last record.
history.iloc[-1]= new_quote.iloc[-1]
else:
history=history.append(new_quote)
history.tail()
```
|
The module from pandas doesn't work anymore, because Google and Yahoo no longer provide support. Instead, you can create a function to take the data directly from Google Finance using the URL. Here is part of the code to do this:
```
import csv
import datetime
import re
import codecs
import requests
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
You can write a function to get data from Google Finance using the URL; note that you have to keep the indentation in the part below.
```
#You have to indent this part
def get_google_finance_intraday(ticker, period=60, days=1, exchange='NASD'):
"""
Retrieve intraday stock data from Google Finance.
Parameters
----------------
ticker : str
Company ticker symbol.
period : int
Interval between stock values in seconds.
i = 60 corresponds to one minute tick data
i = 86400 corresponds to daily data
days : int
Number of days of data to retrieve.
exchange : str
Exchange from which the quotes should be fetched
Returns
---------------
df : pandas.DataFrame
DataFrame containing the opening price, high price, low price,
closing price, and volume. The index contains the times associated with
the retrieved price values.
"""
# build url
url = 'https://finance.google.com/finance/getprices?p={days}d&f=d,o,h,l,c,v&q={ticker}&i={period}&x={exchange}'.format(ticker=ticker, period=period, days=days, exchange=exchange)
page = requests.get(url)
reader = csv.reader(codecs.iterdecode(page.content.splitlines(), "utf-8"))
columns = ['Open', 'High', 'Low', 'Close', 'Volume']
rows = []
times = []
for row in reader:
if re.match('^[a\d]', row[0]):
if row[0].startswith('a'):
start = datetime.datetime.fromtimestamp(int(row[0][1:]))
times.append(start)
else:
times.append(start+datetime.timedelta(seconds=period*int(row[0])))
rows.append(map(float, row[1:]))
if len(rows):
return pd.DataFrame(rows, index=pd.DatetimeIndex(times, name='Date'), columns=columns)
else:
return pd.DataFrame(rows, index=pd.DatetimeIndex(times, name='Date'))
```
Now you can just call the function with the ticker that you want, in my case AAPL, and the result is a pandas DataFrame containing the opening price, high price, low price, closing price, and volume.
```
ticker = 'AAPL'
period = 60
days = 1
exchange = 'NASD'
df = get_google_finance_intraday(ticker, period=period, days=days)
df
```
|
16,903,416
|
I've used:
```
data = DataReader("yhoo", "yahoo", datetime.datetime(2000, 1, 1),
datetime.datetime.today())
```
in pandas (python) to get history data of yahoo, but it cannot show today's price (the market has not yet closed) how can I resolve such problem, thanks in advance.
|
2013/06/03
|
[
"https://Stackoverflow.com/questions/16903416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/857130/"
] |
This code uses the pandas read\_csv method to get the new quote from Yahoo, and it checks whether the new quote falls on the current date or on a new date, in order to either update the last record in the history or append a new record.
If you add a while(True) loop and a sleep around the new\_quote section, you can have the code refresh the quote during the day.
It also duplicates the last trade price to fill in both the Close and the Adjusted Close, given that the intraday close and adjusted close are always the same value.
```
import pandas as pd
import pandas.io.data as web
def get_quote_today(symbol):
url="http://download.finance.yahoo.com/d/quotes.csv?s=%s&f=d1t1ohgl1vl1"
new_quote= pd.read_csv(url%symbol,
names=[u'Date',u'time',u'Open', u'High', u'Low',
u'Close', u'Volume', u'Adj Close'])
# generate timestamp:
stamp = pd.to_datetime(new_quote.Date+" "+new_quote.time)
new_quote.index= stamp
return new_quote.iloc[:, 2:]
if __name__ == "__main__":
symbol = "TSLA"
history = web.DataReader(symbol, "yahoo", start="2014/1/1")
print history.tail()
new_quote = get_quote_today(symbol)
if new_quote.index > history.index[-1]:
if new_quote.index[-1].date() == history.index[-1].date():
# if both quotes are for the first date, update history's last record.
history.iloc[-1]= new_quote.iloc[-1]
else:
history=history.append(new_quote)
history.tail()
```
|
So from trying this out and looking at the dataframe, it doesn't look too possible. You tell it to go from a specific day until today, yet the dataframe stops at May 31st, 2013. This tells me that Yahoo probably has not made it available for you to use in the past couple of days, or somehow pandas is just not picking it up. It is not just missing 1 day, it is missing 3.
If I do the following:
```
>>> df = DataReader("yhoo", "yahoo", datetime.datetime(2013, 6, 1),datetime.datetime.today())
>>> len(df)
0
```
it shows me that there simply is no data to pick up in those days so far. If there is some way around this then I cannot figure it out, but it just seems that the data is not available for you yet, which is hard to believe.
|
16,903,416
|
I've used:
```
data = DataReader("yhoo", "yahoo", datetime.datetime(2000, 1, 1),
datetime.datetime.today())
```
in pandas (python) to get history data of yahoo, but it cannot show today's price (the market has not yet closed) how can I resolve such problem, thanks in advance.
|
2013/06/03
|
[
"https://Stackoverflow.com/questions/16903416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/857130/"
] |
```
import pandas
import pandas.io.data
import datetime
import urllib2
import csv
YAHOO_TODAY="http://download.finance.yahoo.com/d/quotes.csv?s=%s&f=sd1ohgl1vl1"
def get_quote_today(symbol):
response = urllib2.urlopen(YAHOO_TODAY % symbol)
reader = csv.reader(response, delimiter=",", quotechar='"')
for row in reader:
if row[0] == symbol:
return row
## main ##
symbol = "TSLA"
history = pandas.io.data.DataReader(symbol, "yahoo", start="2014/1/1")
print history.tail(2)
today = datetime.date.today()
df = pandas.DataFrame(index=pandas.DatetimeIndex(start=today, end=today, freq="D"),
columns=["Open", "High", "Low", "Close", "Volume", "Adj Close"],
dtype=float)
row = get_quote_today(symbol)
df.ix[0] = map(float, row[2:])
history = history.append(df)
print "today is %s" % today
print history.tail(2)
```
Just to complete perigee's answer: it took me quite some time to find a way to append the data.
```
Open High Low Close Volume Adj Close
Date
2014-02-04 180.7 181.60 176.20 178.73 4686300 178.73
2014-02-05 178.3 180.59 169.36 174.42 7268000 174.42
today is 2014-02-06
Open High Low Close Volume Adj Close
2014-02-05 178.30 180.59 169.36 174.420 7268000 174.420
2014-02-06 176.36 180.11 176.00 178.793 5199297 178.793
```
|
The module from pandas doesn't work anymore, because Google and Yahoo no longer provide support. Instead, you can create a function to take the data directly from Google Finance using the URL. Here is part of the code to do this:
```
import csv
import datetime
import re
import codecs
import requests
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
You can write a function to get data from Google Finance using the URL; note that you have to keep the indentation in the part below.
```
#You have to indent this part
def get_google_finance_intraday(ticker, period=60, days=1, exchange='NASD'):
    """
    Retrieve intraday stock data from Google Finance.

    Parameters
    ----------------
    ticker : str
        Company ticker symbol.
    period : int
        Interval between stock values in seconds.
        i = 60 corresponds to one minute tick data
        i = 86400 corresponds to daily data
    days : int
        Number of days of data to retrieve.
    exchange : str
        Exchange from which the quotes should be fetched

    Returns
    ---------------
    df : pandas.DataFrame
        DataFrame containing the opening price, high price, low price,
        closing price, and volume. The index contains the times associated with
        the retrieved price values.
    """
    # build url
    url = 'https://finance.google.com/finance/getprices?p={days}d&f=d,o,h,l,c,v&q={ticker}&i={period}&x={exchange}'.format(ticker=ticker, period=period, days=days, exchange=exchange)

    page = requests.get(url)
    reader = csv.reader(codecs.iterdecode(page.content.splitlines(), "utf-8"))
    columns = ['Open', 'High', 'Low', 'Close', 'Volume']
    rows = []
    times = []
    for row in reader:
        if re.match(r'^[a\d]', row[0]):  # raw string so \d is not treated as an invalid escape
            if row[0].startswith('a'):
                start = datetime.datetime.fromtimestamp(int(row[0][1:]))
                times.append(start)
            else:
                times.append(start + datetime.timedelta(seconds=period * int(row[0])))
            rows.append(list(map(float, row[1:])))  # list() so each row is a concrete list of floats on Python 3
    if len(rows):
        return pd.DataFrame(rows, index=pd.DatetimeIndex(times, name='Date'), columns=columns)
    else:
        return pd.DataFrame(rows, index=pd.DatetimeIndex(times, name='Date'))
```
Now you can just call the function with the ticker that you want (in my case AAPL), and the result is a pandas DataFrame containing the opening price, high price, low price, closing price, and volume.
```
ticker = 'AAPL'
period = 60
days = 1
exchange = 'NASD'
df = get_google_finance_intraday(ticker, period=period, days=days)
df
```
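As a possible follow-up (a sketch of my own, assuming the function above is available in the same session and the endpoint still responds), the one-minute bars can be aggregated with standard pandas resampling:
```
# Fetch one day of one-minute AAPL data with the function defined above.
df = get_google_finance_intraday('AAPL', period=60, days=1)

# '5T' is pandas' five-minute offset alias; ohlc() turns each bucket of closes into open/high/low/close.
bars_5min = df['Close'].resample('5T').ohlc()
print(bars_5min.head())
```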
|
16,903,416
|
I've used:
```
data = DataReader("yhoo", "yahoo", datetime.datetime(2000, 1, 1),
datetime.datetime.today())
```
in pandas (Python) to get historical data for Yahoo, but it cannot show today's price (the market has not yet closed). How can I resolve this problem? Thanks in advance.
|
2013/06/03
|
[
"https://Stackoverflow.com/questions/16903416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/857130/"
] |
```
import pandas
import pandas.io.data
import datetime
import urllib2
import csv
YAHOO_TODAY="http://download.finance.yahoo.com/d/quotes.csv?s=%s&f=sd1ohgl1vl1"
def get_quote_today(symbol):
    response = urllib2.urlopen(YAHOO_TODAY % symbol)
    reader = csv.reader(response, delimiter=",", quotechar='"')
    for row in reader:
        if row[0] == symbol:
            return row
## main ##
symbol = "TSLA"
history = pandas.io.data.DataReader(symbol, "yahoo", start="2014/1/1")
print history.tail(2)
today = datetime.date.today()
df = pandas.DataFrame(index=pandas.DatetimeIndex(start=today, end=today, freq="D"),
                      columns=["Open", "High", "Low", "Close", "Volume", "Adj Close"],
                      dtype=float)
row = get_quote_today(symbol)
df.ix[0] = map(float, row[2:])
history = history.append(df)
print "today is %s" % today
print history.tail(2)
```
Just to complete perigee's answer: it took me quite some time to find a way to append the data.
```
Open High Low Close Volume Adj Close
Date
2014-02-04 180.7 181.60 176.20 178.73 4686300 178.73
2014-02-05 178.3 180.59 169.36 174.42 7268000 174.42
today is 2014-02-06
Open High Low Close Volume Adj Close
2014-02-05 178.30 180.59 169.36 174.420 7268000 174.420
2014-02-06 176.36 180.11 176.00 178.793 5199297 178.793
```
|
So from trying this out and looking at the dataframe, it doesn't look possible. You tell it to go from a specific day until today, yet the dataframe stops at May 31st, 2013. This tells me that Yahoo probably has not made the data available for you to use in the past couple of days, or somehow pandas is just not picking it up. It is not just missing 1 day, it is missing 3.
If I do the following:
```
>>> df = DataReader("yhoo", "yahoo", datetime.datetime(2013, 6, 1),datetime.datetime.today())
>>> len(df)
0
```
it shows me that there is simply no data to pick up for those days so far. If there is some way around this, I cannot figure it out; it just seems that the data is not available for you yet, which is hard to believe.
|
16,903,416
|
I've used:
```
data = DataReader("yhoo", "yahoo", datetime.datetime(2000, 1, 1),
datetime.datetime.today())
```
in pandas (Python) to get historical data for Yahoo, but it cannot show today's price (the market has not yet closed). How can I resolve this problem? Thanks in advance.
|
2013/06/03
|
[
"https://Stackoverflow.com/questions/16903416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/857130/"
] |
So from trying this out and looking at the dataframe, it doesn't look possible. You tell it to go from a specific day until today, yet the dataframe stops at May 31st, 2013. This tells me that Yahoo probably has not made the data available for you to use in the past couple of days, or somehow pandas is just not picking it up. It is not just missing 1 day, it is missing 3.
If I do the following:
```
>>> df = DataReader("yhoo", "yahoo", datetime.datetime(2013, 6, 1),datetime.datetime.today())
>>> len(df)
0
```
it shows me that there is simply no data to pick up for those days so far. If there is some way around this, I cannot figure it out; it just seems that the data is not available for you yet, which is hard to believe.
|
The simplest way to extract Indian stock price data into Python is to use the nsepy library.
In case you do not have the nsepy library installed, do the following:
```
pip install nsepy
```
The following code allows you to extract the HDFC Bank stock price from May 2015 to May 2020.
```
from nsepy import get_history
from datetime import date
dfc = get_history(symbol="HDFCBANK", start=date(2015, 5, 12), end=date(2020, 5, 18))
```
This is so far the easiest code I have found.
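If it helps, here is a minimal sketch (my own addition, not from the nsepy docs) of inspecting the DataFrame it returns; the `Close` column name is assumed from nsepy's usual output:
```
from datetime import date

from nsepy import get_history

# Pull the same HDFC Bank history as above and look at it.
dfc = get_history(symbol="HDFCBANK", start=date(2015, 5, 12), end=date(2020, 5, 18))
print(dfc.tail())           # the most recent rows retrieved
print(dfc["Close"].mean())  # average closing price over the period (column name assumed)
```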
|
2,040,616
|
When I run my python script I get the following warning
```
DeprecationWarning: the sets module is deprecated
```
How do I fix this?
|
2010/01/11
|
[
"https://Stackoverflow.com/questions/2040616",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247873/"
] |
Stop using the `sets` module, or switch to an older version of python where it's not deprecated.
According to [pep-004](http://www.python.org/dev/peps/pep-0004/), `sets` is deprecated as of v2.6, replaced by the built-in [`set` and `frozenset` types](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset).
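A rough before/after sketch of the change (my own illustration, not from the PEP):
```
# Before (deprecated since Python 2.6):
#   from sets import Set
#   s = Set([1, 2, 3])

# After: the built-in types need no import.
s = set([1, 2, 3])         # mutable set
f = frozenset([1, 2, 3])   # immutable, hashable set
print(s | {4}, f)          # set operations work the same way
```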
|
You don't need to import the `sets` module to use them, they're in the builtin namespace.
|
2,040,616
|
When I run my python script I get the following warning
```
DeprecationWarning: the sets module is deprecated
```
How do I fix this?
|
2010/01/11
|
[
"https://Stackoverflow.com/questions/2040616",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247873/"
] |
History:
Before Python 2.3: no set functionality
Python 2.3: `sets` module arrived
Python 2.4: `set` and `frozenset` built-ins introduced
Python 2.6: `sets` module deprecated
You should change your code to use `set` instead of `sets.Set`.
If you still wish to be able to support using Python 2.3, you can do this at the start of your script:
```
try:
    set
except NameError:
    from sets import Set as set
```
|
You don't need to import the `sets` module to use them, they're in the builtin namespace.
|
2,040,616
|
When I run my python script I get the following warning
```
DeprecationWarning: the sets module is deprecated
```
How do I fix this?
|
2010/01/11
|
[
"https://Stackoverflow.com/questions/2040616",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247873/"
] |
If you want to fix it James definitely has the right answer, but in case you want to just turn off deprecation warnings, you can run python like so:
```
$ python -Wignore::DeprecationWarning
Python 2.6.2 (r262:71600, Sep 20 2009, 20:47:22)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sets
>>>
```
(From: <http://puzzling.org/logs/thoughts/2009/May/3/python26-deprecation-warning>)
You can also ignore it programmatically:
```
import warnings
warnings.simplefilter("ignore", DeprecationWarning)
```
|
You don't need to import the `sets` module to use them, they're in the builtin namespace.
|
2,040,616
|
When I run my python script I get the following warning
```
DeprecationWarning: the sets module is deprecated
```
How do I fix this?
|
2010/01/11
|
[
"https://Stackoverflow.com/questions/2040616",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247873/"
] |
Stop using the `sets` module, or switch to an older version of python where it's not deprecated.
According to [pep-004](http://www.python.org/dev/peps/pep-0004/), `sets` is deprecated as of v2.6, replaced by the built-in [`set` and `frozenset` types](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset).
|
Use the built-in `set` instead of importing and using `sets` module.
From [documentation](http://docs.python.org/whatsnew/2.6.html):
>
> The sets module has been deprecated;
> it's better to use the built-in set
> and frozenset types.
>
>
>
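A tiny illustration of the two built-in types the documentation points to (my own sketch, not part of the quoted docs):
```
# set is mutable; frozenset is immutable and hashable, so it can be a dict key.
mutable = set([1, 2, 3])
mutable.add(4)

immutable = frozenset([1, 2, 3])
lookup = {immutable: "usable as a dictionary key"}
print(mutable, lookup[immutable])
```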
|
2,040,616
|
When I run my python script I get the following warning
```
DeprecationWarning: the sets module is deprecated
```
How do I fix this?
|
2010/01/11
|
[
"https://Stackoverflow.com/questions/2040616",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247873/"
] |
Stop using the `sets` module, or switch to an older version of python where it's not deprecated.
According to [pep-004](http://www.python.org/dev/peps/pep-0004/), `sets` is deprecated as of v2.6, replaced by the built-in [`set` and `frozenset` types](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset).
|
History:
Before Python 2.3: no set functionality
Python 2.3: `sets` module arrived
Python 2.4: `set` and `frozenset` built-ins introduced
Python 2.6: `sets` module deprecated
You should change your code to use `set` instead of `sets.Set`.
If you still wish to be able to support using Python 2.3, you can do this at the start of your script:
```
try:
    set
except NameError:
    from sets import Set as set
```
|
2,040,616
|
When I run my python script I get the following warning
```
DeprecationWarning: the sets module is deprecated
```
How do I fix this?
|
2010/01/11
|
[
"https://Stackoverflow.com/questions/2040616",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247873/"
] |
Stop using the `sets` module, or switch to an older version of python where it's not deprecated.
According to [pep-004](http://www.python.org/dev/peps/pep-0004/), `sets` is deprecated as of v2.6, replaced by the built-in [`set` and `frozenset` types](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset).
|
If you want to fix it James definitely has the right answer, but in case you want to just turn off deprecation warnings, you can run python like so:
```
$ python -Wignore::DeprecationWarning
Python 2.6.2 (r262:71600, Sep 20 2009, 20:47:22)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sets
>>>
```
(From: <http://puzzling.org/logs/thoughts/2009/May/3/python26-deprecation-warning>)
You can also ignore it programmatically:
```
import warnings
warnings.simplefilter("ignore", DeprecationWarning)
```
|
2,040,616
|
When I run my python script I get the following warning
```
DeprecationWarning: the sets module is deprecated
```
How do I fix this?
|
2010/01/11
|
[
"https://Stackoverflow.com/questions/2040616",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247873/"
] |
History:
Before Python 2.3: no set functionality
Python 2.3: `sets` module arrived
Python 2.4: `set` and `frozenset` built-ins introduced
Python 2.6: `sets` module deprecated
You should change your code to use `set` instead of `sets.Set`.
If you still wish to be able to support using Python 2.3, you can do this at the start of your script:
```
try:
    set
except NameError:
    from sets import Set as set
```
|
Use the built-in `set` instead of importing and using `sets` module.
From [documentation](http://docs.python.org/whatsnew/2.6.html):
>
> The sets module has been deprecated;
> it's better to use the built-in set
> and frozenset types.
>
>
>
|