| qid (int64, 46k–74.7M) | question (stringlengths 54–37.8k) | date (stringlengths 10–10) | metadata (listlengths 3–3) | response_j (stringlengths 17–26k) | response_k (stringlengths 26–26k) |
|---|---|---|---|---|---|
39,845,636
|
I have the latest version of pip, 8.1.1, on my Ubuntu 16 machine, but I am not able to install any modules via pip; I get this error every time:
```
  File "/usr/local/bin/pip", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
    @_call_aside
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==7.1.0' distribution was not found and is required by the application
```
I found a similar [question](https://stackoverflow.com/questions/38587785/pkg-resources-distributionnotfound-the-pip-1-5-4-distribution-was-not-found), but it was not helpful.
|
2016/10/04
|
[
"https://Stackoverflow.com/questions/39845636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5474316/"
] |
I repaired mine with this command:

> easy_install pip
|
This is an issue with Homebrew that occurs when upgrading pip. Working off the answer by @amangpt777, the following worked for me.
You can still access pip via `python -m pip`, so you can get your pip version by running:
```
python -m pip --version
```
Copy the version and update the following files with the new version:
```
/usr/local/opt/python@3.x/bin/pip3
/usr/local/opt/python@3.x/bin/pip3.x
```
They should look something like this:
```
#!/usr/local/opt/python@3.7/bin/python3.7
# EASY-INSTALL-ENTRY-SCRIPT: 'pip==21.3.1','console_scripts','pip3.7'
__requires__ = 'pip==21.3.1'
import re
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(
        load_entry_point('pip==21.3.1', 'console_scripts', 'pip3.7')()
    )
```
And you should be able to use pip normally again!
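As a small aside, the version string can be pulled out of that command's output programmatically; a minimal sketch (the sample output line below is illustrative):

```python
import re

# Example output of `python -m pip --version`
output = "pip 21.3.1 from /usr/local/lib/python3.7/site-packages/pip (python 3.7)"

# The version is the token right after "pip"
version = re.search(r"pip (\S+)", output).group(1)
print(version)  # 21.3.1
```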
|
39,845,636
|
I have the latest version of pip, 8.1.1, on my Ubuntu 16 machine, but I am not able to install any modules via pip; I get this error every time:
```
  File "/usr/local/bin/pip", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
    @_call_aside
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==7.1.0' distribution was not found and is required by the application
```
I found a similar [question](https://stackoverflow.com/questions/38587785/pkg-resources-distributionnotfound-the-pip-1-5-4-distribution-was-not-found), but it was not helpful.
|
2016/10/04
|
[
"https://Stackoverflow.com/questions/39845636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5474316/"
] |
I repaired mine with this command:

> easy_install pip
|
I did not manage to get it to work by using `easy_install pip` or by updating the pip entry script `/usr/local/bin/pip`.
Instead, I removed pip and installed the distribution required by that file:

> Uninstalling pip:

`$ sudo apt purge python-pip` or `$ sudo yum remove python-pip`

> Reinstalling the required distribution of pip (change the version accordingly):

`$ sudo easy_install pip==9.0.3`
|
39,845,636
|
I have the latest version of pip, 8.1.1, on my Ubuntu 16 machine, but I am not able to install any modules via pip; I get this error every time:
```
  File "/usr/local/bin/pip", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
    @_call_aside
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==7.1.0' distribution was not found and is required by the application
```
I found a similar [question](https://stackoverflow.com/questions/38587785/pkg-resources-distributionnotfound-the-pip-1-5-4-distribution-was-not-found), but it was not helpful.
|
2016/10/04
|
[
"https://Stackoverflow.com/questions/39845636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5474316/"
] |
After upgrading from 18.0 to 18.1, I got the same error. Reinstalling pip (without using pip itself) worked for me:
```
$ curl https://bootstrap.pypa.io/get-pip.py > get-pip.py
$ sudo python get-pip.py
```
|
On macOS this can be fixed with Homebrew:
```
brew reinstall python
```
|
39,845,636
|
I have the latest version of pip, 8.1.1, on my Ubuntu 16 machine, but I am not able to install any modules via pip; I get this error every time:
```
  File "/usr/local/bin/pip", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
    @_call_aside
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==7.1.0' distribution was not found and is required by the application
```
I found a similar [question](https://stackoverflow.com/questions/38587785/pkg-resources-distributionnotfound-the-pip-1-5-4-distribution-was-not-found), but it was not helpful.
|
2016/10/04
|
[
"https://Stackoverflow.com/questions/39845636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5474316/"
] |
Delete all of the pip/pip3 files under `~/.local`, including the packages:
```
sudo apt-get purge python-pip python3-pip
```
Now remove any remaining pip3 files from /usr/local:
```
sudo rm -rf /usr/local/bin/pip3
```
You can check which pip versions are installed; otherwise, execute the one below to remove them all (no worries):
```
sudo rm -rf /usr/local/bin/pip3.*
```
Then reinstall pip and/or pip3, along with any needed Python packages:
```
sudo apt-get install python-pip python3-pip
```
|
Just relink pip to resolve it. Find which Python is in use: `ls -l /usr/local/bin/python`
```
ln -sf /usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/bin/pip /usr/local/bin/pip
```
Or reinstall pip: <https://pip.pypa.io/en/stable/installing/>
|
39,845,636
|
I have the latest version of pip, 8.1.1, on my Ubuntu 16 machine, but I am not able to install any modules via pip; I get this error every time:
```
  File "/usr/local/bin/pip", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
    @_call_aside
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==7.1.0' distribution was not found and is required by the application
```
I found a similar [question](https://stackoverflow.com/questions/38587785/pkg-resources-distributionnotfound-the-pip-1-5-4-distribution-was-not-found), but it was not helpful.
|
2016/10/04
|
[
"https://Stackoverflow.com/questions/39845636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5474316/"
] |
I had this issue for a very long time until I recently found that my pip launcher (/usr/local/bin/pip) was trying to load the wrong version of pip. I believe you also have 8.1.1 correctly installed on your machine, so give the following a try.
1. Open your /usr/local/bin/pip file. For me it looks like:
```
__requires__ = 'pip==9.0.1'
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.exit(
        load_entry_point('pip==9.0.1', 'console_scripts', 'pip')()
    )
```
2. Change 'pip==9.0.1' in the first and last lines to whichever version you have installed on your system; in your case, change 7.1.0 to 8.1.1.
Basically, /usr/local/bin/pip is an entry script that loads the required pip version. Somehow, when I upgrade or change my pip installation, this file does not get updated, so I update it manually every time.
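The manual edit described above can also be done with a one-line substitution. A sketch operating on a sample of the entry script's text (the versions here are illustrative; a real script would read and rewrite the file on disk):

```python
import re

# Text of a hypothetical /usr/local/bin/pip entry script pinned to an old version
script = """__requires__ = 'pip==7.1.0'
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.exit(
        load_entry_point('pip==7.1.0', 'console_scripts', 'pip')()
    )
"""

installed = '8.1.1'  # the version actually installed, per `python -m pip --version`

# Rewrite every 'pip==X.Y.Z' pin to the installed version
fixed = re.sub(r"pip==[\d.]+", "pip==" + installed, script)
print(fixed)
```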
|
Delete all of the pip/pip3 files under `~/.local`, including the packages:
```
sudo apt-get purge python-pip python3-pip
```
Now remove any remaining pip3 files from /usr/local:
```
sudo rm -rf /usr/local/bin/pip3
```
You can check which pip versions are installed; otherwise, execute the one below to remove them all (no worries):
```
sudo rm -rf /usr/local/bin/pip3.*
```
Then reinstall pip and/or pip3, along with any needed Python packages:
```
sudo apt-get install python-pip python3-pip
```
|
39,845,636
|
I have the latest version of pip, 8.1.1, on my Ubuntu 16 machine, but I am not able to install any modules via pip; I get this error every time:
```
  File "/usr/local/bin/pip", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
    @_call_aside
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==7.1.0' distribution was not found and is required by the application
```
I found a similar [question](https://stackoverflow.com/questions/38587785/pkg-resources-distributionnotfound-the-pip-1-5-4-distribution-was-not-found), but it was not helpful.
|
2016/10/04
|
[
"https://Stackoverflow.com/questions/39845636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5474316/"
] |
I repaired mine with this command:

> easy_install pip
|
After a large upgrade to Mint 19, my system was a bit off and I came across this problem too.
I checked the answer from @amangpt777, which may be the one to try:
```
/usr/local/bin/pip  # -> actually had a shebang calling python3
~/.local/bin/pip*   # files were duplicated with the "sudo installed" /usr/local/bin/pip*
```
Running
```
sudo python get-pip.py  # with the script https://bootstrap.pypa.io/get-pip.py
sudo -H pip install setuptools
```
seemed to solve the problem. I understand this as a conflict between the root and user installations of Python. I am not sure whether Anaconda3 is also messing around with those scripts.
|
39,845,636
|
I have the latest version of pip, 8.1.1, on my Ubuntu 16 machine, but I am not able to install any modules via pip; I get this error every time:
```
  File "/usr/local/bin/pip", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
    @_call_aside
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==7.1.0' distribution was not found and is required by the application
```
I found a similar [question](https://stackoverflow.com/questions/38587785/pkg-resources-distributionnotfound-the-pip-1-5-4-distribution-was-not-found), but it was not helpful.
|
2016/10/04
|
[
"https://Stackoverflow.com/questions/39845636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5474316/"
] |
If you have two versions of pip, for example `/usr/lib/pip` and `/usr/local/lib/pip` belonging to Python 2.6 and 2.7, you can delete `/usr/lib/pip` and make a link pip => /usr/local/lib/pip.
|
After a large upgrade to Mint 19, my system was a bit off and I came across this problem too.
I checked the answer from @amangpt777, which may be the one to try:
```
/usr/local/bin/pip  # -> actually had a shebang calling python3
~/.local/bin/pip*   # files were duplicated with the "sudo installed" /usr/local/bin/pip*
```
Running
```
sudo python get-pip.py  # with the script https://bootstrap.pypa.io/get-pip.py
sudo -H pip install setuptools
```
seemed to solve the problem. I understand this as a conflict between the root and user installations of Python. I am not sure whether Anaconda3 is also messing around with those scripts.
|
39,845,636
|
I have the latest version of pip, 8.1.1, on my Ubuntu 16 machine, but I am not able to install any modules via pip; I get this error every time:
```
  File "/usr/local/bin/pip", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
    @_call_aside
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==7.1.0' distribution was not found and is required by the application
```
I found a similar [question](https://stackoverflow.com/questions/38587785/pkg-resources-distributionnotfound-the-pip-1-5-4-distribution-was-not-found), but it was not helpful.
|
2016/10/04
|
[
"https://Stackoverflow.com/questions/39845636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5474316/"
] |
I had this issue for a very long time until I recently found that my pip launcher (/usr/local/bin/pip) was trying to load the wrong version of pip. I believe you also have 8.1.1 correctly installed on your machine, so give the following a try.
1. Open your /usr/local/bin/pip file. For me it looks like:
```
__requires__ = 'pip==9.0.1'
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.exit(
        load_entry_point('pip==9.0.1', 'console_scripts', 'pip')()
    )
```
2. Change 'pip==9.0.1' in the first and last lines to whichever version you have installed on your system; in your case, change 7.1.0 to 8.1.1.
Basically, /usr/local/bin/pip is an entry script that loads the required pip version. Somehow, when I upgrade or change my pip installation, this file does not get updated, so I update it manually every time.
|
This is an issue with Homebrew that occurs when upgrading pip. Working off the answer by @amangpt777, the following worked for me.
You can still access pip via `python -m pip`, so you can get your pip version by running:
```
python -m pip --version
```
Copy the version and update the following files with the new version:
```
/usr/local/opt/python@3.x/bin/pip3
/usr/local/opt/python@3.x/bin/pip3.x
```
They should look something like this:
```
#!/usr/local/opt/python@3.7/bin/python3.7
# EASY-INSTALL-ENTRY-SCRIPT: 'pip==21.3.1','console_scripts','pip3.7'
__requires__ = 'pip==21.3.1'
import re
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(
        load_entry_point('pip==21.3.1', 'console_scripts', 'pip3.7')()
    )
```
And you should be able to use pip normally again!
|
63,835,086
|
If I have a dataframe with the following layout:
```
ID# Response
1234 Covid-19 was a disaster for my business
3456 The way you handled this pandemic was awesome
```
I want to be able to count the frequency of specific words from a list:
```
list=['covid','COVID','Covid-19','pandemic','coronavirus']
```
In the end, I want to generate a dictionary like the following:
```
{'covid': 0, 'COVID': 0, 'Covid-19': 1, 'pandemic': 1, 'coronavirus': 0}
```
Please help; I am really stuck on how to code this in Python.
|
2020/09/10
|
[
"https://Stackoverflow.com/questions/63835086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13000877/"
] |
For each string, find the number of matches:
```
dict((s, df['Response'].str.count(s).fillna(0).sum()) for s in list_of_strings)
```
Note that `Series.str.count` takes a regex input. You may want to append `(?=\b)` for a positive word-boundary look-ahead.
`Series.str.count` returns `NA` when counting `NA`, so fill with 0, then sum over the column.
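As a quick check of that approach on the question's sample data (a sketch; `words` stands in for the question's list):

```python
import pandas as pd

df = pd.DataFrame({'Response': [
    'Covid-19 was a disaster for my business',
    'The way you handled this pandemic was awesome',
]})
words = ['covid', 'COVID', 'Covid-19', 'pandemic', 'coronavirus']

# str.count treats each search string as a regex and matches case-sensitively
counts = {w: int(df['Response'].str.count(w).sum()) for w in words}
print(counts)  # {'covid': 0, 'COVID': 0, 'Covid-19': 1, 'pandemic': 1, 'coronavirus': 0}
```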
|
```
import pandas as pd

df = pd.DataFrame({'sheet': ['sheet1', 'sheet2', 'sheet3', 'sheet2'],
                   'tokenized_text': [['efcc', 'fficial', 'billiontwits', 'since', 'covid', 'landed'],
                                      ['when', 'people', 'say', 'the', 'fatality', 'rate', 'of', 'coronavirus', 'is'],
                                      ['in', 'the', 'coronavirus-induced', 'crisis', 'people', 'are', 'cyvbwx'],
                                      ['in', 'the', 'be-induced', 'crisis', 'people', 'are', 'cyvbwx']]})
print(df)

words_collection = ['covid', 'COVID', 'Covid-19', 'pandemic', 'coronavirus']

# Extract the words from all rows
all_words = []
for index, row in df.iterrows():
    all_words.extend(row['tokenized_text'])

# Map each word from `words_collection` to the number of times it appears
word_to_number_of_occurences = dict()
for word in words_collection:
    word_to_number_of_occurences[word] = all_words.count(word)

# {'covid': 1, 'COVID': 0, 'Covid-19': 0, 'pandemic': 0, 'coronavirus': 1}
print(word_to_number_of_occurences)
```
|
63,835,086
|
If I have a dataframe with the following layout:
```
ID# Response
1234 Covid-19 was a disaster for my business
3456 The way you handled this pandemic was awesome
```
I want to be able to count the frequency of specific words from a list:
```
list=['covid','COVID','Covid-19','pandemic','coronavirus']
```
In the end, I want to generate a dictionary like the following:
```
{'covid': 0, 'COVID': 0, 'Covid-19': 1, 'pandemic': 1, 'coronavirus': 0}
```
Please help; I am really stuck on how to code this in Python.
|
2020/09/10
|
[
"https://Stackoverflow.com/questions/63835086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13000877/"
] |
```
import pandas as pd

df = pd.DataFrame({'sheet': ['sheet1', 'sheet2', 'sheet3', 'sheet2'],
                   'tokenized_text': [['efcc', 'fficial', 'billiontwits', 'since', 'covid', 'landed'],
                                      ['when', 'people', 'say', 'the', 'fatality', 'rate', 'of', 'coronavirus', 'is'],
                                      ['in', 'the', 'coronavirus-induced', 'crisis', 'people', 'are', 'cyvbwx'],
                                      ['in', 'the', 'be-induced', 'crisis', 'people', 'are', 'cyvbwx']]})
print(df)

words_collection = ['covid', 'COVID', 'Covid-19', 'pandemic', 'coronavirus']

# Extract the words from all rows
all_words = []
for index, row in df.iterrows():
    all_words.extend(row['tokenized_text'])

# Map each word from `words_collection` to the number of times it appears
word_to_number_of_occurences = dict()
for word in words_collection:
    word_to_number_of_occurences[word] = all_words.count(word)

# {'covid': 1, 'COVID': 0, 'Covid-19': 0, 'pandemic': 0, 'coronavirus': 1}
print(word_to_number_of_occurences)
```
|
Try with `np.hstack` and `Counter` (here `lst` is your list of words):
```
import numpy as np
from collections import Counter

lst = ['covid', 'COVID', 'Covid-19', 'pandemic', 'coronavirus']
a = np.hstack(df['Response'].str.split())
dct = {**dict.fromkeys(lst, 0), **Counter(a[np.isin(a, lst)])}
```
---
```
{'covid': 0, 'COVID': 0, 'Covid-19': 1, 'pandemic': 1, 'coronavirus': 0}
```
|
63,835,086
|
If I have a dataframe with the following layout:
```
ID# Response
1234 Covid-19 was a disaster for my business
3456 The way you handled this pandemic was awesome
```
I want to be able to count the frequency of specific words from a list:
```
list=['covid','COVID','Covid-19','pandemic','coronavirus']
```
In the end, I want to generate a dictionary like the following:
```
{'covid': 0, 'COVID': 0, 'Covid-19': 1, 'pandemic': 1, 'coronavirus': 0}
```
Please help; I am really stuck on how to code this in Python.
|
2020/09/10
|
[
"https://Stackoverflow.com/questions/63835086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13000877/"
] |
```
import pandas as pd

df = pd.DataFrame({'sheet': ['sheet1', 'sheet2', 'sheet3', 'sheet2'],
                   'tokenized_text': [['efcc', 'fficial', 'billiontwits', 'since', 'covid', 'landed'],
                                      ['when', 'people', 'say', 'the', 'fatality', 'rate', 'of', 'coronavirus', 'is'],
                                      ['in', 'the', 'coronavirus-induced', 'crisis', 'people', 'are', 'cyvbwx'],
                                      ['in', 'the', 'be-induced', 'crisis', 'people', 'are', 'cyvbwx']]})
print(df)

words_collection = ['covid', 'COVID', 'Covid-19', 'pandemic', 'coronavirus']

# Extract the words from all rows
all_words = []
for index, row in df.iterrows():
    all_words.extend(row['tokenized_text'])

# Map each word from `words_collection` to the number of times it appears
word_to_number_of_occurences = dict()
for word in words_collection:
    word_to_number_of_occurences[word] = all_words.count(word)

# {'covid': 1, 'COVID': 0, 'Covid-19': 0, 'pandemic': 0, 'coronavirus': 1}
print(word_to_number_of_occurences)
```
|
You could do it very simply with a dict comprehension:
```
{x:df.Response.str.count(x).sum() for x in list}
```
Output:
```
{'covid': 0, 'COVID': 0, 'Covid-19': 1, 'pandemic': 1, 'coronavirus': 0}
```
|
63,835,086
|
If I have a dataframe with the following layout:
```
ID# Response
1234 Covid-19 was a disaster for my business
3456 The way you handled this pandemic was awesome
```
I want to be able to count the frequency of specific words from a list:
```
list=['covid','COVID','Covid-19','pandemic','coronavirus']
```
In the end, I want to generate a dictionary like the following:
```
{'covid': 0, 'COVID': 0, 'Covid-19': 1, 'pandemic': 1, 'coronavirus': 0}
```
Please help; I am really stuck on how to code this in Python.
|
2020/09/10
|
[
"https://Stackoverflow.com/questions/63835086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13000877/"
] |
For each string, find the number of matches:
```
dict((s, df['Response'].str.count(s).fillna(0).sum()) for s in list_of_strings)
```
Note that `Series.str.count` takes a regex input. You may want to append `(?=\b)` for a positive word-boundary look-ahead.
`Series.str.count` returns `NA` when counting `NA`, so fill with 0, then sum over the column.
|
Try with `np.hstack` and `Counter` (here `lst` is your list of words):
```
import numpy as np
from collections import Counter

lst = ['covid', 'COVID', 'Covid-19', 'pandemic', 'coronavirus']
a = np.hstack(df['Response'].str.split())
dct = {**dict.fromkeys(lst, 0), **Counter(a[np.isin(a, lst)])}
```
---
```
{'covid': 0, 'COVID': 0, 'Covid-19': 1, 'pandemic': 1, 'coronavirus': 0}
```
|
63,835,086
|
If I have a dataframe with the following layout:
```
ID# Response
1234 Covid-19 was a disaster for my business
3456 The way you handled this pandemic was awesome
```
I want to be able to count the frequency of specific words from a list:
```
list=['covid','COVID','Covid-19','pandemic','coronavirus']
```
In the end, I want to generate a dictionary like the following:
```
{'covid': 0, 'COVID': 0, 'Covid-19': 1, 'pandemic': 1, 'coronavirus': 0}
```
Please help; I am really stuck on how to code this in Python.
|
2020/09/10
|
[
"https://Stackoverflow.com/questions/63835086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13000877/"
] |
For each string, find the number of matches:
```
dict((s, df['Response'].str.count(s).fillna(0).sum()) for s in list_of_strings)
```
Note that `Series.str.count` takes a regex input. You may want to append `(?=\b)` for a positive word-boundary look-ahead.
`Series.str.count` returns `NA` when counting `NA`, so fill with 0, then sum over the column.
|
You could do it very simply with a dict comprehension:
```
{x:df.Response.str.count(x).sum() for x in list}
```
Output:
```
{'covid': 0, 'COVID': 0, 'Covid-19': 1, 'pandemic': 1, 'coronavirus': 0}
```
|
63,835,086
|
If I have a dataframe with the following layout:
```
ID# Response
1234 Covid-19 was a disaster for my business
3456 The way you handled this pandemic was awesome
```
I want to be able to count the frequency of specific words from a list:
```
list=['covid','COVID','Covid-19','pandemic','coronavirus']
```
In the end, I want to generate a dictionary like the following:
```
{'covid': 0, 'COVID': 0, 'Covid-19': 1, 'pandemic': 1, 'coronavirus': 0}
```
Please help; I am really stuck on how to code this in Python.
|
2020/09/10
|
[
"https://Stackoverflow.com/questions/63835086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13000877/"
] |
Try with `np.hstack` and `Counter` (here `lst` is your list of words):
```
import numpy as np
from collections import Counter

lst = ['covid', 'COVID', 'Covid-19', 'pandemic', 'coronavirus']
a = np.hstack(df['Response'].str.split())
dct = {**dict.fromkeys(lst, 0), **Counter(a[np.isin(a, lst)])}
```
---
```
{'covid': 0, 'COVID': 0, 'Covid-19': 1, 'pandemic': 1, 'coronavirus': 0}
```
|
You could do it very simply with a dict comprehension:
```
{x:df.Response.str.count(x).sum() for x in list}
```
Output:
```
{'covid': 0, 'COVID': 0, 'Covid-19': 1, 'pandemic': 1, 'coronavirus': 0}
```
|
9,377,801
|
I would like to know how practical it would be to create a program which takes handwritten characters in some form, analyzes them, and offers corrections to the user. The inspiration for this idea is to have elementary school students in other countries or University students in America learn how to write in languages such as Japanese or Chinese where there are a lot of characters and even the slightest mistake can make a big difference.
I am unsure how the program will analyze the character. My current idea is to get a single pixel width line to represent the stroke, compare how far each pixel is from the corresponding pixel in the example character loaded from a database, and output which area needs the most work. Endpoints will also be useful to know. I would also like to tell the user if their character could be interpreted as another character similar to the one they wanted to write.
I imagine I will need a library of some sort to complete this project in any sort of timely manner, but I have been unable to locate one that meets the standards I will need for the program. I looked into OpenCV, but it appears to be geared more toward computer vision than image processing. I would also prefer the library/module to be in Python or Java, but I can learn a new language if absolutely necessary.
Thank you for any help in this project.
|
2012/02/21
|
[
"https://Stackoverflow.com/questions/9377801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1223327/"
] |
Character recognition is usually implemented using Artificial Neural Networks ([ANNs](http://en.wikipedia.org/wiki/Artificial_neural_network)). It is not a straightforward task to implement, seeing that there are usually lots of ways in which different people write the same character.
The good thing about neural networks is that they can be trained. So, to change from one language to another, all you need to change are the weights between the neurons, leaving your network intact. Neural networks are also able to generalize to a certain extent, so they can usually cope with minor variances of the same letter.
[Tesseract](http://en.wikipedia.org/wiki/Tesseract_%28software%29) is an open-source OCR engine that was developed in the mid-1990s. You might want to read about it to gain some pointers.
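To make the "trainable weights" point concrete, here is a toy single-neuron sketch (the 3x3 glyphs and learning rate are made up for illustration; real recognizers use far larger networks and inputs):

```python
# Learn to tell a 3x3 'T' glyph from a 3x3 'L' glyph with one trainable neuron
T = [1, 1, 1,
     0, 1, 0,
     0, 1, 0]
L = [1, 0, 0,
     1, 0, 0,
     1, 1, 1]

w = [0.0] * 9  # one weight per pixel
b = 0.0

def predict(x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0  # 1 = 'T', 0 = 'L'

# Perceptron rule: nudge the weights toward each misclassified example
for _ in range(20):
    for x, target in ((T, 1), (L, 0)):
        err = target - predict(x)
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err

print(predict(T), predict(L))  # 1 0
```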
|
Have you seen <http://www.skritter.com>? They do this in combination with spaced-repetition scheduling.
I guess you want to classify features such as curves in your strokes (<http://en.wikipedia.org/wiki/CJK_strokes>), then as a next layer identify components, then estimate the most likely character, statistically weighting the candidates all the while. Where there are two likely matches, you will want to flag them as likely to be confused. You will also need to create a database of probably 3,000 to 5,000 characters, or up to 10,000 for the ambitious.
See also <http://www.tegaki.org/> for an open-source program that does this.
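A toy sketch of that "identify components, then weight candidates" idea (the component names, inventories, and scoring rule are entirely illustrative, not a real recognizer):

```python
# Hypothetical components detected in the user's strokes
detected = ['horizontal', 'vertical', 'hook']

# Hypothetical component inventories for a few candidate characters
candidates = {
    '木': ['horizontal', 'vertical', 'left-falling', 'right-falling'],
    '丁': ['horizontal', 'vertical', 'hook'],
    '十': ['horizontal', 'vertical'],
}

def score(components, found):
    # Fraction of matched components, penalizing both misses and extras
    hits = sum(1 for c in found if c in components)
    return hits / max(len(components), len(found))

ranked = sorted(candidates, key=lambda ch: score(candidates[ch], detected), reverse=True)
print(ranked)  # best match first; close scores suggest easily confused characters
```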
|
9,377,801
|
I would like to know how practical it would be to create a program which takes handwritten characters in some form, analyzes them, and offers corrections to the user. The inspiration for this idea is to have elementary school students in other countries or University students in America learn how to write in languages such as Japanese or Chinese where there are a lot of characters and even the slightest mistake can make a big difference.
I am unsure how the program will analyze the character. My current idea is to get a single pixel width line to represent the stroke, compare how far each pixel is from the corresponding pixel in the example character loaded from a database, and output which area needs the most work. Endpoints will also be useful to know. I would also like to tell the user if their character could be interpreted as another character similar to the one they wanted to write.
I imagine I will need a library of some sort to complete this project in any sort of timely manner but I have been unable to locate one which meets the standards I will need for the program. I looked into OpenCV but it appears to be meant for vision than image processing. I would also appreciate the library/module to be in python or Java but I can learn a new language if absolutely necessary.
Thank you for any help in this project.
|
2012/02/21
|
[
"https://Stackoverflow.com/questions/9377801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1223327/"
] |
You can follow company links from this Wikipedia article:
<http://en.wikipedia.org/wiki/Intelligent_character_recognition>
I would not recommend that you attempt to implement a solution yourself, especially if you want to complete the task in less than a year or two of full-time work. It would be unfortunate if an incomplete solution provided poor guidance for students.
**A word of caution: some companies that offer commercial ICR libraries may not wish to support you and/or may not provide a quote. That's their right. However, if you do not feel comfortable working with a particular vendor, either ask for a different sales contact and/or try a different vendor first.**
>
> My current idea is to get a single pixel width line to represent the stroke, compare how far each pixel is from the corresponding pixel in the example character loaded from a database, and output which area needs the most work.
>
>
>
The initial step of getting a stroke representation only a single pixel wide is much more difficult than you might guess. Although there are simple algorithms (e.g. Stentiford and Zhang-Suen) to perform thinning, stroke crossings and rough edges present serious problems. This is a classic (and unsolved) problem. Thinning works much of the time, but when it fails, it can fail miserably.
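To make the local nature of thinning concrete, here is a minimal sketch of the classic Zhang-Suen algorithm mentioned above, operating on a 0/1 pixel grid. It assumes the image already has a one-pixel border of background zeros; real handwriting input would first need binarization and padding:

```python
# Minimal sketch of Zhang-Suen thinning on a binary image (list of 0/1 rows).
# Assumes a one-pixel border of zeros around the foreground.
def zhang_suen_thin(img):
    img = [row[:] for row in img]          # work on a copy
    h, w = len(img), len(img[0])

    def neighbors(y, x):                   # P2..P9, clockwise from north
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):                # the two sub-iterations
            to_delete = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    p = neighbors(y, x)
                    b = sum(p)             # count of foreground neighbors
                    # a = number of 0 -> 1 transitions around the ring
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0:
                        ok = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        ok = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if ok:
                        to_delete.append((y, x))
            for y, x in to_delete:         # delete in parallel, after the scan
                img[y][x] = 0
                changed = True
    return img
```

The conditions on `a` and `b` are what preserve endpoints and connectivity most of the time; the failure cases noted above (stroke crossings, rough edges) are exactly where these purely local rules break down.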
You could work with an open source library, and although that will help you learn algorithms and their uses, to develop a good solution you will almost certainly need to dig into the algorithms themselves and understand how they work. That requires quite a bit of study.
Here are some books that are useful as introductory textbooks:
* **Digital Image Processing** by Gonzalez and Woods
* **Character Recognition Systems** by Cheriet, Kharma, Siu, and Suen
* **Reading in the Brain** by Stanislas Dehaene
Gonzalez and Woods is a standard textbook in image processing. Without some background knowledge of image processing it will be difficult for you to make progress.
The book by Cheriet, et al., touches on the state of the art in optical character recognition (OCR) and also covers handwriting recognition. The sooner you read this book, the sooner you can learn about techniques that have already been attempted.
The Dehaene book is a readable presentation of the mental processes involved in human reading, and could inspire development of interesting new algorithms.
|
Have you seen <http://www.skritter.com>? They do this in combination with spaced-repetition scheduling.
I guess you want to classify features such as curves in your strokes (http://en.wikipedia.org/wiki/CJK\_strokes), then as a next layer identify components, then estimate the most likely character, all the while statistically weighting the candidates. Where there are two likely matches you will want to show them as likely to be confused. You will also need to create a database of probably 3,000 to 5,000 characters, or up to 10,000 for the ambitious.
See also <http://www.tegaki.org/> for an open source program to do this.
|
26,743,269
|
From the [Python docs](https://docs.python.org/2/library/heapq.html):
>
> The latter two functions [heapq.nlargest and heapq.nsmallest] perform best for smaller values of n. For
> larger values, it is more efficient to use the sorted() function.
> Also, when n==1, it is more efficient to use the built-in min() and
> max() functions.
>
>
>
If I want to retrieve the minimum element in the min-heap, why do the Python docs suggest using `min()`, which I assume runs in O(n) time, when
I can instead retrieve the first element in the heap in O(1) time? (I'm assuming the first element in the heap is the minimum)
|
2014/11/04
|
[
"https://Stackoverflow.com/questions/26743269",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1032973/"
] |
The `nsmallest` and `nlargest` methods available from `heapq` do not assume that the argument passed to them is already in heap format. Instead, they seek to "heapify" the argument as they traverse it, which will be more efficient than outright sorting for the top-k elements for small values of k, but for k exactly equal to one, it's even faster to avoid paying the heapify-as-you-traverse overhead, and just use `min` directly.
Your statement is correct. If you are given an array that you can guarantee has been heapified, and not altered since, then accessing the first element will give you the min (respectively the max for a max-heap).
Looking at the [source code for heapq](https://hg.python.org/cpython/file/2.7/Lib/heapq.py) (maybe I'm looking at old code?) it still seems quite weird to me. `nsmallest` has a special case for `n == 1` implemented like this (line 397):
```
def nsmallest(n, iterable, key=None):
"""Find the n smallest elements in a dataset.
Equivalent to: sorted(iterable, key=key)[:n]
"""
# Short-cut for n==1 is to use min() when len(iterable)>0
if n == 1:
it = iter(iterable)
head = list(islice(it, 1))
if not head:
return []
if key is None:
return [min(chain(head, it))]
return [min(chain(head, it), key=key)]
# ... rest of function
```
Just playing with that expression in the interpreter makes it seem bizarre:
```
In [203]: foo = list(itertools.islice([1,2,3], 1)); it = iter([1,2,3]); x = itertools.chain(foo, it);
In [204]: x.next()
Out[204]: 1
In [205]: x.next()
Out[205]: 1
In [206]: x.next()
Out[206]: 2
In [207]: x.next()
Out[207]: 3
In [208]: x.next()
---------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
<ipython-input-208-e05f366da090> in <module>()
----> 1 x.next()
StopIteration:
```
It seems to be building a generator (which gets turned into a `list` immediately) that only takes the 1st element (as you might expect with a min heap), but then it oddly `chain`s it with a plain old generator that's going to go over the whole array.
I agree that if you start from a `list` and want to query for the minimum element, it's probably better to leave it as a `list` and use `min`. However, if you are handed a min heap, yes indeed you should just inspect the first element -- that is part of the point of heapifying it in the first place.
But regardless, this source code looks quite bizarre for passing the min heap to `min` -- I would greatly welcome more explanation about what it's doing -- and maybe a pointer to some more recent C-level code for an implementation of heapq, if there is one.
|
If you need just to pick one minimum element on a heapified list just do list[0]:
```
import heapq
lst = [1,-1,100,200]
heapq.heapify(lst)
min_value = lst[0]
```
Doc above refers to getting n smallest numbers, and heap is not the most efficient data structure to do that if n is large.
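A quick sketch contrasting the three cases the docs describe — small `n`, `n == 1`, and peeking at an already-built heap:

```python
# Compare the recommended tools for each case on the same data.
import heapq
import random

data = [random.randrange(10_000) for _ in range(1_000)]

# Small n: heapq.nsmallest avoids a full sort.
assert heapq.nsmallest(3, data) == sorted(data)[:3]

# n == 1: min() is the fastest option -- no heap bookkeeping at all.
assert heapq.nsmallest(1, data) == [min(data)]

# Already-heapified list: the minimum is just the first element, O(1).
heapq.heapify(data)
assert data[0] == min(data)
```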
|
44,773,983
|
I have two following files:
**testcase\_module.py**
```
import boto3
ec2 = boto3.resource('ec2')
def f():
return ec2.instances.all()
```
**testcase\_test.py**
```
import testcase_module
import unittest.mock
class MainTest(unittest.TestCase):
@unittest.mock.patch('testcase_module.ec2', spec_set=['instances'])
def test_f(self, ec2_mock):
ec2_mock.instances.spec_set = ['all']
testcase_module.f()
if __name__ == '__main__':
unittest.main()
```
I added the `spec_set` parameter to the patch because I would like to assert if any other function than `instances.all()` has been called, but changing string `'all'` to `'allx'` doesn't make the test fail, while changing `'instances'` to `'instancesx'` does. I tried the following changes (`git diff testcase_test.py` and `python testcase_test.py` results below):
**Attempt 1:**
```
diff --git a/testcase_test.py b/testcase_test.py
index d6d6e59..ae274c8 100644
--- a/testcase_test.py
+++ b/testcase_test.py
@@ -3,9 +3,8 @@ import unittest.mock
class MainTest(unittest.TestCase):
- @unittest.mock.patch('testcase_module.ec2', spec_set=['instances'])
- def test_f(self, ec2_mock):
- ec2_mock.instances.spec_set = ['all']
+ @unittest.mock.patch('testcase_module.ec2', spec_set=['instances.all'])
+ def test_f(self, _):
testcase_module.f()
```
Produces:
```
E
======================================================================
ERROR: test_f (__main__.MainTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.5/unittest/mock.py", line 1157, in patched
return func(*args, **keywargs)
File "testcase_test.py", line 8, in test_f
testcase_module.f()
File "/path/to/project/testcase_module.py", line 8, in f
return ec2.instances.all()
File "/usr/lib/python3.5/unittest/mock.py", line 578, in __getattr__
raise AttributeError("Mock object has no attribute %r" % name)
AttributeError: Mock object has no attribute 'instances'
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
```
**Attempt 2:**
```
diff --git a/testcase_test.py b/testcase_test.py
index d6d6e59..d93abd1 100644
--- a/testcase_test.py
+++ b/testcase_test.py
@@ -3,9 +3,8 @@ import unittest.mock
class MainTest(unittest.TestCase):
- @unittest.mock.patch('testcase_module.ec2', spec_set=['instances'])
- def test_f(self, ec2_mock):
- ec2_mock.instances.spec_set = ['all']
+ @unittest.mock.patch('testcase_module.ec2.instances', spec_set=['all'])
+ def test_f(self):
testcase_module.f()
```
Produces:
```
E
======================================================================
ERROR: test_f (__main__.MainTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.5/unittest/mock.py", line 1149, in patched
arg = patching.__enter__()
File "/usr/lib/python3.5/unittest/mock.py", line 1312, in __enter__
setattr(self.target, self.attribute, new_attr)
AttributeError: can't set attribute
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.5/unittest/mock.py", line 1170, in patched
patching.__exit__(*exc_info)
File "/usr/lib/python3.5/unittest/mock.py", line 1334, in __exit__
delattr(self.target, self.attribute)
AttributeError: can't delete attribute
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
```
How can I make it failing when other method than `instances.all` has been called?
|
2017/06/27
|
[
"https://Stackoverflow.com/questions/44773983",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1906088/"
] |
Try using `mock_add_spec`.
`ec2_mock.instances.mock_add_spec(['all'], spec_set=True)`
Link: <https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.mock_add_spec>
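For reference, here is a self-contained sketch of what `mock_add_spec` does on a bare `Mock` (no boto3 or module patching needed to see the behavior):

```python
from unittest import mock

# The outer mock only allows the 'instances' attribute...
ec2_mock = mock.Mock(spec_set=['instances'])
# ...and its child mock is then restricted to just 'all'.
ec2_mock.instances.mock_add_spec(['all'], spec_set=True)

ec2_mock.instances.all()           # fine: 'all' is in the spec
try:
    ec2_mock.instances.allx        # not in the spec -> AttributeError
    raise AssertionError("expected AttributeError")
except AttributeError:
    pass
```

With `spec_set=True`, both reading and writing attributes outside the spec raise `AttributeError`, which is what makes the `'allx'` case fail as intended.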
|
What about doing it like this:
```
@unittest.mock.patch('testcase_module.boto3.resource', autospec=True)
def test_f(self, ec2_resource_mock):
class InstanceStub(object):
def all(self):
return [...]
    ec2_resource_mock.return_value = mock.create_autospec(
        EC2InstanceType, instances=InstanceStub())  # EC2InstanceType: placeholder for your resource's type
testcase_module.f()
```
|
71,957,239
|
How can I get the largest key with the largest value in python dictionary. In the below example you can see 1 and 2 have same frequency. But i want to return the larger key.
```
nums = [1,2,2,3,1]
frq = {}
for i in nums:
if i not in frq:
frq[i] = 1
else:
frq[i] += 1
frequency = max(frq, key=frq.get)
print(frequency)
```
|
2022/04/21
|
[
"https://Stackoverflow.com/questions/71957239",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11416278/"
] |
Have your `key` function return a tuple of the value and the associated key. The first element of the tuple is compared first, the second will break ties.
```
>>> from collections import Counter
>>> nums = [1, 2, 2, 3, 1]
>>> frq = Counter(nums)
>>> max(frq, key=lambda n: (frq[n], n))
2
```
Note that `collections.Counter` builds the `frq` dictionary automatically given `nums`.
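Tuple ordering is what makes this tie-break work — tuples compare element by element, moving to the next element only when the current ones are equal:

```python
from collections import Counter

# (frequency, key) pairs: equal first elements fall through to the key.
assert (2, 1) < (2, 2)
assert max([(2, 1), (2, 2), (1, 3)]) == (2, 2)

nums = [1, 2, 2, 3, 1]
frq = Counter(nums)                                # Counter({1: 2, 2: 2, 3: 1})
assert max(frq, key=lambda n: (frq[n], n)) == 2    # 1 and 2 tie; larger key wins
```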
|
You can use tuple comparison for the keys to compare keys based on their frequencies, and then tiebreak based on the actual value of the key only if the frequencies are the same:
```py
frequency = max(frq, key=lambda x: (frq.get(x), x))
```
With this change, this outputs:
```py
2
```
|
71,957,239
|
How can I get the largest key with the largest value in python dictionary. In the below example you can see 1 and 2 have same frequency. But i want to return the larger key.
```
nums = [1,2,2,3,1]
frq = {}
for i in nums:
if i not in frq:
frq[i] = 1
else:
frq[i] += 1
frequency = max(frq, key=frq.get)
print(frequency)
```
|
2022/04/21
|
[
"https://Stackoverflow.com/questions/71957239",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11416278/"
] |
Have your `key` function return a tuple of the value and the associated key. The first element of the tuple is compared first, the second will break ties.
```
>>> from collections import Counter
>>> nums = [1, 2, 2, 3, 1]
>>> frq = Counter(nums)
>>> max(frq, key=lambda n: (frq[n], n))
2
```
Note that `collections.Counter` builds the `frq` dictionary automatically given `nums`.
|
You can try this, which sorts first on frequency and then on key:
```py
nums = [1,2,2,3,1]
frq = {}
for i in nums:
if i not in frq:
frq[i] = 1
else:
frq[i] += 1
frequency = max(frq, key=lambda x: (frq.get(x), x))
print(frequency)
```
Output:
```
2
```
Alternatively, you can use the Counter data type from Python's collections module to do some of the work for you:
```py
from collections import Counter
nums = [1,2,2,3,1]
frequency = max(Counter(nums).items(), key=lambda x: (x[1], x[0]))[0]
print(frequency)
```
Here we pass items() to max(), so that both the key and the value are available to the key function (lambda). The result of max() will now be a (key, value) tuple, so we use `[0]` to get just the key as our result.
**UPDATE**: One other way to do it without overriding the default `key` argument to `max()` is this:
```py
frequency = max((v, k) for k, v in Counter(nums).items())[1]
```
|
38,506,250
|
I want to take input from the user, with each value of the input on a consecutive line. This is to be implemented in Python.
```
while x=int(raw_input()): ##<=showing error at this line
print(x)
gollum(x)
#the function gollum() has to be called if the input is present
```
|
2016/07/21
|
[
"https://Stackoverflow.com/questions/38506250",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6590204/"
] |
There are no other ways with HTTP (except what HTTP allows, as already mentioned).
But there are many other ways to transfer data from server to server, like FTP or establishing a direct socket connection.
Note that you will need install/configure such additional ways, and maybe not only on the server (for communication through a firewall, you will also need to allow the used ports)
|
**Session :**
A session is stored on the server and cannot be accessed by the user (client). It is used to store information across the site, such as login sessions. It can be used to store information on the server side and pass it between different PHP scripts.
Note that session creates a session cookie for identification like `PHPSESSID`, which may reveal server-side scripting language used, which most people prefer not to reveal. To not reveal, we can change this cookie name from php. Also, by stealing that cookie, someone may hijack the session. So, session should be used properly by verifying it. If session verifying is done, you may prevent hijacking and use it securely.
**For Secure Usage:**
To use sessions securely, you can follow this blog which shows how sessions can be verified and prevented from being hijacked.
>
> <http://blog.teamtreehouse.com/how-to-create-bulletproof-sessions>
>
>
>
|
38,506,250
|
I want to take input from the user, with each value of the input on a consecutive line. This is to be implemented in Python.
```
while x=int(raw_input()): ##<=showing error at this line
print(x)
gollum(x)
#the function gollum() has to be called if the input is present
```
|
2016/07/21
|
[
"https://Stackoverflow.com/questions/38506250",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6590204/"
] |
You may use encrypted cookies: <https://packagist.org/packages/illuminate/cookie> to store an encoded key. Based on that decoded key you can save and read from database+Memcache layer
|
**Session :**
A session is stored on the server and cannot be accessed by the user (client). It is used to store information across the site, such as login sessions. It can be used to store information on the server side and pass it between different PHP scripts.
Note that session creates a session cookie for identification like `PHPSESSID`, which may reveal server-side scripting language used, which most people prefer not to reveal. To not reveal, we can change this cookie name from php. Also, by stealing that cookie, someone may hijack the session. So, session should be used properly by verifying it. If session verifying is done, you may prevent hijacking and use it securely.
**For Secure Usage:**
To use sessions securely, you can follow this blog which shows how sessions can be verified and prevented from being hijacked.
>
> <http://blog.teamtreehouse.com/how-to-create-bulletproof-sessions>
>
>
>
|
13,620,633
|
Given the URL <http://www.smartmoney.com/quote/FAST/?story=financials&timewindow=1&opt=YB&isFinprint=1&framework.view=smi_emptyView> , how would you capture and print the contents of an entire row of data?
For example, what would it take to get an output that looked something like:
"Cash & Short Term Investments 144,841 169,760 189,252 86,743 57,379"? Or something like "Property, Plant & Equipment - Gross 725,104 632,332 571,467 538,805 465,493"?
I've been introduced to the basics of Xpath through sites <http://www.techchorus.net/web-scraping-lxml> . However, the Xpath syntax is still largely a mystery to me.
I already have successfully done this in BeautifulSoup. I like the fact that BeautifulSoup doesn't require me to know the structure of the file - it just looks for the element containing the text I search for. Unfortunately, BeautifulSoup is too slow for a script that has to do this THOUSANDS of times. The source code for my task in BeautifulSoup is (with title\_input equal to "Cash & Short Term Investments"):
```
page = urllib2.urlopen (url_local)
soup = BeautifulSoup (page)
soup_line_item = soup.findAll(text=title_input)[0].parent.parent.parent
list_output = soup_line_item.findAll('td') # List of elements
```
So what would the equivalent code in lxml be?
EDIT 1: The URLs were concealed the first time I posted. I have now fixed that.
EDIT 2: I have added my BeautifulSoup-based solution to clarify what I'm trying to do.
EDIT 3: +10 to root for your solution. For the benefit of future developers with the same question, I'm posting here a quick-and-dirty script that worked for me:
```
#!/usr/bin/env python
import urllib
import lxml.html
url = 'balancesheet.html'
result = urllib.urlopen(url)
html = result.read()
doc = lxml.html.document_fromstring(html)
x = doc.xpath(u'.//th[div[text()="Cash & Short Term Investments"]]/following-sibling::td/text()')
print x
```
|
2012/11/29
|
[
"https://Stackoverflow.com/questions/13620633",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/776739/"
] |
```
In [18]: doc.xpath(u'.//th[div[text()="Cash & Short Term Investments"]]/following-sibling::td/text()')
Out[18]: [' 144,841', ' 169,760', ' 189,252', ' 86,743', ' 57,379']
```
or you can define a little function to get the rows by text:
```
In [19]: def func(doc,txt):
...: exp=u'.//th[div[text()="{0}"]]'\
...: u'/following-sibling::td/text()'.format(txt)
...: return [i.strip() for i in doc.xpath(exp)]
In [20]: func(doc,u'Total Accounts Receivable')
Out[20]: ['338,594', '270,133', '214,169', '244,940', '236,331']
```
or you can get all the rows to a `dict`:
```
In [21]: d={}
In [22]: for i in doc.xpath(u'.//tbody/tr'):
...: if len(i.xpath(u'.//th/div/text()')):
...: d[i.xpath(u'.//th/div/text()')[0]]=\
...: [e.strip() for e in i.xpath(u'.//td/text()')]
In [23]: d.items()[:3]
Out[23]:
[('Accounts Receivables, Gross',
['344,241', '274,894', '218,255', '247,600', '238,596']),
('Short-Term Investments',
['27,165', '26,067', '24,400', '851', '159']),
('Cash & Short Term Investments',
['144,841', '169,760', '189,252', '86,743', '57,379'])]
```
|
let html holds the html source code:
```
import lxml.html
doc = lxml.html.document_fromstring(html)
rows_element = doc.xpath('/html/body/div/div[2]/div/div[5]/div/div/table/tbody/tr')
for row in rows_element:
print row.text_content()
```
Not tested, but it should work.
P.S. Install XPath Checker or FireFinder in Firefox to help you with XPath.
|
17,669,095
|
I'm trying to find exactly what's wrong with a larger job that I'm trying to schedule with launchd for the first time. So I made the simplest Python file I could think of, `print 'running test'`, and then made a plist file titled `com.schedulertest.plist` like so:
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.schedulertest.py.plist</string>
<key>ProgramArguments</key>
<array>
<string>arch</string>
<string>-i386</string>
<string>/usr/bin/python2.7</string>
<string>/Users/user/Documents/Python_Alerts_Project/schedulertest.py</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>StartInterval</key>
<integer>60</integer>
</dict>
</plist>
```
Then I saved it in `$HOME/Library/LaunchAgents/` and ran:
```
launchctl load com.schedulertest.plist
```
I should be getting the print output from my py script every 60 seconds, right? I don't see anything though -- is there an obvious fault in my script or process?
|
2013/07/16
|
[
"https://Stackoverflow.com/questions/17669095",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1844086/"
] |
So the answer was not a big deal, but it might help others to share the solution. I had simply forgotten, as one does when moving between several virtualenvs, which Python I was in. If you're having trouble and your `.plist` and script seem well-formed, it won't hurt to run `which python` etc., and check the result against the path you're listing in your program arguments.
|
### Troubleshooting
* To debug `.plist`, you can check the log for any error, e.g.
```
tail -f /var/log/system.log
```
To specify a custom log, use:
```
<key>StandardOutPath</key>
<string>/var/log/myjob.log</string>
<key>StandardErrorPath</key>
<string>/var/log/myjob.log</string>
```
* To find the latest exit status of the job, run:
```
launchctl list com.schedulertest.plist
```
* To make sure the syntax is correct, use `plutil` command.
See: [Debugging `launchd` Jobs section of Creating Launch Daemons and Agents page](https://developer.apple.com/library/content/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingLaunchdJobs.html#//apple_ref/doc/uid/10000172i-SW7-BCIEDDBJ).
|
51,760,986
|
I was going through the CPython source code I found the following piece of code from the standard library([`ast.py`](https://github.com/python/cpython/blob/master/Lib/ast.py#L60)).
```
if isinstance(node.op, UAdd):
return + operand
else:
return - operand
```
I tried the following in my python interpreter
```
>>> def s():
... return + 1
...
>>> s()
1
```
But this is same as the following right?
```
def s():
return 1
```
Can any one help me to understand what does the expression `return +` or `return -` do in python and when we should use this?
|
2018/08/09
|
[
"https://Stackoverflow.com/questions/51760986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6699447/"
] |
plus and minus in this context are [unary operators](https://en.wikipedia.org/wiki/Unary_operation). That is, they accept **a single operand**. This is in comparison to the binary operator `*` (for example) that operates on **two operands**. Evidently `+1` is just `1`. So the unary operator `+` in your return statement is redundant.
|
`return + operand` is equivalent to `return operand` (if operand is a number). The only purpose I see is to insist on the fact that we do not want the opposite of `operand`.
|
51,760,986
|
I was going through the CPython source code I found the following piece of code from the standard library([`ast.py`](https://github.com/python/cpython/blob/master/Lib/ast.py#L60)).
```
if isinstance(node.op, UAdd):
return + operand
else:
return - operand
```
I tried the following in my python interpreter
```
>>> def s():
... return + 1
...
>>> s()
1
```
But this is same as the following right?
```
def s():
return 1
```
Can any one help me to understand what does the expression `return +` or `return -` do in python and when we should use this?
|
2018/08/09
|
[
"https://Stackoverflow.com/questions/51760986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6699447/"
] |
I haven't studied the code, so I don't know for sure, but Python allows overriding [unary operator behavior](https://docs.python.org/3/library/operator.html):
>
> `__pos__(self)` Implements behavior for unary positive (e.g. +some\_object)
>
>
> `__neg__(self)` Implements behavior for negation (e.g. -some\_object)
>
>
>
So `operand` in your case could be an object of a class which overrides those magic methods.
This means that `return + operand` is NOT equivalent to `return operand`.
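A minimal, hypothetical example (the class and its clamping rule are invented purely for illustration) where the two forms really do diverge:

```python
# A class whose unary + and - are overridden, so `return + operand`
# is observably different from `return operand`.
class Celsius:
    def __init__(self, degrees):
        self.degrees = degrees

    def __pos__(self):          # +obj -> a copy clamped to absolute zero
        return Celsius(max(self.degrees, -273.15))

    def __neg__(self):          # -obj -> a copy with the sign flipped
        return Celsius(-self.degrees)

t = Celsius(-300.0)
assert (+t).degrees == -273.15  # __pos__ did real work here
assert (-t).degrees == 300.0
assert t.degrees == -300.0      # the original object is untouched
```

For plain ints and floats, `__pos__` is the identity, which is why `return + 1` and `return 1` look interchangeable in the interpreter.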
|
`return + operand` is equivalent to `return operand` (if operand is a number). The only purpose I see is to insist on the fact that we do not want the opposite of `operand`.
|
51,760,986
|
I was going through the CPython source code I found the following piece of code from the standard library([`ast.py`](https://github.com/python/cpython/blob/master/Lib/ast.py#L60)).
```
if isinstance(node.op, UAdd):
return + operand
else:
return - operand
```
I tried the following in my python interpreter
```
>>> def s():
... return + 1
...
>>> s()
1
```
But this is same as the following right?
```
def s():
return 1
```
Can any one help me to understand what does the expression `return +` or `return -` do in python and when we should use this?
|
2018/08/09
|
[
"https://Stackoverflow.com/questions/51760986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6699447/"
] |
plus and minus in this context are [unary operators](https://en.wikipedia.org/wiki/Unary_operation). That is, they accept **a single operand**. This is in comparison to the binary operator `*` (for example) that operates on **two operands**. Evidently `+1` is just `1`. So the unary operator `+` in your return statement is redundant.
|
Both `+` and `-` can be used as unary or binary operators. In your case they are used as unary operators. `return + operand` is the same as `return operand` for plain numbers. We are more used to seeing them written as `return +operand` or `return -operand`.
|
51,760,986
|
I was going through the CPython source code I found the following piece of code from the standard library([`ast.py`](https://github.com/python/cpython/blob/master/Lib/ast.py#L60)).
```
if isinstance(node.op, UAdd):
return + operand
else:
return - operand
```
I tried the following in my python interpreter
```
>>> def s():
... return + 1
...
>>> s()
1
```
But this is same as the following right?
```
def s():
return 1
```
Can any one help me to understand what does the expression `return +` or `return -` do in python and when we should use this?
|
2018/08/09
|
[
"https://Stackoverflow.com/questions/51760986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6699447/"
] |
I haven't studied the code, so I don't know for sure, but Python allows overriding [unary operator behavior](https://docs.python.org/3/library/operator.html):
>
> `__pos__(self)` Implements behavior for unary positive (e.g. +some\_object)
>
>
> `__neg__(self)` Implements behavior for negation (e.g. -some\_object)
>
>
>
So `operand` in your case could be an object of a class which overrides those magic methods.
This means that `return + operand` is NOT equivalent to `return operand`.
|
Both `+` and `-` can be used as unary or binary operators. In your case they are used as unary operators. `return + operand` is the same as `return operand` for plain numbers. We are more used to seeing them written as `return +operand` or `return -operand`.
|
51,760,986
|
I was going through the CPython source code I found the following piece of code from the standard library([`ast.py`](https://github.com/python/cpython/blob/master/Lib/ast.py#L60)).
```
if isinstance(node.op, UAdd):
return + operand
else:
return - operand
```
I tried the following in my python interpreter
```
>>> def s():
... return + 1
...
>>> s()
1
```
But this is same as the following right?
```
def s():
return 1
```
Can any one help me to understand what does the expression `return +` or `return -` do in python and when we should use this?
|
2018/08/09
|
[
"https://Stackoverflow.com/questions/51760986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6699447/"
] |
plus and minus in this context are [unary operators](https://en.wikipedia.org/wiki/Unary_operation). That is, they accept **a single operand**. This is in comparison to the binary operator `*` (for example) that operates on **two operands**. Evidently `+1` is just `1`. So the unary operator `+` in your return statement is redundant.
|
I haven't studied the code, so I don't know for sure, but Python allows overriding [unary operator behavior](https://docs.python.org/3/library/operator.html):
>
> `__pos__(self)` Implements behavior for unary positive (e.g. +some\_object)
>
>
> `__neg__(self)` Implements behavior for negation (e.g. -some\_object)
>
>
>
So `operand` in your case could be an object of a class which overrides those magic methods.
This means that `return + operand` is NOT equivalent to `return operand`.
|
67,685,667
|
I have a problem when I want to host an HTML file on a web server using Python.
I go to cmd and type this in:
```
python -m http.server
```
The command works, but when I open the domain name I get a list of HTML pages, which I can click to open. How can I make it so that when I open the domain in Chrome it immediately shows me main.html?
Thank you
|
2021/05/25
|
[
"https://Stackoverflow.com/questions/67685667",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14456620/"
] |
You can add additional options:
```
python -m http.server --directory C:/ESD/ --bind 127.0.0.1
```
`--directory` tells Python in which folder to look for HTML files; here I specified the path `C:/ESD/`.
`--bind` tells Python which IP address the server should bind to.
If you want to directly open the file instead of the directory listing all html files, you have to rename your `main.html` to `index.html`
|
*it works but when I open it up in chrome it shows me lists of my html files and when I click on one it goes to it.*
This is expected behavior.
*how can I do it so that when I open it in chrome it immediately shows me the main.html?*
Rename `main.html` to `index.html`. As [docs](https://docs.python.org/3/library/http.server.html#http.server.SimpleHTTPRequestHandler.do_GET) says standard procedure to handle `GET` request is
>
> If the request was mapped to a directory, the directory is checked for a file named index.html or index.htm (in that order). If found, the file’s contents are returned; otherwise a directory listing is generated by calling the list\_directory() method. This method uses os.listdir() to scan the directory, and returns a 404 error response if the listdir() fails.
>
>
>
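If renaming the file is not an option, a small handler subclass is another route. This is a sketch (the `translate_path` override relies on `http.server`'s documented path-mapping hook) that maps the bare root URL to `main.html`:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class MainPageHandler(SimpleHTTPRequestHandler):
    def translate_path(self, path):
        # Serve main.html for the bare root URL instead of a listing.
        if path == '/':
            path = '/main.html'
        return super().translate_path(path)

# To run:
#   HTTPServer(('127.0.0.1', 8000), MainPageHandler).serve_forever()
```

Every other URL is still resolved exactly as `python -m http.server` would resolve it.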
|
25,000,994
|
I tried to install pyFFTW 0.9.2 to OSX mavericks, but I encounter the following errors:
```
/usr/bin/clang -bundle -undefined dynamic_lookup
-L//anaconda/lib -arch x86_64 -arch x86_64
build/temp.macosx-10.5-x86_64-2.7/anaconda/lib/python2.7/site-packages/pyFFTW-master/pyfftw/pyfftw.o
-L//anaconda/lib -lfftw3 -lfftw3f -lfftw3l -lfftw3_threads -lfftw3f_threads -lfftw3l_threads
-o build/lib.macosx-10.5-x86_64-2.7/pyfftw/pyfftw.so
ld: library not found for -lfftw3
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
As mentioned in [pyFFTW installation -> cannot find -lfftw3\_threads](https://stackoverflow.com/questions/24834822/pyfftw-installation-cannot-find-lfftw3-threads), I tried to compile and install fftw 3.3.4 three times. But it is not working for me.
What I did was:
```
./configure --enable-float --enable-share => make => make install
./configure --enable-long-double --enable-share => make => make install
./configure --enable-threads --enable-share => make => make install
```
Then I ran the Python (2.7) setup file in the pyFFTW folder, and I got the error above.
I appreciate your help.
|
2014/07/28
|
[
"https://Stackoverflow.com/questions/25000994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3885085/"
] |
I had the same issue on OS X 10.9.4 Mavericks.
Try this:
download FFTW 3.3.4, then open a terminal window, go into the extracted FFTW directory, and run these commands:
```
$ ./configure --enable-long-double --enable-threads
$ make
$ sudo make install
$ ./configure --enable-float --enable-threads
$ make
$ sudo make install
```
Then install pyFFTW using pip as suggested:
```
$ sudo pip install pyfftw
```
|
I'm using Mac OS X 10.11.4 and Python 3.5.1 installed through `conda`, and the above answer didn't work for me.
I would still get this error:
```
ld: library not found for -lfftw3l
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command 'gcc' failed with exit status 1
----------------------------------------
Failed building wheel for pyfftw
```
or:
```
ld: library not found for -lfftw3l_threads
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command 'gcc' failed with exit status 1
----------------------------------------
Failed building wheel for pyfftw
```
What *did* work for me was a slight variation on what I found [here](https://github.com/pyFFTW/pyFFTW/issues/16):
### First install long-double libraries
```
comp:fftw-3.3.4 user$ ./configure --enable-threads --enable-shared --disable-fortran --enable-long-double CFLAGS="-O3 -fno-common -fomit-frame-pointer -fstrict-aliasing"
comp:fftw-3.3.4 user$ make
comp:fftw-3.3.4 user$ sudo make install
```
### Then install float and double libraries
```
comp:fftw-3.3.4 user$ ./configure --enable-threads --enable-shared --disable-fortran --enable-sse2 --enable-float CFLAGS="-O3 -fno-common -fomit-frame-pointer -fstrict-aliasing"
comp:fftw-3.3.4 user$ make
comp:fftw-3.3.4 user$ sudo make install
comp:fftw-3.3.4 user$ ./configure --enable-threads --enable-shared --disable-fortran --enable-sse2 CFLAGS="-O3 -fno-common -fomit-frame-pointer -fstrict-aliasing"
comp:fftw-3.3.4 user$ make
comp:fftw-3.3.4 user$ sudo make install
```
### Then install `pyfftw`
```
comp:fftw-3.3.4 user$ sudo -H pip install pyfftw
```
I don't think the `--disable-fortran` and `--enable-sse2` flags are necessary and I'm not sure `sudo` is necessary for `pip` but this is what worked for me.
Note that your `/usr/local/lib` folder should contain the following files when you're done:
```
libfftw3.3.dylib
libfftw3.a
libfftw3.dylib
libfftw3.la
libfftw3_threads.3.dylib
libfftw3_threads.a
libfftw3_threads.dylib
libfftw3_threads.la
libfftw3f.3.dylib
libfftw3f.a
libfftw3f.dylib
libfftw3f.la
libfftw3f_threads.3.dylib
libfftw3f_threads.a
libfftw3f_threads.dylib
libfftw3f_threads.la
libfftw3l.3.dylib
libfftw3l.a
libfftw3l.dylib
libfftw3l.la
libfftw3l_threads.3.dylib
libfftw3l_threads.a
libfftw3l_threads.dylib
libfftw3l_threads.la
```
|
32,171,138
|
I need to generate all possible strings of a certain length X that satisfy the following two rules:
1. Must end with '0'
2. There can't be two or more adjacent '1'
For example, when X = 4, all 'legal' strings are [0000, 0010, 0100, 1000, 1010].
I already wrote a piece of recursive code that simply appends each newly found string to a list.
```
def generate(pre0, pre1, cur_len, max_len, container = []):
if (cur_len == max_len-1):
container.append("".join([pre0, pre1, "0"]))
return
if (pre1 == '1'):
cur_char = '0'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
else:
cur_char = '0'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
cur_char = '1'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
if __name__ == "__main__":
container = []
    generate("", "", 0, 4, container)
print container
```
However, this method won't work once X reaches 100+ because of the memory complexity. I am not familiar with generators in Python, so could anyone here help me figure out how to rewrite this as a generator? Thanks so much!
P.S. I am working on a Project Euler problem, this is not homework.
Update 1:
I am grateful for the first 3 answers. I am using Python 2.7 and can switch to Python 3.4. The reason I am asking for a generator is that I can't possibly hold even just the final result list in memory. A quick mathematical proof shows that there are Fibonacci(X) possible strings for length X, which means I really have to use a generator and filter the results on the fly.
|
2015/08/23
|
[
"https://Stackoverflow.com/questions/32171138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2759486/"
] |
Assuming you're using python version >= 3.4, where `yield from` is available, you can `yield` instead of accumulating+returning. You don't need to pass around a container.
```
def generate(pre0, pre1, cur_len, max_len):
if (cur_len == max_len-1):
yield "".join((pre0, pre1, "0"))
return
if (pre1 == '1'):
yield from generate(pre0+pre1, '0', cur_len+1, max_len)
else:
yield from generate(pre0+pre1, '0', cur_len+1, max_len)
yield from generate(pre0+pre1, '1', cur_len+1, max_len)
if __name__ == "__main__":
for result in generate("", "", 0, 4):
        print(result)
```
If you're using a python version where `yield from` is not available, replace those lines with:
```
for x in generate(...):
yield x
```
|
If you were doing it in Python 3, you would adapt it as follows:
* Remove the `container` parameter.
* Change all occurrences of `container.append(some_value)` to `yield some_value`.
* Prepend `yield from` to all recursive calls.
To do it in Python 2, you do the same, except that Python 2 doesn’t support `yield from`. Rather than doing `yield from iterable`, you’ll need to use:
```
for item in iterable:
    yield item
```
You’ll then also need to change your call site, removing the `container` argument and instead storing the return value of the call. You’ll also need to iterate over the result, printing the values, as `print` will just give you `<generator object at 0x...>`.
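Putting those steps together, a minimal sketch of the adapted generator (compatible with both Python 2 and 3; the condensed branching is my own rewrite of the original if/else) could look like this:

```python
def generate(pre0, pre1, cur_len, max_len):
    if cur_len == max_len - 1:
        yield "".join((pre0, pre1, "0"))  # force the trailing '0'
        return
    # a '1' may only be followed by '0'; a '0' may be followed by either digit
    for cur_char in (("0",) if pre1 == "1" else ("0", "1")):
        for item in generate(pre0 + pre1, cur_char, cur_len + 1, max_len):
            yield item

if __name__ == "__main__":
    for result in generate("", "", 0, 4):
        print(result)
```

For max_len = 4 this prints the five legal strings from the question one at a time, so only the current string is held in memory rather than the whole result list.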
|
32,171,138
|
I need to generate all possible strings of a certain length X that satisfy the following two rules:
1. Must end with '0'
2. There can't be two or more adjacent '1'
For example, when X = 4, all 'legal' strings are [0000, 0010, 0100, 1000, 1010].
I already wrote a piece of recursive code that simply appends each newly found string to a list.
```
def generate(pre0, pre1, cur_len, max_len, container = []):
if (cur_len == max_len-1):
container.append("".join([pre0, pre1, "0"]))
return
if (pre1 == '1'):
cur_char = '0'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
else:
cur_char = '0'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
cur_char = '1'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
if __name__ == "__main__":
container = []
    generate("", "", 0, 4, container)
print container
```
However, this method won't work once X reaches 100+ because of the memory complexity. I am not familiar with generators in Python, so could anyone here help me figure out how to rewrite this as a generator? Thanks so much!
P.S. I am working on a Project Euler problem, this is not homework.
Update 1:
I am grateful for the first 3 answers. I am using Python 2.7 and can switch to Python 3.4. The reason I am asking for a generator is that I can't possibly hold even just the final result list in memory. A quick mathematical proof shows that there are Fibonacci(X) possible strings for length X, which means I really have to use a generator and filter the results on the fly.
|
2015/08/23
|
[
"https://Stackoverflow.com/questions/32171138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2759486/"
] |
A lame string-based approach: test whether "11" is contained in the formatted string and yield it if it's not (for every even number up to 2^maxlen):
```
def gen(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
for i in range(0, 2**maxlen, 2):
s = pattern.format(i) # not ideal, because we always format to test for "11"
if "11" not in s:
yield s
```
A superior mathematical approach: `M ^ M*2 == M*3` holds exactly when `M` has no two adjacent 1 bits, because only then does the addition `M + 2M` produce no carries and behave like XOR:
```
def gen(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
for i in range(0, 2**maxlen, 2):
if i ^ i*2 == i*3:
yield pattern.format(i)
```
Here's a benchmark for 6 different implementations (Python 3!):
```
from time import clock
from itertools import product
def math_range(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
for i in range(0, 2**maxlen, 2):
if i ^ i*2 == i*3:
yield pattern.format(i)
def math_while(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
maxnum = 2**maxlen - 1
i = 0
while True:
if i ^ i*2 == i*3:
yield pattern.format(i)
if i >= maxnum:
break
i += 2
def itertools_generator(max_len):
return filter(lambda i: '11' not in i, (''.join(i) + '0' for i in product('01', repeat=max_len-1)))
def itertools_list(maxlen):
return list(filter(lambda i: '11' not in i, (''.join(i) + '0' for i in product('01', repeat=maxlen-1))))
def string_based(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
for i in range(0, 2**maxlen, 2):
s = pattern.format(i)
if "11" not in s:
yield s
def generate(pre0, pre1, cur_len, max_len):
if (cur_len == max_len-1):
yield "".join((pre0, pre1, "0"))
return
if (pre1 == '1'):
yield from generate(pre0+pre1, "0", cur_len+1, max_len)
else:
yield from generate(pre0+pre1, "0", cur_len+1, max_len)
yield from generate(pre0+pre1, "1", cur_len+1, max_len)
def string_based_smart(val):
yield from generate("", "", 0, val)
def benchmark(val, *funcs):
for i, func in enumerate(funcs, 1):
start = clock()
for g in func(val):
g
print("{}. {:6.2f} - {}".format(i, clock()-start, func.__name__))
benchmark(24, string_based_smart, math_range, math_while, itertools_generator, itertools_list, string_based)
```
Some numbers for string length = 24 (in seconds):
```
1. 0.24 - string_based_smart
2. 1.73 - math_range
3. 2.59 - math_while
4. 6.95 - itertools_generator
5. 6.78 - itertools_list
6. 6.45 - string_based
```
shx2's algorithm is clearly the winner, followed by math. Pythonic code makes quite a difference if you compare the results of both math approaches (note: ranges are also generators).
Noteworthy: the `itertools_*` functions perform almost equally slow, but `itertools_list` needs a lot more memory to store the list in (~6 MB spike in my test). All other generator-based solutions have a minimal memory footprint, because they only need to store the current state and not the entire result.
None of the shown functions blows up the stack, because they do not use actual recursion. Python [does not optimize tail recursion](https://stackoverflow.com/questions/13591970/does-python-optimize-tail-recursion), thus you need loops and generators.
**//edit:** naive C++ implementation of `math_range` (MSVS 2013):
```
#include "stdafx.h"
#include <iostream>
#include <bitset>
#include <ctime>
#include <fstream>
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
const unsigned __int32 maxlen = 24;
const unsigned __int32 maxnum = 2 << (maxlen - 1);
clock_t begin = clock();
ofstream out;
out.open("log.txt");
if (!out.is_open()){
cout << "Can't write to target";
return 1;
}
for (unsigned __int32 i = 0; i < maxnum; i+=2){
if ((i ^ i * 2) == i * 3){
out << std::bitset<maxlen>(i) << "\n"; // dont use std::endl!
}
}
out.close();
clock_t end = clock();
double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC;
cout << elapsed_secs << endl;
return 0;
}
```
It takes 0.08 seconds(!) for maxlen = 24 (`/Ox`).
An implementation of shx2's algorithm in C++ is non-trivial, because a recursive approach would lead to stack overflow (ha ha), and there's no `yield`. See:
* [Why wasn't yield added to C++0x?](https://stackoverflow.com/questions/3864410/why-wasnt-yield-added-to-c0x)
* <http://www.chiark.greenend.org.uk/~sgtatham/coroutines.html>
* <http://blog.think-async.com/2009/08/secret-sauce-revealed.html>
* <http://www.codeproject.com/Articles/418776/How-to-replace-recursive-functions-using-stack-and>
But if you want raw speed, then there's no way around it.
|
Assuming you're using python version >= 3.4, where `yield from` is available, you can `yield` instead of accumulating+returning. You don't need to pass around a container.
```
def generate(pre0, pre1, cur_len, max_len):
if (cur_len == max_len-1):
yield "".join((pre0, pre1, "0"))
return
if (pre1 == '1'):
yield from generate(pre0+pre1, '0', cur_len+1, max_len)
else:
yield from generate(pre0+pre1, '0', cur_len+1, max_len)
yield from generate(pre0+pre1, '1', cur_len+1, max_len)
if __name__ == "__main__":
for result in generate("", "", 0, 4):
        print(result)
```
If you're using a python version where `yield from` is not available, replace those lines with:
```
for x in generate(...):
yield x
```
|
32,171,138
|
I need to generate all possible strings of a certain length X that satisfy the following two rules:
1. Must end with '0'
2. There can't be two or more adjacent '1'
For example, when X = 4, all 'legal' strings are [0000, 0010, 0100, 1000, 1010].
I already wrote a piece of recursive code that simply appends each newly found string to a list.
```
def generate(pre0, pre1, cur_len, max_len, container = []):
if (cur_len == max_len-1):
container.append("".join([pre0, pre1, "0"]))
return
if (pre1 == '1'):
cur_char = '0'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
else:
cur_char = '0'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
cur_char = '1'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
if __name__ == "__main__":
container = []
    generate("", "", 0, 4, container)
print container
```
However, this method won't work once X reaches 100+ because of the memory complexity. I am not familiar with generators in Python, so could anyone here help me figure out how to rewrite this as a generator? Thanks so much!
P.S. I am working on a Project Euler problem, this is not homework.
Update 1:
I am grateful for the first 3 answers. I am using Python 2.7 and can switch to Python 3.4. The reason I am asking for a generator is that I can't possibly hold even just the final result list in memory. A quick mathematical proof shows that there are Fibonacci(X) possible strings for length X, which means I really have to use a generator and filter the results on the fly.
|
2015/08/23
|
[
"https://Stackoverflow.com/questions/32171138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2759486/"
] |
You can use a `filter` with [`itertools.product`](https://docs.python.org/3/library/itertools.html#itertools.product)
```
def generate(max_len):
return list(filter(lambda i: '11' not in i, (''.join(i) + '0' for i in itertools.product('01', repeat=max_len-1))))
```
This uses generators the entire time until the return, which finally creates a `list`. The `filter` will act on each of the strings created by `itertools.product` as they are produced.
```
>>> generate(5)
['00000', '00010', '00100', '01000', '01010', '10000', '10010', '10100']
```
**Edit** To use this function as a generator expression, just drop the `list` and switch `filter` to [`itertools.ifilter`](https://docs.python.org/2/library/itertools.html#itertools.ifilter)
```
def generate(max_len):
return itertools.ifilter(lambda i: '11' not in i, (''.join(i) + '0' for i in itertools.product('01', repeat=max_len-1)))
for s in generate(10):
# do something with s
```
|
If you were doing it in Python 3, you would adapt it as follows:
* Remove the `container` parameter.
* Change all occurrences of `container.append(some_value)` to `yield some_value`.
* Prepend `yield from` to all recursive calls.
To do it in Python 2, you do the same, except that Python 2 doesn’t support `yield from`. Rather than doing `yield from iterable`, you’ll need to use:
```
for item in iterable:
    yield item
```
You’ll then also need to change your call site, removing the `container` argument and instead storing the return value of the call. You’ll also need to iterate over the result, printing the values, as `print` will just give you `<generator object at 0x...>`.
|
32,171,138
|
I need to generate all possible strings of a certain length X that satisfy the following two rules:
1. Must end with '0'
2. There can't be two or more adjacent '1'
For example, when X = 4, all 'legal' strings are [0000, 0010, 0100, 1000, 1010].
I already wrote a piece of recursive code that simply appends each newly found string to a list.
```
def generate(pre0, pre1, cur_len, max_len, container = []):
if (cur_len == max_len-1):
container.append("".join([pre0, pre1, "0"]))
return
if (pre1 == '1'):
cur_char = '0'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
else:
cur_char = '0'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
cur_char = '1'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
if __name__ == "__main__":
container = []
    generate("", "", 0, 4, container)
print container
```
However, this method won't work once X reaches 100+ because of the memory complexity. I am not familiar with generators in Python, so could anyone here help me figure out how to rewrite this as a generator? Thanks so much!
P.S. I am working on a Project Euler problem, this is not homework.
Update 1:
I am grateful for the first 3 answers. I am using Python 2.7 and can switch to Python 3.4. The reason I am asking for a generator is that I can't possibly hold even just the final result list in memory. A quick mathematical proof shows that there are Fibonacci(X) possible strings for length X, which means I really have to use a generator and filter the results on the fly.
|
2015/08/23
|
[
"https://Stackoverflow.com/questions/32171138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2759486/"
] |
A lame string-based approach: test whether "11" is contained in the formatted string and yield it if it's not (for every even number up to 2^maxlen):
```
def gen(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
for i in range(0, 2**maxlen, 2):
s = pattern.format(i) # not ideal, because we always format to test for "11"
if "11" not in s:
yield s
```
A superior mathematical approach: `M ^ M*2 == M*3` holds exactly when `M` has no two adjacent 1 bits, because only then does the addition `M + 2M` produce no carries and behave like XOR:
```
def gen(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
for i in range(0, 2**maxlen, 2):
if i ^ i*2 == i*3:
yield pattern.format(i)
```
Here's a benchmark for 6 different implementations (Python 3!):
```
from time import clock
from itertools import product
def math_range(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
for i in range(0, 2**maxlen, 2):
if i ^ i*2 == i*3:
yield pattern.format(i)
def math_while(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
maxnum = 2**maxlen - 1
i = 0
while True:
if i ^ i*2 == i*3:
yield pattern.format(i)
if i >= maxnum:
break
i += 2
def itertools_generator(max_len):
return filter(lambda i: '11' not in i, (''.join(i) + '0' for i in product('01', repeat=max_len-1)))
def itertools_list(maxlen):
return list(filter(lambda i: '11' not in i, (''.join(i) + '0' for i in product('01', repeat=maxlen-1))))
def string_based(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
for i in range(0, 2**maxlen, 2):
s = pattern.format(i)
if "11" not in s:
yield s
def generate(pre0, pre1, cur_len, max_len):
if (cur_len == max_len-1):
yield "".join((pre0, pre1, "0"))
return
if (pre1 == '1'):
yield from generate(pre0+pre1, "0", cur_len+1, max_len)
else:
yield from generate(pre0+pre1, "0", cur_len+1, max_len)
yield from generate(pre0+pre1, "1", cur_len+1, max_len)
def string_based_smart(val):
yield from generate("", "", 0, val)
def benchmark(val, *funcs):
for i, func in enumerate(funcs, 1):
start = clock()
for g in func(val):
g
print("{}. {:6.2f} - {}".format(i, clock()-start, func.__name__))
benchmark(24, string_based_smart, math_range, math_while, itertools_generator, itertools_list, string_based)
```
Some numbers for string length = 24 (in seconds):
```
1. 0.24 - string_based_smart
2. 1.73 - math_range
3. 2.59 - math_while
4. 6.95 - itertools_generator
5. 6.78 - itertools_list
6. 6.45 - string_based
```
shx2's algorithm is clearly the winner, followed by math. Pythonic code makes quite a difference if you compare the results of both math approaches (note: ranges are also generators).
Noteworthy: the `itertools_*` functions perform almost equally slow, but `itertools_list` needs a lot more memory to store the list in (~6 MB spike in my test). All other generator-based solutions have a minimal memory footprint, because they only need to store the current state and not the entire result.
None of the shown functions blows up the stack, because they do not use actual recursion. Python [does not optimize tail recursion](https://stackoverflow.com/questions/13591970/does-python-optimize-tail-recursion), thus you need loops and generators.
**//edit:** naive C++ implementation of `math_range` (MSVS 2013):
```
#include "stdafx.h"
#include <iostream>
#include <bitset>
#include <ctime>
#include <fstream>
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
const unsigned __int32 maxlen = 24;
const unsigned __int32 maxnum = 2 << (maxlen - 1);
clock_t begin = clock();
ofstream out;
out.open("log.txt");
if (!out.is_open()){
cout << "Can't write to target";
return 1;
}
for (unsigned __int32 i = 0; i < maxnum; i+=2){
if ((i ^ i * 2) == i * 3){
out << std::bitset<maxlen>(i) << "\n"; // dont use std::endl!
}
}
out.close();
clock_t end = clock();
double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC;
cout << elapsed_secs << endl;
return 0;
}
```
It takes 0.08 seconds(!) for maxlen = 24 (`/Ox`).
An implementation of shx2's algorithm in C++ is non-trivial, because a recursive approach would lead to stack overflow (ha ha), and there's no `yield`. See:
* [Why wasn't yield added to C++0x?](https://stackoverflow.com/questions/3864410/why-wasnt-yield-added-to-c0x)
* <http://www.chiark.greenend.org.uk/~sgtatham/coroutines.html>
* <http://blog.think-async.com/2009/08/secret-sauce-revealed.html>
* <http://www.codeproject.com/Articles/418776/How-to-replace-recursive-functions-using-stack-and>
But if you want raw speed, then there's no way around it.
|
If you were doing it in Python 3, you would adapt it as follows:
* Remove the `container` parameter.
* Change all occurrences of `container.append(some_value)` to `yield some_value`.
* Prepend `yield from` to all recursive calls.
To do it in Python 2, you do the same, except that Python 2 doesn’t support `yield from`. Rather than doing `yield from iterable`, you’ll need to use:
```
for item in iterable:
    yield item
```
You’ll then also need to change your call site, removing the `container` argument and instead storing the return value of the call. You’ll also need to iterate over the result, printing the values, as `print` will just give you `<generator object at 0x...>`.
|
32,171,138
|
I need to generate all possible strings of a certain length X that satisfy the following two rules:
1. Must end with '0'
2. There can't be two or more adjacent '1'
For example, when X = 4, all 'legal' strings are [0000, 0010, 0100, 1000, 1010].
I already wrote a piece of recursive code that simply appends each newly found string to a list.
```
def generate(pre0, pre1, cur_len, max_len, container = []):
if (cur_len == max_len-1):
container.append("".join([pre0, pre1, "0"]))
return
if (pre1 == '1'):
cur_char = '0'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
else:
cur_char = '0'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
cur_char = '1'
generate(pre0+pre1, cur_char, cur_len+1, max_len, container)
if __name__ == "__main__":
container = []
    generate("", "", 0, 4, container)
print container
```
However, this method won't work once X reaches 100+ because of the memory complexity. I am not familiar with generators in Python, so could anyone here help me figure out how to rewrite this as a generator? Thanks so much!
P.S. I am working on a Project Euler problem, this is not homework.
Update 1:
I am grateful for the first 3 answers. I am using Python 2.7 and can switch to Python 3.4. The reason I am asking for a generator is that I can't possibly hold even just the final result list in memory. A quick mathematical proof shows that there are Fibonacci(X) possible strings for length X, which means I really have to use a generator and filter the results on the fly.
|
2015/08/23
|
[
"https://Stackoverflow.com/questions/32171138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2759486/"
] |
A lame string-based approach: test whether "11" is contained in the formatted string and yield it if it's not (for every even number up to 2^maxlen):
```
def gen(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
for i in range(0, 2**maxlen, 2):
s = pattern.format(i) # not ideal, because we always format to test for "11"
if "11" not in s:
yield s
```
A superior mathematical approach: `M ^ M*2 == M*3` holds exactly when `M` has no two adjacent 1 bits, because only then does the addition `M + 2M` produce no carries and behave like XOR:
```
def gen(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
for i in range(0, 2**maxlen, 2):
if i ^ i*2 == i*3:
yield pattern.format(i)
```
Here's a benchmark for 6 different implementations (Python 3!):
```
from time import clock
from itertools import product
def math_range(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
for i in range(0, 2**maxlen, 2):
if i ^ i*2 == i*3:
yield pattern.format(i)
def math_while(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
maxnum = 2**maxlen - 1
i = 0
while True:
if i ^ i*2 == i*3:
yield pattern.format(i)
if i >= maxnum:
break
i += 2
def itertools_generator(max_len):
return filter(lambda i: '11' not in i, (''.join(i) + '0' for i in product('01', repeat=max_len-1)))
def itertools_list(maxlen):
return list(filter(lambda i: '11' not in i, (''.join(i) + '0' for i in product('01', repeat=maxlen-1))))
def string_based(maxlen):
pattern = "{{:0{}b}}".format(maxlen)
for i in range(0, 2**maxlen, 2):
s = pattern.format(i)
if "11" not in s:
yield s
def generate(pre0, pre1, cur_len, max_len):
if (cur_len == max_len-1):
yield "".join((pre0, pre1, "0"))
return
if (pre1 == '1'):
yield from generate(pre0+pre1, "0", cur_len+1, max_len)
else:
yield from generate(pre0+pre1, "0", cur_len+1, max_len)
yield from generate(pre0+pre1, "1", cur_len+1, max_len)
def string_based_smart(val):
yield from generate("", "", 0, val)
def benchmark(val, *funcs):
for i, func in enumerate(funcs, 1):
start = clock()
for g in func(val):
g
print("{}. {:6.2f} - {}".format(i, clock()-start, func.__name__))
benchmark(24, string_based_smart, math_range, math_while, itertools_generator, itertools_list, string_based)
```
Some numbers for string length = 24 (in seconds):
```
1. 0.24 - string_based_smart
2. 1.73 - math_range
3. 2.59 - math_while
4. 6.95 - itertools_generator
5. 6.78 - itertools_list
6. 6.45 - string_based
```
shx2's algorithm is clearly the winner, followed by math. Pythonic code makes quite a difference if you compare the results of both math approaches (note: ranges are also generators).
Noteworthy: the `itertools_*` functions perform almost equally slow, but `itertools_list` needs a lot more memory to store the list in (~6 MB spike in my test). All other generator-based solutions have a minimal memory footprint, because they only need to store the current state and not the entire result.
None of the shown functions blows up the stack, because they do not use actual recursion. Python [does not optimize tail recursion](https://stackoverflow.com/questions/13591970/does-python-optimize-tail-recursion), thus you need loops and generators.
**//edit:** naive C++ implementation of `math_range` (MSVS 2013):
```
#include "stdafx.h"
#include <iostream>
#include <bitset>
#include <ctime>
#include <fstream>
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
const unsigned __int32 maxlen = 24;
const unsigned __int32 maxnum = 2 << (maxlen - 1);
clock_t begin = clock();
ofstream out;
out.open("log.txt");
if (!out.is_open()){
cout << "Can't write to target";
return 1;
}
for (unsigned __int32 i = 0; i < maxnum; i+=2){
if ((i ^ i * 2) == i * 3){
out << std::bitset<maxlen>(i) << "\n"; // dont use std::endl!
}
}
out.close();
clock_t end = clock();
double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC;
cout << elapsed_secs << endl;
return 0;
}
```
It takes 0.08 seconds(!) for maxlen = 24 (`/Ox`).
An implementation of shx2's algorithm in C++ is non-trivial, because a recursive approach would lead to stack overflow (ha ha), and there's no `yield`. See:
* [Why wasn't yield added to C++0x?](https://stackoverflow.com/questions/3864410/why-wasnt-yield-added-to-c0x)
* <http://www.chiark.greenend.org.uk/~sgtatham/coroutines.html>
* <http://blog.think-async.com/2009/08/secret-sauce-revealed.html>
* <http://www.codeproject.com/Articles/418776/How-to-replace-recursive-functions-using-stack-and>
But if you want raw speed, then there's no way around it.
|
You can use a `filter` with [`itertools.product`](https://docs.python.org/3/library/itertools.html#itertools.product)
```
def generate(max_len):
return list(filter(lambda i: '11' not in i, (''.join(i) + '0' for i in itertools.product('01', repeat=max_len-1))))
```
This uses generators the entire time until the return, which finally creates a `list`. The `filter` will act on each of the strings created by `itertools.product` as they are produced.
```
>>> generate(5)
['00000', '00010', '00100', '01000', '01010', '10000', '10010', '10100']
```
**Edit** To use this function as a generator expression, just drop the `list` and switch `filter` to [`itertools.ifilter`](https://docs.python.org/2/library/itertools.html#itertools.ifilter)
```
def generate(max_len):
return itertools.ifilter(lambda i: '11' not in i, (''.join(i) + '0' for i in itertools.product('01', repeat=max_len-1)))
for s in generate(10):
# do something with s
```
|
22,398,881
|
I have a Django app on Heroku. I set up another app on the same Heroku account.
Now I want another instance of the first app.
I just cloned the first app and pushed it to the newly created app, but it's not working.
Got this error when doing `git push heroku master`
```
Running setup.py install for distribute
Before install bootstrap.
Scanning installed packages
Setuptools installation detected at /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg
Egg installation
Patching...
Renaming /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg into /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg.OLD.1394782343.31
Patched done.
Relaunching...
Traceback (most recent call last):
File "<string>", line 1, in <module>
NameError: name 'install' is not defined
Complete output from command /app/.heroku/python/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_u57096/distribute/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-TYqPAN-record/install-record.txt --single-version-externally-managed --compile:
Before install bootstrap.
Scanning installed packages
Setuptools installation detected at /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg
Egg installation
Patching...
Renaming /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg into /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg.OLD.1394782343.31
Patched done.
Relaunching...
Traceback (most recent call last):
File "<string>", line 1, in <module>
NameError: name 'install' is not defined
----------------------------------------
Cleaning up...
Command /app/.heroku/python/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_u57096/distribute/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-TYqPAN-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_u57096/distribute
Storing debug log for failure in /app/.pip/pip.log
! Push rejected, failed to compile Python app
To git@heroku.com:gentle-plateau-6569.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'git@heroku.com:gentle-plateau-6569.git'
```
my requirements.txt file is
```
Django==1.4
South==0.7.5
boto==2.5.2
distribute==0.6.27
dj-database-url==0.2.1
django-debug-toolbar==0.9.4
django-flash==1.8
django-mailgun==0.2.1
django-registration==0.8
django-session-security==2.0.3
django-sslify==0.2
django-storages==1.1.5
gunicorn==0.14.6
ipdb==0.7
ipython==0.13
newrelic==1.6.0.13
psycopg2==2.4.5
raven==2.0.3
requests==0.13.6
simplejson==2.4.0
wsgiref==0.1.2
xlrd==0.7.9
xlwt==0.7.4
```
please help me to get rid of this
|
2014/03/14
|
[
"https://Stackoverflow.com/questions/22398881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2088432/"
] |
This appears to be related to an issue [here](https://bitbucket.org/tarek/distribute/issue/91/install-glitch-when-using-pip-virtualenv#comment-5782834).
In other words, `distribute` has been deprecated and is being replaced with `setuptools`. Try replacing the line `distribute==0.6.27` with `setuptools>=0.7` as per the recommendation in the link.
|
virtualenv is a tool to create isolated Python environments.
You will need to add the following to fix `command python setup.py egg_info failed with error code 1`; add this line to your requirements.txt:
>
> virtualenv==12.0.7
>
>
>
|
45,335,659
|
Let's say I have the following simple situation:
```
import pandas as pd
def multiply(row):
global results
results.append(row[0] * row[1])
def main():
results = []
df = pd.DataFrame([{'a': 1, 'b': 2}, {'a': 3, 'b': 4}, {'a': 5, 'b': 6}])
df.apply(multiply, axis=1)
print(results)
if __name__ == '__main__':
main()
```
This results in the following traceback:
```
Traceback (most recent call last):
File "<ipython-input-2-58ca95c5b364>", line 1, in <module>
main()
File "<ipython-input-1-9bb1bda9e141>", line 11, in main
df.apply(multiply, axis=1)
File "C:\Users\bbritten\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\frame.py", line 4262, in apply
ignore_failures=ignore_failures)
File "C:\Users\bbritten\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\frame.py", line 4358, in _apply_standard
results[i] = func(v)
File "<ipython-input-1-9bb1bda9e141>", line 5, in multiply
results.append(row[0] * row[1])
NameError: ("name 'results' is not defined", 'occurred at index 0')
```
I know that I can move `results = []` to the `if` statement to get this example to work, but is there a way to keep the structure I have now and make it work?
|
2017/07/26
|
[
"https://Stackoverflow.com/questions/45335659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2789863/"
] |
You must declare results outside the functions like:
```
import pandas as pd
results = []
def multiply(row):
# the rest of your code...
```
### UPDATE
Also note that `list` in Python is mutable, hence you don't need to declare it with `global` at the beginning of the function as long as you only mutate it (e.g. with `append`); `global` is only required when you rebind the name. Example:
```
def multiply(row):
# global results -> This is not necessary!
results.append(row[0] * row[1])
```
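A minimal standalone sketch (plain Python, no pandas needed) of the mutation-versus-rebinding distinction described above:

```python
# Appending mutates the existing list object, so no `global` is needed;
# rebinding the module-level name, however, does require `global`.
results = []

def append_ok(value):
    # Mutation: works without a `global` declaration.
    results.append(value)

def rebind_needs_global(value):
    # Rebinding: without `global`, this would raise UnboundLocalError.
    global results
    results = results + [value]

append_ok(2)
rebind_needs_global(12)
print(results)  # [2, 12]
```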
|
You must move `results` outside of the function; I don't think there is any way to keep the exact structure without moving the variable out.
Alternatively, you can pass `results` as a parameter to the `multiply` method.
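A sketch of the parameter-passing approach, shown here with plain tuples so it runs without pandas; note that pandas' `DataFrame.apply` also accepts extra positional arguments for the function via its `args` parameter, e.g. `df.apply(multiply, axis=1, args=(results,))`.

```python
# Pass the accumulator explicitly instead of relying on a global name.
def multiply(row, results):
    results.append(row[0] * row[1])

results = []
for row in [(1, 2), (3, 4), (5, 6)]:  # stand-in for the DataFrame rows
    multiply(row, results)

print(results)  # [2, 12, 30]
```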
|
49,495,211
|
When I run **virt-manager --no-fork** on macOS 10.13 High Sierra I get the following:
```
Traceback (most recent call last):
File "/usr/local/Cellar/virt-manager/1.5.0/libexec/share/virt-manager/virt-manager", line 31, in <module>
import gi
ImportError: No module named gi
```
python version 2.7.6 on macOS
Tried multiple solutions (by googling), but none fixed the issue. Any ideas how to solve the "ImportError: No module named gi" error?
|
2018/03/26
|
[
"https://Stackoverflow.com/questions/49495211",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2575019/"
] |
On Debian 9.7 I had the same problem; I fixed it with:
```sh
apt install python-gobject-2-dev python-gobject-dev libvirt-glib-1.0-dev python3-libxml2 libosinfo-1.0-dev
```
|
(Arch Linux) I have Anaconda installed, which breaks virt-manager, but there is no need to uninstall it: just rename the anaconda folder to anything else and virt-manager works again. When you need Anaconda, undo the rename. Anaconda modifies the PATH environment variable, and when its folder is not present (the renaming trick) the system looks in the standard location. I think the problem could be related.
|
4,356,329
|
I have a generated file with thousands of lines like the following:
`CODE,XXX,DATE,20101201,TIME,070400,CONDITION_CODES,LTXT,PRICE,999.0000,QUANTITY,100,TSN,1510000001`
Some lines have more fields and others have fewer, but all follow the same pattern of key-value pairs and each line has a TSN field.
When doing some analysis on the file, I wrote a loop like the following to read the file into a dictionary:
```
#!/usr/bin/env python
from sys import argv
records = {}
for line in open(argv[1]):
fields = line.strip().split(',')
record = dict(zip(fields[::2], fields[1::2]))
records[record['TSN']] = record
print 'Found %d records in the file.' % len(records)
```
...which is fine and does exactly what I want it to (the `print` is just a trivial example).
However, it doesn't feel particularly "pythonic" to me and the line with:
```
dict(zip(fields[::2], fields[1::2]))
```
Which just feels "clunky" (how many times does it iterate over the fields?).
Is there a better way of doing this in Python 2.6 with just the standard modules to hand?
|
2010/12/04
|
[
"https://Stackoverflow.com/questions/4356329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78845/"
] |
Not so much better as just [more efficient...](https://stackoverflow.com/questions/2631189/python-every-other-element-idiom/2631256#2631256)
[Full explanation](https://stackoverflow.com/questions/2233204/how-does-zipitersn-work-in-python)
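The idiom behind the linked answers, for reference: a single iterator passed to `zip()` twice is consumed alternately, pairing up consecutive elements in one pass (no slicing, no double iteration).

```python
# One iterator, consumed twice by zip(): each output tuple takes the next
# two consecutive items, giving key/value pairs in a single pass.
line = 'CODE,XXX,DATE,20101201,TSN,1510000001'
it = iter(line.split(','))
record = dict(zip(it, it))
print(record['TSN'])  # 1510000001
```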
|
```
import itertools
def grouper(n, iterable, fillvalue=None):
"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
args = [iter(iterable)] * n
return itertools.izip_longest(fillvalue=fillvalue, *args)
record = dict(grouper(2, line.strip().split(",")))
```
[source](http://docs.python.org/library/itertools.html)
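For completeness, the same recipe in Python 3, where `izip_longest` was renamed `zip_longest` (a sketch, not part of the original answer):

```python
# Python 3 version of the grouper recipe above.
from itertools import zip_longest

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return zip_longest(fillvalue=fillvalue, *args)

line = 'CODE,XXX,TSN,1510000001'
record = dict(grouper(2, line.strip().split(",")))
print(record)  # {'CODE': 'XXX', 'TSN': '1510000001'}
```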
|
4,356,329
|
I have a generated file with thousands of lines like the following:
`CODE,XXX,DATE,20101201,TIME,070400,CONDITION_CODES,LTXT,PRICE,999.0000,QUANTITY,100,TSN,1510000001`
Some lines have more fields and others have fewer, but all follow the same pattern of key-value pairs and each line has a TSN field.
When doing some analysis on the file, I wrote a loop like the following to read the file into a dictionary:
```
#!/usr/bin/env python
from sys import argv
records = {}
for line in open(argv[1]):
fields = line.strip().split(',')
record = dict(zip(fields[::2], fields[1::2]))
records[record['TSN']] = record
print 'Found %d records in the file.' % len(records)
```
...which is fine and does exactly what I want it to (the `print` is just a trivial example).
However, it doesn't feel particularly "pythonic" to me and the line with:
```
dict(zip(fields[::2], fields[1::2]))
```
Which just feels "clunky" (how many times does it iterate over the fields?).
Is there a better way of doing this in Python 2.6 with just the standard modules to hand?
|
2010/12/04
|
[
"https://Stackoverflow.com/questions/4356329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78845/"
] |
In Python 2 you could use `izip` in the `itertools` module and the magic of generator objects to write your own function to simplify the creation of pairs of values for the `dict` records. I got the idea for `pairwise()` from a similarly named (although functionally different) [recipe](https://docs.python.org/2/library/itertools.html#recipes) in the Python 2 `itertools` docs.
To use the approach in Python 3, you can just use plain `zip()` since it does what `izip()` did in Python 2 resulting in the latter's removal from `itertools` — the example below addresses this and should work in both versions.
```
try:
from itertools import izip
except ImportError: # Python 3
izip = zip
def pairwise(iterable):
"s -> (s0,s1), (s2,s3), (s4, s5), ..."
a = iter(iterable)
return izip(a, a)
```
Which can be used like this in your file reading `for` loop:
```
from sys import argv
records = {}
for line in open(argv[1]):
fields = (field.strip() for field in line.split(',')) # generator expr
record = dict(pairwise(fields))
records[record['TSN']] = record
print('Found %d records in the file.' % len(records))
```
**But wait, there's more!**
It's possible to create a generalized version I'll call `grouper()`, which again corresponds to a similarly named `itertools` recipe (which is listed right below `pairwise()`):
```
def grouper(n, iterable):
"s -> (s0,s1,...sn-1), (sn,sn+1,...s2n-1), (s2n,s2n+1,...s3n-1), ..."
return izip(*[iter(iterable)]*n)
```
Which could be used like this in your `for` loop:
```
record = dict(grouper(2, fields))
```
Of course, for specific cases like this, it's easy to use `functools.partial()` and create a similar `pairwise()` function with it (which will work in both Python 2 & 3):
```
import functools
pairwise = functools.partial(grouper, 2)
```
**Postscript**
Unless there's a really huge number of fields, you could instead create an actual sequence out of the pairs of line items (rather than using a [generator expression](https://docs.python.org/3/reference/expressions.html#generator-expressions), which has no `len()`):
```
fields = tuple(field.strip() for field in line.split(','))
```
The advantage being that it would allow the grouping to be done using simple slicing:
```
try:
xrange
except NameError: # Python 3
xrange = range
def grouper(n, sequence):
for i in xrange(0, len(sequence), n):
yield sequence[i:i+n]
pairwise = functools.partial(grouper, 2)
```
|
Not so much better as just [more efficient...](https://stackoverflow.com/questions/2631189/python-every-other-element-idiom/2631256#2631256)
[Full explanation](https://stackoverflow.com/questions/2233204/how-does-zipitersn-work-in-python)
|
4,356,329
|
I have a generated file with thousands of lines like the following:
`CODE,XXX,DATE,20101201,TIME,070400,CONDITION_CODES,LTXT,PRICE,999.0000,QUANTITY,100,TSN,1510000001`
Some lines have more fields and others have fewer, but all follow the same pattern of key-value pairs and each line has a TSN field.
When doing some analysis on the file, I wrote a loop like the following to read the file into a dictionary:
```
#!/usr/bin/env python
from sys import argv
records = {}
for line in open(argv[1]):
fields = line.strip().split(',')
record = dict(zip(fields[::2], fields[1::2]))
records[record['TSN']] = record
print 'Found %d records in the file.' % len(records)
```
...which is fine and does exactly what I want it to (the `print` is just a trivial example).
However, it doesn't feel particularly "pythonic" to me and the line with:
```
dict(zip(fields[::2], fields[1::2]))
```
Which just feels "clunky" (how many times does it iterate over the fields?).
Is there a better way of doing this in Python 2.6 with just the standard modules to hand?
|
2010/12/04
|
[
"https://Stackoverflow.com/questions/4356329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78845/"
] |
Not so much better as just [more efficient...](https://stackoverflow.com/questions/2631189/python-every-other-element-idiom/2631256#2631256)
[Full explanation](https://stackoverflow.com/questions/2233204/how-does-zipitersn-work-in-python)
|
If we're going to abstract it into a function anyway, it's not too hard to write "from scratch":
```
def pairs(iterable):
iterator = iter(iterable)
while True:
        try: yield (next(iterator), next(iterator))
        except StopIteration: return
```
robert's recipe version definitely wins points for flexibility, though.
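A quick usage sketch of the generator above (written with the `next()` builtin, which also exists in Python 2.6, and catching only `StopIteration` rather than a bare `except`):

```python
# Pair up consecutive items from an iterable, stopping cleanly at the end.
def pairs(iterable):
    iterator = iter(iterable)
    while True:
        try:
            yield (next(iterator), next(iterator))
        except StopIteration:
            return

fields = ['CODE', 'XXX', 'TSN', '1510000001']
record = dict(pairs(fields))
print(record)  # {'CODE': 'XXX', 'TSN': '1510000001'}
```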
|
4,356,329
|
I have a generated file with thousands of lines like the following:
`CODE,XXX,DATE,20101201,TIME,070400,CONDITION_CODES,LTXT,PRICE,999.0000,QUANTITY,100,TSN,1510000001`
Some lines have more fields and others have fewer, but all follow the same pattern of key-value pairs and each line has a TSN field.
When doing some analysis on the file, I wrote a loop like the following to read the file into a dictionary:
```
#!/usr/bin/env python
from sys import argv
records = {}
for line in open(argv[1]):
fields = line.strip().split(',')
record = dict(zip(fields[::2], fields[1::2]))
records[record['TSN']] = record
print 'Found %d records in the file.' % len(records)
```
...which is fine and does exactly what I want it to (the `print` is just a trivial example).
However, it doesn't feel particularly "pythonic" to me and the line with:
```
dict(zip(fields[::2], fields[1::2]))
```
Which just feels "clunky" (how many times does it iterate over the fields?).
Is there a better way of doing this in Python 2.6 with just the standard modules to hand?
|
2010/12/04
|
[
"https://Stackoverflow.com/questions/4356329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78845/"
] |
In Python 2 you could use `izip` in the `itertools` module and the magic of generator objects to write your own function to simplify the creation of pairs of values for the `dict` records. I got the idea for `pairwise()` from a similarly named (although functionally different) [recipe](https://docs.python.org/2/library/itertools.html#recipes) in the Python 2 `itertools` docs.
To use the approach in Python 3, you can just use plain `zip()` since it does what `izip()` did in Python 2 resulting in the latter's removal from `itertools` — the example below addresses this and should work in both versions.
```
try:
from itertools import izip
except ImportError: # Python 3
izip = zip
def pairwise(iterable):
"s -> (s0,s1), (s2,s3), (s4, s5), ..."
a = iter(iterable)
return izip(a, a)
```
Which can be used like this in your file reading `for` loop:
```
from sys import argv
records = {}
for line in open(argv[1]):
fields = (field.strip() for field in line.split(',')) # generator expr
record = dict(pairwise(fields))
records[record['TSN']] = record
print('Found %d records in the file.' % len(records))
```
**But wait, there's more!**
It's possible to create a generalized version I'll call `grouper()`, which again corresponds to a similarly named `itertools` recipe (which is listed right below `pairwise()`):
```
def grouper(n, iterable):
"s -> (s0,s1,...sn-1), (sn,sn+1,...s2n-1), (s2n,s2n+1,...s3n-1), ..."
return izip(*[iter(iterable)]*n)
```
Which could be used like this in your `for` loop:
```
record = dict(grouper(2, fields))
```
Of course, for specific cases like this, it's easy to use `functools.partial()` and create a similar `pairwise()` function with it (which will work in both Python 2 & 3):
```
import functools
pairwise = functools.partial(grouper, 2)
```
**Postscript**
Unless there's a really huge number of fields, you could instead create an actual sequence out of the pairs of line items (rather than using a [generator expression](https://docs.python.org/3/reference/expressions.html#generator-expressions), which has no `len()`):
```
fields = tuple(field.strip() for field in line.split(','))
```
The advantage being that it would allow the grouping to be done using simple slicing:
```
try:
xrange
except NameError: # Python 3
xrange = range
def grouper(n, sequence):
for i in xrange(0, len(sequence), n):
yield sequence[i:i+n]
pairwise = functools.partial(grouper, 2)
```
|
```
import itertools
def grouper(n, iterable, fillvalue=None):
"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
args = [iter(iterable)] * n
return itertools.izip_longest(fillvalue=fillvalue, *args)
record = dict(grouper(2, line.strip().split(",")))
```
[source](http://docs.python.org/library/itertools.html)
|
4,356,329
|
I have a generated file with thousands of lines like the following:
`CODE,XXX,DATE,20101201,TIME,070400,CONDITION_CODES,LTXT,PRICE,999.0000,QUANTITY,100,TSN,1510000001`
Some lines have more fields and others have fewer, but all follow the same pattern of key-value pairs and each line has a TSN field.
When doing some analysis on the file, I wrote a loop like the following to read the file into a dictionary:
```
#!/usr/bin/env python
from sys import argv
records = {}
for line in open(argv[1]):
fields = line.strip().split(',')
record = dict(zip(fields[::2], fields[1::2]))
records[record['TSN']] = record
print 'Found %d records in the file.' % len(records)
```
...which is fine and does exactly what I want it to (the `print` is just a trivial example).
However, it doesn't feel particularly "pythonic" to me and the line with:
```
dict(zip(fields[::2], fields[1::2]))
```
Which just feels "clunky" (how many times does it iterate over the fields?).
Is there a better way of doing this in Python 2.6 with just the standard modules to hand?
|
2010/12/04
|
[
"https://Stackoverflow.com/questions/4356329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78845/"
] |
```
import itertools
def grouper(n, iterable, fillvalue=None):
"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
args = [iter(iterable)] * n
return itertools.izip_longest(fillvalue=fillvalue, *args)
record = dict(grouper(2, line.strip().split(",")))
```
[source](http://docs.python.org/library/itertools.html)
|
If we're going to abstract it into a function anyway, it's not too hard to write "from scratch":
```
def pairs(iterable):
iterator = iter(iterable)
while True:
        try: yield (next(iterator), next(iterator))
        except StopIteration: return
```
robert's recipe version definitely wins points for flexibility, though.
|
4,356,329
|
I have a generated file with thousands of lines like the following:
`CODE,XXX,DATE,20101201,TIME,070400,CONDITION_CODES,LTXT,PRICE,999.0000,QUANTITY,100,TSN,1510000001`
Some lines have more fields and others have fewer, but all follow the same pattern of key-value pairs and each line has a TSN field.
When doing some analysis on the file, I wrote a loop like the following to read the file into a dictionary:
```
#!/usr/bin/env python
from sys import argv
records = {}
for line in open(argv[1]):
fields = line.strip().split(',')
record = dict(zip(fields[::2], fields[1::2]))
records[record['TSN']] = record
print 'Found %d records in the file.' % len(records)
```
...which is fine and does exactly what I want it to (the `print` is just a trivial example).
However, it doesn't feel particularly "pythonic" to me and the line with:
```
dict(zip(fields[::2], fields[1::2]))
```
Which just feels "clunky" (how many times does it iterate over the fields?).
Is there a better way of doing this in Python 2.6 with just the standard modules to hand?
|
2010/12/04
|
[
"https://Stackoverflow.com/questions/4356329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78845/"
] |
In Python 2 you could use `izip` in the `itertools` module and the magic of generator objects to write your own function to simplify the creation of pairs of values for the `dict` records. I got the idea for `pairwise()` from a similarly named (although functionally different) [recipe](https://docs.python.org/2/library/itertools.html#recipes) in the Python 2 `itertools` docs.
To use the approach in Python 3, you can just use plain `zip()` since it does what `izip()` did in Python 2 resulting in the latter's removal from `itertools` — the example below addresses this and should work in both versions.
```
try:
from itertools import izip
except ImportError: # Python 3
izip = zip
def pairwise(iterable):
"s -> (s0,s1), (s2,s3), (s4, s5), ..."
a = iter(iterable)
return izip(a, a)
```
Which can be used like this in your file reading `for` loop:
```
from sys import argv
records = {}
for line in open(argv[1]):
fields = (field.strip() for field in line.split(',')) # generator expr
record = dict(pairwise(fields))
records[record['TSN']] = record
print('Found %d records in the file.' % len(records))
```
**But wait, there's more!**
It's possible to create a generalized version I'll call `grouper()`, which again corresponds to a similarly named `itertools` recipe (which is listed right below `pairwise()`):
```
def grouper(n, iterable):
"s -> (s0,s1,...sn-1), (sn,sn+1,...s2n-1), (s2n,s2n+1,...s3n-1), ..."
return izip(*[iter(iterable)]*n)
```
Which could be used like this in your `for` loop:
```
record = dict(grouper(2, fields))
```
Of course, for specific cases like this, it's easy to use `functools.partial()` and create a similar `pairwise()` function with it (which will work in both Python 2 & 3):
```
import functools
pairwise = functools.partial(grouper, 2)
```
**Postscript**
Unless there's a really huge number of fields, you could instead create an actual sequence out of the pairs of line items (rather than using a [generator expression](https://docs.python.org/3/reference/expressions.html#generator-expressions), which has no `len()`):
```
fields = tuple(field.strip() for field in line.split(','))
```
The advantage being that it would allow the grouping to be done using simple slicing:
```
try:
xrange
except NameError: # Python 3
xrange = range
def grouper(n, sequence):
for i in xrange(0, len(sequence), n):
yield sequence[i:i+n]
pairwise = functools.partial(grouper, 2)
```
|
If we're going to abstract it into a function anyway, it's not too hard to write "from scratch":
```
def pairs(iterable):
iterator = iter(iterable)
while True:
        try: yield (next(iterator), next(iterator))
        except StopIteration: return
```
robert's recipe version definitely wins points for flexibility, though.
|
36,867,567
|
Is there someway to import the attribute namespace of a class instance into a method?
Also, is there someway to direct output to the attribute namespace of the instance?
For example, consider this simple block of code:
```
class Name():
def __init__(self,name='Joe'):
self.name = name
def mangle_name(self):
# Magical python command like 'import self' goes here (I guess)
self.name = name + '123' # As expected, error here as 'name' is not defined
a_name = Name()
a_name.mangle_name()
print(a_name.name)
```
As expected, this block of code fails because `name` is not recognised. What I want to achieve is that the method `mangle_name` searches the attribute namespace of `self` to look for `name` (and therefore finding `'Joe'` in the example). Is there some command such that the attribute namespace of `self` is imported, such that there will be no error and the output will be:
```
>>> Joe123
```
I understand that this is probably bad practice, but in some circumstances it would make the code a lot clearer without heaps of references to `self.`something throughout it. If this is instead 'so-terrible-and-never-should-be-done' practice, could you please explain why?
With regards to the second part of the question, what I want to do is:
```
class Name():
def __init__(self,name='Joe'):
self.name = name
def mangle_name(self):
# Magical python command like 'import self' goes here (I guess)
name = name + '123' # This line is different
a_name = Name()
a_name.mangle_name()
print(a_name.name)
```
Where this code returns the output:
```
>>> Joe123
```
Trying the command `import self` returns a module not found error.
I am using Python 3.5 and would therefore prefer a Python 3.5 compatible answer.
EDIT: For clarity.
|
2016/04/26
|
[
"https://Stackoverflow.com/questions/36867567",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2728074/"
] |
@martin-lear has correctly answered the second part of your question. To address the first part, I'd offer these observations:
Firstly, there is no language feature that lets you "import" the instance's namespace into the method scope, other than performing attribute lookups on self.
You could probably abuse `exec` or `eval` and the instance's `__dict__` to get something like what you want, but you'd have to manually dump the instance namespace into the method scope at the beginning of the method and then update the instance's `__dict__` at the end.
Even if this can be done it should be avoided though. Source code is primarily a means of communication between humans, and if your code violates its readers' expectations in such a way it will be more difficult to understand and read, and the cost of this over time will outweigh the cost of you having to type *self* in front of instance variable names. The principle that you should [code as if the future maintainers are psychopaths who know where you live](http://c2.com/cgi/wiki?CodeForTheMaintainer) applies here.
Finally, consider that the irritation of having to type *self* a lot is telling you something about your design. Perhaps having to type something like `self.a + self.b + self.c + self.d + self.e + self.f + self.g + self.h + ....` is a sign that you are missing an abstraction - perhaps a collection in this case - and you should be looking to design your class in such a way that you only have to type `sum(self.foos)`.
|
So in your `mangle_name` function, you are getting the error because you have not defined a local `name` variable.
```
self.name = name + '123'
```
should be
```
self.name = self.name + '123'
```
or something similar, such as `self.name += '123'`.
|
13,489,299
|
I am learning python and this is from
```
http://www.learnpython.org/page/MultipleFunctionArguments
```
They have an example code that does not work- I am wondering if it is just a typo or if it should not work at all.
```
def bar(first, second, third, **options):
if options.get("action") == "sum":
print "The sum is: %d" % (first + second + third)
if options.get("return") == "first":
return first
result = bar(1, 2, 3, action = "sum", return = "first")
print "Result: %d" % result
```
Learnpython thinks the output should have been-
```
The sum is: 6
Result: 1
```
The error i get is-
```
Traceback (most recent call last):
File "/base/data/home/apps/s~learnpythonjail/1.354953192642593048/main.py", line 99, in post
exec(cmd, safe_globals)
File "<string>", line 9
result = bar(1, 2, 3, action = "sum", return = "first")
^
SyntaxError: invalid syntax
```
Is there a way to do what they are trying to do or is the example wrong? Sorry I did look at the python tutorial someone had answered but I don't understand how to fix this.
|
2012/11/21
|
[
"https://Stackoverflow.com/questions/13489299",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/612258/"
] |
`return` is a keyword in python - you can't use it as a variable name. If you change it to something else (eg `ret`) it will work fine.
```
def bar(first, second, third, **options):
if options.get("action") == "sum":
print "The sum is: %d" % (first + second + third)
if options.get("ret") == "first":
return first
result = bar(1, 2, 3, action = "sum", ret = "first")
print "Result: %d" % result
```
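If the key really must be the literal string `return`, one workaround is to pass the keyword arguments by unpacking a dict: dict keys are not checked against Python's keyword list, so they land in `**options` unchanged (Python 3 syntax shown here, since the original example uses Python 2 `print`):

```python
# Smuggle a reserved word past the parser via ** dict unpacking.
def bar(first, second, third, **options):
    if options.get("action") == "sum":
        print("The sum is: %d" % (first + second + third))
    if options.get("return") == "first":
        return first

result = bar(1, 2, 3, **{"action": "sum", "return": "first"})
print("Result: %d" % result)  # Result: 1
```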
|
You probably should not use "`return`" as an argument name, because it's a reserved keyword in Python.
|
51,998,627
|
I want to provide an id (unique) to every new row added to my sql database without explicitly asking the user. I was thinking about using `bigint` and incrementing value for every row added compared to the id of the previous row.
Example :
```
name id
xyz 1
```
now for another data to be added to this table, say name = 'abc', i want the id to automatically increment to 2 and then save all looking like
```
name id
xyz 1
abc 2
```
Is there any way I can do so? Or any other data type that can increment with each new row?
P.S. I am using sql on python so I can access the last row and then add 1 to it through `fetchall()` query but that doesn't seem like the efficient way to do it.
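For reference, the standard way to do this is to let the database assign the id itself rather than computing it client-side: MySQL has `AUTO_INCREMENT`, PostgreSQL has `SERIAL`, and SQLite auto-assigns an `INTEGER PRIMARY KEY` column. A sketch using Python's stdlib `sqlite3` module (table and column names are made up for the example):

```python
# SQLite assigns ids for an INTEGER PRIMARY KEY column automatically,
# so no fetchall() round-trip to find the last id is needed.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT)')
conn.execute('INSERT INTO people (name) VALUES (?)', ('xyz',))
conn.execute('INSERT INTO people (name) VALUES (?)', ('abc',))
rows = conn.execute('SELECT name, id FROM people ORDER BY id').fetchall()
print(rows)  # [('xyz', 1), ('abc', 2)]
```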
|
2018/08/24
|
[
"https://Stackoverflow.com/questions/51998627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
How To - Fullscreen - <https://www.w3schools.com/howto/howto_js_fullscreen.asp>
This is the current "angular way" to do it.
**HTML**
```
<h2 (click)="openFullscreen()">open</h2>
<h2 (click)="closeFullscreen()">close</h2>
```
**Component**
```
import { DOCUMENT } from '@angular/common';
import { Component, Inject, OnInit } from '@angular/core';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.scss']
})
export class AppComponent implements OnInit {
constructor(@Inject(DOCUMENT) private document: any) {}
elem;
ngOnInit() {
this.elem = document.documentElement;
}
openFullscreen() {
if (this.elem.requestFullscreen) {
this.elem.requestFullscreen();
} else if (this.elem.mozRequestFullScreen) {
/* Firefox */
this.elem.mozRequestFullScreen();
} else if (this.elem.webkitRequestFullscreen) {
/* Chrome, Safari and Opera */
this.elem.webkitRequestFullscreen();
} else if (this.elem.msRequestFullscreen) {
/* IE/Edge */
this.elem.msRequestFullscreen();
}
}
/* Close fullscreen */
closeFullscreen() {
if (this.document.exitFullscreen) {
this.document.exitFullscreen();
} else if (this.document.mozCancelFullScreen) {
/* Firefox */
this.document.mozCancelFullScreen();
} else if (this.document.webkitExitFullscreen) {
/* Chrome, Safari and Opera */
this.document.webkitExitFullscreen();
} else if (this.document.msExitFullscreen) {
/* IE/Edge */
this.document.msExitFullscreen();
}
}
}
```
|
You can try with `requestFullscreen`
I have create a demo on **[Stackblitz](https://angular-awqvmd.stackblitz.io/)**
```
fullScreen() {
let elem = document.documentElement;
let methodToBeInvoked = elem.requestFullscreen ||
elem.webkitRequestFullScreen || elem['mozRequestFullscreen']
||
elem['msRequestFullscreen'];
if (methodToBeInvoked) methodToBeInvoked.call(elem);
}
```
|
51,998,627
|
I want to provide an id (unique) to every new row added to my sql database without explicitly asking the user. I was thinking about using `bigint` and incrementing value for every row added compared to the id of the previous row.
Example :
```
name id
xyz 1
```
now for another data to be added to this table, say name = 'abc', i want the id to automatically increment to 2 and then save all looking like
```
name id
xyz 1
abc 2
```
Is there any way I can do so? Or any other data type that can increment with each new row?
P.S. I am using sql on python so I can access the last row and then add 1 to it through `fetchall()` query but that doesn't seem like the efficient way to do it.
|
2018/08/24
|
[
"https://Stackoverflow.com/questions/51998627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You can try with `requestFullscreen`
I have create a demo on **[Stackblitz](https://angular-awqvmd.stackblitz.io/)**
```
fullScreen() {
let elem = document.documentElement;
let methodToBeInvoked = elem.requestFullscreen ||
elem.webkitRequestFullScreen || elem['mozRequestFullscreen']
||
elem['msRequestFullscreen'];
if (methodToBeInvoked) methodToBeInvoked.call(elem);
}
```
|
put the following code on the top of the component (before @Component) you want to trigger:
```
interface FsDocument extends HTMLDocument {
mozFullScreenElement?: Element;
msFullscreenElement?: Element;
msExitFullscreen?: () => void;
mozCancelFullScreen?: () => void;
}
export function isFullScreen(): boolean {
const fsDoc = <FsDocument> document;
return !!(fsDoc.fullscreenElement || fsDoc.mozFullScreenElement || fsDoc.webkitFullscreenElement || fsDoc.msFullscreenElement);
}
interface FsDocumentElement extends HTMLElement {
msRequestFullscreen?: () => void;
mozRequestFullScreen?: () => void;
}
export function toggleFullScreen(): void {
const fsDoc = <FsDocument> document;
if (!isFullScreen()) {
const fsDocElem = <FsDocumentElement> document.documentElement;
if (fsDocElem.requestFullscreen)
fsDocElem.requestFullscreen();
else if (fsDocElem.msRequestFullscreen)
fsDocElem.msRequestFullscreen();
else if (fsDocElem.mozRequestFullScreen)
fsDocElem.mozRequestFullScreen();
else if (fsDocElem.webkitRequestFullscreen)
fsDocElem.webkitRequestFullscreen();
}
else if (fsDoc.exitFullscreen)
fsDoc.exitFullscreen();
else if (fsDoc.msExitFullscreen)
fsDoc.msExitFullscreen();
else if (fsDoc.mozCancelFullScreen)
fsDoc.mozCancelFullScreen();
else if (fsDoc.webkitExitFullscreen)
fsDoc.webkitExitFullscreen();
}
export function setFullScreen(full: boolean): void {
if (full !== isFullScreen())
toggleFullScreen();
}
```
and in the **ngOnInit** method, call the `setFullScreen(full: boolean)` function:
```
ngOnInit(): void {
setFullScreen(true);
}
```
|
51,998,627
|
I want to provide a unique id to every new row added to my SQL database without explicitly asking the user. I was thinking about using `bigint` and incrementing the value for each new row based on the id of the previous row.
Example :
```
name id
xyz 1
```
Now, when another row is added to this table, say name = 'abc', I want the id to automatically increment to 2, so the table looks like
```
name id
xyz 1
abc 2
```
Is there any way I can do so? Or any other data type that can increment with each new row?
P.S. I am using SQL from Python, so I could access the last row through a `fetchall()` query and add 1 to it, but that doesn't seem like an efficient way to do it.
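For reference, a minimal sketch of letting the database assign the id itself, so no `fetchall()` round-trip is needed (this sketch assumes SQLite; MySQL uses `AUTO_INCREMENT` and PostgreSQL uses `SERIAL`/`IDENTITY` for the same idea):

```python
import sqlite3

# In-memory database for illustration; a real app would use a file path.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

# Insert rows without specifying id; the engine assigns 1, 2, ... itself.
conn.execute("INSERT INTO people (name) VALUES (?)", ("xyz",))
conn.execute("INSERT INTO people (name) VALUES (?)", ("abc",))

rows = conn.execute("SELECT name, id FROM people ORDER BY id").fetchall()
print(rows)  # [('xyz', 1), ('abc', 2)]
```

Because the engine hands out ids atomically, this also stays correct when several connections insert at the same time, which the read-last-row-and-add-1 approach does not.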
|
2018/08/24
|
[
"https://Stackoverflow.com/questions/51998627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You can try with `requestFullscreen`
I have created a demo on **[Stackblitz](https://angular-awqvmd.stackblitz.io/)**
```
fullScreen() {
let elem = document.documentElement;
let methodToBeInvoked = elem.requestFullscreen ||
elem.webkitRequestFullScreen || elem['mozRequestFullscreen']
||
elem['msRequestFullscreen'];
if (methodToBeInvoked) methodToBeInvoked.call(elem);
}
```
|
**This is the current "angular way" to do it.**
Use this in the HTML:
```
(click)="openFullscreen()"
```
In ngOnInit:
```
this.elem = document.documentElement;
```
In the TS, add this function:
```
openFullscreen() {
if (this.elem.requestFullscreen) {
this.elem.requestFullscreen();
} else if (this.elem.mozRequestFullScreen) {
/* Firefox */
this.elem.mozRequestFullScreen();
} else if (this.elem.webkitRequestFullscreen) {
/* Chrome, Safari and Opera */
this.elem.webkitRequestFullscreen();
} else if (this.elem.msRequestFullscreen) {
/* IE/Edge */
this.elem.msRequestFullscreen();
}
}
```
|
51,998,627
|
I want to provide a unique id to every new row added to my SQL database without explicitly asking the user. I was thinking about using `bigint` and incrementing the value for each new row based on the id of the previous row.
Example :
```
name id
xyz 1
```
Now, when another row is added to this table, say name = 'abc', I want the id to automatically increment to 2, so the table looks like
```
name id
xyz 1
abc 2
```
Is there any way I can do so? Or any other data type that can increment with each new row?
P.S. I am using SQL from Python, so I could access the last row through a `fetchall()` query and add 1 to it, but that doesn't seem like an efficient way to do it.
|
2018/08/24
|
[
"https://Stackoverflow.com/questions/51998627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You can try with `requestFullscreen`
I have created a demo on **[Stackblitz](https://angular-awqvmd.stackblitz.io/)**
```
fullScreen() {
let elem = document.documentElement;
let methodToBeInvoked = elem.requestFullscreen ||
elem.webkitRequestFullScreen || elem['mozRequestFullscreen']
||
elem['msRequestFullscreen'];
if (methodToBeInvoked) methodToBeInvoked.call(elem);
}
```
|
Most of the answers above are from 2018. Nowadays, to achieve compatibility in Chrome, Firefox, other browsers, and mobile, I have determined that using `requestFullscreen` is enough.
<https://developer.mozilla.org/en-US/docs/Web/API/Element/requestFullScreen>
Here is an example of how to toggle between full screen and normal screen:
```js
docElement: HTMLElement;
isFullScreen: boolean = false;
ngOnInit(): void {
this.docElement = document.documentElement;
}
toggleFullScreen() {
if (!this.isFullScreen) {
this.docElement.requestFullscreen();
}
else {
document.exitFullscreen();
}
this.isFullScreen = !this.isFullScreen;
}
```
The request to change to fullscreen can only be made when a user interacts with an element, for example a button
```html
<button (click)="toggleFullScreen()">SWITCH SCREEN MODE</button>
```
|
51,998,627
|
I want to provide a unique id to every new row added to my SQL database without explicitly asking the user. I was thinking about using `bigint` and incrementing the value for each new row based on the id of the previous row.
Example :
```
name id
xyz 1
```
Now, when another row is added to this table, say name = 'abc', I want the id to automatically increment to 2, so the table looks like
```
name id
xyz 1
abc 2
```
Is there any way I can do so? Or any other data type that can increment with each new row?
P.S. I am using SQL from Python, so I could access the last row through a `fetchall()` query and add 1 to it, but that doesn't seem like an efficient way to do it.
|
2018/08/24
|
[
"https://Stackoverflow.com/questions/51998627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
How To - Fullscreen - <https://www.w3schools.com/howto/howto_js_fullscreen.asp>
This is the current "angular way" to do it.
**HTML**
```
<h2 (click)="openFullscreen()">open</h2>
<h2 (click)="closeFullscreen()">close</h2>
```
**Component**
```
import { DOCUMENT } from '@angular/common';
import { Component, Inject, OnInit } from '@angular/core';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.scss']
})
export class AppComponent implements OnInit {
constructor(@Inject(DOCUMENT) private document: any) {}
elem;
ngOnInit() {
this.elem = document.documentElement;
}
openFullscreen() {
if (this.elem.requestFullscreen) {
this.elem.requestFullscreen();
} else if (this.elem.mozRequestFullScreen) {
/* Firefox */
this.elem.mozRequestFullScreen();
} else if (this.elem.webkitRequestFullscreen) {
/* Chrome, Safari and Opera */
this.elem.webkitRequestFullscreen();
} else if (this.elem.msRequestFullscreen) {
/* IE/Edge */
this.elem.msRequestFullscreen();
}
}
/* Close fullscreen */
closeFullscreen() {
if (this.document.exitFullscreen) {
this.document.exitFullscreen();
} else if (this.document.mozCancelFullScreen) {
/* Firefox */
this.document.mozCancelFullScreen();
} else if (this.document.webkitExitFullscreen) {
/* Chrome, Safari and Opera */
this.document.webkitExitFullscreen();
} else if (this.document.msExitFullscreen) {
/* IE/Edge */
this.document.msExitFullscreen();
}
}
}
```
|
Put the following code at the top of the component (before `@Component`) that you want to trigger:
```
interface FsDocument extends HTMLDocument {
mozFullScreenElement?: Element;
msFullscreenElement?: Element;
msExitFullscreen?: () => void;
mozCancelFullScreen?: () => void;
}
export function isFullScreen(): boolean {
const fsDoc = <FsDocument> document;
return !!(fsDoc.fullscreenElement || fsDoc.mozFullScreenElement || fsDoc.webkitFullscreenElement || fsDoc.msFullscreenElement);
}
interface FsDocumentElement extends HTMLElement {
msRequestFullscreen?: () => void;
mozRequestFullScreen?: () => void;
}
export function toggleFullScreen(): void {
const fsDoc = <FsDocument> document;
if (!isFullScreen()) {
const fsDocElem = <FsDocumentElement> document.documentElement;
if (fsDocElem.requestFullscreen)
fsDocElem.requestFullscreen();
else if (fsDocElem.msRequestFullscreen)
fsDocElem.msRequestFullscreen();
else if (fsDocElem.mozRequestFullScreen)
fsDocElem.mozRequestFullScreen();
else if (fsDocElem.webkitRequestFullscreen)
fsDocElem.webkitRequestFullscreen();
}
else if (fsDoc.exitFullscreen)
fsDoc.exitFullscreen();
else if (fsDoc.msExitFullscreen)
fsDoc.msExitFullscreen();
else if (fsDoc.mozCancelFullScreen)
fsDoc.mozCancelFullScreen();
else if (fsDoc.webkitExitFullscreen)
fsDoc.webkitExitFullscreen();
}
export function setFullScreen(full: boolean): void {
if (full !== isFullScreen())
toggleFullScreen();
}
```
and in the **ngOnInit** method, call the `setFullScreen(full: boolean)` function:
```
ngOnInit(): void {
setFullScreen(true);
}
```
|
51,998,627
|
I want to provide a unique id to every new row added to my SQL database without explicitly asking the user. I was thinking about using `bigint` and incrementing the value for each new row based on the id of the previous row.
Example :
```
name id
xyz 1
```
Now, when another row is added to this table, say name = 'abc', I want the id to automatically increment to 2, so the table looks like
```
name id
xyz 1
abc 2
```
Is there any way I can do so? Or any other data type that can increment with each new row?
P.S. I am using SQL from Python, so I could access the last row through a `fetchall()` query and add 1 to it, but that doesn't seem like an efficient way to do it.
|
2018/08/24
|
[
"https://Stackoverflow.com/questions/51998627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
How To - Fullscreen - <https://www.w3schools.com/howto/howto_js_fullscreen.asp>
This is the current "angular way" to do it.
**HTML**
```
<h2 (click)="openFullscreen()">open</h2>
<h2 (click)="closeFullscreen()">close</h2>
```
**Component**
```
import { DOCUMENT } from '@angular/common';
import { Component, Inject, OnInit } from '@angular/core';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.scss']
})
export class AppComponent implements OnInit {
constructor(@Inject(DOCUMENT) private document: any) {}
elem;
ngOnInit() {
this.elem = document.documentElement;
}
openFullscreen() {
if (this.elem.requestFullscreen) {
this.elem.requestFullscreen();
} else if (this.elem.mozRequestFullScreen) {
/* Firefox */
this.elem.mozRequestFullScreen();
} else if (this.elem.webkitRequestFullscreen) {
/* Chrome, Safari and Opera */
this.elem.webkitRequestFullscreen();
} else if (this.elem.msRequestFullscreen) {
/* IE/Edge */
this.elem.msRequestFullscreen();
}
}
/* Close fullscreen */
closeFullscreen() {
if (this.document.exitFullscreen) {
this.document.exitFullscreen();
} else if (this.document.mozCancelFullScreen) {
/* Firefox */
this.document.mozCancelFullScreen();
} else if (this.document.webkitExitFullscreen) {
/* Chrome, Safari and Opera */
this.document.webkitExitFullscreen();
} else if (this.document.msExitFullscreen) {
/* IE/Edge */
this.document.msExitFullscreen();
}
}
}
```
|
**This is the current "angular way" to do it.**
Use this in the HTML:
```
(click)="openFullscreen()"
```
In ngOnInit:
```
this.elem = document.documentElement;
```
In the TS, add this function:
```
openFullscreen() {
if (this.elem.requestFullscreen) {
this.elem.requestFullscreen();
} else if (this.elem.mozRequestFullScreen) {
/* Firefox */
this.elem.mozRequestFullScreen();
} else if (this.elem.webkitRequestFullscreen) {
/* Chrome, Safari and Opera */
this.elem.webkitRequestFullscreen();
} else if (this.elem.msRequestFullscreen) {
/* IE/Edge */
this.elem.msRequestFullscreen();
}
}
```
|
51,998,627
|
I want to provide a unique id to every new row added to my SQL database without explicitly asking the user. I was thinking about using `bigint` and incrementing the value for each new row based on the id of the previous row.
Example :
```
name id
xyz 1
```
Now, when another row is added to this table, say name = 'abc', I want the id to automatically increment to 2, so the table looks like
```
name id
xyz 1
abc 2
```
Is there any way I can do so? Or any other data type that can increment with each new row?
P.S. I am using SQL from Python, so I could access the last row through a `fetchall()` query and add 1 to it, but that doesn't seem like an efficient way to do it.
|
2018/08/24
|
[
"https://Stackoverflow.com/questions/51998627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
How To - Fullscreen - <https://www.w3schools.com/howto/howto_js_fullscreen.asp>
This is the current "angular way" to do it.
**HTML**
```
<h2 (click)="openFullscreen()">open</h2>
<h2 (click)="closeFullscreen()">close</h2>
```
**Component**
```
import { DOCUMENT } from '@angular/common';
import { Component, Inject, OnInit } from '@angular/core';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.scss']
})
export class AppComponent implements OnInit {
constructor(@Inject(DOCUMENT) private document: any) {}
elem;
ngOnInit() {
this.elem = document.documentElement;
}
openFullscreen() {
if (this.elem.requestFullscreen) {
this.elem.requestFullscreen();
} else if (this.elem.mozRequestFullScreen) {
/* Firefox */
this.elem.mozRequestFullScreen();
} else if (this.elem.webkitRequestFullscreen) {
/* Chrome, Safari and Opera */
this.elem.webkitRequestFullscreen();
} else if (this.elem.msRequestFullscreen) {
/* IE/Edge */
this.elem.msRequestFullscreen();
}
}
/* Close fullscreen */
closeFullscreen() {
if (this.document.exitFullscreen) {
this.document.exitFullscreen();
} else if (this.document.mozCancelFullScreen) {
/* Firefox */
this.document.mozCancelFullScreen();
} else if (this.document.webkitExitFullscreen) {
/* Chrome, Safari and Opera */
this.document.webkitExitFullscreen();
} else if (this.document.msExitFullscreen) {
/* IE/Edge */
this.document.msExitFullscreen();
}
}
}
```
|
Most of the answers above are from 2018. Nowadays, to achieve compatibility in Chrome, Firefox, other browsers, and mobile, I have determined that using `requestFullscreen` is enough.
<https://developer.mozilla.org/en-US/docs/Web/API/Element/requestFullScreen>
Here is an example of how to toggle between full screen and normal screen:
```js
docElement: HTMLElement;
isFullScreen: boolean = false;
ngOnInit(): void {
this.docElement = document.documentElement;
}
toggleFullScreen() {
if (!this.isFullScreen) {
this.docElement.requestFullscreen();
}
else {
document.exitFullscreen();
}
this.isFullScreen = !this.isFullScreen;
}
```
The request to change to fullscreen can only be made when a user interacts with an element, for example a button
```html
<button (click)="toggleFullScreen()">SWITCH SCREEN MODE</button>
```
|
51,998,627
|
I want to provide a unique id to every new row added to my SQL database without explicitly asking the user. I was thinking about using `bigint` and incrementing the value for each new row based on the id of the previous row.
Example :
```
name id
xyz 1
```
Now, when another row is added to this table, say name = 'abc', I want the id to automatically increment to 2, so the table looks like
```
name id
xyz 1
abc 2
```
Is there any way I can do so? Or any other data type that can increment with each new row?
P.S. I am using SQL from Python, so I could access the last row through a `fetchall()` query and add 1 to it, but that doesn't seem like an efficient way to do it.
|
2018/08/24
|
[
"https://Stackoverflow.com/questions/51998627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Put the following code at the top of the component (before `@Component`) that you want to trigger:
```
interface FsDocument extends HTMLDocument {
mozFullScreenElement?: Element;
msFullscreenElement?: Element;
msExitFullscreen?: () => void;
mozCancelFullScreen?: () => void;
}
export function isFullScreen(): boolean {
const fsDoc = <FsDocument> document;
return !!(fsDoc.fullscreenElement || fsDoc.mozFullScreenElement || fsDoc.webkitFullscreenElement || fsDoc.msFullscreenElement);
}
interface FsDocumentElement extends HTMLElement {
msRequestFullscreen?: () => void;
mozRequestFullScreen?: () => void;
}
export function toggleFullScreen(): void {
const fsDoc = <FsDocument> document;
if (!isFullScreen()) {
const fsDocElem = <FsDocumentElement> document.documentElement;
if (fsDocElem.requestFullscreen)
fsDocElem.requestFullscreen();
else if (fsDocElem.msRequestFullscreen)
fsDocElem.msRequestFullscreen();
else if (fsDocElem.mozRequestFullScreen)
fsDocElem.mozRequestFullScreen();
else if (fsDocElem.webkitRequestFullscreen)
fsDocElem.webkitRequestFullscreen();
}
else if (fsDoc.exitFullscreen)
fsDoc.exitFullscreen();
else if (fsDoc.msExitFullscreen)
fsDoc.msExitFullscreen();
else if (fsDoc.mozCancelFullScreen)
fsDoc.mozCancelFullScreen();
else if (fsDoc.webkitExitFullscreen)
fsDoc.webkitExitFullscreen();
}
export function setFullScreen(full: boolean): void {
if (full !== isFullScreen())
toggleFullScreen();
}
```
and in the **ngOnInit** method, call the `setFullScreen(full: boolean)` function:
```
ngOnInit(): void {
setFullScreen(true);
}
```
|
**This is the current "angular way" to do it.**
Use this in the HTML:
```
(click)="openFullscreen()"
```
In ngOnInit:
```
this.elem = document.documentElement;
```
In the TS, add this function:
```
openFullscreen() {
if (this.elem.requestFullscreen) {
this.elem.requestFullscreen();
} else if (this.elem.mozRequestFullScreen) {
/* Firefox */
this.elem.mozRequestFullScreen();
} else if (this.elem.webkitRequestFullscreen) {
/* Chrome, Safari and Opera */
this.elem.webkitRequestFullscreen();
} else if (this.elem.msRequestFullscreen) {
/* IE/Edge */
this.elem.msRequestFullscreen();
}
}
```
|
51,998,627
|
I want to provide a unique id to every new row added to my SQL database without explicitly asking the user. I was thinking about using `bigint` and incrementing the value for each new row based on the id of the previous row.
Example :
```
name id
xyz 1
```
Now, when another row is added to this table, say name = 'abc', I want the id to automatically increment to 2, so the table looks like
```
name id
xyz 1
abc 2
```
Is there any way I can do so? Or any other data type that can increment with each new row?
P.S. I am using SQL from Python, so I could access the last row through a `fetchall()` query and add 1 to it, but that doesn't seem like an efficient way to do it.
|
2018/08/24
|
[
"https://Stackoverflow.com/questions/51998627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Most of the answers above are from 2018. Nowadays, to achieve compatibility in Chrome, Firefox, other browsers, and mobile, I have determined that using `requestFullscreen` is enough.
<https://developer.mozilla.org/en-US/docs/Web/API/Element/requestFullScreen>
Here is an example of how to toggle between full screen and normal screen:
```js
docElement: HTMLElement;
isFullScreen: boolean = false;
ngOnInit(): void {
this.docElement = document.documentElement;
}
toggleFullScreen() {
if (!this.isFullScreen) {
this.docElement.requestFullscreen();
}
else {
document.exitFullscreen();
}
this.isFullScreen = !this.isFullScreen;
}
```
The request to change to fullscreen can only be made when a user interacts with an element, for example a button
```html
<button (click)="toggleFullScreen()">SWITCH SCREEN MODE</button>
```
|
Put the following code at the top of the component (before `@Component`) that you want to trigger:
```
interface FsDocument extends HTMLDocument {
mozFullScreenElement?: Element;
msFullscreenElement?: Element;
msExitFullscreen?: () => void;
mozCancelFullScreen?: () => void;
}
export function isFullScreen(): boolean {
const fsDoc = <FsDocument> document;
return !!(fsDoc.fullscreenElement || fsDoc.mozFullScreenElement || fsDoc.webkitFullscreenElement || fsDoc.msFullscreenElement);
}
interface FsDocumentElement extends HTMLElement {
msRequestFullscreen?: () => void;
mozRequestFullScreen?: () => void;
}
export function toggleFullScreen(): void {
const fsDoc = <FsDocument> document;
if (!isFullScreen()) {
const fsDocElem = <FsDocumentElement> document.documentElement;
if (fsDocElem.requestFullscreen)
fsDocElem.requestFullscreen();
else if (fsDocElem.msRequestFullscreen)
fsDocElem.msRequestFullscreen();
else if (fsDocElem.mozRequestFullScreen)
fsDocElem.mozRequestFullScreen();
else if (fsDocElem.webkitRequestFullscreen)
fsDocElem.webkitRequestFullscreen();
}
else if (fsDoc.exitFullscreen)
fsDoc.exitFullscreen();
else if (fsDoc.msExitFullscreen)
fsDoc.msExitFullscreen();
else if (fsDoc.mozCancelFullScreen)
fsDoc.mozCancelFullScreen();
else if (fsDoc.webkitExitFullscreen)
fsDoc.webkitExitFullscreen();
}
export function setFullScreen(full: boolean): void {
if (full !== isFullScreen())
toggleFullScreen();
}
```
and in the **ngOnInit** method, call the `setFullScreen(full: boolean)` function:
```
ngOnInit(): void {
setFullScreen(true);
}
```
|
51,998,627
|
I want to provide a unique id to every new row added to my SQL database without explicitly asking the user. I was thinking about using `bigint` and incrementing the value for each new row based on the id of the previous row.
Example :
```
name id
xyz 1
```
Now, when another row is added to this table, say name = 'abc', I want the id to automatically increment to 2, so the table looks like
```
name id
xyz 1
abc 2
```
Is there any way I can do so? Or any other data type that can increment with each new row?
P.S. I am using SQL from Python, so I could access the last row through a `fetchall()` query and add 1 to it, but that doesn't seem like an efficient way to do it.
|
2018/08/24
|
[
"https://Stackoverflow.com/questions/51998627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Most of the answers above are from 2018. Nowadays, to achieve compatibility in Chrome, Firefox, other browsers, and mobile, I have determined that using `requestFullscreen` is enough.
<https://developer.mozilla.org/en-US/docs/Web/API/Element/requestFullScreen>
Here is an example of how to toggle between full screen and normal screen:
```js
docElement: HTMLElement;
isFullScreen: boolean = false;
ngOnInit(): void {
this.docElement = document.documentElement;
}
toggleFullScreen() {
if (!this.isFullScreen) {
this.docElement.requestFullscreen();
}
else {
document.exitFullscreen();
}
this.isFullScreen = !this.isFullScreen;
}
```
The request to change to fullscreen can only be made when a user interacts with an element, for example a button
```html
<button (click)="toggleFullScreen()">SWITCH SCREEN MODE</button>
```
|
**This is the current "angular way" to do it.**
Use this in the HTML:
```
(click)="openFullscreen()"
```
In ngOnInit:
```
this.elem = document.documentElement;
```
In the TS, add this function:
```
openFullscreen() {
if (this.elem.requestFullscreen) {
this.elem.requestFullscreen();
} else if (this.elem.mozRequestFullScreen) {
/* Firefox */
this.elem.mozRequestFullScreen();
} else if (this.elem.webkitRequestFullscreen) {
/* Chrome, Safari and Opera */
this.elem.webkitRequestFullscreen();
} else if (this.elem.msRequestFullscreen) {
/* IE/Edge */
this.elem.msRequestFullscreen();
}
}
```
|
11,190,353
|
I am with a python script. I want to open a file to retrieve data inside. I add the right path to `sys.path`:
```
sys.path.append('F:\WORK\SIMILITUDE\ALGOCODE')
sys.path.append('F:\WORK\SIMILITUDE\ALGOCODE\DTW')
```
More precisely, the file `file.txt` I will open is in DTW folder, and I also add upper folder ALGOCODE. Then, I have command
```
inputASTM170512 = open("file.txt","r")
```
I have this present:
```
Traceback (most recent call last):
File "<pyshell#24>", line 1, in <module>
inputASTM170512 = open("ASTM-170512.txt","r")
IOError: [Errno 2] No such file or directory: 'ASTM-170512.txt'
```
Why? Do you have any idea?
|
2012/06/25
|
[
"https://Stackoverflow.com/questions/11190353",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1141493/"
] |
`open()` checks only the current working directory and does not traverse your system path looking for the file. Only `import` works with that mechanism.
You will need either to change your working directory before opening the file, with `os.chdir(PATH)`, or to include the entire path when opening it.
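A runnable sketch of both options, using a temporary directory in place of the asker's `F:\WORK\SIMILITUDE\ALGOCODE\DTW` folder:

```python
import os
import tempfile

# Create a throwaway directory with a file in it, standing in for the DTW folder.
folder = tempfile.mkdtemp()
full_path = os.path.join(folder, "file.txt")
with open(full_path, "w") as f:
    f.write("data")

# Option 1: pass the entire path to open().
with open(full_path) as f:
    print(f.read())  # data

# Option 2: change the working directory first, then open by bare name.
os.chdir(folder)
with open("file.txt") as f:
    print(f.read())  # data
```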
|
When you try to open file with `open`, for example:
```
open("ASTM-170512.txt","r")
```
you will try to open a file in the current directory.
It does not depend on `sys.path`. The `sys.path` variable is used when you try to import modules, but not when you open files.
You need to specify the full path to file in the `open` or change the current directory to the correspondent place (I think that the former is better).
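A small demonstration that `sys.path` has no effect on `open()` (modern Python, with temporary directories standing in for the asker's folders):

```python
import os
import sys
import tempfile

# Work from a fresh, empty directory so the bare filename cannot be found.
os.chdir(tempfile.mkdtemp())

# Put the file in a *different* directory and add that directory to sys.path.
folder = tempfile.mkdtemp()
with open(os.path.join(folder, "file.txt"), "w") as f:
    f.write("data")
sys.path.append(folder)

# open() ignores sys.path entirely, so this still fails.
try:
    open("file.txt")
except OSError as e:
    print("not found:", e)

# The full path works regardless of sys.path or the working directory.
with open(os.path.join(folder, "file.txt")) as f:
    print(f.read())  # data
```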
|
55,056,805
|
I have a simple `dataframe` as follows:
```
Condition State Value
0 A AM 0.775651
1 B XP 0.700265
2 A HML 0.688315
3 A RMSML 0.666956
4 B XAD 0.636014
5 C VAP 0.542897
6 C RMSML 0.486664
7 B XMA 0.482742
8 D VCD 0.469553
```
Now I would like to have a *barplot* with each value, and same color for each state if Condition is the same. I tried the following python code:
```
Data_Control = pd.ExcelFile('Bar_plot_example.xlsx')
df_Control= Data_Control.parse('Sheet2')# my dataframe
s = pd.Series(df_Control.iloc[:,2].values, index=df_Control.iloc[:,1])
colors = {'A': 'r', 'B': 'b', 'C': 'g', 'D':'k'}
s.plot(kind='barh', color=[colors[i] for i in df_Control['Condition']])
plt.legend()
```
But I am not able to get legend correctly for each condition. I am getting the following plot.

So how should I get correct legend for each condition? Any help is highly appreciated, Thanks.
|
2019/03/08
|
[
"https://Stackoverflow.com/questions/55056805",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3578648/"
] |
You can create the handles and labels for the legend directly from the data:
```
labels = df['Condition'].unique()
handles = [plt.Rectangle((0,0),1,1, color=colors[l]) for l in labels]
plt.legend(handles, labels, title="Conditions")
```
Complete example:
```
u = """ Condition State Value
0 A AM 0.775651
1 B XP 0.700265
2 A HML 0.688315
3 A RMSML 0.666956
4 B XAD 0.636014
5 C VAP 0.542897
6 C RMSML 0.486664
7 B XMA 0.482742
8 D VCD 0.469553"""
import io
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv(io.StringIO(u),sep="\s+" )
s = pd.Series(df.iloc[:,2].values, index=df.iloc[:,1])
colors = {'A': 'r', 'B': 'b', 'C': 'g', 'D':'k'}
s.plot(kind='barh', color=[colors[i] for i in df['Condition']])
labels = df['Condition'].unique()
handles = [plt.Rectangle((0,0),1,1, color=colors[l]) for l in labels]
plt.legend(handles, labels, title="Conditions")
plt.show()
```
[](https://i.stack.imgur.com/kF5P7.png)
|
So I haven't worked much with plotting directly from pandas, but you'll have to access the handles and use them to construct the lists of handles and labels that you pass to `plt.legend`.
```py
s.plot(kind='barh', color=[colors[i] for i in df['Condition']])
# Get the original handles.
original_handles = plt.gca().get_legend_handles_labels()[0][0]
# Hold the handles and labels that will be passed to legend in lists.
handles = []
labels = []
conditions = df['Condition'].values
# Seen conditions helps us make sure that each label is added only once.
seen_conditions = set()
# Iterate over the condition and handle together.
for condition, handle in zip(conditions, original_handles):
# If the condition was already added to the labels, then ignore it.
if condition in seen_conditions:
continue
# Add the handle and label.
handles.append(handle)
labels.append(condition)
seen_conditions.add(condition)
# Call legend with the stored handles and labels.
plt.legend(handles, labels)
```
|
31,570,675
|
Couldn't find much information on try vs. if when you're checking more than one thing. I have a tuple of strings, and I want to grab an index and store the index that follows it. There are 2 cases.
```
mylist = ('a', 'b')
```
or
```
mylist = ('y', 'z')
```
I want to look at `a` or `y`, depending on which exists, and grab the index that follows. Currently, I'm doing this:
```
item['index'] = None
try:
item['index'] = mylist[mylist.index('a') + 1]
except ValueError:
pass
try:
item['index'] = mylist[mylist.index('y') + 1]
except ValueError:
pass
```
After reading [Using try vs if in python](https://stackoverflow.com/questions/1835756/using-try-vs-if-in-python#), I think it may actually be better/more efficient to write it this way, since half the time, it will likely raise an exception, (`ValueError`), which is more expensive than an if/else, if I'm understanding correctly:
```
if 'a' in mylist:
item['index'] = mylist[mylist.index('a') + 1]
elif 'y' in mylist:
item['index'] = mylist[mylist.index('y') + 1]
else:
item['index'] = None
```
Am I right in my assumption that `if` is better/more efficient here? Or is there a better way of writing this entirely that I'm unaware of?
|
2015/07/22
|
[
"https://Stackoverflow.com/questions/31570675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2887891/"
] |
Your first code snippet should rather look like this:
```
try:
item['index'] = mylist[mylist.index('a') + 1]
except ValueError:
try:
item['index'] = mylist[mylist.index('y') + 1]
except ValueError:
item['index'] = None
```
or have a for loop like this:
```
for element in ['a', 'y']:
try:
item['index'] = mylist[mylist.index(element) + 1]
except ValueError:
continue
else:
break
else:
item['index'] = None
```
IMO, using try blocks is better performance-wise, since you avoid an extra membership check in every case regardless of how often the ValueError occurs, and it's also more readable and Pythonic.
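The for-loop version above, made self-contained and runnable (wrapped in a hypothetical helper `index_after` so it can be exercised with all three inputs):

```python
def index_after(mylist, candidates=("a", "y")):
    """Return the element following the first candidate found, else None."""
    item = {}
    for element in candidates:
        try:
            item["index"] = mylist[mylist.index(element) + 1]
        except ValueError:
            continue
        else:
            break
    else:
        # Runs only if no break happened, i.e. no candidate was present.
        item["index"] = None
    return item["index"]

print(index_after(("a", "b")))  # b
print(index_after(("y", "z")))  # z
print(index_after(("q", "r")))  # None
```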
|
Performance-wise, exceptions in Python are not heavy like in other languages and you shouldn't worry about using them whenever it's appropriate.
That said, IMO the second code snippet is more concise and clean.
|
31,570,675
|
Couldn't find much information on try vs. if when you're checking more than one thing. I have a tuple of strings, and I want to grab an index and store the index that follows it. There are 2 cases.
```
mylist = ('a', 'b')
```
or
```
mylist = ('y', 'z')
```
I want to look at `a` or `y`, depending on which exists, and grab the index that follows. Currently, I'm doing this:
```
item['index'] = None
try:
item['index'] = mylist[mylist.index('a') + 1]
except ValueError:
pass
try:
item['index'] = mylist[mylist.index('y') + 1]
except ValueError:
pass
```
After reading [Using try vs if in python](https://stackoverflow.com/questions/1835756/using-try-vs-if-in-python#), I think it may actually be better/more efficient to write it this way, since half the time, it will likely raise an exception, (`ValueError`), which is more expensive than an if/else, if I'm understanding correctly:
```
if 'a' in mylist:
item['index'] = mylist[mylist.index('a') + 1]
elif 'y' in mylist:
item['index'] = mylist[mylist.index('y') + 1]
else:
item['index'] = None
```
Am I right in my assumption that `if` is better/more efficient here? Or is there a better way of writing this entirely that I'm unaware of?
|
2015/07/22
|
[
"https://Stackoverflow.com/questions/31570675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2887891/"
] |
Well, my opinion is close to Rohith Subramanyam's: I think a for loop is definitely more readable, especially when you have more than two elements to test!
But I still think an if block is far more logical to use (pun intended!). It is also strictly speaking less redundant in terms of lines of code:
```
accepted_elements = ['a', 'y']
item['index'] = None
for accepted_element in accepted_elements:
if accepted_element in mylist:
item['index'] = mylist[mylist.index(accepted_element) + 1]
break
```
I think the solution you use in the end is really up to you, as it depends on your code habits (except the for loop which is a must).
EDIT: Actually, after some timing measurements, it seems that Rohith Subramanyam's version is slightly faster (640 ns per loop vs. 740 ns per loop).
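For completeness, the same lookup can also be collapsed into a single `next()` call over a generator — an alternative to both loop versions above, not something that was measured here:

```python
def index_after(mylist, accepted=("a", "y")):
    # next() yields the first successor found, or the default None.
    return next(
        (mylist[mylist.index(e) + 1] for e in accepted if e in mylist),
        None,
    )

print(index_after(("a", "b")))  # b
print(index_after(("y", "z")))  # z
print(index_after(("q", "r")))  # None
```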
|
Performance-wise, exceptions in Python are not heavy like in other languages and you shouldn't worry about using them whenever it's appropriate.
That said, IMO the second code snippet is more concise and clean.
|
31,570,675
|
Couldn't find much information on try vs. if when you're checking more than one thing. I have a tuple of strings, and I want to grab an index and store the index that follows it. There are 2 cases.
```
mylist = ('a', 'b')
```
or
```
mylist = ('y', 'z')
```
I want to look at `a` or `y`, depending on which exists, and grab the index that follows. Currently, I'm doing this:
```
item['index'] = None
try:
item['index'] = mylist[mylist.index('a') + 1]
except ValueError:
pass
try:
item['index'] = mylist[mylist.index('y') + 1]
except ValueError:
pass
```
After reading [Using try vs if in python](https://stackoverflow.com/questions/1835756/using-try-vs-if-in-python#), I think it may actually be better/more efficient to write it this way, since half the time, it will likely raise an exception, (`ValueError`), which is more expensive than an if/else, if I'm understanding correctly:
```
if 'a' in mylist:
    item['index'] = mylist[mylist.index('a') + 1]
elif 'y' in mylist:
    item['index'] = mylist[mylist.index('y') + 1]
else:
    item['index'] = None
```
Am I right in my assumption that `if` is better/more efficient here? Or is there a better way of writing this entirely that I'm unaware of?
|
2015/07/22
|
[
"https://Stackoverflow.com/questions/31570675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2887891/"
] |
Well my opinion is close to Rohith Subramanyam's. I think a for loop is definitely more readable, especially when you have more than two elements to test!
But I still think an if block is far more logical to use (pun intended!). It is also strictly speaking less redundant in terms of lines of code:
```
accepted_elements = ['a', 'y']
item['index'] = None
for accepted_element in accepted_elements:
    if accepted_element in mylist:
        item['index'] = mylist[mylist.index(accepted_element) + 1]
        break
```
I think the solution you use in the end is really up to you, as it depends on your code habits (except the for loop which is a must).
EDIT: Well actually, after some time measurements it seems that Rohith Subramanyam's version is slightly faster at first sight. (640 ns per loop vs 740 ns per loop)
|
Your first code snippet should rather be like below:
```
try:
    item['index'] = mylist[mylist.index('a') + 1]
except ValueError:
    try:
        item['index'] = mylist[mylist.index('y') + 1]
    except ValueError:
        item['index'] = None
```
or have a for loop like this:
```
for element in ['a', 'y']:
    try:
        item['index'] = mylist[mylist.index(element) + 1]
    except ValueError:
        continue
    else:
        break
else:
    item['index'] = None
```
IMO using try blocks is better performance-wise, since you avoid an if-check on every iteration regardless of how often the `ValueError` occurs, and it's also more readable and Pythonic.
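To illustrate the for/else loop above with the question's second case (`mylist = ('y', 'z')`):

```python
mylist = ('y', 'z')
item = {}
for element in ['a', 'y']:
    try:
        item['index'] = mylist[mylist.index(element) + 1]
    except ValueError:
        continue  # element not present, try the next one
    else:
        break     # found it, stop searching
else:
    item['index'] = None  # no accepted element was found

print(item['index'])  # -> z
```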
|
23,178,606
|
My python code has been crashing with error 'GC Object already Tracked' . Trying to figure out the best approach to debug this crashes.
OS : Linux.
* Is there a proper way to debug this issue.
There were couple of suggestions in the following article.
[Python memory debugging with GDB](https://stackoverflow.com/questions/273043/python-memory-debugging-with-gdb/23143858#23143858)
Not sure which approach worked for the author.
* Is there a way to generate memory dumps in such scenario which could be analyzed. Like in Windows world.
Found some article on this. But not entirely answers my question:
<http://pfigue.github.io/blog/2012/12/28/where-is-my-core-dump-archlinux/>
|
2014/04/20
|
[
"https://Stackoverflow.com/questions/23178606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/992301/"
] |
Found out the reason for this issue in my scenario (not necessarily the only reason for the 'GC Object already tracked' crash).
I used GDB and core dumps to debug this issue.
I have Python and C extension code (in a shared object).
The Python code registers a callback routine with the C extension code.
In a certain workflow, a thread from the C extension code was calling the registered callback routine in the Python code.
This usually worked fine, but when multiple threads did the same action concurrently it resulted in the crash with 'GC Object already tracked'.
Synchronizing access to the Python objects across multiple threads resolved this issue.
Thanks to all who responded.
|
The problem is that you try to add an object to Python's cyclic garbage collector tracking twice.
Check out [this bug](http://bugs.python.org/issue6128), specifically:
* The [documentation for supporting cyclic garbage collection](https://docs.python.org/2/c-api/gcsupport.html) in Python
* [My documentation patch for the issue](http://bugs.python.org/file33497/gcsupport-doc.diff) and
* [My explanation in the bug report itself](http://bugs.python.org/msg208283)
Long story short: if you set `Py_TPFLAGS_HAVE_GC` and you are using Python's built-in memory allocation (standard `tp_alloc`/`tp_free`), you don't ever have to manually call `PyObject_GC_Track()` or `PyObject_GC_UnTrack()`. Python handles it all behind your back.
Unfortunately, this is not documented very well, at the moment. Once you've fixed the issue, feel free to chime in on the bug report (linked above) about better documentation of this behavior.
|
23,178,606
|
My python code has been crashing with error 'GC Object already Tracked' . Trying to figure out the best approach to debug this crashes.
OS : Linux.
* Is there a proper way to debug this issue.
There were couple of suggestions in the following article.
[Python memory debugging with GDB](https://stackoverflow.com/questions/273043/python-memory-debugging-with-gdb/23143858#23143858)
Not sure which approach worked for the author.
* Is there a way to generate memory dumps in such scenario which could be analyzed. Like in Windows world.
Found some article on this. But not entirely answers my question:
<http://pfigue.github.io/blog/2012/12/28/where-is-my-core-dump-archlinux/>
|
2014/04/20
|
[
"https://Stackoverflow.com/questions/23178606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/992301/"
] |
Found out the reason for this issue in my scenario (not necessarily the only reason for the 'GC Object already tracked' crash).
I used GDB and core dumps to debug this issue.
I have Python and C extension code (in a shared object).
The Python code registers a callback routine with the C extension code.
In a certain workflow, a thread from the C extension code was calling the registered callback routine in the Python code.
This usually worked fine, but when multiple threads did the same action concurrently it resulted in the crash with 'GC Object already tracked'.
Synchronizing access to the Python objects across multiple threads resolved this issue.
Thanks to all who responded.
|
I ran into this problem using boost::python when our C++ code would trigger a python callback. I would occasionally get "GC object already tracked" and the program would terminate.
I was able to attach GDB to the process prior to triggering the error. One interesting thing: in the Python code we were wrapping the callback with a functools partial, which was actually masking where the real error was occurring. After replacing the partial with a simple callable wrapper class, the "GC object already tracked" error no longer popped up; instead I was now just getting a segfault.
In our boost::python wrapper, we had lambda functions to handle a C++ callback and the lambda function captured the boost::python::object callback function. It turned out, for whatever reason, in the destructor for the lambda, it wasn't always properly acquiring the GIL when destroying the boost::python::object which was causing the segfault.
The fix was to not use a lambda function, but instead create a functor that makes sure to acquire the GIL in the destructor prior to calling PyDECREF() on the boost::python::object.
```
class callback_wrapper
{
public:
    callback_wrapper(object cb): _cb(cb), _destroyed(false) {
    }
    callback_wrapper(const callback_wrapper& other) {
        _destroyed = other._destroyed;
        Py_INCREF(other._cb.ptr());
        _cb = other._cb;
    }
    ~callback_wrapper() {
        std::lock_guard<std::recursive_mutex> guard(_mutex);
        PyGILState_STATE state = PyGILState_Ensure();
        Py_DECREF(_cb.ptr());
        PyGILState_Release(state);
        _destroyed = true;
    }
    void operator ()(topic_ptr topic) {
        std::lock_guard<std::recursive_mutex> guard(_mutex);
        if(_destroyed) {
            return;
        }
        PyGILState_STATE state = PyGILState_Ensure();
        try {
            _cb(topic);
        }
        catch(error_already_set) { PyErr_Print(); }
        PyGILState_Release(state);
    }

    object _cb;
    std::recursive_mutex _mutex;
    bool _destroyed;
};
```
|
72,890,039
|
I have two lists of lists and I want to check which elements of l1 are not in l2 and add all those elements to a new list ls in l2. For example,
input:
l1 = [[1,2,3],[5,6],[7,8,9,10]]
l2 = [[1,8,10],[3,9],[5,6]]
output:
l2 = [[1,8,10],[3,9],[5,6],[2,7]]
I wrote this code but it doesn't seem to work...
```
ls = []
for i in range(len(l1)):
    for j in range(len(l1[i])):
        if l1[i][j] not in l2 and l1[i][j] not in ls:
            ls.append(l1[i][j])
l2.append(ls)
```
I coded this in Python. Any ideas where the problem could be?
|
2022/07/06
|
[
"https://Stackoverflow.com/questions/72890039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19419689/"
] |
This should work:
```
l1 = [[1,2,3],[5,6],[7,8,9,10]]
l2 = [[1,8,10],[3,9],[5,6]]
l1n = set([x for xs in l1 for x in xs])
l2n = set([x for xs in l2 for x in xs])
l2.append(list(l1n - l2n))
```
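A note of caution: sets don't preserve order, so the appended sublist may come out in any order; wrapping the difference in `sorted()` makes the result deterministic:

```python
l1 = [[1, 2, 3], [5, 6], [7, 8, 9, 10]]
l2 = [[1, 8, 10], [3, 9], [5, 6]]

l1n = {x for xs in l1 for x in xs}  # flatten l1 into a set
l2n = {x for xs in l2 for x in xs}  # flatten l2 into a set
l2.append(sorted(l1n - l2n))        # sort for a deterministic order
print(l2)  # -> [[1, 8, 10], [3, 9], [5, 6], [2, 7]]
```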
|
You are checking whether the elements are in `l2`, not if they are in any of the sublists of `l2`. First, you should probably create a `set` from all the elements in all the sublists of `l2` to make that check faster. Then you can create a similar list comprehension on `l1` to get all the elements that are not in that set. Finally, add that to `l2` or create a new list.
```
>>> l1 = [[1,2,3],[5,6],[7,8,9,10]]
>>> l2 = [[1,8,10],[3,9],[5,6]]
>>> s2 = {x for lst in l2 for x in lst}
>>> [x for lst in l1 for x in lst if x not in s2]
[2, 7]
>>> l2 + [_]
[[1, 8, 10], [3, 9], [5, 6], [2, 7]]
```
|
72,890,039
|
I have two lists of lists and I want to check which elements of l1 are not in l2 and add all those elements to a new list ls in l2. For example,
input:
l1 = [[1,2,3],[5,6],[7,8,9,10]]
l2 = [[1,8,10],[3,9],[5,6]]
output:
l2 = [[1,8,10],[3,9],[5,6],[2,7]]
I wrote this code but it doesn't seem to work...
```
ls = []
for i in range(len(l1)):
    for j in range(len(l1[i])):
        if l1[i][j] not in l2 and l1[i][j] not in ls:
            ls.append(l1[i][j])
l2.append(ls)
```
I coded this in Python. Any ideas where the problem could be?
|
2022/07/06
|
[
"https://Stackoverflow.com/questions/72890039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19419689/"
] |
This should work:
```
l1 = [[1,2,3],[5,6],[7,8,9,10]]
l2 = [[1,8,10],[3,9],[5,6]]
l1n = set([x for xs in l1 for x in xs])
l2n = set([x for xs in l2 for x in xs])
l2.append(list(l1n - l2n))
```
|
Maybe you can also try it like this:
```
import itertools
#MERGE LIST1
l1 = [[1,2,3],[5,6],[7,8,9,10]]
merged = list(itertools.chain(*l1))
#MERGE LIST2
l2 = [[1,8,10],[3,9],[5,6]]
merged2 = list(itertools.chain(*l2))
#FIND THE ELEMENTS NOT IN SECOND LIST
newlist = [item for item in merged if item not in merged2]
#APPEND THE NEWLIST TO LIST2
l2.append(newlist)
print(l2)
```
|
72,890,039
|
I have two lists of lists and I want to check which elements of l1 are not in l2 and add all those elements to a new list ls in l2. For example,
input:
l1 = [[1,2,3],[5,6],[7,8,9,10]]
l2 = [[1,8,10],[3,9],[5,6]]
output:
l2 = [[1,8,10],[3,9],[5,6],[2,7]]
I wrote this code but it doesn't seem to work...
```
ls = []
for i in range(len(l1)):
    for j in range(len(l1[i])):
        if l1[i][j] not in l2 and l1[i][j] not in ls:
            ls.append(l1[i][j])
l2.append(ls)
```
I coded this in Python. Any ideas where the problem could be?
|
2022/07/06
|
[
"https://Stackoverflow.com/questions/72890039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19419689/"
] |
You are checking whether the elements are in `l2`, not if they are in any of the sublists of `l2`. First, you should probably create a `set` from all the elements in all the sublists of `l2` to make that check faster. Then you can create a similar list comprehension on `l1` to get all the elements that are not in that set. Finally, add that to `l2` or create a new list.
```
>>> l1 = [[1,2,3],[5,6],[7,8,9,10]]
>>> l2 = [[1,8,10],[3,9],[5,6]]
>>> s2 = {x for lst in l2 for x in lst}
>>> [x for lst in l1 for x in lst if x not in s2]
[2, 7]
>>> l2 + [_]
[[1, 8, 10], [3, 9], [5, 6], [2, 7]]
```
|
Maybe you can also try it like this:
```
import itertools
#MERGE LIST1
l1 = [[1,2,3],[5,6],[7,8,9,10]]
merged = list(itertools.chain(*l1))
#MERGE LIST2
l2 = [[1,8,10],[3,9],[5,6]]
merged2 = list(itertools.chain(*l2))
#FIND THE ELEMENTS NOT IN SECOND LIST
newlist = [item for item in merged if item not in merged2]
#APPEND THE NEWLIST TO LIST2
l2.append(newlist)
print(l2)
```
|
32,555,508
|
I have a large document with a similar structure:
```
Data800,
Data900,
Data1000,
]
}
```
How would I go about removing the last character from the 3rd to last line (in this case, where the comma is positioned next to Data1000). The output should look like this:
```
Data800,
Data900,
Data1000
]
}
```
It will always be the 3rd-to-last line that needs the last character removed. The back-end is Linux, and I can use Perl, bash, Python, etc.
|
2015/09/13
|
[
"https://Stackoverflow.com/questions/32555508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3139767/"
] |
A simple solution using `wc` to count lines and `sed` to do the editing:
```
sed "$(( $(wc -l <file) - 2))s/,$//" file
```
That will output the edited file on stdout; you can edit in place with `sed -i`.
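For comparison, a Python sketch of the same edit (this assumes the whole file fits in memory; it is demonstrated here on a temporary file rather than a real path):

```python
import os
import tempfile

def strip_comma_third_from_last(path):
    """Remove a trailing comma from the 3rd-to-last line of a file."""
    with open(path) as f:
        lines = f.readlines()
    lines[-3] = lines[-3].rstrip('\n').rstrip(',') + '\n'
    with open(path, 'w') as f:
        f.writelines(lines)

# demo on a temporary file holding the sample data from the question
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as tmp:
    tmp.write("Data800,\nData900,\nData1000,\n]\n}\n")
strip_comma_third_from_last(tmp.name)
with open(tmp.name) as f:
    result = f.read()
os.unlink(tmp.name)
print(result)
```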
|
In Python (note: the file must be opened with `'r+'`, not `'w+'`, since `'w+'` truncates the file before it can be read):
```
with open('/path/of/file', 'r+') as f:
    content = f.read().replace(',\n]', '\n]')
    f.seek(0)
    f.write(content)
    f.truncate()
```
|
32,555,508
|
I have a large document with a similar structure:
```
Data800,
Data900,
Data1000,
]
}
```
How would I go about removing the last character from the 3rd to last line (in this case, where the comma is positioned next to Data1000). The output should look like this:
```
Data800,
Data900,
Data1000
]
}
```
It will always be the 3rd-to-last line that needs the last character removed. The back-end is Linux, and I can use Perl, bash, Python, etc.
|
2015/09/13
|
[
"https://Stackoverflow.com/questions/32555508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3139767/"
] |
A simple solution using `wc` to count lines and `sed` to do the editing:
```
sed "$(( $(wc -l <file) - 2))s/,$//" file
```
That will output the edited file on stdout; you can edit in place with `sed -i`.
|
```
with open('a.txt', "rb+") as f:  # binary mode: relative seeks and appends behave predictably
    f.seek(-2, 2)  # jump to the second last byte
    counter = 0
    while counter < 2:  # walk backwards until two EOLs are found
        if f.read(1) == b"\n":  # count EOLs
            counter += 1
        f.seek(-2, 1)  # jump back past the read byte plus one more
    position = f.tell()  # save position
    last_three_lines = f.read()  # read last three lines
    f.seek(position, 0)  # jump back
    f.truncate()  # remove all the rest
    f.write(last_three_lines[1:])  # write back, dropping one character
```
Input:
------
```
AAAAAa
BBBBBb
CCCCCc
DDDDDd
EEEEEe
FFFFFf
GGGGGg
```
Output:
-------
```
AAAAAa
BBBBBb
CCCCCc
DDDDDd
EEEEE
FFFFFf
GGGGGg
```
|
32,555,508
|
I have a large document with a similar structure:
```
Data800,
Data900,
Data1000,
]
}
```
How would I go about removing the last character from the 3rd to last line (in this case, where the comma is positioned next to Data1000). The output should look like this:
```
Data800,
Data900,
Data1000
]
}
```
It will always be the 3rd-to-last line that needs the last character removed. The back-end is Linux, and I can use Perl, bash, Python, etc.
|
2015/09/13
|
[
"https://Stackoverflow.com/questions/32555508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3139767/"
] |
A simple solution using `wc` to count lines and `sed` to do the editing:
```
sed "$(( $(wc -l <file) - 2))s/,$//" file
```
That will output the edited file on stdout; you can edit in place with `sed -i`.
|
Perl's [`Tie::File`](https://metacpan.org/pod/Tie::File) module makes this trivial. It has the effect of *tying* an array to a disk file, so that any changes made to the array are reflected in the file
It would look like this (untested, as I'm posting from my tablet). The path to the input file is expected as a parameter on the command line. The line terminators are already removed from the strings that appear in the array, so a call to `chop` will remove the last character of the text
```
use strict;
use warnings;
use Tie::File;
tie my @file, 'Tie::File', shift or die $!;
chop $file[-3];
untie @file;
```
|
32,555,508
|
I have a large document with a similar structure:
```
Data800,
Data900,
Data1000,
]
}
```
How would I go about removing the last character from the 3rd to last line (in this case, where the comma is positioned next to Data1000). The output should look like this:
```
Data800,
Data900,
Data1000
]
}
```
It will always be the 3rd-to-last line that needs the last character removed. The back-end is Linux, and I can use Perl, bash, Python, etc.
|
2015/09/13
|
[
"https://Stackoverflow.com/questions/32555508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3139767/"
] |
A simple solution using `wc` to count lines and `sed` to do the editing:
```
sed "$(( $(wc -l <file) - 2))s/,$//" file
```
That will output the edited file on stdout; you can edit in place with `sed -i`.
|
The following removes commas followed by `]` or by `}` (with optional whitespace between the two):
```
perl -0777pe's/,(?=\s*[\]}])//g'
```
Usage:
```
perl -0777pe's/,(?=\s*[\]}])//g' file.in >file.out # Read from file
perl -0777pe's/,(?=\s*[\]}])//g' <file.in >file.out # Read from STDIN
perl -i~ -0777pe's/,(?=\s*[\]}])//g' file # In-place, with backup
perl -i -0777pe's/,(?=\s*[\]}])//g' file # In-place, without backup
```
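If a Python solution is preferred, the same lookahead translates directly to Python's `re` module (using the sample text from the question):

```python
import re

text = "Data800,\nData900,\nData1000,\n]\n}\n"
# remove any comma that is followed only by whitespace and then ] or }
cleaned = re.sub(r',(?=\s*[\]}])', '', text)
print(cleaned)  # -> Data800,\nData900,\nData1000\n]\n}\n
```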
|
32,555,508
|
I have a large document with a similar structure:
```
Data800,
Data900,
Data1000,
]
}
```
How would I go about removing the last character from the 3rd to last line (in this case, where the comma is positioned next to Data1000). The output should look like this:
```
Data800,
Data900,
Data1000
]
}
```
It will always be the 3rd-to-last line that needs the last character removed. The back-end is Linux, and I can use Perl, bash, Python, etc.
|
2015/09/13
|
[
"https://Stackoverflow.com/questions/32555508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3139767/"
] |
Perl's [`Tie::File`](https://metacpan.org/pod/Tie::File) module makes this trivial. It has the effect of *tying* an array to a disk file, so that any changes made to the array are reflected in the file
It would look like this (untested, as I'm posting from my tablet). The path to the input file is expected as a parameter on the command line. The line terminators are already removed from the strings that appear in the array, so a call to `chop` will remove the last character of the text
```
use strict;
use warnings;
use Tie::File;
tie my @file, 'Tie::File', shift or die $!;
chop $file[-3];
untie @file;
```
|
In Python (note: the file must be opened with `'r+'`, not `'w+'`, since `'w+'` truncates the file before it can be read):
```
with open('/path/of/file', 'r+') as f:
    content = f.read().replace(',\n]', '\n]')
    f.seek(0)
    f.write(content)
    f.truncate()
```
|
32,555,508
|
I have a large document with a similar structure:
```
Data800,
Data900,
Data1000,
]
}
```
How would I go about removing the last character from the 3rd to last line (in this case, where the comma is positioned next to Data1000). The output should look like this:
```
Data800,
Data900,
Data1000
]
}
```
It will always be the 3rd-to-last line that needs the last character removed. The back-end is Linux, and I can use Perl, bash, Python, etc.
|
2015/09/13
|
[
"https://Stackoverflow.com/questions/32555508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3139767/"
] |
Perl's [`Tie::File`](https://metacpan.org/pod/Tie::File) module makes this trivial. It has the effect of *tying* an array to a disk file, so that any changes made to the array are reflected in the file
It would look like this (untested, as I'm posting from my tablet). The path to the input file is expected as a parameter on the command line. The line terminators are already removed from the strings that appear in the array, so a call to `chop` will remove the last character of the text
```
use strict;
use warnings;
use Tie::File;
tie my @file, 'Tie::File', shift or die $!;
chop $file[-3];
untie @file;
```
|
```
with open('a.txt', "rb+") as f:  # binary mode: relative seeks and appends behave predictably
    f.seek(-2, 2)  # jump to the second last byte
    counter = 0
    while counter < 2:  # walk backwards until two EOLs are found
        if f.read(1) == b"\n":  # count EOLs
            counter += 1
        f.seek(-2, 1)  # jump back past the read byte plus one more
    position = f.tell()  # save position
    last_three_lines = f.read()  # read last three lines
    f.seek(position, 0)  # jump back
    f.truncate()  # remove all the rest
    f.write(last_three_lines[1:])  # write back, dropping one character
```
Input:
------
```
AAAAAa
BBBBBb
CCCCCc
DDDDDd
EEEEEe
FFFFFf
GGGGGg
```
Output:
-------
```
AAAAAa
BBBBBb
CCCCCc
DDDDDd
EEEEE
FFFFFf
GGGGGg
```
|
32,555,508
|
I have a large document with a similar structure:
```
Data800,
Data900,
Data1000,
]
}
```
How would I go about removing the last character from the 3rd to last line (in this case, where the comma is positioned next to Data1000). The output should look like this:
```
Data800,
Data900,
Data1000
]
}
```
It will always be the 3rd-to-last line that needs the last character removed. The back-end is Linux, and I can use Perl, bash, Python, etc.
|
2015/09/13
|
[
"https://Stackoverflow.com/questions/32555508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3139767/"
] |
Perl's [`Tie::File`](https://metacpan.org/pod/Tie::File) module makes this trivial. It has the effect of *tying* an array to a disk file, so that any changes made to the array are reflected in the file
It would look like this (untested, as I'm posting from my tablet). The path to the input file is expected as a parameter on the command line. The line terminators are already removed from the strings that appear in the array, so a call to `chop` will remove the last character of the text
```
use strict;
use warnings;
use Tie::File;
tie my @file, 'Tie::File', shift or die $!;
chop $file[-3];
untie @file;
```
|
The following removes commas followed by `]` or by `}` (with optional whitespace between the two):
```
perl -0777pe's/,(?=\s*[\]}])//g'
```
Usage:
```
perl -0777pe's/,(?=\s*[\]}])//g' file.in >file.out # Read from file
perl -0777pe's/,(?=\s*[\]}])//g' <file.in >file.out # Read from STDIN
perl -i~ -0777pe's/,(?=\s*[\]}])//g' file # In-place, with backup
perl -i -0777pe's/,(?=\s*[\]}])//g' file # In-place, without backup
```
|
4,756,205
|
With some Python 3 features and modules having been backported to Python 2.7, what are the notable differences between Python 3.1 and Python 2.7?
|
2011/01/21
|
[
"https://Stackoverflow.com/questions/4756205",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/538850/"
] |
I think these resources might help you:
* An [introduction to Python "3000"](http://www.python.org/doc/essays/ppt/pycon2008/Py3kAndYou.pdf) from Guido van Rossum
* [Porting your code to Python 3](http://peadrop.com/slides/mp5.pdf)
* and of course the documentation of [changes in Python 3.0](http://docs.python.org/py3k/whatsnew/3.0.html)
And as you said
>
> Some python 3 features and modules having been backported to python 2.7
>
>
>
... I would invert that sentence and say [only a few packages](https://python3wos.appspot.com/) have yet been ported from Python 2.x to 3.x. Great libraries like [PyGTK](http://pygtk.org/) still only work in Python 2. Migration can take a while in many projects, so before you decide to use Python 3 you may rather write your own projects in Python 2, while ensuring compatibility by testing with [2to3](http://docs.python.org/library/2to3.html) regularly.
|
If you want to use some Python 3 functionality in Python 2.7, you can import it from the `__future__` module at the beginning of your file and then use it in your code.
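For example (a minimal sketch; these imports are no-ops on Python 3, where this behaviour is already the default):

```python
# Pull Python 3 print and division semantics into Python 2.7.
from __future__ import print_function, division

ratio = 3 / 2           # true division, as in Python 3 (not floor division)
print('ratio:', ratio)  # -> ratio: 1.5
```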
|
22,636,974
|
I am trying to figure out how to properly make a discrete state Markov chain model with [`pymc`](http://pymc-devs.github.io/pymc/index.html).
As an example (view in [nbviewer](http://nbviewer.ipython.org/github/shpigi/pieces/blob/master/toyDbnExample.ipynb)), let's make a chain of length T=10 where the Markov state is binary, the initial state distribution is [0.2, 0.8], the probability of switching states in state 1 is 0.01, and in state 2 it is 0.5
```
import numpy as np
import pymc as pm
T = 10
prior0 = [0.2, 0.8]
transMat = [[0.99, 0.01], [0.5, 0.5]]
```
To make the model, I make an array of state variables and an array of transition probabilities that depend on the state variables (using the pymc.Index function)
```
states = np.empty(T, dtype=object)
states[0] = pm.Categorical('state_0', prior0)
transPs = np.empty(T, dtype=object)
transPs[0] = pm.Index('trans_0', transMat, states[0])
for i in range(1, T):
    states[i] = pm.Categorical('state_%i' % i, transPs[i-1])
    transPs[i] = pm.Index('trans_%i' % i, transMat, states[i])
```
Sampling the model shows that the states marginals are what they should be (compared with model built with Kevin Murphy's BNT package in Matlab)
```
model = pm.MCMC([states, transPs])
model.sample(10000, 5000)
[np.mean(model.trace('state_%i' %i)[:]) for i in range(T)]
```
prints out:
```
[-----------------100%-----------------] 10000 of 10000 complete in 7.5 sec
[0.80020000000000002,
0.39839999999999998,
0.20319999999999999,
0.1118,
0.064199999999999993,
0.044600000000000001,
0.033000000000000002,
0.026200000000000001,
0.024199999999999999,
0.023800000000000002]
```
My question is - this does not seem like the most elegant way to build a Markov chain with pymc. Is there a cleaner way that does not require the array of deterministic functions?
My goal is to write a pymc based package for more general dynamic Bayesian networks.
|
2014/03/25
|
[
"https://Stackoverflow.com/questions/22636974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3459899/"
] |
Use this:
```
$('#window')
    .data("kendoWindow")
    .title("Add new category")
    .content("") //this little thing does magic
    .refresh({
        url: _url
    })
    .center()
    .open();
```
I would however suggest you rearrange your calls:
```
$('#window')
    .data("kendoWindow")
    .title("Add new category")
    //.content("") //this little thing does magic
    .content("<img src='ajax-loader.gif' alt='Loading...'/>")
    .center()
    .open()
    .refresh({
        url: _url
    });
```
Using the second configuration and providing a valid loading image, the user will see your window and be informed that the content is being loaded. This is very helpful (not to mention user friendly) since the Kendo window makes an AJAX request when you use the `refresh` function.
Alternatively you could add an event on the window `close` event and set `content("")` inside the handler.
|
You might also want to add
```
.Events(e =>
{
    e.Refresh("onRefresh");
})
<script>
    function onRefresh() {
        this.center();
    }
</script>
```
to keep the window centered after the content is loaded as it is loaded asynchronously.
(if your content height is variable)
|
18,983,906
|
Why is the time taken to access an element in a non-homogeneous list considered constant? In other words, how is a non-homogeneous list stored in memory in Python?
For a homogeneous list, knowing the type and position, the memory location can be computed, but how is it done for a non-homogeneous list so fast that access is considered constant time?
Going through this page [Does Python use linked lists for lists? Why is inserting slow?](https://stackoverflow.com/questions/12274060/does-python-use-linked-lists-for-lists-why-is-inserting-slow) I assume it stores elements in contiguous memory locations.
|
2013/09/24
|
[
"https://Stackoverflow.com/questions/18983906",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2216108/"
] |
Expand your controller action with a new parameter:
```
public JsonResult ResolveProfileSelectionRequired(ResolveProfileSelectionRequiredModel model, bool newtransferee = false)
```
and check it inside the action.
|
You could use:
```
if (Request.Form["newtransferee"].ToLower() == "true")
```
or does it work if you write this instead of the manual html?
```
@Html.HiddenFor(model => model.NewTransferee)
```
|
18,983,906
|
Why is the time taken to access an element in a non-homogeneous list considered constant? In other words, how is a non-homogeneous list stored in memory in Python?
For a homogeneous list, knowing the type and position, the memory location can be computed, but how is it done for a non-homogeneous list so fast that access is considered constant time?
Going through this page [Does Python use linked lists for lists? Why is inserting slow?](https://stackoverflow.com/questions/12274060/does-python-use-linked-lists-for-lists-why-is-inserting-slow) I assume it stores elements in contiguous memory locations.
|
2013/09/24
|
[
"https://Stackoverflow.com/questions/18983906",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2216108/"
] |
If all you need to do is check the value of a boolean, that's just a conditional statement:
```
public JsonResult ResolveProfileSelectionRequired(ResolveProfileSelectionRequiredModel model)
{
    if (model.NewTransferee)
    {
        // do something
    }
    else
    {
        // do something else
    }
}
```
It's not entirely clear what your goal is though, since your current `return` statement depends on the output of the method that you only want to call conditionally. All code paths will need to return *something*. Maybe like this?:
```
public JsonResult ResolveProfileSelectionRequired(ResolveProfileSelectionRequiredModel model)
{
    if (model.NewTransferee)
    {
        var resolved = SqlDataProvider.ResolveProfileSelectionRequiredJobException(model);
        return resolved ? Json("Success") : Json("Failed");
    }
    else
    {
        return Json("Success");
    }
}
```
Again, I can't know without knowing more about your intended behavior for this action.
It's worth noting that this isn't really an "MVC" thing. At its simplest, this action method is just a method which receives an argument of type `ResolveProfileSelectionRequiredModel`. The conditional statement is just plain C#. All code paths returning a value is plain C#. The only thing "MVC" about this method is that it's returning a `JsonResult`.
---
**Edit:** I just noticed... The value you're looking for is on the `CreateTransferee` model, but your controller is receiving a `ResolveProfileSelectionRequiredModel`? Is that value on the `ResolveProfileSelectionRequiredModel`? They seem to be very similar. If it's not on that model, you can either add it to that model or use @Raidri's approach to add it as an explicit parameter to the controller action.
|
Expand your controller action with a new parameter:
```
public JsonResult ResolveProfileSelectionRequired(ResolveProfileSelectionRequiredModel model, bool newtransferee = false)
```
and check it inside the action.
|
18,983,906
|
Why is the time taken to access an element in a non-homogeneous list considered constant? In other words, how is a non-homogeneous list stored in memory in Python?
For a homogeneous list, knowing the type and position, the memory location can be computed, but how is it done for a non-homogeneous list so fast that access is considered constant time?
Going through this page [Does Python use linked lists for lists? Why is inserting slow?](https://stackoverflow.com/questions/12274060/does-python-use-linked-lists-for-lists-why-is-inserting-slow) I assume it stores elements in contiguous memory locations.
|
2013/09/24
|
[
"https://Stackoverflow.com/questions/18983906",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2216108/"
] |
If all you need to do is check the value of a boolean, that's just a conditional statement:
```
public JsonResult ResolveProfileSelectionRequired(ResolveProfileSelectionRequiredModel model)
{
    if (model.NewTransferee)
    {
        // do something
    }
    else
    {
        // do something else
    }
}
```
It's not entirely clear what your goal is though, since your current `return` statement depends on the output of the method that you only want to call conditionally. All code paths will need to return *something*. Maybe like this?:
```
public JsonResult ResolveProfileSelectionRequired(ResolveProfileSelectionRequiredModel model)
{
    if (model.NewTransferee)
    {
        var resolved = SqlDataProvider.ResolveProfileSelectionRequiredJobException(model);
        return resolved ? Json("Success") : Json("Failed");
    }
    else
    {
        return Json("Success");
    }
}
```
Again, I can't know without knowing more about your intended behavior for this action.
It's worth noting that this isn't really an "MVC" thing. At its simplest, this action method is just a method which receives an argument of type `ResolveProfileSelectionRequiredModel`. The conditional statement is just plain C#. All code paths returning a value is plain C#. The only thing "MVC" about this method is that it's returning a `JsonResult`.
---
**Edit:** I just noticed... The value you're looking for is on the `CreateTransferee` model, but your controller is receiving a `ResolveProfileSelectionRequiredModel`? Is that value on the `ResolveProfileSelectionRequiredModel`? They seem to be very similar. If it's not on that model, you can either add it to that model or use @Raidri's approach to add it as an explicit parameter to the controller action.
|
You could use:
```
if (Request.Form["newtransferee"].ToLower() == "true")
```
or does it work if you write this instead of the manual html?
```
@Html.HiddenFor(model => model.NewTransferee)
```
|
13,116,026
|
I am new to Python. I am writing a string function to check whether a certain string contains multiple specific values. Is there syntax in Python that allows me to do something like this:
```
def strParse(str):
a = 't'
b = 'br'
c = 'ht'
if a in str AND b in str AND c in str:
print('Ok!')
```
(The part I am unsure about is having multiple conditions in one `if` statement.) Thanks!
|
2012/10/29
|
[
"https://Stackoverflow.com/questions/13116026",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1507139/"
] |
Almost correct, just make the `and` lower-case:
```
def strParse(str):
a = 't'
b = 'br'
c = 'ht'
if a in str and b in str and c in str:
print('Ok!')
```
Doesn't cause problems in this case, but you should avoid using variable names that are also [built-in functions](http://docs.python.org/2/library/functions.html) (`str` is a built-in type)
If you have more values, you can do the same thing more tidily like so:
```
values = ['t', 'br', 'ht']
if all(x in instr for x in values):
print("Ok!")
```
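A quick demonstration of why shadowing `str` is risky (a sketch, not part of the original answer):

```python
# Rebinding the name `str` hides the built-in type in this scope.
str = "hello"          # shadows the built-in str type
try:
    str(3)             # now fails: a string object isn't callable
except TypeError as e:
    print("TypeError:", e)

del str                # remove the shadow; the built-in is reachable again
assert str(3) == "3"
```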
|
Why don't you try typing this into the Python REPL? What you're trying to do is perfectly valid in Python, except the `and` keyword is lowercase, not uppercase.
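For example, a minimal sketch of the corrected function (renamed to avoid shadowing the built-in `str`; the sample strings are made up):

```python
# Lowercase `and` chains the membership tests; uppercase AND is a SyntaxError.
def str_parse(text):
    return 't' in text and 'br' in text and 'ht' in text

print(str_parse('the bright hat'))  # True  -- 't', 'br', and 'ht' all occur
print(str_parse('the brown cap'))   # False -- 'ht' is missing
```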
|
13,116,026
|
I am new to Python. I am writing a string function to check whether a certain string contains multiple specific values. Is there syntax in Python that allows me to do something like this:
```
def strParse(str):
a = 't'
b = 'br'
c = 'ht'
if a in str AND b in str AND c in str:
print('Ok!')
```
(The part I am unsure about is having multiple conditions in one `if` statement.) Thanks!
|
2012/10/29
|
[
"https://Stackoverflow.com/questions/13116026",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1507139/"
] |
Almost correct, just make the `and` lower-case:
```
def strParse(str):
a = 't'
b = 'br'
c = 'ht'
if a in str and b in str and c in str:
print('Ok!')
```
Doesn't cause problems in this case, but you should avoid using variable names that are also [built-in functions](http://docs.python.org/2/library/functions.html) (`str` is a built-in type)
If you have more values, you can do the same thing more tidily like so:
```
values = ['t', 'br', 'ht']
if all(x in instr for x in values):
print("Ok!")
```
|
```
if all(s in text for s in (a, b, c)):
print("ok")
```
|
67,628,700
|
Please help me understand this, as I am new to Python:
I have a list of names...
```
names = ['John', 'William', 'Amanda']
```
I need to iterate through this list and print out the names as follows:
```
1. John
2. William
3. Amanda
```
I've managed to print the list out however I'm getting this extra space between the index number and the dot like this:
```
1' '. John
2' '. William
3' '. Amanda
```
my code:
```
index = 1
for i in names:
print(index, '.', i)
index += 1
```
|
2021/05/20
|
[
"https://Stackoverflow.com/questions/67628700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15166148/"
] |
A couple of easy ways to achieve this:
1. `enumerate` and `f-string`:
```
for index,item in enumerate(names,start=1):
print(f'{index}. {item}')
```
2. `enumerate` and `format function`:
```
for index,item in enumerate(names,start=1):
print('{}. {}'.format(index,item))
```
3. `enumerate` and `%`-style string formatting:
```
for index,item in enumerate(names,start=1):
print('%s. %s' %(index,item))
```
4. `enumerate` and manipulating args of `print` function:
```
for index,item in enumerate(names,start=1):
print(index,'. ',item, sep='')
```
5. With `range`:
```
for i in range(len(names)):
print((i+1),'. ',names[i], sep='')
```
6. With `range` and `f-string`:
```
for i in range(len(names)):
    print(f'{(i+1)}. {names[i]}')
```
7. Using an extra variable (not recommended):
```
index = 1
names = ['John', 'William', 'Amanda']
for i in names:
print(index, '. ', i, sep='')
index += 1
```
NOTE: There are various other ways to do this, but I'd recommend `enumerate` with an f-string.
|
You can control the separator with the `sep` argument of `print`, like this:
```
index = 1
names = ['John', 'William', 'Amanda']
for i in names:
    print(index, '.', ' ', i, sep='')  # remove the ' ' if you don't want the space after the dot
    index += 1
#OUTPUT:
#1. John
#2. William
#3. Amanda
```
|
67,628,700
|
Please help me understand this, as I am new to Python:
I have a list of names...
```
names = ['John', 'William', 'Amanda']
```
I need to iterate through this list and print out the names as follows:
```
1. John
2. William
3. Amanda
```
I've managed to print the list out however I'm getting this extra space between the index number and the dot like this:
```
1' '. John
2' '. William
3' '. Amanda
```
my code:
```
index = 1
for i in names:
print(index, '.', i)
index += 1
```
|
2021/05/20
|
[
"https://Stackoverflow.com/questions/67628700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15166148/"
] |
The most pythonic way of doing this in Python 3 is using [enumerate](https://docs.python.org/3/library/functions.html#enumerate) and [f-strings](https://docs.python.org/3/reference/lexical_analysis.html#f-strings):
```py
names = ['John', 'William', 'Amanda']
for index, item in enumerate(names, start=1):
print(f'{index}. {item}')
```
Output:
```
1. John
2. William
3. Amanda
```
|
You can control the separator with the `sep` argument of `print`, like this:
```
index = 1
names = ['John', 'William', 'Amanda']
for i in names:
    print(index, '.', ' ', i, sep='')  # remove the ' ' if you don't want the space after the dot
    index += 1
#OUTPUT:
#1. John
#2. William
#3. Amanda
```
|
67,628,700
|
Please help me understand this, as I am new to Python:
I have a list of names...
```
names = ['John', 'William', 'Amanda']
```
I need to iterate through this list and print out the names as follows:
```
1. John
2. William
3. Amanda
```
I've managed to print the list out however I'm getting this extra space between the index number and the dot like this:
```
1' '. John
2' '. William
3' '. Amanda
```
my code:
```
index = 1
for i in names:
print(index, '.', i)
index += 1
```
|
2021/05/20
|
[
"https://Stackoverflow.com/questions/67628700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15166148/"
] |
A couple of easy ways to achieve this:
1. `enumerate` and `f-string`:
```
for index,item in enumerate(names,start=1):
print(f'{index}. {item}')
```
2. `enumerate` and `format function`:
```
for index,item in enumerate(names,start=1):
print('{}. {}'.format(index,item))
```
3. `enumerate` and `%`-style string formatting:
```
for index,item in enumerate(names,start=1):
print('%s. %s' %(index,item))
```
4. `enumerate` and manipulating args of `print` function:
```
for index,item in enumerate(names,start=1):
print(index,'. ',item, sep='')
```
5. With `range`:
```
for i in range(len(names)):
print((i+1),'. ',names[i], sep='')
```
6. With `range` and `f-string`:
```
for i in range(len(names)):
    print(f'{(i+1)}. {names[i]}')
```
7. Using an extra variable (not recommended):
```
index = 1
names = ['John', 'William', 'Amanda']
for i in names:
print(index, '. ', i, sep='')
index += 1
```
NOTE: There are various other ways to do this, but I'd recommend `enumerate` with an f-string.
|
You can just add a space after your dot, and you don't really need a separate index variable. I changed your code a little bit:
**Code:**
```
names = ['John', 'William', 'Amanda']
for i in range(len(names)):
    index = str(i + 1)
    print(index + '. ' + names[i])
    # if you don't want the space between the dot and the name, change '. ' to '.'
```
**Output:**
```
1. John
2. William
3. Amanda
```
|
67,628,700
|
Please help me understand this, as I am new to Python:
I have a list of names...
```
names = ['John', 'William', 'Amanda']
```
I need to iterate through this list and print out the names as follows:
```
1. John
2. William
3. Amanda
```
I've managed to print the list out however I'm getting this extra space between the index number and the dot like this:
```
1' '. John
2' '. William
3' '. Amanda
```
my code:
```
index = 1
for i in names:
print(index, '.', i)
index += 1
```
|
2021/05/20
|
[
"https://Stackoverflow.com/questions/67628700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15166148/"
] |
The most pythonic way of doing this in Python 3 is using [enumerate](https://docs.python.org/3/library/functions.html#enumerate) and [f-strings](https://docs.python.org/3/reference/lexical_analysis.html#f-strings):
```py
names = ['John', 'William', 'Amanda']
for index, item in enumerate(names, start=1):
print(f'{index}. {item}')
```
Output:
```
1. John
2. William
3. Amanda
```
|
You can just add a space after your dot, and you don't really need a separate index variable. I changed your code a little bit:
**Code:**
```
names = ['John', 'William', 'Amanda']
for i in range(len(names)):
    index = str(i + 1)
    print(index + '. ' + names[i])
    # if you don't want the space between the dot and the name, change '. ' to '.'
```
**Output:**
```
1. John
2. William
3. Amanda
```
|
67,628,700
|
Please help me understand this, as I am new to Python:
I have a list of names...
```
names = ['John', 'William', 'Amanda']
```
I need to iterate through this list and print out the names as follows:
```
1. John
2. William
3. Amanda
```
I've managed to print the list out however I'm getting this extra space between the index number and the dot like this:
```
1' '. John
2' '. William
3' '. Amanda
```
my code:
```
index = 1
for i in names:
print(index, '.', i)
index += 1
```
|
2021/05/20
|
[
"https://Stackoverflow.com/questions/67628700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15166148/"
] |
The most pythonic way of doing this in Python 3 is using [enumerate](https://docs.python.org/3/library/functions.html#enumerate) and [f-strings](https://docs.python.org/3/reference/lexical_analysis.html#f-strings):
```py
names = ['John', 'William', 'Amanda']
for index, item in enumerate(names, start=1):
print(f'{index}. {item}')
```
Output:
```
1. John
2. William
3. Amanda
```
|
A couple of easy ways to achieve this:
1. `enumerate` and `f-string`:
```
for index,item in enumerate(names,start=1):
print(f'{index}. {item}')
```
2. `enumerate` and `format function`:
```
for index,item in enumerate(names,start=1):
print('{}. {}'.format(index,item))
```
3. `enumerate` and `%`-style string formatting:
```
for index,item in enumerate(names,start=1):
print('%s. %s' %(index,item))
```
4. `enumerate` and manipulating args of `print` function:
```
for index,item in enumerate(names,start=1):
print(index,'. ',item, sep='')
```
5. With `range`:
```
for i in range(len(names)):
print((i+1),'. ',names[i], sep='')
```
6. With `range` and `f-string`:
```
for i in range(len(names)):
    print(f'{(i+1)}. {names[i]}')
```
7. Using an extra variable (not recommended):
```
index = 1
names = ['John', 'William', 'Amanda']
for i in names:
print(index, '. ', i, sep='')
index += 1
```
NOTE: There are various other ways to do this, but I'd recommend `enumerate` with an f-string.
|
67,628,700
|
Please help me understand this, as I am new to Python:
I have a list of names...
```
names = ['John', 'William', 'Amanda']
```
I need to iterate through this list and print out the names as follows:
```
1. John
2. William
3. Amanda
```
I've managed to print the list out however I'm getting this extra space between the index number and the dot like this:
```
1' '. John
2' '. William
3' '. Amanda
```
my code:
```
index = 1
for i in names:
print(index, '.', i)
index += 1
```
|
2021/05/20
|
[
"https://Stackoverflow.com/questions/67628700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15166148/"
] |
A couple of easy ways to achieve this:
1. `enumerate` and `f-string`:
```
for index,item in enumerate(names,start=1):
print(f'{index}. {item}')
```
2. `enumerate` and `format function`:
```
for index,item in enumerate(names,start=1):
print('{}. {}'.format(index,item))
```
3. `enumerate` and `%`-style string formatting:
```
for index,item in enumerate(names,start=1):
print('%s. %s' %(index,item))
```
4. `enumerate` and manipulating args of `print` function:
```
for index,item in enumerate(names,start=1):
print(index,'. ',item, sep='')
```
5. With `range`:
```
for i in range(len(names)):
print((i+1),'. ',names[i], sep='')
```
6. With `range` and `f-string`:
```
for i in range(len(names)):
    print(f'{(i+1)}. {names[i]}')
```
7. Using an extra variable (not recommended):
```
index = 1
names = ['John', 'William', 'Amanda']
for i in names:
print(index, '. ', i, sep='')
index += 1
```
NOTE: There are various other ways to do this, but I'd recommend `enumerate` with an f-string.
|
```
names = ['John', 'William', 'Amanda']
counter = 1
for name in names:
print(str(counter) + ". " + name)
counter += 1
```
This is what you were trying to do. The extra space came from `print`, which inserts a space between comma-separated arguments by default; building the line with `+` (after converting the counter with `str`) avoids it.
[result:](https://i.stack.imgur.com/e2fDi.png)
|
67,628,700
|
Please help me understand this, as I am new to Python:
I have a list of names...
```
names = ['John', 'William', 'Amanda']
```
I need to iterate through this list and print out the names as follows:
```
1. John
2. William
3. Amanda
```
I've managed to print the list out however I'm getting this extra space between the index number and the dot like this:
```
1' '. John
2' '. William
3' '. Amanda
```
my code:
```
index = 1
for i in names:
print(index, '.', i)
index += 1
```
|
2021/05/20
|
[
"https://Stackoverflow.com/questions/67628700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15166148/"
] |
The most pythonic way of doing this in Python 3 is using [enumerate](https://docs.python.org/3/library/functions.html#enumerate) and [f-strings](https://docs.python.org/3/reference/lexical_analysis.html#f-strings):
```py
names = ['John', 'William', 'Amanda']
for index, item in enumerate(names, start=1):
print(f'{index}. {item}')
```
Output:
```
1. John
2. William
3. Amanda
```
|
```
names = ['John', 'William', 'Amanda']
counter = 1
for name in names:
print(str(counter) + ". " + name)
counter += 1
```
This is what you were trying to do. The extra space came from `print`, which inserts a space between comma-separated arguments by default; building the line with `+` (after converting the counter with `str`) avoids it.
[result:](https://i.stack.imgur.com/e2fDi.png)
|
5,529,075
|
I'm working on an academic project aimed at studying people behavior.
The project will be divided in three parts:
1. A program to read the data from some remote sources, and build a local data pool with it.
2. A program to validate this data pool, and to keep it coherent
3. A web interface to allow people to read/manipulate the data.
The data consists of a list of people, all with an ID #, and with several characteristics: height, weight, age, ...
I need to easily make groups out of this data (e.g.: all with a given age, or a range of heights) and the data is several TB big (but it can be reduced to smaller subsets of 2-3 GB).
I have a strong background in the theoretical stuff behind the project, but I'm not a computer scientist. I know Java, C and Matlab, and now I'm learning Python.
I would like to use Python since it seems easy enough and greatly reduces the verbosity of Java. The problem is that I'm wondering how to handle the data pool.
I'm no expert in databases, but I guess I need one here. What tools do you think I should use?
Remember that the aim is to implement very advanced mathematical functions on sets of data, thus we want to reduce complexity of source code. Speed is not an issue.
|
2011/04/03
|
[
"https://Stackoverflow.com/questions/5529075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/269931/"
] |
Sounds that the main functionality needed can be found from:
[pytables](http://www.pytables.org/moin)
and
[scipy/numpy](http://www.scipy.org/)
|
Since you aren't an expert, I recommend using a MySQL database as the backend for storing your data; it's easy to learn, and you'll be able to query your data using SQL and write it from Python. See this [MySQL Guide](http://dev.mysql.com/doc/refman/5.1/en/index.html) and [Python-Mysql](http://www.kitebird.com/articles/pydbapi.html)
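As a hedged sketch of what such SQL-backed grouping looks like from Python -- using the stdlib `sqlite3` module as a stand-in for MySQL, with made-up table and column names:

```python
import sqlite3

# In-memory database for illustration; a real deployment would connect
# to a MySQL server instead (e.g. via a DB-API driver).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE people (id INTEGER PRIMARY KEY, height REAL, weight REAL, age INTEGER)"
)
conn.executemany(
    "INSERT INTO people (height, weight, age) VALUES (?, ?, ?)",
    [(1.80, 75.0, 30), (1.65, 60.0, 25), (1.72, 68.0, 30)],
)

# "All people with a given age" becomes a one-line query:
rows = conn.execute(
    "SELECT id, height FROM people WHERE age = ? ORDER BY id", (30,)
).fetchall()
print(rows)  # [(1, 1.8), (3, 1.72)]
```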
|
5,529,075
|
I'm working on an academic project aimed at studying people behavior.
The project will be divided in three parts:
1. A program to read the data from some remote sources, and build a local data pool with it.
2. A program to validate this data pool, and to keep it coherent
3. A web interface to allow people to read/manipulate the data.
The data consists of a list of people, all with an ID #, and with several characteristics: height, weight, age, ...
I need to easily make groups out of this data (e.g.: all with a given age, or a range of heights) and the data is several TB big (but it can be reduced to smaller subsets of 2-3 GB).
I have a strong background in the theoretical stuff behind the project, but I'm not a computer scientist. I know Java, C and Matlab, and now I'm learning Python.
I would like to use Python since it seems easy enough and greatly reduces the verbosity of Java. The problem is that I'm wondering how to handle the data pool.
I'm no expert in databases, but I guess I need one here. What tools do you think I should use?
Remember that the aim is to implement very advanced mathematical functions on sets of data, thus we want to reduce complexity of source code. Speed is not an issue.
|
2011/04/03
|
[
"https://Stackoverflow.com/questions/5529075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/269931/"
] |
Go with a NoSQL database like MongoDB, which makes handling data in a case like this much easier than having to learn SQL.
|
Since you aren't an expert, I recommend using a MySQL database as the backend for storing your data; it's easy to learn, and you'll be able to query your data using SQL and write it from Python. See this [MySQL Guide](http://dev.mysql.com/doc/refman/5.1/en/index.html) and [Python-Mysql](http://www.kitebird.com/articles/pydbapi.html)
|
5,529,075
|
I'm working on an academic project aimed at studying people behavior.
The project will be divided in three parts:
1. A program to read the data from some remote sources, and build a local data pool with it.
2. A program to validate this data pool, and to keep it coherent
3. A web interface to allow people to read/manipulate the data.
The data consists of a list of people, all with an ID #, and with several characteristics: height, weight, age, ...
I need to easily make groups out of this data (e.g.: all with a given age, or a range of heights) and the data is several TB big (but it can be reduced to smaller subsets of 2-3 GB).
I have a strong background in the theoretical stuff behind the project, but I'm not a computer scientist. I know Java, C and Matlab, and now I'm learning Python.
I would like to use Python since it seems easy enough and greatly reduces the verbosity of Java. The problem is that I'm wondering how to handle the data pool.
I'm no expert in databases, but I guess I need one here. What tools do you think I should use?
Remember that the aim is to implement very advanced mathematical functions on sets of data, thus we want to reduce complexity of source code. Speed is not an issue.
|
2011/04/03
|
[
"https://Stackoverflow.com/questions/5529075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/269931/"
] |
Sounds that the main functionality needed can be found from:
[pytables](http://www.pytables.org/moin)
and
[scipy/numpy](http://www.scipy.org/)
|
Go with a NoSQL database like MongoDB, which makes handling data in a case like this much easier than having to learn SQL.
|
52,334,508
|
When trying to implement a custom `get_success_url` method in Python, Django throws a `TypeError: quote_from_bytes()` error. For example:
```
class SomeView(generic.CreateView):
#...
def get_success_url(self):
return HttpResponseRedirect(reverse('index'))
```
|
2018/09/14
|
[
"https://Stackoverflow.com/questions/52334508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1649917/"
] |
`get_success_url` shouldn't return an `HttpResponseRedirect`; instead it should return the URL you want to redirect to. So you can just return `reverse('index')`:
```
def get_success_url(self):
return reverse('index')
```
|
A shortcut for `HttpResponseRedirect` is `redirect('view_name')`; it returns an `HttpResponse` (HTML).
`reverse` returns a URL.
|
29,499,819
|
Trying to make concentric squares in Python with Turtle. Here's my attempt:
```
import turtle
def draw_square(t, size):
for i in range(4):
t.forward(size)
t.left(90)
wn = turtle.Screen()
dan = turtle.Turtle()
sizevar = 1
for i in range(10):
draw_square(dan,sizevar)
sizevar += 20
dan.penup()
dan.backward(sizevar/ 2)
dan.right(90)
dan.forward(sizevar / 2)
dan.left(90)
dan.pendown()
```
I'm not sure why they aren't concentric; my `dan.backward(sizevar/2)` and
`dan.forward(sizevar/2)` lines seem to move the square down and to the left too much.
|
2015/04/07
|
[
"https://Stackoverflow.com/questions/29499819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/721938/"
] |
What you're currently doing is adding an onclick event to a link that calls a function *that adds another onclick event via jQuery*.
Remove the `onclick` property and the `open_file()` function wrapper so that jQuery adds the event as you intended.
|
You do not need the `onclick="open_file()"` attribute; do this instead:
```
<div id='linkstofiles'>
<a id="3.txt">3.txt</a>
//other links go here
</div>
$("#linkstofiles a").click(function (event) {
var file_name = event.target.id;
$("#f_name").val(file_name);
$.ajax({
url: "docs/" + file_name,
dataType: "text",
success: function (data) {
$("#text_form").val(data);
$('#text_form').removeAttr('readonly');
$('#delete_button').removeAttr('disabled');
$('#save_button').removeAttr('disabled');
}
})
});
```
|
29,499,819
|
Trying to make concentric squares in Python with Turtle. Here's my attempt:
```
import turtle
def draw_square(t, size):
for i in range(4):
t.forward(size)
t.left(90)
wn = turtle.Screen()
dan = turtle.Turtle()
sizevar = 1
for i in range(10):
draw_square(dan,sizevar)
sizevar += 20
dan.penup()
dan.backward(sizevar/ 2)
dan.right(90)
dan.forward(sizevar / 2)
dan.left(90)
dan.pendown()
```
I'm not sure why they aren't concentric; my `dan.backward(sizevar/2)` and
`dan.forward(sizevar/2)` lines seem to move the square down and to the left too much.
|
2015/04/07
|
[
"https://Stackoverflow.com/questions/29499819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/721938/"
] |
You do not need the `onclick="open_file()"` attribute; do this instead:
```
<div id='linkstofiles'>
<a id="3.txt">3.txt</a>
//other links go here
</div>
$("#linkstofiles a").click(function (event) {
var file_name = event.target.id;
$("#f_name").val(file_name);
$.ajax({
url: "docs/" + file_name,
dataType: "text",
success: function (data) {
$("#text_form").val(data);
$('#text_form').removeAttr('readonly');
$('#delete_button').removeAttr('disabled');
$('#save_button').removeAttr('disabled');
}
})
});
```
|
You don't need to bind a click event again in the function when you already have `onclick` in your HTML.
Also, for `$ is not defined`, you need to include the jQuery library in the page's head.
|
29,499,819
|
Trying to make concentric squares in Python with Turtle. Here's my attempt:
```
import turtle
def draw_square(t, size):
for i in range(4):
t.forward(size)
t.left(90)
wn = turtle.Screen()
dan = turtle.Turtle()
sizevar = 1
for i in range(10):
draw_square(dan,sizevar)
sizevar += 20
dan.penup()
dan.backward(sizevar/ 2)
dan.right(90)
dan.forward(sizevar / 2)
dan.left(90)
dan.pendown()
```
I'm not sure why they aren't concentric; my `dan.backward(sizevar/2)` and
`dan.forward(sizevar/2)` lines seem to move the square down and to the left too much.
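The centering arithmetic can be checked without turtle (a sketch under the assumption that each square grows by a fixed `step`; the variable names are made up). If each square is `step` bigger than the last, its lower-left corner must shift by `step / 2` in both x and y -- not by half the *new* size, which is what moving by `sizevar / 2` after incrementing does:

```python
# Track the lower-left corner of each square and verify the centers coincide.
step = 20
corner = (0.0, 0.0)   # lower-left corner of the first square
size = 1
corners = []
sizes = []
for _ in range(3):
    corners.append(corner)
    sizes.append(size)
    size += step
    # shift by half the *increment*, keeping a common center
    corner = (corner[0] - step / 2, corner[1] - step / 2)

centers = [(x + s / 2, y + s / 2) for (x, y), s in zip(corners, sizes)]
print(centers)  # every center is (0.5, 0.5)
```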
|
2015/04/07
|
[
"https://Stackoverflow.com/questions/29499819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/721938/"
] |
What you're currently doing is adding an onclick event to a link that calls a function *that adds another onclick event via jQuery*.
Remove the `onclick` property and the `open_file()` function wrapper so that jQuery adds the event as you intended.
|
You don't need to bind a click event again in the function when you already have `onclick` in your HTML.
Also, for `$ is not defined`, you need to include the jQuery library in the page's head.
|
7,492,454
|
I have 2 large, unsorted arrays (structured set of xyz coordinates) and I'm trying to find the positions of all identical subarrays (common points consisting of 3 coordinates). Example:
```
a = array([[0, 1, 2], [3, 4, 5]])
b = array([[3, 4, 5], [6, 7, 8]])
```
Here the correct subarray would be `[3, 4, 5]`, but more than one identical subarrays are possible. The correct indexes would be `[0,1]` for `a` and `[1,0]` for `b`.
I already implemented a pure Python method, iterating over all points of one array and comparing them to every point of the other array, but this is extremely slow.
My question is, is there an efficient way to find the indexes for both arrays (preferably in numpy, because I need the arrays for further calculations)? Perhaps a rolling\_window approach?
|
2011/09/20
|
[
"https://Stackoverflow.com/questions/7492454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/958582/"
] |
A general solution for Python iterables (not specific to numpy or arrays) that works in linear average time (O(n+m), n is the number of subarrays and m is the number of unique subarrays):
```
a = [[0, 1, 2], [3, 4, 5]]
b = [[3, 4, 5], [6, 7, 8]]
from collections import defaultdict
indexmap = defaultdict(list)
for row, sublist in enumerate((a, b)):
for column, item in enumerate(sublist):
indexmap[tuple(item)].append((row, column))
repeats = dict((key, value) for key, value in indexmap.items() if len(value) > 1)
```
Gives
```
{(3, 4, 5): [(0, 1), (1, 0)]}
```
If you don't need the double row indexes (which list, and the position within it) you can simplify the loop to
```
for sublist in (a, b):
    for column, item in enumerate(sublist):
        indexmap[tuple(item)].append(column)
```
as `a` will be processed before `b`, any duplicates will get numbered by row automatically:
```
{(3, 4, 5): [1, 0]}
```
With `repeats[key][rownum]` returning the column index for that row.
|
Can you map each sub-array to its position index in a hash table? So basically, you change your data structure. After that, in linear time O(n), where n is the size of the biggest hash table, you can loop once, query each hash table in O(1), and find out whether the same sub-array is present in two or more hash tables.
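A minimal sketch of that hash-table idea in plain Python (tuples stand in for the sub-arrays, since lists are unhashable; the names are illustrative):

```python
# Build one hash table per array, mapping each point (as a tuple) to its
# index, then intersect the key sets to find points present in both.
a = [[0, 1, 2], [3, 4, 5]]
b = [[3, 4, 5], [6, 7, 8]]

index_a = {tuple(point): i for i, point in enumerate(a)}
index_b = {tuple(point): i for i, point in enumerate(b)}

# Set intersection of the keys is O(min(len(a), len(b))) on average.
common = {pt: (index_a[pt], index_b[pt]) for pt in index_a.keys() & index_b.keys()}
print(common)  # {(3, 4, 5): (1, 0)}
```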
|
7,492,454
|
I have 2 large, unsorted arrays (structured set of xyz coordinates) and I'm trying to find the positions of all identical subarrays (common points consisting of 3 coordinates). Example:
```
a = array([[0, 1, 2], [3, 4, 5]])
b = array([[3, 4, 5], [6, 7, 8]])
```
Here the correct subarray would be `[3, 4, 5]`, but more than one identical subarrays are possible. The correct indexes would be `[0,1]` for `a` and `[1,0]` for `b`.
I already implemented a pure Python method, iterating over all points of one array and comparing them to every point of the other array, but this is extremely slow.
My question is, is there an efficient way to find the indexes for both arrays (preferably in numpy, because I need the arrays for further calculations)? Perhaps a rolling\_window approach?
|
2011/09/20
|
[
"https://Stackoverflow.com/questions/7492454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/958582/"
] |
A general solution for Python iterables (not specific to numpy or arrays) that works in linear average time (O(n+m), n is the number of subarrays and m is the number of unique subarrays):
```
a = [[0, 1, 2], [3, 4, 5]]
b = [[3, 4, 5], [6, 7, 8]]
from collections import defaultdict
indexmap = defaultdict(list)
for row, sublist in enumerate((a, b)):
for column, item in enumerate(sublist):
indexmap[tuple(item)].append((row, column))
repeats = dict((key, value) for key, value in indexmap.items() if len(value) > 1)
```
Gives
```
{(3, 4, 5): [(0, 1), (1, 0)]}
```
If you don't need the double row indexes (which list, and the position within it) you can simplify the loop to
```
for sublist in (a, b):
    for column, item in enumerate(sublist):
        indexmap[tuple(item)].append(column)
```
as `a` will be processed before `b`, any duplicates will get numbered by row automatically:
```
{(3, 4, 5): [1, 0]}
```
With `repeats[key][rownum]` returning the column index for that row.
|
I did a little further experimenting and found a numpy specific way to solve this:
```
import numpy as np
a = np.arange(24).reshape(2,4,3)
b = np.arange(24, 36).reshape(2,2,3)
```
Array b receives 2 entries from a:
```
b[1,0] = a[0,1]
b[0,1] = a[1,1]
```
Finding common entries:
```
c = np.in1d(a, b).reshape(a.shape)
d = np.in1d(b, a).reshape(b.shape)
```
Checking where common entries exist in all 3 coordinates:
```
indexesC = np.where(c[:,:,0] & c[:,:,1] & c[:,:,2])
indexesD = np.where(d[:,:,0] & d[:,:,1] & d[:,:,2])
```
|
7,492,454
|
I have 2 large, unsorted arrays (structured set of xyz coordinates) and I'm trying to find the positions of all identical subarrays (common points consisting of 3 coordinates). Example:
```
a = array([[0, 1, 2], [3, 4, 5]])
b = array([[3, 4, 5], [6, 7, 8]])
```
Here the correct subarray would be `[3, 4, 5]`, but more than one identical subarrays are possible. The correct indexes would be `[0,1]` for `a` and `[1,0]` for `b`.
I already implemented a pure Python method, iterating over all points of one array and comparing them to every point of the other array, but this is extremely slow.
My question is, is there an efficient way to find the indexes for both arrays (preferably in numpy, because I need the arrays for further calculations)? Perhaps a rolling\_window approach?
|
2011/09/20
|
[
"https://Stackoverflow.com/questions/7492454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/958582/"
] |
I did a little further experimenting and found a numpy specific way to solve this:
```
import numpy as np
a = np.arange(24).reshape(2,4,3)
b = np.arange(24, 36).reshape(2,2,3)
```
Array b receives 2 entries from a:
```
b[1,0] = a[0,1]
b[0,1] = a[1,1]
```
Finding common entries:
```
c = np.in1d(a, b).reshape(a.shape)
d = np.in1d(b, a).reshape(b.shape)
```
Checking where common entries exist in all 3 coordinates:
```
indexesC = np.where(c[:,:,0] & c[:,:,1] & c[:,:,2])
indexesD = np.where(d[:,:,0] & d[:,:,1] & d[:,:,2])
```
|
Can you map each sub-array to its position index in a hash table? So basically, you change your data structure. After that, in linear time O(n), where n is the size of the biggest hash table, you can loop once, query each hash table in O(1), and find out whether the same sub-array is present in two or more hash tables.
|
66,170,590
|
Currently I have a program that requests JSONs from a specific API. The creators of the API claim this data is GeoJSON, but QGIS cannot read it.
So I want to extend my Python script to convert the JSON to GeoJSON in a format QGIS can read, and process it further there.
However, I have a few problems.
One, I do not know where to start or how the JSON files are constructed... it's kind of a mess. The JSONs were actually made from two API GET requests: one requests the points, and the second requests the details of those points. See this question here for some more context:
[API Request within another API request (Same API) in Python](https://stackoverflow.com/questions/66152751/api-request-within-another-api-request-same-api-in-python/66153040?noredirect=1#comment116958447_66153040)
Note: because of the character limit I cannot post the code here.
These details are of course supposed to be "the contours" of the area surrounding these points.
The thing is, the point itself is mentioned in the JSON, making it kind of hard to tell which coordinate belongs to which point. Not to mention that all the other attributes that are interesting to us are in the "point" part of the GeoJSON.
Take a look at the JSON itself to see what I mean:
```
{
"comment": null,
"contactDetails": {
"city": null,
"country": "BE",
"email": null,
"extraAddressInfo": null,
"firstName": null,
"kboNumber": null,
"lastName": "TMVW",
"number": null,
"organisation": "TMVW - Brugge-Kust",
"phoneNumber1": null,
"phoneNumber2": null,
"postalCode": null,
"street": null
},
"contractor": null,
"description": "WEGENISWERKEN - Oostende - 154-W - H Baelskaai-project Oosteroever",
"diversions": [],
"endDateTime": "2021-06-30T00:00:00",
"gipodId": 1042078,
"hindrance": {
"description": null,
"direction": null,
"effects": [
"Omleiding: beide richtingen",
"Afgesloten: volledige rijweg",
"Fietsers hebben geen doorgang",
"Handelaars bereikbaar",
"Plaatselijk verkeer: toegelaten",
"Voetgangers hebben doorgang"
],
"important": true,
"locations": [
"Voetpad",
"Fietspad",
"Parkeerstrook",
"Rijbaan"
]
},
"latestUpdate": "2020-12-01T12:16:00.03",
"location": {
"cities": [
"Oostende"
],
"coordinate": {
"coordinates": [
2.931988215468502,
51.23633810341717
],
"crs": {
"properties": {
"name": "urn:ogc:def:crs:OGC:1.3:CRS84"
},
"type": "name"
},
"type": "Point"
},
"geometry": {
"coordinates": [
[
[
2.932567101705748,
51.23657315009855
],
[
2.9309934586397337,
51.235776874431004
],
[
2.9328606392338914,
51.2345112414401
],
[
2.9344086040607285,
51.23535468563417
],
[
2.9344709862243095,
51.23529463700852
],
[
2.932928489694045,
51.23447026126373
],
[
2.935453674897618,
51.2326691257775
],
[
2.937014893295095,
51.23347469462423
],
[
2.9370649363167556,
51.23342209549579
],
[
2.9355339718818847,
51.23261689467634
],
[
2.937705787093551,
51.23108125372614
],
[
2.939235922008332,
51.23191301940206
],
[
2.9393162149112086,
51.231860784836144
],
[
2.9377921292631313,
51.23102909334536
],
[
2.9395494398210404,
51.22978103014327
],
[
2.9395326861153492,
51.22973522407282
],
[
2.9307116955342982,
51.23588365892173
],
[
2.93077732400986,
51.235914858980586
],
[
2.930921969180147,
51.23581685905391
],
[
2.932475593354336,
51.23662429379119
],
[
2.932567101705748,
51.23657315009855
]
]
],
"crs": {
"properties": {
"name": "urn:ogc:def:crs:OGC:1.3:CRS84"
},
"type": "name"
},
"type": "Polygon"
}
},
"mainContractor": null,
"owner": "TMVW - Brugge-Kust",
"reference": "DOM-154/15/004-W",
"startDateTime": "2017-05-02T00:00:00",
"state": "In uitvoering",
"type": "Wegeniswerken (her)aanleg",
"url": null
}
```
As a reminder, these JSONs are not supposed to be point files, and the geometry can be either a polygon as seen above, a multipolygon, or a MultiLineString (I do not have an example here).
**So how do I get started and make sure I extract not only the detail attributes but also the contour geometry and coordinates?**
The only unique ID is the GIPOD ID, because that is the actual link in this API database.
Edit 1:
This is the GeoJSON standard format:
```
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [125.6, 10.1]
},
"properties": {
"name": "Dinagat Islands"
}
}
```
Based on this standard, the converted JSON should be this:
```
{
"type": "Feature",
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
2.932567101705748,
51.23657315009855
],
[
2.9309934586397337,
51.235776874431004
],
[
2.9328606392338914,
51.2345112414401
],
[
2.9344086040607285,
51.23535468563417
],
[
2.9344709862243095,
51.23529463700852
],
[
2.932928489694045,
51.23447026126373
],
[
2.935453674897618,
51.2326691257775
],
[
2.937014893295095,
51.23347469462423
],
[
2.9370649363167556,
51.23342209549579
],
[
2.9355339718818847,
51.23261689467634
],
[
2.937705787093551,
51.23108125372614
],
[
2.939235922008332,
51.23191301940206
],
[
2.9393162149112086,
51.231860784836144
],
[
2.9377921292631313,
51.23102909334536
],
[
2.9395494398210404,
51.22978103014327
],
[
2.9395326861153492,
51.22973522407282
],
[
2.9307116955342982,
51.23588365892173
],
[
2.93077732400986,
51.235914858980586
],
[
2.930921969180147,
51.23581685905391
],
[
2.932475593354336,
51.23662429379119
],
[
2.932567101705748,
51.23657315009855
]
]
]
},
"properties": {
Too many columns to type; cannot list them all here because of the character limit.
}
}
```
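As a hedged sketch of that conversion (the function and key names below are my own assumptions based on the sample record shown earlier, not an official converter): keep the contour under `geometry`, flatten everything else into `properties`, and drop the embedded `"crs"` member, since GeoJSON per RFC 7946 assumes CRS84.

```python
def record_to_feature(source):
    """Turn one GIPOD-style record (a dict) into a GeoJSON Feature."""
    geometry = dict(source["location"]["geometry"])
    # RFC 7946 GeoJSON assumes CRS84, so the embedded "crs" member can go
    geometry.pop("crs", None)

    # everything except the location block becomes flat properties
    properties = {k: v for k, v in source.items() if k != "location"}
    properties["cities"] = source["location"]["cities"]

    return {"type": "Feature", "geometry": geometry, "properties": properties}


def records_to_collection(records):
    """Wrap a list of records in a GeoJSON FeatureCollection."""
    return {"type": "FeatureCollection",
            "features": [record_to_feature(r) for r in records]}
```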
Edit 3: The solution provided was a good step in the right direction, but it leaves me with two problems:
1. The saved geometry is null, making this JSON a table without geometry.
2. Only one data point is saved, instead of all the data points being requested.
Here is the JSON:
```
{"type": "FeatureCollection", "features": [{"type": "Feature", "geometry": null, "properties": {"gipodId": 3099549, "StartDateTime": null, "EndDateTime": null, "state": null, "location": {"cities": ["Waregem", "Wielsbeke"], "coordinate": {"coordinates": [3.4206971887218445, 50.91662742195287], "type": "Point", "crs": {"type": "name", "properties": {"name": "urn:ogc:def:crs:OGC:1.3:CRS84"}}}}}}]}
```
Python code below, to show how far it has gone:
```
import requests
import json
import os
import glob
import shutil


def process_location_data(location_json):
    """Converts the data point into required geojson format"""
    # assigning variable to store geometry details
    geometery_details = location_json.get("location").get("geometery")
    # geometery_details.pop("crs")  # removes the "crs" from geometry

    # includes city and location_coordinates
    location_details = {
        "cities": location_json.get("location").get("cities"),
        "coordinate": location_json.get("location").get("coordinate")
    }

    # EndDateTime
    end_datetime = location_json.get("EndDateTime")
    # StarDateTime
    start_datetime = location_json.get("StarDateTime")
    # State
    state = location_json.get("State")
    # gipodId
    gipod_id = location_json.get("gipodId")

    # adding all these details into another dict
    properties = {
        "gipodId": gipod_id,
        "StartDateTime": start_datetime,
        "EndDateTime": end_datetime,
        "state": state,
        "location": location_details
    }

    # creating the final dict to be returned.
    geojson_data_point = {
        "type": "Feature",
        "geometry": geometery_details,
        "properties": properties
    }
    return geojson_data_point


def process_all_location_data(all_location_points):
    """
    For all the points in the location details we will
    create the feature collection
    """
    feature_collection = {
        "type": "FeatureCollection",
        "features": []
    }  # creates dict with zero features.
    for data_point in all_location_points:
        feature_collection.get("features").append(
            process_location_data(data_point)
        )
    return feature_collection


def fetch_details(url: str, file_name: str):
    # Makes request call to get the data of detail
    response = requests.get(url)
    print("Sending Request for details of gpodId: " + file_name)
    folder_path = 'api_request_jsons/fetch_details/JSON unfiltered'
    text = json.dumps(response.json(), sort_keys=False, indent=4)
    print("Details extracted for: " + file_name)
    save_file(folder_path, file_name, text)
    return response.json()
    # save_file(folder_path, GipodId, text2)
    # any other processe


def fetch_points(url: str):
    response = requests.get(url)
    folder_path = 'api_request_jsons/fetch_points'
    text = json.dumps(response.json(), sort_keys=False, indent=4)
    print("Points Fetched, going to next step: Extracting details")
    for obj in response.json():
        all_location_points = [fetch_details(obj.get("detail"), str(obj.get("gipodId")))]
        save_file(folder_path, 'points', text)
        feature_collection_json = process_all_location_data(all_location_points)
        text2 = json.dumps(process_all_location_data(all_location_points))
        folder_path2 = "api_request_jsons/fetch_details/Coordinates"
        file_name2 = "Converted"
        save_file(folder_path2, file_name2, text2)
    return feature_collection_json


def save_file(save_path: str, file_name: str, file_information: str):
    completeName = os.path.join(save_path, file_name + ".json")
    print(completeName + " saved")
    file1 = open(completeName, "wt")
    file1.write(file_information)
    file1.close()


api_response_url = "http://api.gipod.vlaanderen.be/ws/v1/workassignment"
fetch_points(api_response_url)
```
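For what it's worth, both symptoms in Edit 3 are visible in the code above: the lookup spells `geometery` while the JSON key is `geometry` (so `.get()` silently returns `None`), and `all_location_points` is reassigned to a fresh one-element list on every loop pass instead of being appended to. A hedged, network-free sketch of the corrected logic (field names follow the sample JSON shown earlier; `fetch_details` is passed in as a parameter so the sketch stays self-contained):

```python
def process_location_data(location_json):
    """Build one GeoJSON Feature from a detail record."""
    location = location_json.get("location", {})
    return {
        "type": "Feature",
        # the JSON key is spelled "geometry" -- "geometery" silently yields None
        "geometry": location.get("geometry"),
        "properties": {
            "gipodId": location_json.get("gipodId"),
            # source keys are camelCase: startDateTime, endDateTime, state
            "StartDateTime": location_json.get("startDateTime"),
            "EndDateTime": location_json.get("endDateTime"),
            "state": location_json.get("state"),
            "cities": location.get("cities"),
        },
    }


def build_feature_collection(point_records, fetch_details):
    """Fetch details for every point and wrap them all in one collection."""
    all_location_points = []              # start empty, outside the loop
    for obj in point_records:
        all_location_points.append(       # append -- do not reassign
            fetch_details(obj.get("detail"), str(obj.get("gipodId")))
        )
    return {
        "type": "FeatureCollection",
        "features": [process_location_data(p) for p in all_location_points],
    }
```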
|
2021/02/12
|
[
"https://Stackoverflow.com/questions/66170590",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11604150/"
] |
In AuthenticatesUsers.php
```
protected function sendLoginResponse(Request $request)
{
    $request->session()->regenerate();

    $this->clearLoginAttempts($request);

    if ($response = $this->authenticated($request, $this->guard()->user())) {
        return $response;
    }

    return $request->wantsJson()
        ? new JsonResponse([], 204)
        : redirect()->back();
}
```
Or you can do this in your default Login controller, at line 31:
```
protected $redirectTo = "/your-path";
```
|
When the `auth` middleware detects an unauthenticated user, it will redirect the user to the `login` named route. You may modify this behavior by updating the `redirectTo` function in your application's `app/Http/Middleware/Authenticate.php` file:
<https://laravel.com/docs/8.x/authentication#redirecting-unauthenticated-users>
|
9,002,569
|
I am using Python 2.7 and I would like to do some web2py stuff. I am trying to run web2py from Eclipse, and I get the following error when running the web2py.py script:

```
WARNING:web2py:GUI not available because Tk library is not installed
```

I am running on Windows 7 (64-bit).
Any suggestions?
Thank you
|
2012/01/25
|
[
"https://Stackoverflow.com/questions/9002569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/945446/"
] |
Simplest solution: The Oracle client is not installed on the remote server where the SSIS package is being executed.
Slightly less simple solution: The Oracle client is installed on the remote server, but in the wrong bit-count for the SSIS installation. For example, if the 64-bit Oracle client is installed but SSIS is being executed with the 32-bit `dtexec` executable, SSIS will not be able to find the Oracle client.
The solution in this case would be to install the 32-bit Oracle client side-by-side with the 64-bit client.
|
1. Go to My Computer → Properties.
2. Click on Advanced system settings.
3. Go to Environment Variables.
4. Set the path to:
```
F:\oracle\product\10.2.0\db_2\perl\5.8.3\lib\MSWin32-x86;F:\oracle\product\10.2.0\db_2\perl\5.8.3\lib;F:\oracle\product\10.2.0\db_2\perl\5.8.3\lib\MSWin32-x86;F:\oracle\product\10.2.0\db_2\perl\site\5.8.3;F:\oracle\product\10.2.0\db_2\perl\site\5.8.3\lib;F:\oracle\product\10.2.0\db_2\sysman\admin\scripts;
```
Change the drive and folder depending on your requirements.
|