| qid (int64, 46k-74.7M) | question (string, 54-37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 17-26k chars) | response_k (string, 26-26k chars) |
|---|---|---|---|---|---|
14,749,379
|
Because *programming* is one of my favorite hobbies, I started a small project in Python.
I'm trying to make a nutritional calculator for my daily routine; see the code below:
```
# Name: nutri.py
# Author: pyn
my_dict = {'chicken':(40, 50, 10),
'pork':(50, 30, 20)
}
foods = raw_input("Enter your food: ")
#explode / split the user input
foods_list = foods.split(',')
#returns a list separated by comma
print foods_list
```
What I want to do:
1. Get user input and store it in a variable
2. Search the dictionary based on user input and return the associated values if the keys / foods exist
3. Sum these values in different nutritional chunks and return them, something like: You ate x protein, y carbs and z fat etc.
Any ideas are welcome.
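Not from the original thread, but a minimal sketch of steps 1-3 in the question's own Python 2 style, assuming each tuple is (protein, carbs, fat):
```
my_dict = {'chicken': (40, 50, 10),
           'pork': (50, 30, 20)}

foods = raw_input("Enter your food: ")
totals = [0, 0, 0]  # running sums for protein, carbs, fat
for food in foods.split(','):
    food = food.strip()          # tolerate spaces after commas
    if food in my_dict:          # only count foods the dict knows about
        for i, value in enumerate(my_dict[food]):
            totals[i] += value

print "You ate %d protein, %d carbs and %d fat" % tuple(totals)
```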
|
2013/02/07
|
[
"https://Stackoverflow.com/questions/14749379",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2050432/"
] |
Using jQuery's `hover` function, like this:
```
$('#gestaltung_cd').hover(function() {
$('#mainhexa1').toggle();
});
```
(if you don't want the div hidden again when the mouse leaves, change `toggle()` to `show()`)
|
Here's an example of how to do the first one and you'd just do the same for the other two with the relevant IDs.
```
$("#gestaltung_cd").hover(
function () {
$("#mainhexa1").show();
},
function () {
$("#mainhexa1").hide();
}
);
```
|
14,749,379
|
Because *programming* is one of my favorite hobbies, I started a small project in Python.
I'm trying to make a nutritional calculator for my daily routine; see the code below:
```
# Name: nutri.py
# Author: pyn
my_dict = {'chicken':(40, 50, 10),
'pork':(50, 30, 20)
}
foods = raw_input("Enter your food: ")
#explode / split the user input
foods_list = foods.split(',')
#returns a list separated by comma
print foods_list
```
What I want to do:
1. Get user input and store it in a variable
2. Search the dictionary based on user input and return the associated values if the keys / foods exist
3. Sum these values in different nutritional chunks and return them, something like: You ate x protein, y carbs and z fat etc.
Any ideas are welcome.
|
2013/02/07
|
[
"https://Stackoverflow.com/questions/14749379",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2050432/"
] |
Using jQuery's [hover](http://api.jquery.com/hover/) function:
```
var divs = {
cd: 'mainhexa1',
illu: 'mainhexa2',
klassisch: 'mainhexa3'
};
$('[id^=gestaltung]').hover(function(){ // selects all elements whose id starts with gestaltung
$('#'+divs[this.id.slice('gestaltung_'.length)]).toggle();
});
```
Note that it might be easier to have some relation between the opener and opening elements, like a class or another attribute (as in nnnnnn's answer).
|
```
$("#gestaltung_cd").hover(function({
$("#mainhexa1").css({ "visibility": "visible" });
}, function() {
//Your hover out function
});
```
|
14,749,379
|
Because *programming* is one of my favorite hobbies, I started a small project in Python.
I'm trying to make a nutritional calculator for my daily routine; see the code below:
```
# Name: nutri.py
# Author: pyn
my_dict = {'chicken':(40, 50, 10),
'pork':(50, 30, 20)
}
foods = raw_input("Enter your food: ")
#explode / split the user input
foods_list = foods.split(',')
#returns a list separated by comma
print foods_list
```
What I want to do:
1. Get user input and store it in a variable
2. Search the dictionary based on user input and return the associated values if the keys / foods exist
3. Sum these values in different nutritional chunks and return them, something like: You ate x protein, y carbs and z fat etc.
Any ideas are welcome.
|
2013/02/07
|
[
"https://Stackoverflow.com/questions/14749379",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2050432/"
] |
You could use the sibling selector. As long as the divs share the same parent, you can still affect them on hover.
[DEMO](http://jsfiddle.net/8LFWR/)
Vital Code:
```
#gestaltung_cd:hover ~ #mainhexa1,
#gestaltung_illu:hover ~ #mainhexa2,
#gestaltung_klassisch:hover ~ #mainhexa3 {
display: block;
}
```
|
Just for the record ...
You can achieve the effect you want with only CSS and HTML (without JavaScript), but you have to place your elements so they follow each other, and use the `+` selector in CSS. Something like:
HTML
```
<div id="gestaltung_cd"></div>
<div id="mainhexa1"></div>
<div id="gestaltung_illu"></div>
<div id="mainhexa2"></div>
<div id="gestaltung_klassisch"></div>
<div id="mainhexa3"></div>
```
CSS
```
div#gestaltung_cd:hover + div#mainhexa1 {
display:block;
}
div#gestaltung_illu:hover + div#mainhexa2 {
display:block;
}
div#gestaltung_klassisch:hover + div#mainhexa3 {
display:block;
}
```
you can check out the demo <http://tinkerbin.com/KP17XUgq>
|
7,556,499
|
Despite using the search function, I've been unable to find an answer. I have two assumptions, but don't know how far they may apply. Now to the problem:
I'd like to plot a contour. For this I've got here the following python code:
```
import numpy as np
import matplotlib.pyplot as plt
xi=list_of_distance
yi=list_of_angle
x = np.arange(0,54,0.2)
y = np.arange(0,180,2.0)
Z = np.histogram2d(xi,yi,bins=(274,90))
X, Y = np.meshgrid(x, y)
plt.contour(X,Y,Z)
plt.ylabel('angles')
plt.xlabel('distance')
plt.colorbar()
plt.show()
```
xi and yi are lists containing float values.
x and y are defining the 'intervals' ...for example:
x generates a list with values from 0 to 54 in 0.2 steps
y generates a list with values from 0 to 180 in 2.0 steps
With Z I use the numpy function that creates 2D histograms. Actually, this seems to be the spot that causes trouble.
When the function plt.contour(X,Y,Z) is called, the following error message emerges:
>
> ... File "/usr/lib/pymodules/python2.7/numpy/ma/core.py", line 2641, in `__new__`
> `_data = np.array(data, dtype=dtype, copy=copy, subok=True, ndmin=ndmin)`
> ValueError: setting an array element with a sequence.
>
Now to the assumptions what may cause this problem:
1. It seems like it expects an array, but instead of a numpy array it receives a list
or
2. There is a row that is shorter than the others (I came to that thought after a colleague ran into such an issue a year ago - there it was fixed by figuring out that the last row was 2 elements shorter than all the others...)
|
2011/09/26
|
[
"https://Stackoverflow.com/questions/7556499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/329586/"
] |
As @rocksportrocker implies, you need to take into account that `histogram2d` returns the edges in addition to the histogram. Another detail is that you probably want to explicitly pass in a range, otherwise one will be chosen for you based on the actual min and max values in your data. You then want to convert the edges to cell centers for the plot. Something like this:
```
import numpy as np
import matplotlib.pyplot as plt
n = 1000000 # how many data points
xmin, xmax = 0.0, 54.0 # distances
ymin, ymax = 0.0, 180.0 # angles
# make up some random data
xi=np.random.normal(xmax/2.0, xmax/4.0, n)
yi=np.random.normal(ymax/3.0, ymax/3.0, n)
Z, xedges, yedges = np.histogram2d(xi,yi, bins=(270,90), range=[[xmin, xmax], [ymin, ymax]])
# find the cell centers from the cell edges
x = 0.5*(xedges[:-1] + xedges[1:])
y = 0.5*(yedges[:-1] + yedges[1:])
# promote to 2D arrays
Y, X = np.meshgrid(y, x)
plt.contour(X,Y,Z)
plt.ylabel('angles')
plt.xlabel('distance')
plt.colorbar()
plt.savefig("hist2d.png")
```
yields a contour plot like this:

but personally I wouldn't use contours in this case, since the histogram is likely to be noisy.
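For instance, a hedged sketch of displaying the raw histogram instead (reusing X, Y, Z from above; `shading='auto'` assumes a reasonably recent matplotlib):
```
# show the binned counts directly instead of contouring them
plt.figure()
plt.pcolormesh(X, Y, Z, shading='auto')
plt.ylabel('angles')
plt.xlabel('distance')
plt.colorbar()
plt.savefig("hist2d_mesh.png")
```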
|
Your traceback indicates that the error is not raised by the call to matplotlib; it is numpy which raises the ValueError.
|
46,437,882
|
So this is driving me crazy. I have Python 3, mod\_wsgi, Apache, and a virtual host that all work great, as I have several other WSGI scripts that run fine on the server. I also have a Django app that works great when I run the dev server.
I have checked with `ldd mod_wsgi.so` that it is linked correctly against Python 3.5.
Whenever I try to access my site, I get an error and the Apache log states:
ImportError: No module named 'protectionprofiles'
protectionprofiles is my site name; the following is my virtual host config:
```
<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
ServerName <my ip>
WSGIScriptAlias /certs /var/www/scripts/CavsCertSearch/CavsCertSearch/certstrip.wsgi
WSGIScriptAlias /testcerts /var/www/scripts/CavsCertSearchTest/CavsCertSearch/certstriptest.wsgi
WSGIScriptAlias /protectionprofiles /var/www/protectionprofiles/protectionprofiles/wsgi.py
<Directory /var/www/protectionprofiles/protectionprofiles>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
</VirtualHost>
```
My site app is the protectionprofiles alias. I have no idea what the issue is; I have tried following dozens of different Apache tutorials and none of them seem to work. Any help is greatly appreciated.
------ To add something else I tried --------
So I added the following 2 commands inside the virtual host:
```
WSGIDaemonProcess protectionprofiles python-path=/var/www/protectionprofiles/
WSGIProcessGroup protectionprofiles
```
and now I get a different error:
```
Error was: No module named 'django.db.backends.postgresql'
```
which is my backend, and yet the dev server works fine. Below is my database configuration:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'USER': 'myuser',
'PASSWORD': 'abcd1234',
'HOST': '127.0.0.1',
'PORT': '5432',
'NAME': 'protectionprofile',
}
}
```
----Another error ----
Occasionally I get
```
RuntimeError: populate() isn't reentrant
```
---- Another error!! ---- Now when I update
```
WSGIDaemonProcess protectionprofiles python-path=/var/www/protectionprofiles/:/usr/lib/python3/dist-packages/django
```
I get
```
ImportError: cannot import name 'SimpleCookie'
```
--- Another weird thing is that when I have
```
WSGIProcessGroup protectionprofiles
```
enabled, I don't get the error that it can't find the module "protectionprofiles", but then none of my other WSGI scripts work! They only work when that's not in there. Any explanation of that would be very helpful.
|
2017/09/27
|
[
"https://Stackoverflow.com/questions/46437882",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4216567/"
] |
You probably need to tell mod\_wsgi where your project code is. For embedded mode this is done with `WSGIPythonPath` directive. You should preferably though use daemon mode, in which case you would use `python-path` option to `WSGIDaemonProcess` directive.
|
OK, I finally figured out how to fix the issue, though I'm not sure exactly why it works. First off, when I was mixing my own WSGI scripts and Django, I had to specify the daemon process only for the script alias that was Django:
```
WSGIScriptAlias /certs /var/www/scripts/CavsCertSearch/CavsCertSearch/certstrip.wsgi
WSGIScriptAlias /testcerts /var/www/scripts/CavsCertSearchTest/CavsCertSearch/certstriptest.wsgi
WSGIScriptAlias /debug /var/www/scripts/debug/debug.wsgi
WSGIScriptAlias / /var/www/protectionprofiles/protectionprofiles/wsgi.py process-group=protectionprofiles
WSGIDaemonProcess protectionprofiles python-path=/var/www/protectionprofiles/
WSGIProcessGroup protectionprofiles
```
Then, in the app/settings.py: in dev mode one backend worked, but when run under mod\_wsgi I needed a different one:
settings.py:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'databse',
'USER': 'user',
'PASSWORD': '',
'HOST': 'localhost',
'PORT': '5432',
}
}
```
In the development server 'django.db.backends.postgresql' worked fine, but in release I had to specify the exact module type. I am not sure why, but it works now!
|
12,957,921
|
I am attempting to run a Python program that builds a dictionary from a file containing a list of words, where each word is given a score and a standard deviation. My program looks like this:
```
theFile = open('word-happiness.csv' , 'r')
theFile.close()
def make_happiness_table(filename):
    '''make_happiness_table: string -> dict
    creates a dictionary of happiness scores from the given file'''
    with open(filename) as f:
        d = dict( line.split(' ') for line in f)
    return d
make_happiness_table("word-happiness.csv")
table = make_happiness_table("word-happiness.csv")
(score, stddev) = table['hunger']
print("the score for 'hunger' is %f" % score)
```
My .csv file is in the form
```
word{TAB}score{TAB}standard_deviation
```
and I am trying to create the dictionary in that way. How can I create such a dictionary so that I can look up a word such as 'hunger' and get its score and standard deviation?
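For reference, a sketch of one way to build that dictionary, assuming the file really is tab-separated as shown and has no header row (the `csv` module handles the splitting):
```
import csv

def make_happiness_table(filename):
    '''make_happiness_table: string -> dict
    maps word -> (score, stddev), read from a tab-separated file'''
    table = {}
    with open(filename) as f:
        for word, score, stddev in csv.reader(f, delimiter='\t'):
            table[word] = (float(score), float(stddev))
    return table

table = make_happiness_table("word-happiness.csv")
score, stddev = table['hunger']
print("the score for 'hunger' is %f" % score)
```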
|
2012/10/18
|
[
"https://Stackoverflow.com/questions/12957921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1712920/"
] |
The template parameter to [`std::normal_distribution`](http://en.cppreference.com/w/cpp/numeric/random/normal_distribution) must be a floating-point type (`float`, `double`, or `long double`). Using anything else will result in undefined behavior.
Since normal distribution is a continuous distribution, it isn't really clear what you expect it to do when using an `int` type. Maybe you can get the result you want by using `normal_distribution<double>` and rounding the result to `int`.
|
You can use `binomial_distribution` with default probability value of 0.5.
[link](http://www.cplusplus.com/reference/random/binomial_distribution/binomial_distribution/)
It will return integer values in the range [0, t], with the mean at t/2 (when t is even; otherwise (t+1)/2 and (t-1)/2 have equal probability). You can set the value of t accordingly and shift the result by adding a constant if needed.
The binomial distribution is a discrete approximation of the normal distribution ([link](http://en.wikipedia.org/wiki/Binomial_distribution#Normal_approximation)). Normal distributions theoretically have no lower/upper limits.
I prefer to use the following thin wrapper over the original:
```
template <typename T>
class NormalDistribution {
private:
    std::mt19937 mt;
    std::normal_distribution<T> dis;
public:
    NormalDistribution(T mu, T sigma) : dis(mu, sigma) {
        std::random_device rd;
        mt.seed(rd());
    }
    NormalDistribution(T mu, T sigma, unsigned seed) : dis(mu, sigma) {
        mt.seed(seed);
    }
    T random() {
        return dis(mt);
    }
};

template <>
class NormalDistribution<int> {
private:
    std::mt19937 mt;
    std::binomial_distribution<int> dis;
    int _min;
public:
    NormalDistribution(int min, int max) : dis(max - min), _min(min) { // _min must be initialized here as well
        std::random_device rd;
        mt.seed(rd());
    }
    NormalDistribution(int min, int max, unsigned seed) : dis(max - min), _min(min) {
        mt.seed(seed);
    }
    int random() {
        return dis(mt) + _min;
    }
};
```
|
51,572,168
|
When I want to see my Django website on the server, I open cmd and go to the manage.py directory:
```
C:\Users\computer house\Desktop\ahmed>
```
and then I type:
```
python manage.py runserver
```
but I see this error:
```
Traceback (most recent call last):
File "manage.py", line 15, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\computer house\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\core\management\__init__.py", line 371, in execute_from_command_line
utility.execute()
File "C:\Users\computer house\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\core\management\__init__.py", line 317, in execute
settings.INSTALLED_APPS
File "C:\Users\computer house\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\conf\__init__.py", line 56, in __getattr__
self._setup(name)
File "C:\Users\computer house\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\conf\__init__.py", line 43, in _setup
self._wrapped = Settings(settings_module)
File "C:\Users\computer house\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\conf\__init__.py", line 106, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "C:\Users\computer house\AppData\Local\Programs\Python\Python36-32\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'ahmed'
```
What can I do to solve this problem and run my website on the server correctly?
|
2018/07/28
|
[
"https://Stackoverflow.com/questions/51572168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10102365/"
] |
As far as I understood, you have started your project with the following command:
```
C:\Users\computer house\Desktop> django-admin startproject ahmed
```
After that, I assume you have a similar file structure:
```
ahmed/
├── ahmed/              # make sure this directory exists
│   ├── __init__.py     # make sure this file exists.
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── manage.py
```
Either the inner ahmed directory or `ahmed/__init__.py` does not exist. If `__init__.py` does not exist, please create an empty file with that name.
A Python module is a file or a directory. Every file with a `.py` extension is treated as a Python module by the interpreter. If you want a directory to be a Python module, you need to create `__init__.py` inside that directory. The file is run at import time of the module; if you do not know what you are doing, leave `__init__.py` empty.
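For example, a one-liner to create the empty file from the outer ahmed directory (just an illustration; creating it in an editor works equally well):
```
# run from the outer ahmed/ directory; creates an empty package marker
open('ahmed/__init__.py', 'a').close()
```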
|
Check that your current directory has a file named `__init__.py`, and also that the directory named `ahmed` contains an `__init__.py`.
|
10,363,853
|
I'm trying to talk to a child process using the python subprocess.Popen() call. In my real code, I'm implementing a type of IPC, so I want to write some data, read the response, write some more data, read the response, and so on. Because of this, I cannot use Popen.communicate(), which otherwise works well for the simple case.
This code shows my problem. It never even gets the first response, hangs at the first "Reading result". Why? How can I make this work as I expect?
```
import subprocess
p = subprocess.Popen(["sed", 's/a/x/g'],
stdout = subprocess.PIPE,
stdin = subprocess.PIPE)
p.stdin.write("abc\n")
print "Reading result:"
print p.stdout.readline()
p.stdin.write("cat\n")
print "Reading result:"
print p.stdout.readline()
```
|
2012/04/28
|
[
"https://Stackoverflow.com/questions/10363853",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/526797/"
] |
I would try to use `Popen().communicate()` if you can as it does a lot of nice things for you, but if you need to use `Popen()` exactly as you described, you'll need to set sed to flush its buffer after newlines with the `-l` option:
```
p = subprocess.Popen(['sed', '-l', 's/a/x/g'],
stdout=subprocess.PIPE,
stdin=subprocess.PIPE)
```
and your code should work fine
|
`sed`'s output is buffered: it only writes data out once enough has accumulated or the input stream is exhausted and closed.
Try this:
```
import subprocess
p = subprocess.Popen(["sed", 's/a/x/g'],
stdout = subprocess.PIPE,
stdin = subprocess.PIPE)
p.stdin.write("abc\n")
p.stdin.write("cat\n")
p.stdin.close()
print "Reading result 1:"
print p.stdout.readline()
print "Reading result 2:"
print p.stdout.readline()
```
Be aware that this cannot be done reliably with huge data, as writing to `stdin` blocks once the buffer is full. The best way to do this is using [`communicate()`](http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate).
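For completeness, a sketch of the `communicate()` approach (it writes all input, closes stdin, and reads until EOF, so the buffering problem disappears):
```
import subprocess

p = subprocess.Popen(["sed", 's/a/x/g'],
                     stdout = subprocess.PIPE,
                     stdin = subprocess.PIPE)
# one shot: feed everything, then collect everything
out, err = p.communicate("abc\ncat\n")
print out
```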
|
18,389,273
|
I'm using Python and having some trouble reading the properties of a file when the filename includes non-ASCII characters.
One of the files for example is named:
`0-Channel-https∺∯∯services.apps.microsoft.com∯browse∯6.2.9200-1∯615∯Channel.dat`
When I run this:
```python
list2 = os.listdir('C:\\Users\\James\\AppData\\Local\\Microsoft\\Windows Store\\Cache Medium IL\\0\\')
for data in list2:
    print os.path.getmtime(data) + '\n'
```
I get the error:
```none
WindowsError: [Error 123] The filename, directory name, or volume label syntax is incorrect: '0-Channel-https???services.apps.microsoft.com?browse?6.2.9200-1?615?Channel.dat'
```
I assume it's caused by the special chars, because the code works fine with other file names containing only ASCII chars.
Does anyone know of a way to query the filesystem properties of a file named like this?
|
2013/08/22
|
[
"https://Stackoverflow.com/questions/18389273",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1084310/"
] |
If this is Python 2.x, it's an encoding issue. If you pass a unicode string to os.listdir, such as `u'C:\\my\\pathname'`, it will return unicode strings, and they should have the non-ASCII chars encoded correctly. See [Unicode Filenames](http://docs.python.org/2/howto/unicode.html#unicode-filenames) in the docs.
Quoting the doc:
>
> os.listdir(), which returns filenames, raises an issue: should it return the Unicode version of filenames, or should it return 8-bit strings containing the encoded versions? os.listdir() will do both, depending on whether you provided the directory path as an 8-bit string or a Unicode string. If you pass a Unicode string as the path, filenames will be decoded using the filesystem’s encoding and a list of Unicode strings will be returned, while passing an 8-bit path will return the 8-bit versions of the filenames. For example, assuming the default filesystem encoding is UTF-8, running the following program:
>
This code should work:
```
directory_name = u'C:\\Users\\James\\AppData\\Local\\Microsoft\\Windows Store\\Cache Medium IL\\0\\'
list2 = os.listdir(directory_name)
for data in list2:
    print data, os.path.getmtime(os.path.join(directory_name, data))
```
|
As you are on Windows, you could try the ntpath module instead of os.path:
```
from ntpath import getmtime
```
As I don't have Windows, I can't test it. Every OS has a different path convention, so Python provides a specific module for the most common operating systems.
|
7,281,348
|
I'm building a website which allows you to write python code online just like <http://shell.appspot.com/> or <http://ideone.com>. Can someone please provide me some help how I can achieve this?
|
2011/09/02
|
[
"https://Stackoverflow.com/questions/7281348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/924945/"
] |
First of all, you can browse their [source code](http://code.google.com/p/google-app-engine-samples/downloads/detail?name=shell_20091112.tar.gz&can=2&q=). The main file is only 321 lines!
Basically it keeps a separate global dict for each user session, then uses `compile` and `exec` to run the code and return the result.
```
# log and compile the statement up front
try:
    logging.info('Compiling and evaluating:\n%s' % statement)
    compiled = compile(statement, '<string>', 'single')
except:
    self.response.out.write(traceback.format_exc())
    return
```
and
```
# run!
old_globals = dict(statement_module.__dict__)
try:
    old_stdout = sys.stdout
    old_stderr = sys.stderr
    try:
        sys.stdout = self.response.out
        sys.stderr = self.response.out
        exec compiled in statement_module.__dict__
    finally:
        sys.stdout = old_stdout
        sys.stderr = old_stderr
except:
    self.response.out.write(traceback.format_exc())
    return
```
**Edit:** Not using Google App Engine would make things much more complicated. But you can take a look at [pysandbox](https://github.com/haypo/pysandbox).
|
I am new to Python, but hopefully the following links could help you achieve your goal:
the BaseHTTPServer (<http://docs.python.org/library/basehttpserver.html>) library can be used for handling web requests, and IPython (<http://ipython.org>) for executing Python commands at the backend.
|
33,301,838
|
I am trying to access a JSON object/dictionary in Python; however, I get the error:
>
> TypeError: string indices must be integers
>
which points at the line `if script['title'] == "IT":`. This is my code to try and access that particular key within the dictionary:
```
def CreateScript(scriptsToGenerate):
    start = time.clock()
    apiLocation = ""
    saveFile = ""
    for script in scriptsToGenerate:
        print script
        if script['title'] == "IT":
            time = script['timeInterval']
            baseItem = ""
```
and scriptsToGenerate is passed in using this, which makes an HTTP request to my API:
```
def GetScripts(urlParameters):
    result = requests.get(constURLString + "/" + str(urlParameters)).json()
    return [x for x in result if x != []]
```
and here is where I call CreateScript:
```
def RunInThread(ID):
    startedProcesses = list()
    Scripts = []
    Scripts = GetScripts(ID)
    scriptList = ThreadChunk(Scripts, 2)
    for item in scriptList:
        proc = Process(target=CreateScript, args=(item))
        startedProcesses.append(proc)
        proc.start()
    # we must wait for the processes to finish before continuing
    for process in startedProcesses:
        process.join()
    print "finished"
```
which I then pass into CreateScript.
Here is the output of my script object:
```
{u'timeInterval': u'1800', u'title': u'IT', u'attribute' : 'hello'}
```
|
2015/10/23
|
[
"https://Stackoverflow.com/questions/33301838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4867148/"
] |
Facts:
* `scriptList` is a list of dictionaries (according to you)
* `for item in scriptList:` gets one dictionary at a time
* `proc = Process(target=CreateScript, args=(item))` passes a dictionary to the `CreateScript` function
* `def CreateScript(scriptsToGenerate):` receives a dictionary
* `for script in scriptsToGenerate:` iterates over the *keys* of the dictionary.
* `if script['title'] == "IT":` tries to access the `title` index of a dictionary key (a string).
So no, this won’t work. At some point, you iterate over a list. You probably want to pass a list of scripts to the `CreateScript` function, so you shouldn’t iterate over `scriptList`.
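A minimal sketch of the fix these facts point to, using the question's own names (note the trailing comma, which makes `args` a one-element tuple so `CreateScript` receives the whole chunk rather than its unpacked elements):
```
from multiprocessing import Process

for item in scriptList:
    # (item,) is a tuple containing the chunk; (item) is just item itself
    proc = Process(target=CreateScript, args=(item,))
    startedProcesses.append(proc)
    proc.start()
```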
|
John, what I meant was that you have a JSON object there; it will need to be treated as such. I think you mean, can you do the following?
```
import json

def CreateScript(scriptsToGenerate):
    #start = time.clock()
    apiLocation = ""
    saveFile = ""
    i = json.loads(scriptsToGenerate)
    if i['title'] == "IT":
        time = i['timeInterval']
        baseItem = ""
        print i['title'], i['timeInterval']

p = json.dumps({u'timeInterval': u'1800', u'title': u'IT', u'attribute' : 'hello'})
CreateScript(p)
```
|
56,328,818
|
I am using fuzzywuzzy in python for fuzzy string matching. I have a set of names in a list named HKCP\_list which I am matching against a pandas column iteratively to get the best possible match. Given below is the code for it
```
import fuzzywuzzy
from fuzzywuzzy import fuzz,process
def search_func(row):
    chk = process.extract(row,HKCP_list,scorer=fuzz.token_sort_ratio)[0]
    return chk
wc_df['match']=wc_df['concat_name'].map(search_func)
```
The wc\_df dataframe contains the column 'concat\_name' which needs to be matched with every name in the list HKCP\_list. The above code took around 2 hours to run with 6K names in the list and 11K names in the column 'concat\_name'.
I have to rerun this on another data set where there are 89K names in the list and 120K names in the column. In order to speed up the process, I got an idea from the following question on Stack Overflow:
[Vectorizing or Speeding up Fuzzywuzzy String Matching on PANDAS Column](https://stackoverflow.com/questions/52631291/vectorizing-or-speeding-up-fuzzywuzzy-string-matching-on-pandas-column)
In one of the comments in the answer in the above, it has been advised to compare names that have the same 1st letter. The 'concat\_name' column that I am comparing with is a derived column obtained by concatenating 'first\_name' and 'last\_name' columns in the dataframe. Hence I am using the following function to match the 1st letter (since this is a token sort score that I am considering, I am comparing the 1st letter of both the first\_name and last\_name with the elements in the list). Given below is the code:
```
wc_df['first_name_1stletter'] = wc_df['first_name'].str[0]
wc_df['last_name_1stletter'] = wc_df['last_name'].str[0]
import time
start_time=time.time()
def match_func(row):
    CP_subset=[x for x in HKCP_list if x[0]==row['first_name_1stletter'] or x[0]==row['last_name_1stletter']]
    return CP_subset
wc_df['list_to_match']=wc_df.apply(match_func,axis=1)
end_time=time.time()
print(end_time-start_time)
```
The above step took 1600 seconds with the 6K x 11K data. The 'list\_to\_match' column contains the list of names to be compared for each concat\_name. Now I have to take each list\_to\_match element, pass its individual elements in a list, and do the fuzzy string matching using the process.extract method. Is there a more elegant and faster way of doing this in the same step as above?
PS: Editing this to add an example of how the list and the dataframe columns look:
```
HKCp_list=['jeff bezs','michael blomberg','bill gtes','tim coook','elon musk']
concat_name=['jeff bezos','michael bloomberg','bill gates','tim cook','elon musk','donald trump','kim jong un', 'narendra modi','michael phelps']
first_name=['jeff','michael','bill','tim','elon','donald','kim','narendra','michael']
last_name=['bezos','bloomberg','gates','cook','musk','trump','jong un', 'modi','phelps']
import pandas as pd
df=pd.DataFrame({'first_name':first_name,'last_name':last_name,'concat_name':concat_name})
```
Each row of the 'concat\_name' in df has to be compared against the elements of HKcp\_list.
PS: editing today to reflect the ":" and the row in the 2nd snippet of code I missed yesterday
Regards,
Nirvik
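Not an answer from the thread, but a hedged sketch of the bucketing idea the question describes: build the first-letter candidate lists once, instead of rescanning HKCP\_list for every row (names as in the example above):
```
from collections import defaultdict
from fuzzywuzzy import fuzz, process

# first letter -> candidate names, built once
buckets = defaultdict(list)
for name in HKCP_list:
    buckets[name[0]].append(name)

def search_func(row):
    # candidates sharing a first letter with either name part
    # (a name may appear twice; that only costs a redundant comparison)
    candidates = buckets[row['first_name'][0]] + buckets[row['last_name'][0]]
    if not candidates:
        return None
    return process.extract(row['concat_name'], candidates,
                           scorer=fuzz.token_sort_ratio)[0]

wc_df['match'] = wc_df.apply(search_func, axis=1)
```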
|
2019/05/27
|
[
"https://Stackoverflow.com/questions/56328818",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4762935/"
] |
Use this function:
```
typedef Future<T> FutureGenerator<T>();

Future<T> retry<T>(int retries, FutureGenerator aFuture) async {
  try {
    return await aFuture();
  } catch (e) {
    if (retries > 1) {
      return retry(retries - 1, aFuture);
    }
    rethrow;
  }
}
```
And to use it:
```
main(List<String> arguments) {
  retry(2, doSometing);
}

Future doSometing() async {
  print("Doing something...");
  await Future.delayed(Duration(milliseconds: 500));
  return "Something";
}
```
|
This is how I implemented it:
```
Future retry<T>(
    {Future<T> Function() function,
    int numberOfRetries = 3,
    Duration delayToRetry = const Duration(milliseconds: 500),
    String message = ''}) async {
  int retry = numberOfRetries;
  List<Exception> exceptions = [];
  while (retry-- > 0) {
    try {
      return await function();
    } catch (e) {
      exceptions.add(e);
    }
    if (message != null) print('$message: retry - ${numberOfRetries - retry}');
    await Future.delayed(delayToRetry);
  }
  AggregatedException exception = AggregatedException(message, exceptions);
  throw exception;
}

class AggregatedException implements Exception {
  final String message;

  AggregatedException(this.message, this.exceptions)
      : lastException = exceptions.last,
        numberOfExceptions = exceptions.length;

  final List<Exception> exceptions;
  final Exception lastException;
  final int numberOfExceptions;

  String toString() {
    String result = '';
    exceptions.forEach((e) => result += e.toString() + '\\');
    return result;
  }
}
```
This is how I use it:
```
try {
  await retry(
      function: () async {
        _connection = await BluetoothConnection.toAddress(device.address);
      },
      message: 'Bluetooth Connect');
} catch (e) {
  _log.finest('Bluetooth init failed ${e.toString()}');
}
```
|
56,328,818
|
I am using fuzzywuzzy in python for fuzzy string matching. I have a set of names in a list named HKCP\_list which I am matching against a pandas column iteratively to get the best possible match. Given below is the code for it
```
import fuzzywuzzy
from fuzzywuzzy import fuzz,process
def search_func(row):
    chk = process.extract(row,HKCP_list,scorer=fuzz.token_sort_ratio)[0]
    return chk
wc_df['match']=wc_df['concat_name'].map(search_func)
```
The wc\_df dataframe contains the column 'concat\_name' which needs to be matched with every name in the list HKCP\_list. The above code took around 2 hours to run with 6K names in the list and 11K names in the column 'concat\_name'.
I have to rerun this on another data set where there are 89K names in the list and 120K names in the column. In order to speed up the process, I got an idea from the following question on Stack Overflow:
[Vectorizing or Speeding up Fuzzywuzzy String Matching on PANDAS Column](https://stackoverflow.com/questions/52631291/vectorizing-or-speeding-up-fuzzywuzzy-string-matching-on-pandas-column)
In one of the comments in the answer in the above, it has been advised to compare names that have the same 1st letter. The 'concat\_name' column that I am comparing with is a derived column obtained by concatenating 'first\_name' and 'last\_name' columns in the dataframe. Hence I am using the following function to match the 1st letter (since this is a token sort score that I am considering, I am comparing the 1st letter of both the first\_name and last\_name with the elements in the list). Given below is the code:
```
wc_df['first_name_1stletter'] = wc_df['first_name'].str[0]
wc_df['last_name_1stletter'] = wc_df['last_name'].str[0]
import time
start_time=time.time()
def match_func(row):
    CP_subset=[x for x in HKCP_list if x[0]==row['first_name_1stletter'] or x[0]==row['last_name_1stletter']]
    return CP_subset
wc_df['list_to_match']=wc_df.apply(match_func,axis=1)
end_time=time.time()
print(end_time-start_time)
```
The above step took 1600 seconds with the 6K x 11K data. The 'list\_to\_match' column contains the list of names to be compared for each concat\_name. Now I have to take each list\_to\_match element, pass its individual elements in a list, and do the fuzzy string matching using the process.extract method. Is there a more elegant and faster way of doing this in the same step as above?
PS: Editing this to add an example of how the list and the dataframe columns look:
```
HKCp_list=['jeff bezs','michael blomberg','bill gtes','tim coook','elon musk']
concat_name=['jeff bezos','michael bloomberg','bill gates','tim cook','elon musk','donald trump','kim jong un', 'narendra modi','michael phelps']
first_name=['jeff','michael','bill','tim','elon','donald','kim','narendra','michael']
last_name=['bezos','bloomberg','gates','cook','musk','trump','jong un', 'modi','phelps']
import pandas as pd
df=pd.DataFrame({'first_name':first_name,'last_name':last_name,'concat_name':concat_name})
```
Each row of the 'concat\_name' in df has to be compared against the elements of HKcp\_list.
PS: editing today to reflect the ":" and the row in the 2nd snippet of code I missed yesterday
Regards,
Nirvik
|
2019/05/27
|
[
"https://Stackoverflow.com/questions/56328818",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4762935/"
] |
Use this function:
```
typedef Future<T> FutureGenerator<T>();

Future<T> retry<T>(int retries, FutureGenerator aFuture) async {
  try {
    return await aFuture();
  } catch (e) {
    if (retries > 1) {
      return retry(retries - 1, aFuture);
    }
    rethrow;
  }
}
```
And to use it:
```
main(List<String> arguments) {
  retry(2, doSometing);
}

Future doSometing() async {
  print("Doing something...");
  await Future.delayed(Duration(milliseconds: 500));
  return "Something";
}
```
|
I added an optional delay to Daniel Oliveira's answer:
```
typedef Future<T> FutureGenerator<T>();

Future<T> retry<T>(int retries, FutureGenerator aFuture, {Duration delay}) async {
  try {
    return await aFuture();
  } catch (e) {
    if (retries > 1) {
      if (delay != null) {
        await Future.delayed(delay);
      }
      return retry(retries - 1, aFuture);
    }
    rethrow;
  }
}
```
You can use it as follows:
```
retry(2, doSometing, delay: const Duration(seconds: 1));
```
|
56,328,818
|
I am using fuzzywuzzy in python for fuzzy string matching. I have a set of names in a list named HKCP\_list which I am matching against a pandas column iteratively to get the best possible match. Given below is the code for it
```
import fuzzywuzzy
from fuzzywuzzy import fuzz,process
def search_func(row):
    chk = process.extract(row,HKCP_list,scorer=fuzz.token_sort_ratio)[0]
    return chk
wc_df['match']=wc_df['concat_name'].map(search_func)
```
The wc\_df dataframe contains the column 'concat\_name' which needs to be matched with every name in the list HKCP\_list. The above code took around 2 hours to run with 6K names in the list and 11K names in the column 'concat\_name'.
I have to rerun this on another data set where there are 89K names in the list and 120K names in the column. In order to speed up the process, I got an idea from the following question on Stack Overflow:
[Vectorizing or Speeding up Fuzzywuzzy String Matching on PANDAS Column](https://stackoverflow.com/questions/52631291/vectorizing-or-speeding-up-fuzzywuzzy-string-matching-on-pandas-column)
In one of the comments in the answer in the above, it has been advised to compare names that have the same 1st letter. The 'concat\_name' column that I am comparing with is a derived column obtained by concatenating 'first\_name' and 'last\_name' columns in the dataframe. Hence I am using the following function to match the 1st letter (since this is a token sort score that I am considering, I am comparing the 1st letter of both the first\_name and last\_name with the elements in the list). Given below is the code:
```
wc_df['first_name_1stletter'] = wc_df['first_name'].str[0]
wc_df['last_name_1stletter'] = wc_df['last_name'].str[0]
import time
start_time=time.time()
def match_func(row):
    CP_subset=[x for x in HKCP_list if x[0]==row['first_name_1stletter'] or x[0]==row['last_name_1stletter']]
    return CP_subset
wc_df['list_to_match']=wc_df.apply(match_func,axis=1)
end_time=time.time()
print(end_time-start_time)
```
The above step took 1600 seconds with the 6K x 11K data. The 'list\_to\_match' column contains the list of names to be compared for each concat\_name. Now I have to take each list\_to\_match element, pass its individual elements in a list, and do the fuzzy string matching using the process.extract method. Is there a more elegant and faster way of doing this in the same step as above?
PS: Editing this to add an example of how the list and the dataframe columns look:
```
HKCp_list=['jeff bezs','michael blomberg','bill gtes','tim coook','elon musk']
concat_name=['jeff bezos','michael bloomberg','bill gates','tim cook','elon musk','donald trump','kim jong un', 'narendra modi','michael phelps']
first_name=['jeff','michael','bill','tim','elon','donald','kim','narendra','michael']
last_name=['bezos','bloomberg','gates','cook','musk','trump','jong un', 'modi','phelps']
import pandas as pd
df=pd.DataFrame({'first_name':first_name,'last_name':last_name,'concat_name':concat_name})
```
Each row of the 'concat\_name' in df has to be compared against the elements of HKcp\_list.
PS: editing today to reflect the ":" and the row in the 2nd snippet of code I missed yesterday
Regards,
Nirvik
|
2019/05/27
|
[
"https://Stackoverflow.com/questions/56328818",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4762935/"
] |
Use this function:
```
typedef Future<T> FutureGenerator<T>();

Future<T> retry<T>(int retries, FutureGenerator aFuture) async {
  try {
    return await aFuture();
  } catch (e) {
    if (retries > 1) {
      return retry(retries - 1, aFuture);
    }
    rethrow;
  }
}
```
And to use it:
```
main(List<String> arguments) {
  retry(2, doSometing);
}

Future doSometing() async {
  print("Doing something...");
  await Future.delayed(Duration(milliseconds: 500));
  return "Something";
}
```
|
[Retry](https://pub.dev/packages/retry) from Dart Neat is a good API and, unofficially, it's from Google:
<https://pub.dev/packages/retry>
|
56,328,818
|
I am using fuzzywuzzy in python for fuzzy string matching. I have a set of names in a list named HKCP\_list which I am matching against a pandas column iteratively to get the best possible match. Given below is the code for it
```
import fuzzywuzzy
from fuzzywuzzy import fuzz,process
def search_func(row):
    chk = process.extract(row,HKCP_list,scorer=fuzz.token_sort_ratio)[0]
    return chk
wc_df['match']=wc_df['concat_name'].map(search_func)
```
The wc\_df dataframe contains the column 'concat\_name' which needs to be matched with every name in the list HKCP\_list. The above code took around 2 hours to run with 6K names in the list and 11K names in the column 'concat\_name'.
I have to rerun this on another data set where there are 89K names in the list and 120K names in the column. In order to speed up the process, I got an idea from the following question on Stack Overflow:
[Vectorizing or Speeding up Fuzzywuzzy String Matching on PANDAS Column](https://stackoverflow.com/questions/52631291/vectorizing-or-speeding-up-fuzzywuzzy-string-matching-on-pandas-column)
In one of the comments in the answer in the above, it has been advised to compare names that have the same 1st letter. The 'concat\_name' column that I am comparing with is a derived column obtained by concatenating 'first\_name' and 'last\_name' columns in the dataframe. Hence I am using the following function to match the 1st letter (since this is a token sort score that I am considering, I am comparing the 1st letter of both the first\_name and last\_name with the elements in the list). Given below is the code:
```
wc_df['first_name_1stletter'] = wc_df['first_name'].str[0]
wc_df['last_name_1stletter'] = wc_df['last_name'].str[0]
import time
start_time=time.time()
def match_func(row):
    CP_subset=[x for x in HKCP_list if x[0]==row['first_name_1stletter'] or x[0]==row['last_name_1stletter']]
    return CP_subset
wc_df['list_to_match']=wc_df.apply(match_func,axis=1)
end_time=time.time()
print(end_time-start_time)
```
The above step took 1600 seconds with the 6K x 11K data. The 'list\_to\_match' column contains the list of names to be compared for each concat\_name. Now I have to take each list\_to\_match element, pass its individual elements in a list, and do the fuzzy string matching using the process.extract method. Is there a more elegant and faster way of doing this in the same step as above?
PS: Editing this to add an example of how the list and the dataframe columns look:
```
HKCp_list=['jeff bezs','michael blomberg','bill gtes','tim coook','elon musk']
concat_name=['jeff bezos','michael bloomberg','bill gates','tim cook','elon musk','donald trump','kim jong un', 'narendra modi','michael phelps']
first_name=['jeff','michael','bill','tim','elon','donald','kim','narendra','michael']
last_name=['bezos','bloomberg','gates','cook','musk','trump','jong un', 'modi','phelps']
import pandas as pd
df=pd.DataFrame({'first_name':first_name,'last_name':last_name,'concat_name':concat_name})
```
Each row of the 'concat\_name' in df has to be compared against the elements of HKcp\_list.
PS: editing today to reflect the ":" and the row in the 2nd snippet of code I missed yesterday
Regards,
Nirvik
|
2019/05/27
|
[
"https://Stackoverflow.com/questions/56328818",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4762935/"
] |
I added an optional delay to Daniel Oliveira's answer:
```
typedef Future<T> FutureGenerator<T>();

Future<T> retry<T>(int retries, FutureGenerator aFuture, {Duration delay}) async {
  try {
    return await aFuture();
  } catch (e) {
    if (retries > 1) {
      if (delay != null) {
        await Future.delayed(delay);
      }
      return retry(retries - 1, aFuture);
    }
    rethrow;
  }
}
```
You can use it as follows:
```
retry(2, doSometing, delay: const Duration(seconds: 1));
```
|
This is how I implemented it:
```
Future retry<T>(
    {Future<T> Function() function,
    int numberOfRetries = 3,
    Duration delayToRetry = const Duration(milliseconds: 500),
    String message = ''}) async {
  int retry = numberOfRetries;
  List<Exception> exceptions = [];
  while (retry-- > 0) {
    try {
      return await function();
    } catch (e) {
      exceptions.add(e);
    }
    if (message != null) print('$message: retry - ${numberOfRetries - retry}');
    await Future.delayed(delayToRetry);
  }
  AggregatedException exception = AggregatedException(message, exceptions);
  throw exception;
}

class AggregatedException implements Exception {
  final String message;

  AggregatedException(this.message, this.exceptions)
      : lastException = exceptions.last,
        numberOfExceptions = exceptions.length;

  final List<Exception> exceptions;
  final Exception lastException;
  final int numberOfExceptions;

  String toString() {
    String result = '';
    exceptions.forEach((e) => result += e.toString() + '\\');
    return result;
  }
}
```
This is how I use it:
```
try {
  await retry(
      function: () async {
        _connection = await BluetoothConnection.toAddress(device.address);
      },
      message: 'Bluetooth Connect');
} catch (e) {
  _log.finest('Bluetooth init failed ${e.toString()}');
}
```
|
56,328,818
|
I am using fuzzywuzzy in python for fuzzy string matching. I have a set of names in a list named HKCP\_list which I am matching against a pandas column iteratively to get the best possible match. Given below is the code for it
```
import fuzzywuzzy
from fuzzywuzzy import fuzz,process
def search_func(row):
    chk = process.extract(row,HKCP_list,scorer=fuzz.token_sort_ratio)[0]
    return chk
wc_df['match']=wc_df['concat_name'].map(search_func)
```
The wc\_df dataframe contains the column 'concat\_name' which needs to be matched with every name in the list HKCP\_list. The above code took around 2 hours to run with 6K names in the list and 11K names in the column 'concat\_name'.
I have to rerun this on another data set where there are 89K names in the list and 120K names in the column. In order to speed up the process, I got an idea from the following question on Stack Overflow:
[Vectorizing or Speeding up Fuzzywuzzy String Matching on PANDAS Column](https://stackoverflow.com/questions/52631291/vectorizing-or-speeding-up-fuzzywuzzy-string-matching-on-pandas-column)
In one of the comments in the answer in the above, it has been advised to compare names that have the same 1st letter. The 'concat\_name' column that I am comparing with is a derived column obtained by concatenating 'first\_name' and 'last\_name' columns in the dataframe. Hence I am using the following function to match the 1st letter (since this is a token sort score that I am considering, I am comparing the 1st letter of both the first\_name and last\_name with the elements in the list). Given below is the code:
```
wc_df['first_name_1stletter'] = wc_df['first_name'].str[0]
wc_df['last_name_1stletter'] = wc_df['last_name'].str[0]
import time
start_time=time.time()
def match_func(row):
    CP_subset=[x for x in HKCP_list if x[0]==row['first_name_1stletter'] or x[0]==row['last_name_1stletter']]
    return CP_subset
wc_df['list_to_match']=wc_df.apply(match_func,axis=1)
end_time=time.time()
print(end_time-start_time)
```
The above step took 1600 seconds with the 6K x 11K data. The 'list\_to\_match' column contains the list of names to be compared for each concat\_name. Now I have to take each list\_to\_match element, pass its individual elements in a list, and do the fuzzy string matching using the process.extract method. Is there a more elegant and faster way of doing this in the same step as above?
PS: Editing this to add an example of how the list and the dataframe columns look:
```
HKCp_list=['jeff bezs','michael blomberg','bill gtes','tim coook','elon musk']
concat_name=['jeff bezos','michael bloomberg','bill gates','tim cook','elon musk','donald trump','kim jong un', 'narendra modi','michael phelps']
first_name=['jeff','michael','bill','tim','elon','donald','kim','narendra','michael']
last_name=['bezos','bloomberg','gates','cook','musk','trump','jong un', 'modi','phelps']
import pandas as pd
df=pd.DataFrame({'first_name':first_name,'last_name':last_name,'concat_name':concat_name})
```
Each row of the 'concat\_name' in df has to be compared against the elements of HKcp\_list.
PS: editing today to reflect the ":" and the row in the 2nd snippet of code I missed yesterday
Regards,
Nirvik
|
2019/05/27
|
[
"https://Stackoverflow.com/questions/56328818",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4762935/"
] |
[Retry](https://pub.dev/packages/retry) from Dart Neat is a good API and, unofficially, it's from Google:
<https://pub.dev/packages/retry>
|
This is how I implemented it:
```
Future retry<T>(
    {Future<T> Function() function,
    int numberOfRetries = 3,
    Duration delayToRetry = const Duration(milliseconds: 500),
    String message = ''}) async {
  int retry = numberOfRetries;
  List<Exception> exceptions = [];
  while (retry-- > 0) {
    try {
      return await function();
    } catch (e) {
      exceptions.add(e);
    }
    if (message != null) print('$message: retry - ${numberOfRetries - retry}');
    await Future.delayed(delayToRetry);
  }
  AggregatedException exception = AggregatedException(message, exceptions);
  throw exception;
}

class AggregatedException implements Exception {
  final String message;

  AggregatedException(this.message, this.exceptions)
      : lastException = exceptions.last,
        numberOfExceptions = exceptions.length;

  final List<Exception> exceptions;
  final Exception lastException;
  final int numberOfExceptions;

  String toString() {
    String result = '';
    exceptions.forEach((e) => result += e.toString() + '\\');
    return result;
  }
}
```
This is how I use it:
```
try {
  await retry(
      function: () async {
        _connection = await BluetoothConnection.toAddress(device.address);
      },
      message: 'Bluetooth Connect');
} catch (e) {
  _log.finest('Bluetooth init failed ${e.toString()}');
}
```
|
56,328,818
|
I am using fuzzywuzzy in python for fuzzy string matching. I have a set of names in a list named HKCP\_list which I am matching against a pandas column iteratively to get the best possible match. Given below is the code for it
```
import fuzzywuzzy
from fuzzywuzzy import fuzz,process
def search_func(row):
    chk = process.extract(row,HKCP_list,scorer=fuzz.token_sort_ratio)[0]
    return chk
wc_df['match']=wc_df['concat_name'].map(search_func)
```
The wc\_df dataframe contains the column 'concat\_name' which needs to be matched with every name in the list HKCP\_list. The above code took around 2 hours to run with 6K names in the list and 11K names in the column 'concat\_name'.
I have to rerun this on another data set where there are 89K names in the list and 120K names in the column. In order to speed up the process, I got an idea from the following question on Stack Overflow:
[Vectorizing or Speeding up Fuzzywuzzy String Matching on PANDAS Column](https://stackoverflow.com/questions/52631291/vectorizing-or-speeding-up-fuzzywuzzy-string-matching-on-pandas-column)
In one of the comments in the answer in the above, it has been advised to compare names that have the same 1st letter. The 'concat\_name' column that I am comparing with is a derived column obtained by concatenating 'first\_name' and 'last\_name' columns in the dataframe. Hence I am using the following function to match the 1st letter (since this is a token sort score that I am considering, I am comparing the 1st letter of both the first\_name and last\_name with the elements in the list). Given below is the code:
```
wc_df['first_name_1stletter'] = wc_df['first_name'].str[0]
wc_df['last_name_1stletter'] = wc_df['last_name'].str[0]
import time
start_time=time.time()
def match_func(row):
    CP_subset=[x for x in HKCP_list if x[0]==row['first_name_1stletter'] or x[0]==row['last_name_1stletter']]
    return CP_subset
wc_df['list_to_match']=wc_df.apply(match_func,axis=1)
end_time=time.time()
print(end_time-start_time)
```
The above step took 1600 seconds with the 6K x 11K data. The 'list\_to\_match' column contains the list of names to be compared for each concat\_name. Now I have to take each list\_to\_match element, pass its individual elements in a list, and do the fuzzy string matching using the process.extract method. Is there a more elegant and faster way of doing this in the same step as above?
PS: Editing this to add an example of how the list and the dataframe columns look:
```
HKCp_list=['jeff bezs','michael blomberg','bill gtes','tim coook','elon musk']
concat_name=['jeff bezos','michael bloomberg','bill gates','tim cook','elon musk','donald trump','kim jong un', 'narendra modi','michael phelps']
first_name=['jeff','michael','bill','tim','elon','donald','kim','narendra','michael']
last_name=['bezos','bloomberg','gates','cook','musk','trump','jong un', 'modi','phelps']
import pandas as pd
df=pd.DataFrame({'first_name':first_name,'last_name':last_name,'concat_name':concat_name})
```
Each row of the 'concat\_name' in df has to be compared against the elements of HKcp\_list.
PS: editing today to reflect the ":" and the row in the 2nd snippet of code I missed yesterday
Regards,
Nirvik
|
2019/05/27
|
[
"https://Stackoverflow.com/questions/56328818",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4762935/"
] |
I added an optional delay to Daniel Oliveira's answer:
```
typedef Future<T> FutureGenerator<T>();

Future<T> retry<T>(int retries, FutureGenerator aFuture, {Duration delay}) async {
  try {
    return await aFuture();
  } catch (e) {
    if (retries > 1) {
      if (delay != null) {
        await Future.delayed(delay);
      }
      return retry(retries - 1, aFuture);
    }
    rethrow;
  }
}
```
You can use it as follows:
```
retry(2, doSometing, delay: const Duration(seconds: 1));
```
|
[Retry](https://pub.dev/packages/retry) from Dart Neat is a good API and, unofficially, it's from Google:
<https://pub.dev/packages/retry>
|
6,424,676
|
I have a list of lists, say:
```
arr = [[1, 2], [1, 3], [1, 4]]
```
I would like to append 100 to each of the inner lists. Output for the above example would be:
```
arr = [[1, 2, 100], [1, 3, 100], [1, 4, 100]]
```
I can of course do:
```
for elem in arr:
elem.append(100)
```
But is there something more pythonic that can be done? Why does the following not work:
```
arr = [elem.append(100) for elem in arr]
```
|
2011/06/21
|
[
"https://Stackoverflow.com/questions/6424676",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/373948/"
] |
The second version should be written like `arr = [elem + [100] for elem in arr]`. But the most Pythonic way, if you ask me, is the first one. The `for` construct has its own use, and it suits this case very well.
|
You can do
```
[a + [100] for a in arr]
```
The reason why your `append` doesn't work is that `append` doesn't return the list, but rather `None`.
Of course, this is more resource intensive than just doing `append` - you end up making copies of everything.
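A quick illustration of that `None` behaviour:
```
arr = [[1, 2], [1, 3], [1, 4]]
result = [elem.append(100) for elem in arr]
print(result)  # [None, None, None] -- append mutates in place, returns None
print(arr)     # [[1, 2, 100], [1, 3, 100], [1, 4, 100]]
```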
|
6,424,676
|
I have a list of lists, say:
```
arr = [[1, 2], [1, 3], [1, 4]]
```
I would like to append 100 to each of the inner lists. Output for the above example would be:
```
arr = [[1, 2, 100], [1, 3, 100], [1, 4, 100]]
```
I can of course do:
```
for elem in arr:
elem.append(100)
```
But is there something more pythonic that can be done? Why does the following not work:
```
arr = [elem.append(100) for elem in arr]
```
|
2011/06/21
|
[
"https://Stackoverflow.com/questions/6424676",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/373948/"
] |
The second version should be written like `arr = [elem + [100] for elem in arr]`. But the most Pythonic way, if you ask me, is the first one. The `for` construct has its own use, and it suits this case very well.
|
This is the more Pythonic way:
```
for elem in arr:
elem.append(100)
```
but as an option you can also try this:
```
[arr[i].append(100) for i in range(len(arr))]
print arr # It will return [[1, 2, 100], [1, 3, 100], [1, 4, 100]]
```
|
6,424,676
|
I have a list of lists, say:
```
arr = [[1, 2], [1, 3], [1, 4]]
```
I would like to append 100 to each of the inner lists. Output for the above example would be:
```
arr = [[1, 2, 100], [1, 3, 100], [1, 4, 100]]
```
I can of course do:
```
for elem in arr:
elem.append(100)
```
But is there something more pythonic that can be done? Why does the following not work:
```
arr = [elem.append(100) for elem in arr]
```
|
2011/06/21
|
[
"https://Stackoverflow.com/questions/6424676",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/373948/"
] |
Note that your code also works - the only thing to know is that there is no need to assign the result to a variable in such a case:
```
arr = [[1, 2], [1, 3], [1, 4]]
[x.append(100) for x in arr]
```
After execution, arr will contain the updated list [[1, 2, 100], [1, 3, 100], [1, 4, 100]], i.e. it works like an in-place update, but it is not very common practice to do it this way.
You will get the same result in the following case:
```
map(lambda x: x.append(100), arr)
```
As was discussed, you can use a list comprehension or map to do this, assigning the result to a variable:
```
res = map(lambda x: x + [100], arr)
```
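One caveat worth adding (not in the original answer): in Python 3, `map` returns a lazy iterator, so materialize it if you need a list:
```
# Python 3: map is lazy; wrap it in list() to get the final result
res = list(map(lambda x: x + [100], arr))
# or, equivalently, stick with the list comprehension
res = [x + [100] for x in arr]
```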
|
You can do
```
[a + [100] for a in arr]
```
The reason why your `append` doesn't work is that `append` doesn't return the list, but rather `None`.
Of course, this is more resource intensive than just doing `append` - you end up making copies of everything.
|
6,424,676
|
I have a list of lists, say:
```
arr = [[1, 2], [1, 3], [1, 4]]
```
I would like to append 100 to each of the inner lists. Output for the above example would be:
```
arr = [[1, 2, 100], [1, 3, 100], [1, 4, 100]]
```
I can of course do:
```
for elem in arr:
elem.append(100)
```
But is there something more pythonic that can be done? Why does the following not work:
```
arr = [elem.append(100) for elem in arr]
```
|
2011/06/21
|
[
"https://Stackoverflow.com/questions/6424676",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/373948/"
] |
Note that your code also works - the only thing you have to know is that there is no need to assign the result to a variable in such a case:
```
arr = [[1, 2], [1, 3], [1, 4]]
[x.append(100) for x in arr]
```
After execution, arr will contain the updated list [[1, 2, 100], [1, 3, 100], [1, 4, 100]]; i.e., it works like an in-place update, but doing it this way is not very common practice.
You will get the same result in the following case:
```
map(lambda x: x.append(100), arr)
```
As was discussed, you can use a list comprehension or map to do this and assign the result to a variable:
```
res = map(lambda x: x + [100], arr)
```
|
This is the more pythonic way
```
for elem in arr:
elem.append(100)
```
but as an option you can also try this:
```
[arr[i].append(100) for i in range(len(arr))]
print arr # It will return [[1, 2, 100], [1, 3, 100], [1, 4, 100]]
```
|
54,024,663
|
I have a python application that is running in Kubernetes. The app has a ping health-check which is called frequently via a REST call and checks that the call returns an HTTP 200. This clutters the Kubernetes logs when I view it through the logs console.
The function definition looks like this:
```
def ping():
return jsonify({'status': 'pong'})
```
How can I silence a specific call from showing up in the log? Is there a way I can put this in code such as a python decorator on top of the health check function? Or is there an option in the Kubernetes console where I can configure to ignore this call?
|
2019/01/03
|
[
"https://Stackoverflow.com/questions/54024663",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5240483/"
] |
In Kubernetes, everything a container writes to `stdout` or `stderr` ends up in the Kubernetes logs. The only way to exclude the `health-check` ping calls from the Kubernetes logs is to have your application redirect the output of those ping calls to some file, say under `/var/log/`. This effectively removes the output of that `health-check` ping from `stdout`.
Once the output is no longer on the pod's `stdout` or `stderr`, the pod logs will not contain entries from that special `health-check`.
You can also use sidecar containers to streamline your application logs, for example if you don't want all of the application's logs in the `kubectl logs` output; you can write those to a file.
As stated in Official docs of kubernetes:
>
> By having your sidecar containers stream to their own stdout and stderr streams, you can take advantage of the kubelet and the logging agent that already run on each node. The sidecar containers read logs from a file, a socket, or the journald. Each individual sidecar container prints log to its own stdout or stderr stream.
>
>
>
This approach allows you to separate several log streams from different parts of your application, some of which can lack support for writing to stdout or stderr. The logic behind redirecting logs is minimal, so it’s hardly a significant overhead.
For more information on Kubernetes logging, please refer official docs:
<https://kubernetes.io/docs/concepts/cluster-administration/logging/>
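As for the in-code route the question asks about, here is a hedged sketch, assuming the app is Flask served by the werkzeug development server and that the health-check path is `/ping` (both assumptions, not stated above): a `logging.Filter` can drop the access-log lines for that path.
```
import logging

class PingFilter(logging.Filter):
    """Suppress access-log records mentioning the health-check path."""
    def filter(self, record):
        # returning False drops the record; '/ping' is an assumed path
        return '/ping' not in record.getMessage()

# werkzeug emits the Flask dev server's per-request access log
logging.getLogger('werkzeug').addFilter(PingFilter())
```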
|
Just invert the logic: log on failure in the app. Modify the code or wrap it with a custom decorator.
|
54,024,663
|
I have a python application that is running in Kubernetes. The app has a ping health-check which is called frequently via a REST call and checks that the call returns an HTTP 200. This clutters the Kubernetes logs when I view it through the logs console.
The function definition looks like this:
```
def ping():
return jsonify({'status': 'pong'})
```
How can I silence a specific call from showing up in the log? Is there a way I can put this in code such as a python decorator on top of the health check function? Or is there an option in the Kubernetes console where I can configure to ignore this call?
|
2019/01/03
|
[
"https://Stackoverflow.com/questions/54024663",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5240483/"
] |
>
> clutters the Kubernetes logs when I view it through the logs console.
>
>
>
Since you said "view", I know it might not be the most accurate way to do this, but it works OK:
```
kubectl logs [pod] [options] | awk '!/some health-check pattern|or another annoying pattern to exclude|another pattern you dont want to see|and so on/ {print $0}'
```
I used `awk` for filtering the logs; if you're unfamiliar with it, you can read about it [here](https://www.gnu.org/software/gawk/manual/gawk.html)
* `!`: means exclude the following pattern
* `/pattern/`: enclose your pattern between `//` (standard awk syntax)
* `{print $0}`: means print the current line (you can also use just `print`)
**Note**: There is always a risk of excluding something necessary. Use wisely.
|
Just invert the logic: log on failure in the app. Modify the code or wrap it with a custom decorator.
|
54,024,663
|
I have a python application that is running in Kubernetes. The app has a ping health-check which is called frequently via a REST call and checks that the call returns an HTTP 200. This clutters the Kubernetes logs when I view it through the logs console.
The function definition looks like this:
```
def ping():
return jsonify({'status': 'pong'})
```
How can I silence a specific call from showing up in the log? Is there a way I can put this in code such as a python decorator on top of the health check function? Or is there an option in the Kubernetes console where I can configure to ignore this call?
|
2019/01/03
|
[
"https://Stackoverflow.com/questions/54024663",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5240483/"
] |
In Kubernetes, everything a container writes to `stdout` or `stderr` ends up in the Kubernetes logs. The only way to exclude the `health-check` ping calls from the Kubernetes logs is to have your application redirect the output of those ping calls to some file, say under `/var/log/`. This effectively removes the output of that `health-check` ping from `stdout`.
Once the output is no longer on the pod's `stdout` or `stderr`, the pod logs will not contain entries from that special `health-check`.
You can also use sidecar containers to streamline your application logs, for example if you don't want all of the application's logs in the `kubectl logs` output; you can write those to a file.
As stated in Official docs of kubernetes:
>
> By having your sidecar containers stream to their own stdout and stderr streams, you can take advantage of the kubelet and the logging agent that already run on each node. The sidecar containers read logs from a file, a socket, or the journald. Each individual sidecar container prints log to its own stdout or stderr stream.
>
>
>
This approach allows you to separate several log streams from different parts of your application, some of which can lack support for writing to stdout or stderr. The logic behind redirecting logs is minimal, so it’s hardly a significant overhead.
For more information on Kubernetes logging, please refer official docs:
<https://kubernetes.io/docs/concepts/cluster-administration/logging/>
|
>
> clutters the Kubernetes logs when I view it through the logs console.
>
>
>
Since you said "view", I know it might not be the most accurate way to do this, but it works OK:
```
kubectl logs [pod] [options] | awk '!/some health-check pattern|or another annoying pattern to exclude|another pattern you dont want to see|and so on/ {print $0}'
```
I used `awk` for filtering the logs; if you're unfamiliar with it, you can read about it [here](https://www.gnu.org/software/gawk/manual/gawk.html)
* `!`: means exclude the following pattern
* `/pattern/`: enclose your pattern between `//` (standard awk syntax)
* `{print $0}`: means print the current line (you can also use just `print`)
**Note**: There is always a risk of excluding something necessary. Use wisely.
|
48,186,159
|
My app is installed correctly and its models.py reads:
```
from django.db import models
class Album(models.Model):
artist = models.CharField(max_lenght=250)
album_title = models.CharField(max_lenght=500)
genre = models.CharField(max_lenght=100)
album_logo = models.CharField(max_lenght=1000)
class Song(models.Model):
album = models.ForeignKey(Album, on_delete=models.CASCADE)
file_type = models.CharField(max_lenght=10)
song_title = models.CharField(max_lenght=250)
```
But whenever I run python manage.py migrate, I get the following error:
```
Traceback (most recent call last):
File "manage.py", line 15, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\dougl\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django-2.0-py3.6.egg\django\core\management\__init__.py", line 371, in execute_from_command_line
utility.execute()
File "C:\Users\dougl\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django-2.0-py3.6.egg\django\core\management\__init__.py", line 347, in execute
django.setup()
File "C:\Users\dougl\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django-2.0-py3.6.egg\django\__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\dougl\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django-2.0-py3.6.egg\django\apps\registry.py", line 112, in populate
app_config.import_models()
File "C:\Users\dougl\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django-2.0-py3.6.egg\django\apps\config.py", line 198, in import_models
self.models_module = import_module(models_module_name)
File "C:\Users\dougl\AppData\Local\Programs\Python\Python36-32\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\dougl\desktop\django websites\twenty_one_pilots\downloads\models.py", line 3, in <module>
class Album(models.Model):
File "C:\Users\dougl\desktop\django websites\twenty_one_pilots\downloads\models.py", line 4, in Album
artist = models.CharField(max_lenght=250)
File "C:\Users\dougl\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django-2.0-py3.6.egg\django\db\models\fields\__init__.py", line 1042, in __init__
super().__init__(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'max_lenght'
```
What could be wrong? I'm still a beginner and I'm still learning about databases.
|
2018/01/10
|
[
"https://Stackoverflow.com/questions/48186159",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8978159/"
] |
Use this:
As per my comment, the spelling of length (lenght) is wrong.
```
from django.db import models
class Album(models.Model):
artist = models.CharField(max_length=250)
album_title = models.CharField(max_length=500)
genre = models.CharField(max_length=100)
album_logo = models.TextField()
class Song(models.Model):
album = models.ForeignKey(Album, on_delete=models.CASCADE)
file_type = models.CharField(max_length=10)
song_title = models.CharField(max_length=250)
```
|
You have spelled length wrong here
```
album_title = models.CharField(max_lenght=500)
```
happens to the best of us.
|
63,709,035
|
I am trying to have a button in HTML that removes a row in the database when I click it. This is for a flask app.
here is the HTML:
```
<div class="container-fluid text-center" id="products">
{% for product in productList %}
<div class='userProduct'>
<a href="{{ product.productURL }}" target="_blank">{{ product.title|truncate(30) }}</a>
<h4 class="currentPrice">${{ product.currentPrice }}</h4>
<h5 class="budget">Budget: {{ product.userBudget }}</h5>
<form action="{{ url_for('delete', id=product.id) }}">
<button class="btn btn-sm btn-primary" type="submit">Remove from Wishlist</button>
</form>
</div>
{% endfor %}
</div>
```
And here is the route in the python file
```
@app.route('/delete/<id>')
@login_required
def delete(id):
remove_product = Product.query.filter_by(id=int(id)).first()
db.session.delete(remove_product)
db.session.commit()
return redirect(url_for('dashboard'))
```
Is there anything wrong I am doing with url\_for? I want to pass the id from the button to the python file so I can determine which entry to delete from the database.
|
2020/09/02
|
[
"https://Stackoverflow.com/questions/63709035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11522265/"
] |
You're probably passing ID as an integer and your route is expecting a string. Use this instead:
```
@app.route('/delete/<int:id>')
```
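Putting it together, a minimal sketch of the corrected route, reusing `app`, `db`, `Product`, `login_required`, and the imports from the question:
```
@app.route('/delete/<int:id>')
@login_required
def delete(id):
    # the converter hands id over as an int, so no int(id) cast is needed
    remove_product = Product.query.filter_by(id=id).first()
    db.session.delete(remove_product)
    db.session.commit()
    return redirect(url_for('dashboard'))
```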
|
I just encountered the same error. In my case it was caused by some values being None in the database, so maybe check whether product.id has missing values.
To handle this, I needed an extra line, which would look something like this for you:
```
@app.route('/delete/<id>') # current line
@app.route('/delete', defaults={'id': None}) # added line which handles None
```
In delete(id) you then have to handle the case where id is None:
```
if id is None:
... # what should happen if id is None
```
|
31,291,204
|
I'm new to Apache Spark and have a simple question about DataFrame caching.
When I cached a DataFrame in memory using `df.cache()` in python, I found that the data is removed after the program terminates.
Can I keep the cached data in memory so that I can access the data for the next run without doing `df.cache()` again?
|
2015/07/08
|
[
"https://Stackoverflow.com/questions/31291204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2368670/"
] |
The cache used with `cache()` is tied to the current spark context; its purpose is to prevent having to recalculate some intermediate results in the current application multiple times. If the context gets closed, the cache is gone. Nor can you share the cache between different running Spark contexts.
To be able to reuse the data in a different context, you will have to save it to a file system. If you prefer the results to be in memory (or have a good chance of being in memory when you try to reload them) you can look at using [Tachyon](http://www.tachyon-project.org).
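A minimal sketch of the file-system route, assuming a `SparkSession` named `spark` and a scratch path of your own choosing (both assumptions):
```
# first run: persist the results beyond the life of this Spark context
df.write.mode("overwrite").parquet("/tmp/cached_df")

# later run, in a fresh context: reload instead of recomputing
df = spark.read.parquet("/tmp/cached_df")
df.cache()  # then cache within the new context as usual
```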
|
If you are talking about saving the RDD to disk, use any of the following:

The link is for PySpark; the same is available for Java/Scala as well:
<https://spark.apache.org/docs/latest/api/python/pyspark.sql.html>
If you were not talking about disk, then cache() is already doing the job of keeping it in memory.
|
38,989,150
|
I trying to import a module and use a function from that module in my current python file.
I run the nosetests on the parser\_tests.py file but it fails with "name 'parse\_subject' not defined"
e.g its not finding the parse\_subject function which is clearly defined in the parsrer.py file
This is the parsrer file:
```
def peek(word_list):
    if word_list:
        word = word_list[0]
        return word[0]
    else:
        return None

#Confirms that the expected word is the right type,
def match(word_list, expecting):
    if word_list:
        word = word_list.pop(0)

        if word[0] == expecting:
            return word
        else:
            return None
    else:
        return None

def skip(word_list, word_type):
    while peek(word_list) == word_type:
        match(word_list, word_type)

def parse_verb(word_list):
    skip(word_list, 'stop')

    if peek(word_list) == 'verb':
        return match(word_list, 'verb')
    else:
        raise ParserError("Expected a verb next.")

def parse_object(word_list):
    skip(word_list, 'stop')
    next_word = peek(word_list)

    if next_word == 'noun':
        return match(word_list, 'noun')
    elif next_word == 'direction':
        return match(word_list, 'direction')
    else:
        raise ParserError("Expected a noun or direction next.")

def parse_subject(word_list):
    skip(word_list, 'stop')
    next_word = peek(word_list)

    if next_word == 'noun':
        return match(word_list, 'noun')
    elif next_word == 'verb':
        return ('noun', 'player')
    else:
        raise ParserError("Expected a verb next.")

def parse_sentence(word_list):
    subj = parse_subject(word_list)
    verb = parse_verb(word_list)
    obj = parse_object(word_list)

    return Sentence(subj, verb, obj)
```
This is my tests file
```
from nose.tools import *
from nose.tools import assert_equals
import sys
sys.path.append("h:/projects/projectx48/ex48")
import parsrer

def test_subject():
    word_list = [('noun', 'bear'), ('verb', 'eat'), ('stop', 'the'), ('noun', 'honey')]
    assert_equals(parse_subject(word_list), ('noun','bear'))
```
|
2016/08/17
|
[
"https://Stackoverflow.com/questions/38989150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6684586/"
] |
```
SELECT STUFF((
SELECT ','+ Name
FROM MyTable
WHERE ID in (1, 2, 3)
FOR XML PATH('')
), 1, 1, '') AS Names
```
Result:
>
> Apple,Microsoft,Samsung
>
>
>
|
```
USE [Database Name]
SELECT COLUMN_NAME,*
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'YourTableName'
```
|
38,989,150
|
I trying to import a module and use a function from that module in my current python file.
I run the nosetests on the parser\_tests.py file but it fails with "name 'parse\_subject' not defined"
e.g its not finding the parse\_subject function which is clearly defined in the parsrer.py file
This is the parsrer file:
```
def peek(word_list):
    if word_list:
        word = word_list[0]
        return word[0]
    else:
        return None

#Confirms that the expected word is the right type,
def match(word_list, expecting):
    if word_list:
        word = word_list.pop(0)

        if word[0] == expecting:
            return word
        else:
            return None
    else:
        return None

def skip(word_list, word_type):
    while peek(word_list) == word_type:
        match(word_list, word_type)

def parse_verb(word_list):
    skip(word_list, 'stop')

    if peek(word_list) == 'verb':
        return match(word_list, 'verb')
    else:
        raise ParserError("Expected a verb next.")

def parse_object(word_list):
    skip(word_list, 'stop')
    next_word = peek(word_list)

    if next_word == 'noun':
        return match(word_list, 'noun')
    elif next_word == 'direction':
        return match(word_list, 'direction')
    else:
        raise ParserError("Expected a noun or direction next.")

def parse_subject(word_list):
    skip(word_list, 'stop')
    next_word = peek(word_list)

    if next_word == 'noun':
        return match(word_list, 'noun')
    elif next_word == 'verb':
        return ('noun', 'player')
    else:
        raise ParserError("Expected a verb next.")

def parse_sentence(word_list):
    subj = parse_subject(word_list)
    verb = parse_verb(word_list)
    obj = parse_object(word_list)

    return Sentence(subj, verb, obj)
```
This is my tests file
```
from nose.tools import *
from nose.tools import assert_equals
import sys
sys.path.append("h:/projects/projectx48/ex48")
import parsrer

def test_subject():
    word_list = [('noun', 'bear'), ('verb', 'eat'), ('stop', 'the'), ('noun', 'honey')]
    assert_equals(parse_subject(word_list), ('noun','bear'))
```
|
2016/08/17
|
[
"https://Stackoverflow.com/questions/38989150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6684586/"
] |
You can do with `XML PATH`
```
SELECT
(
SELECT
T.Name + ', '
FROM
Tbl T
WHERE
Id in (1, 2, 3)
FOR XML PATH ('')
) DesiredOutput
```
Result looks like `Apple, Microsoft, Samsung,`
|
```
USE [Database Name]
SELECT COLUMN_NAME,*
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'YourTableName'
```
|
38,989,150
|
I trying to import a module and use a function from that module in my current python file.
I run the nosetests on the parser\_tests.py file but it fails with "name 'parse\_subject' not defined"
e.g its not finding the parse\_subject function which is clearly defined in the parsrer.py file
This is the parsrer file:
```
def peek(word_list):
    if word_list:
        word = word_list[0]
        return word[0]
    else:
        return None

#Confirms that the expected word is the right type,
def match(word_list, expecting):
    if word_list:
        word = word_list.pop(0)

        if word[0] == expecting:
            return word
        else:
            return None
    else:
        return None

def skip(word_list, word_type):
    while peek(word_list) == word_type:
        match(word_list, word_type)

def parse_verb(word_list):
    skip(word_list, 'stop')

    if peek(word_list) == 'verb':
        return match(word_list, 'verb')
    else:
        raise ParserError("Expected a verb next.")

def parse_object(word_list):
    skip(word_list, 'stop')
    next_word = peek(word_list)

    if next_word == 'noun':
        return match(word_list, 'noun')
    elif next_word == 'direction':
        return match(word_list, 'direction')
    else:
        raise ParserError("Expected a noun or direction next.")

def parse_subject(word_list):
    skip(word_list, 'stop')
    next_word = peek(word_list)

    if next_word == 'noun':
        return match(word_list, 'noun')
    elif next_word == 'verb':
        return ('noun', 'player')
    else:
        raise ParserError("Expected a verb next.")

def parse_sentence(word_list):
    subj = parse_subject(word_list)
    verb = parse_verb(word_list)
    obj = parse_object(word_list)

    return Sentence(subj, verb, obj)
```
This is my tests file
```
from nose.tools import *
from nose.tools import assert_equals
import sys
sys.path.append("h:/projects/projectx48/ex48")
import parsrer

def test_subject():
    word_list = [('noun', 'bear'), ('verb', 'eat'), ('stop', 'the'), ('noun', 'honey')]
    assert_equals(parse_subject(word_list), ('noun','bear'))
```
|
2016/08/17
|
[
"https://Stackoverflow.com/questions/38989150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6684586/"
] |
```
SELECT STUFF((
SELECT ','+ Name
FROM MyTable
WHERE ID in (1, 2, 3)
FOR XML PATH('')
), 1, 1, '') AS Names
```
Result:
>
> Apple,Microsoft,Samsung
>
>
>
|
You can do with `XML PATH`
```
SELECT
(
SELECT
T.Name + ', '
FROM
Tbl T
WHERE
Id in (1, 2, 3)
FOR XML PATH ('')
) DesiredOutput
```
Result looks like `Apple, Microsoft, Samsung,`
|
38,989,150
|
I trying to import a module and use a function from that module in my current python file.
I run the nosetests on the parser\_tests.py file but it fails with "name 'parse\_subject' not defined"
e.g its not finding the parse\_subject function which is clearly defined in the parsrer.py file
This is the parsrer file:
```
def peek(word_list):
    if word_list:
        word = word_list[0]
        return word[0]
    else:
        return None

#Confirms that the expected word is the right type,
def match(word_list, expecting):
    if word_list:
        word = word_list.pop(0)

        if word[0] == expecting:
            return word
        else:
            return None
    else:
        return None

def skip(word_list, word_type):
    while peek(word_list) == word_type:
        match(word_list, word_type)

def parse_verb(word_list):
    skip(word_list, 'stop')

    if peek(word_list) == 'verb':
        return match(word_list, 'verb')
    else:
        raise ParserError("Expected a verb next.")

def parse_object(word_list):
    skip(word_list, 'stop')
    next_word = peek(word_list)

    if next_word == 'noun':
        return match(word_list, 'noun')
    elif next_word == 'direction':
        return match(word_list, 'direction')
    else:
        raise ParserError("Expected a noun or direction next.")

def parse_subject(word_list):
    skip(word_list, 'stop')
    next_word = peek(word_list)

    if next_word == 'noun':
        return match(word_list, 'noun')
    elif next_word == 'verb':
        return ('noun', 'player')
    else:
        raise ParserError("Expected a verb next.")

def parse_sentence(word_list):
    subj = parse_subject(word_list)
    verb = parse_verb(word_list)
    obj = parse_object(word_list)

    return Sentence(subj, verb, obj)
```
This is my tests file
```
from nose.tools import *
from nose.tools import assert_equals
import sys
sys.path.append("h:/projects/projectx48/ex48")
import parsrer

def test_subject():
    word_list = [('noun', 'bear'), ('verb', 'eat'), ('stop', 'the'), ('noun', 'honey')]
    assert_equals(parse_subject(word_list), ('noun','bear'))
```
|
2016/08/17
|
[
"https://Stackoverflow.com/questions/38989150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6684586/"
] |
```
SELECT STUFF((
SELECT ','+ Name
FROM MyTable
WHERE ID in (1, 2, 3)
FOR XML PATH('')
), 1, 1, '') AS Names
```
Result:
>
> Apple,Microsoft,Samsung
>
>
>
|
As SQL Server doesn't support input at runtime, you can either create a stored procedure and pass the input value while executing it, or just run the following query.
```
SELECT NAME
FROM MyTable
WHERE ID IN
(SELECT TOP 3 ID FROM MyTable ORDER BY ID)
```
|
38,989,150
|
I trying to import a module and use a function from that module in my current python file.
I run the nosetests on the parser\_tests.py file but it fails with "name 'parse\_subject' not defined"
e.g its not finding the parse\_subject function which is clearly defined in the parsrer.py file
This is the parsrer file:
```
def peek(word_list):
    if word_list:
        word = word_list[0]
        return word[0]
    else:
        return None

#Confirms that the expected word is the right type,
def match(word_list, expecting):
    if word_list:
        word = word_list.pop(0)

        if word[0] == expecting:
            return word
        else:
            return None
    else:
        return None

def skip(word_list, word_type):
    while peek(word_list) == word_type:
        match(word_list, word_type)

def parse_verb(word_list):
    skip(word_list, 'stop')

    if peek(word_list) == 'verb':
        return match(word_list, 'verb')
    else:
        raise ParserError("Expected a verb next.")

def parse_object(word_list):
    skip(word_list, 'stop')
    next_word = peek(word_list)

    if next_word == 'noun':
        return match(word_list, 'noun')
    elif next_word == 'direction':
        return match(word_list, 'direction')
    else:
        raise ParserError("Expected a noun or direction next.")

def parse_subject(word_list):
    skip(word_list, 'stop')
    next_word = peek(word_list)

    if next_word == 'noun':
        return match(word_list, 'noun')
    elif next_word == 'verb':
        return ('noun', 'player')
    else:
        raise ParserError("Expected a verb next.")

def parse_sentence(word_list):
    subj = parse_subject(word_list)
    verb = parse_verb(word_list)
    obj = parse_object(word_list)

    return Sentence(subj, verb, obj)
```
This is my tests file
```
from nose.tools import *
from nose.tools import assert_equals
import sys
sys.path.append("h:/projects/projectx48/ex48")
import parsrer

def test_subject():
    word_list = [('noun', 'bear'), ('verb', 'eat'), ('stop', 'the'), ('noun', 'honey')]
    assert_equals(parse_subject(word_list), ('noun','bear'))
```
|
2016/08/17
|
[
"https://Stackoverflow.com/questions/38989150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6684586/"
] |
You can do with `XML PATH`
```
SELECT
(
SELECT
T.Name + ', '
FROM
Tbl T
WHERE
Id in (1, 2, 3)
FOR XML PATH ('')
) DesiredOutput
```
Result looks like `Apple, Microsoft, Samsung,`
|
As SQL Server doesn't support input at runtime, you can either create a stored procedure and pass the input value while executing it, or just run the following query.
```
SELECT NAME
FROM MyTable
WHERE ID IN
(SELECT TOP 3 ID FROM MyTable ORDER BY ID)
```
|
41,595,532
|
I have a Tkinter button, and for some reason, it accepts width=xx, but not height=xx
I'm using Python 3.5, with Tkinter support by default, on ubuntu 16.04
Here's the code sample:
```
# works: button_enter = ttk.Button(self.frm, text='ok', width = 100)
# works: button_enter.config(width=25)
# fails: button_enter = ttk.Button(self.frm, text='ok', height=15, width = 25)
# fails: button_enter.config(width=25, height=15)
button_enter = ttk.Button(self.frm, text='ok')
button_enter.config(width=25)
button_enter['command'] = self.some_method
button_enter.grid(column=2, row = 0, sticky=W)
```
Here is the error I'm getting:
```
Traceback (most recent call last):
File "pygo.py", line 44, in <module>
app = App()
File "pygo.py", line 34, in __init__
button_enter.config(height=15, width=25)
File "/usr/lib/python3.5/tkinter/__init__.py", line 1333, in configure
return self._configure('configure', cnf, kw)
File "/usr/lib/python3.5/tkinter/__init__.py", line 1324, in _configure
self.tk.call(_flatten((self._w, cmd)) + self._options(cnf))
_tkinter.TclError: unknown option "-height"
```
Can it be made to work? Or if it's a bug, where should I report it?
|
2017/01/11
|
[
"https://Stackoverflow.com/questions/41595532",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/377783/"
] |
It's not a bug, that's just how ttk buttons work. If you need a highly configurable button, use a tkinter button. ttk buttons are less configurable on purpose. The goal of ttk widgets is to give you a set of buttons consistent with a particular theme.
Since you're on a linux system you can affect the height with `pack`, `place` or `grid` with appropriate options, though it's less convenient than the `height` option.
|
Use the **place()** method instead of **grid()**, **pack()**, or **config()**; it will work fine. (I have never used the **config()** method.)
|
41,595,532
|
I have a Tkinter button, and for some reason, it accepts width=xx, but not height=xx
I'm using Python 3.5, with Tkinter support by default, on ubuntu 16.04
Here's the code sample:
```
# works: button_enter = ttk.Button(self.frm, text='ok', width = 100)
# works: button_enter.config(width=25)
# fails: button_enter = ttk.Button(self.frm, text='ok', height=15, width = 25)
# fails: button_enter.config(width=25, height=15)
button_enter = ttk.Button(self.frm, text='ok')
button_enter.config(width=25)
button_enter['command'] = self.some_method
button_enter.grid(column=2, row = 0, sticky=W)
```
Here is the error I'm getting:
```
Traceback (most recent call last):
File "pygo.py", line 44, in <module>
app = App()
File "pygo.py", line 34, in __init__
button_enter.config(height=15, width=25)
File "/usr/lib/python3.5/tkinter/__init__.py", line 1333, in configure
return self._configure('configure', cnf, kw)
File "/usr/lib/python3.5/tkinter/__init__.py", line 1324, in _configure
self.tk.call(_flatten((self._w, cmd)) + self._options(cnf))
_tkinter.TclError: unknown option "-height"
```
Can it be made to work? Or if it's a bug, where should I report it?
|
2017/01/11
|
[
"https://Stackoverflow.com/questions/41595532",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/377783/"
] |
It's not a bug, that's just how ttk buttons work. If you need a highly configurable button, use a tkinter button. ttk buttons are less configurable on purpose. The goal of ttk widgets is to give you a set of buttons consistent with a particular theme.
Since you're on a linux system you can affect the height with `pack`, `place` or `grid` with appropriate options, though it's less convenient than the `height` option.
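A minimal sketch of that workaround (the padding value is an arbitrary choice, not from the answer):
```
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
btn = ttk.Button(root, text='ok', width=25)
# ttk.Button has no height option; internal vertical padding from the
# geometry manager (ipady) gives a similar effect
btn.grid(column=2, row=0, sticky=tk.W, ipady=15)
root.mainloop()
```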
|
Don't import like that:
```
from tkinter.ttk import *
```
If you import with this method, `Button` will be the ttk version, which has no `height` option.
|
631,884
|
How do you encode a png image into base64 using python on Windows?
```
iconfile = open("icon.png")
icondata = iconfile.read()
icondata = base64.b64encode(icondata)
```
The above works fine in Linux and OSX, but on Windows it will encode the first few characters then cut short. Why is this?
|
2009/03/10
|
[
"https://Stackoverflow.com/questions/631884",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/76322/"
] |
[Open the file in binary mode](http://docs.python.org/tutorial/inputoutput.html#reading-and-writing-files):
```
open("icon.png", "rb")
```
I'm not very familiar with Windows, but I'd imagine what's happening is that the file contains a character (0x1A) that [Windows is interpreting as the end of the file](http://www.google.com/search?q=windows+end+of+file+character) (for legacy reasons) when it is opened in text mode. The other issue is that opening a file in text mode (without the 'b') on Windows will cause line endings to be rewritten, which will generally break binary files where those characters don't actually indicate the end of a line.
|
To augment the answer from Miles, the [first eight bytes in a PNG file](http://www.libpng.org/pub/png/spec/1.2/PNG-Rationale.html#R.PNG-file-signature) are specially designed:
* 89 - the first byte is a check that
bit 8 hasn't been stripped
* "PNG" - let someone read that it's a
PNG format
* 0d 0a - the DOS end-of-line
indicator, to check if there was
DOS->unix conversion
* 1a - the DOS end-of-file character,
to check that the file was opened in
binary mode
* 0a - unix end-of-line character, to
check if there was a unix->DOS
conversion
Your code stops at the 1a, as designed.
|
8,839,846
|
In vim, you can check if a file is open in the current buffer with `bufexists`. For a short filename (not full path), you can check if it's open using `bufexists(bufname('filename'))`.
Is there any way to check if a file is open in a *tab*?
My closest workaround is to do something like:
```
:tabdo if bufnr(bufname('filename')) in tabpagebuflist(): echo "Yes"
```
However, that's sort of pythonic pseudocode... I'm not sure how to get that to work in vim. My goal is for an external AppleScript to check if a file is already open and, if so, go to a line in that file.
Ideally, I'd like to be able to search through different GUI windows too, but I've gathered (e.g. [Open vim tab in new (GUI) window?](https://stackoverflow.com/questions/6123424/open-vim-tab-in-new-gui-window)) that working with different GUI windows is very challenging / impossible in VIM.
|
2012/01/12
|
[
"https://Stackoverflow.com/questions/8839846",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/814354/"
] |
My impatience and good documentation got the better of me... here's the solution (greatly aided by [Check if current tab is empty in vim](https://stackoverflow.com/questions/5025558/check-if-current-tab-is-empty-in-vim) and [Open vim tab in new (GUI) window?](https://stackoverflow.com/questions/6123424/open-vim-tab-in-new-gui-window)). The source is at <https://github.com/keflavich/macvim-skim>
```
function! WhichTab(filename)
" Try to determine whether file is open in any tab.
" Return number of tab it's open in
let buffername = bufname(a:filename)
if buffername == ""
return 0
endif
let buffernumber = bufnr(buffername)
" tabdo will loop through pages and leave you on the last one;
" this is to make sure we don't leave the current page
let currenttab = tabpagenr()
let tab_arr = []
tabdo let tab_arr += tabpagebuflist()
" return to current page
exec "tabnext ".currenttab
" Start checking tab numbers for matches
let i = 0
for tnum in tab_arr
let i += 1
echo "tnum: ".tnum." buff: ".buffernumber." i: ".i
if tnum == buffernumber
return i
endif
endfor
endfunction
function! WhichWindow(filename)
" Try to determine whether the file is open in any GVIM *window*
let serverlist = split(serverlist(),"\n")
"let currentserver = ????
for server in serverlist
let remotetabnum = remote_expr(server,
\"WhichTab('".a:filename."')")
if remotetabnum != 0
return server
endif
endfor
endfunction
```
then use like so:
```
exec "tabnext ".WhichTab('my_filename')
echo remote_foreground( WhichWindow('my_filename') )
```
or, from the command line, here's a script to go to a particular line of a file using `WhichTab`:
```
#!/bin/bash
file="$1"
line="$2"
for server in `mvim --serverlist`
do
foundfile=`mvim --servername $server --remote-expr "WhichTab('$file')"`
if [[ $foundfile > 0 ]]
then
mvim --servername $server --remote-expr "foreground()"
mvim --servername $server --remote-send ":exec \"tabnext $foundfile\" <CR>"
mvim --servername $server --remote-send ":$line <CR>"
fi
done
```
|
I'd reply to keflavich, but I can't yet...
I was working on a similar problem where I wanted to mimic the behavior of gvim --remote-tab-silent when opening files inside of gvim. I found this WhichTab script of yours, but ran into problems when there is more than one window open in any given tab.
If you split windows inside of tabs, then you will have more than one buffer returned by tabpagebuflist(), so your method of using the buffer number's position in the List doesn't work. Here's my solution that accounts for that possibility.
```
" Note: returns a list of tabnos where the buf is found or 0 for none.
" tabnos start at 1, so 0 is always invalid
function! WhichTabNo(bufNo)
let tabNos = []
for tabNo in range(1, tabpagenr("$"))
for bufInTab in tabpagebuflist(tabNo)
if (bufInTab == a:bufNo)
call add(tabNos, tabNo)
endif
endfor
endfor
let numBufsFound = len(tabNos)
return (numBufsFound == 0) ? 0 : tabNos
endfunction
```
I think I can just return tabNos which will be an empty list that gets evaluated as a scalar 0, but I just learned vimscript and am not that comfortable with the particulars of its dynamic typing behavior yet, so I'm leaving it like that for now.
|
44,894,992
|
**On Windows 10, Python 3.6**
Let's say I have a command prompt session open **(not Python command prompt or Python interactive session)** and I've been setting up an environment with a lot of configurations or something of that nature. Is there any way for me to access the history of commands I used in that session with a python module for example?
Ideally, I would like to be able to export this history to a file so I can reuse it in the future.
**Example:**
Type in command prompt: `python savecmd.py`
and it saves the history from that session.
|
2017/07/03
|
[
"https://Stackoverflow.com/questions/44894992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6805800/"
] |
You don't need Python at all, use `doskey` facilities for that, i.e.:
```
doskey /history
```
will print out the current session's command history, you can then redirect that to a file if you want to save it:
```
doskey /history > saved_commands.txt
```
If you really want to do it from within Python, you can use `subprocess.check_output()` to capture the command history and then save it to a file:
```
import subprocess
cmd_history = subprocess.check_output(["doskey", "/history"])
with open("saved_commands.txt", "wb") as f:
f.write(cmd_history)
```
|
You're actually asking for 3 different things there.
1. Getting the Python REPL command history
2. Getting info from the "prompt", which I assume is PowerShell or CMD.exe.
3. Save the history list to a file.
To get the history from inside your REPL, use:
```py
import readline

# get_history_item() is one-based, hence the shifted range
for i in range(1, readline.get_current_history_length() + 1):
    print(readline.get_history_item(i))
```
Unfortunately, the REPL history is not accessible this way from outside the REPL, so you need to find the history file and print it.
To see the history file from PowerShell:
```
function see_my_py_history{ cat C:\<path-to>\saved_commands.txt }
Set-Alias -Name seehist -Value see_my_py_history
```
|
73,631,420
|
I am scraping web series data from `JSON` and `lists` using python. the problem is with date and time.
I got a function to convert the `duration` format
```
def get_duration(secs):
hour = int(secs/3600)
secs = secs - hour*3600
minute = int(secs/60)
if hour == 0:
return "00:" + str(minute)
elif len(str(hour)) == 1:
return "0" + str(hour) + ":" + str(minute)
return str(hour) + ":" + str(minute)
```
The output is `00:29` which is correct.
But how can I convert `broadCastDate` to DD-MM-YYYY format
Here is the Data
```
"items": [
{
"duration": 1778,
"startDate": 1555893000,
"endDate": 4102424940,
"broadCastDate": 1555939800,
}
]
```
The Broadcast Date on the website is **22 Apr 2019**, so need a function to get this type of output.
|
2022/09/07
|
[
"https://Stackoverflow.com/questions/73631420",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19861079/"
] |
That value looks like a timestamp so you could use [`datetime.fromtimestamp`](https://docs.python.org/3/library/datetime.html#datetime.date.fromtimestamp) to convert to a `datetime` object and then [`strftime`](https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior) to output in the format you want:
```
from datetime import datetime
d = datetime.fromtimestamp(1555939800)
print(d.strftime('%d-%m-%Y'))
```
Output:
```
22-04-2019
```
|
The original question is slightly unclear about the final desired format. If you would like "22 Apr 2019" then the required format string is "%d %b %Y"
```
from datetime import datetime
d = datetime.fromtimestamp(1555939800)
print(d.strftime('%d %b %Y'))
```
Output:
```
22 Apr 2019
```
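One caveat (an addition, not from the answers above): `fromtimestamp()` converts using the machine's local timezone, so the rendered date can shift near day boundaries; pin the timezone explicitly if that matters:
```
from datetime import datetime, timezone

d = datetime.fromtimestamp(1555939800, tz=timezone.utc)
print(d.strftime('%d %b %Y'))  # 22 Apr 2019, interpreted in UTC
```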
|
57,159,408
|
Am new to python and trying to understand logging (am not new to programming at all though). Trying to replicate log4j logger that we have in Java. Have a simple test project of the following files:
- User.py
- Test.py
- conf/
- logging.conf
- log
All the python files have logger objects to log to specific file mentioned in logging.conf. However, only the Test.py file dumps logs. Not the User.py file.
User.py is as follows:
```py
import logging
import logging.config
logging.config.fileConfig('conf/logging.conf')
logger = logging.getLogger(__name__)
class User:
username = ""
password = ""
def __init__(self, username = "username", password = "password"):
global logger
logger.debug( "In User constructor" );
self.username = username
self.password = password
logger.debug( "Out User constructor" );
def getSomething(self) :
return "Something"
def login_user(self) :
logger.debug( "In login_user" );
logger.debug( "Out login_user" );
return "Logged In"
```
Test.py as follows:
```py
from User import User
import logging
import logging.config
from threading import Thread
logging.config.fileConfig('conf/logging.conf')
logger = logging.getLogger(__name__)
objectsCount = 1
logger.debug("In VolumeTest")
def thr_func(username, password):
logger.info("In thr_func")
user1 = User(username, password)
logger.info("Created user obj")
logger.info("Before login_user call")
token = user1.login_user()
logger.debug ( "Out thr_func")
def main() :
thr_func( "abc", "def" )
logger.info("In Main")
thr_objects = []
for i in range(objectsCount):
thread = Thread(target=thr_func, args=("subhayan", "MER2018"))
thr_objects.append(thread)
logger.info("Main : before running thread")
for i in range(objectsCount):
logger.info("Main : starting thread " + str(i))
thr_objects[i].start()
logger.info("Main : started thread " + str(i))
logger.info("Main : wait for the thread to finish")
for i in range(objectsCount):
thr_objects[i].join()
logger.info("Main : all done")
if __name__ == "__main__":
main()
```
logging.conf file as follows:
```py
[loggers]
keys=root
[handlers]
keys=logfile
[formatters]
keys=logfileformatter
[logger_root]
level=DEBUG
handlers=logfile
[formatter_logfileformatter]
format=%(asctime)s %(name)-12s: %(threadName)s : %(levelname)s : %(message)s
[handler_logfile]
#class=handlers.RotatingFileHandler
class=handlers.TimedRotatingFileHandler
#class=TimedCompressedRotatingFileHandler
level=DEBUG
#args=('testing.log','a',10,100)
args=('log/testing.log','d', 1, 9, None, False, False)
formatter=logfileformatter
```
I tried running it using "python Test.py"; however, there are no logs from the User file. So the question is: can't we have logging for the second file in this way? What am I doing wrong? If nothing is wrong, what is the best way to do this in Python?
|
2019/07/23
|
[
"https://Stackoverflow.com/questions/57159408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1300830/"
] |
I see a couple of things that may be the issue:
1) You're running 2 separate loggers: each file is instantiating its own logger pointed at the same file.
2) If you want everything in the same file, then create one logger and pass the reference to other modules via a global variable. I find it easier to create a log.py file that creates and writes the log file, and then import that module into the other modules.
3) You have a file handler and a formatter in your config file, but don't seem to set them in the code. Something like
```
handler = logging.handlers.RotatingFileHandler(LOG_FILENAME, ...
f = logging.Formatter('%(process)d %(asctime)s %(levelname)s: %(message)s')
handler.setFormatter(f)
logger.addHandler(handler)
```
Hope this helps!
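One more thing worth checking (my own addition, not from the answer above): `logging.config.fileConfig()` disables all pre-existing loggers unless told otherwise, and in the question it runs twice, once when `User` is imported and once in `Test.py`, so the second call silences the logger that `User` created. A minimal sketch of the fix:
```
import logging.config

# keep loggers created before this call (e.g. User's module-level logger) alive
logging.config.fileConfig('conf/logging.conf', disable_existing_loggers=False)
```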
|
Thanks tbitson for the pointers. Here is the changed code. The new logger.py (which the files below import via `from logger import logger`) is:
```
import logging
import logging.config
logging.config.fileConfig('conf/logging.conf')
logger = logging.getLogger(__name__)
```
The modified User.py is:
```
from logger import logger
class User:
username = ""
password = ""
def __init__(self, username = "username", password = "password"):
global logger
logger.debug( "In User constructor" );
self.username = username
self.password = password
logger.debug( "Out User constructor" );
def getSomething(self) :
return "Something"
def login_user(self) :
logger.debug( "In login_user" );
logger.debug( "Out login_user" );
return "Logged In"
```
The modified Test.py is:
```
from User import User
from logger import logger
from threading import Thread

objectsCount = 1
logger.debug("In VolumeTest")
def thr_func(username, password):
logger.info("In thr_func")
user1 = User(username, password)
logger.info("Created user obj")
logger.info("Before login_user call")
token = user1.login_user()
logger.debug ( "Out thr_func")
def main() :
thr_func( "abc", "def" )
logger.info("In Main")
thr_objects = []
for i in range(objectsCount):
thread = Thread(target=thr_func, args=("subhayan", "MER2018"))
thr_objects.append(thread)
logger.info("Main : before running thread")
for i in range(objectsCount):
logger.info("Main : starting thread " + str(i))
thr_objects[i].start()
logger.info("Main : started thread " + str(i))
logger.info("Main : wait for the thread to finish")
for i in range(objectsCount):
thr_objects[i].join()
logger.info("Main : all done")
if __name__ == "__main__":
main()
```
|
47,111,787
|
I'm trying to set and get keys from ElastiCache (memcached) from a python lambda function using Boto3. I can figure out how to get the endpoints but that's pretty much it. Is there some documentation out there that shows the entire process?
|
2017/11/04
|
[
"https://Stackoverflow.com/questions/47111787",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4891674/"
] |
It sounds like you are trying to interact with Memcached via Boto3. This is not possible. Boto3 is for interacting with the AWS API. You can manage your ElastiCache servers via the AWS API, but you can't interact with the Memcached software running on those servers. You need to use a Memcached client library like [python-memcached](https://pypi.python.org/pypi/python-memcached) in your Python code to actually get and set keys in your Memcached cluster.
Also, your Lambda function will need to reside in the same VPC as the ElastiCache node(s).
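For completeness, a hedged sketch of the get/set side using python-memcached; the endpoint below is a made-up placeholder, use your cluster's configuration endpoint:
```
import memcache  # pip install python-memcached

# hypothetical ElastiCache endpoint; replace with your own
mc = memcache.Client(['mycluster.abc123.cfg.use1.cache.amazonaws.com:11211'])

mc.set('greeting', 'hello', time=300)  # expires after 5 minutes
print(mc.get('greeting'))
```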
|
I had the exact timeout problem listed in the comment of the older post. My bug was in the security group for memcached. Here is the working version in Terraform:
```
resource "aws_security_group" "memcached" {
vpc_id = "${aws_vpc.dev.id}"
name = "memcached SG"
ingress {
from_port = "${var.memcached_port}"
to_port = "${var.memcached_port}"
protocol = "tcp"
cidr_blocks = ["${var.public_subnet_cidr}"]
}
egress {
from_port = "${var.memcached_port}"
to_port = "${var.memcached_port}"
protocol = "tcp"
cidr_blocks = ["${var.public_subnet_cidr}"]
}
tags = {
Name = "memcached SG"
}
}
```
I tested the connection by creating a EC2 instance in public subnet and do "telnet (input your cache node URL) 11211".
|
39,899,312
|
I am using the Tracer software package (<https://github.com/Teichlab/tracer>).
The program is invoked as followed:
`tracer assemble [options] <file_1> [<file_2>] <cell_name> <output_directory>`
The program runs on a single dataset and the output goes to `/<output_directory>/<cell_name>`
What I want to do now is run this program on multiple files. To do so this is what I do:
```bash
for filename in /home/tobias/tracer/datasets/test/*.fastq
do
echo "Processing $filename file..."
python tracer assemble --single_end --fragment_length 62 --fragment_sd 1 $filename Tcell_test output;
done
```
This works in principle, but as `cell_name` is static, every iteration overwrites the output from the previous one. How do I need to change my script so that the output folder gets the name of the input file?
For example: Input filename is `tcell1.fastq`. For this cell\_name should be `tcell1`. Next file is `tcell2.fastq` and cell\_name should be `tcell2`, and so on...
|
2016/10/06
|
[
"https://Stackoverflow.com/questions/39899312",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6932623/"
] |
I think this will do it, in bash, if I understand correctly -
```bash
for filename in /home/tobias/tracer/datasets/test/*.fastq
do
echo "Processing $filename file..."
basefilename="${filename##*/}" #<---
python tracer assemble --single_end --fragment_length 62 --fragment_sd 1 "$filename" "${basefilename%.fastq}" output;
# ^^^^^^^^^^^^^^^^^^^^^^^^
done
```
`${filename##*/}` removes the part up to the last `/`, and `${basefilename%.fastq}` removes the `.fastq` at the end.
|
From what I understood, the library you use writes its output to a predefined (non-configurable) directory; let's call it `output_dir`.
At each iteration you should rename that output directory.
So your code should be something like this (a sketch; the rename line derives the new name from the input file's basename):
```
for filename in /home/tobias/tracer/datasets/test/*.fastq
do
    echo "Processing $filename file..."
    python tracer assemble --single_end --fragment_length 62 --fragment_sd 1 $filename Tcell_test output;
    base="${filename##*/}"                    # strip the directory part
    mv output_dir "${base%.fastq}_output_dir" # rename per input file
done
```
|
69,760,313
|
Is there a more pythonic way to remove all duplicate elements from a list, while keeping the first and last element?
```
lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"]
occurence = [i for i, e in enumerate(lst) if e == "foo"]
to_remove = occurence[1:-1]
for i in to_remove:
del lst[i]
print(lst) # ['foo', 'bar', 'foobar', 'barfoo', 'foo']
```
Another approach which I like more.
```
for i, e in enumerate(lst[1:-1]):
if e == "foo":
del lst[i+1]
```
|
2021/10/28
|
[
"https://Stackoverflow.com/questions/69760313",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7661466/"
] |
Use a slice-assignment and unpack/consume a filter?
```
lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"]
lst[1:-1] = filter(lambda x: x != "foo", lst[1:-1])
print(lst)
```
Output:
```
['foo', 'bar', 'foobar', 'barfoo', 'foo']
```
|
You can construct a new list with your requirements using Python's set data structure and slicing like this:
```
lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"]
new_lst = [lst[0]] + list(set(lst[1:-1]) - {lst[0], lst[-1]}) + [lst[-1]]  # note: set order is arbitrary
```
|
69,760,313
|
Is there a more pythonic way to remove all duplicate elements from a list, while keeping the first and last element?
```
lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"]
occurence = [i for i, e in enumerate(lst) if e == "foo"]
to_remove = occurence[1:-1]
for i in to_remove:
del lst[i]
print(lst) # ['foo', 'bar', 'foobar', 'barfoo', 'foo']
```
Another approach which I like more.
```
for i, e in enumerate(lst[1:-1]):
if e == "foo":
del lst[i+1]
```
|
2021/10/28
|
[
"https://Stackoverflow.com/questions/69760313",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7661466/"
] |
Use a slice-assignment and unpack/consume a filter?
```
lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"]
lst[1:-1] = filter(lambda x: x != "foo", lst[1:-1])
print(lst)
```
Output:
```
['foo', 'bar', 'foobar', 'barfoo', 'foo']
```
|
Try below:
```
lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"]
first, last = lst[0], lst[-1]
seen = {first, last}
seen_add = seen.add
inner = (s for s in lst[1:-1] if not (s in seen or seen_add(s)))
result = [first, *inner, last]
```
Result:
```
['foo', 'bar', 'foobar', 'barfoo', 'foo']
```
|
69,760,313
|
Is there a more pythonic way to remove all duplicate elements from a list, while keeping the first and last element?
```
lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"]
occurence = [i for i, e in enumerate(lst) if e == "foo"]
to_remove = occurence[1:-1]
for i in to_remove:
del lst[i]
print(lst) # ['foo', 'bar', 'foobar', 'barfoo', 'foo']
```
Another approach which I like more.
```
for i, e in enumerate(lst[1:-1]):
if e == "foo":
del lst[i+1]
```
|
2021/10/28
|
[
"https://Stackoverflow.com/questions/69760313",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7661466/"
] |
Use a slice-assignment and unpack/consume a filter?
```
lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"]
lst[1:-1] = filter(lambda x: x != "foo", lst[1:-1])
print(lst)
```
Output:
```
['foo', 'bar', 'foobar', 'barfoo', 'foo']
```
|
According to your example, it's necessary to remove the duplicates by looking at the entire list and then add the first and last elements back:
```
lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"]
first, last = lst[0], lst[-1]
data = [first, *set([w for w in lst if w not in [first, last]]), last]
print(data)
```
Output:
```
['foo', 'bar', 'barfoo', 'foobar', 'foo']
```
|
69,760,313
|
Is there a more pythonic way to remove all duplicate elements from a list, while keeping the first and last element?
```
lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"]
occurence = [i for i, e in enumerate(lst) if e == "foo"]
to_remove = occurence[1:-1]
for i in to_remove:
del lst[i]
print(lst) # ['foo', 'bar', 'foobar', 'barfoo', 'foo']
```
Another approach which I like more.
```
for i, e in enumerate(lst[1:-1]):
if e == "foo":
del lst[i+1]
```
|
2021/10/28
|
[
"https://Stackoverflow.com/questions/69760313",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7661466/"
] |
Use a slice-assignment and unpack/consume a filter?
```
lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"]
lst[1:-1] = filter(lambda x: x != "foo", lst[1:-1])
print(lst)
```
Output:
```
['foo', 'bar', 'foobar', 'barfoo', 'foo']
```
|
You can try this:
```py
mylist = ["foo", "bar", "foobar", "foo", "barfoo", "foo"]
none_duplicated = [x for x in dict.fromkeys(mylist[1:-1]) if x not in (mylist[0], mylist[-1])]
print([mylist[0], *none_duplicated, mylist[-1]])  # ['foo', 'bar', 'foobar', 'barfoo', 'foo']
```
Actually, your question is not about removing duplicates, but about removing further occurrences of your first item in the list (assuming the first and last are the same). But if you want a more generic way, you first get a list of non-duplicated values, excluding the endpoints, and then add your first and last back.
|
66,819,423
|
I have homework where I'm asked to build Newton and Lagrange interpolation polynomials. I had no trouble with the Lagrange polynomial, but one problem arises with the Newton polynomial: while the Lagrange interpolation polynomial and the original function match completely, the Newton interpolation doesn't.
[Here is the plot.](https://i.stack.imgur.com/2couA.png)
If I remember correctly, Newton and Lagrange polynomial interpolation are different ways to represent the same polynomial and they should match the original function at the interval of interpolation completely.
I thought that the Newton coefficients were calculated wrongly, so I found another divided difference function. I tried both functions and [they gave me the same results.](https://i.stack.imgur.com/N3YJ7.png)
I'm stuck at this moment. I still think that there is something wrong with calculating divided differences functions, but I can't see the mistake.
Any suggestions, please?
Here is the code:
```
import numpy as np
from scipy.interpolate import lagrange
# func from https://pythonnumericalmethods.berkeley.edu/notebooks/chapter17.05-Newtons-Polynomial-Interpolation.html
def divided_diff(x, y):
n = len(y)
coef = np.zeros([n, n])
coef[:,0] = y
for j in range(1,n):
for i in range(n-j):
coef[i][j] = \
(coef[i+1][j-1] - coef[i][j-1]) / (x[i+j]-x[i])
return coef
def coeff(x, y):
n = len(x)
arr = y.copy()
for i in range(1, n):
arr[i:n] = (arr[i:n] - arr[i - 1]) / (x[i:n] - x[i - 1])
return arr
# 1st range
x_values = np.array([0, np.pi/6, np.pi/3, np.pi/2], float)
y_values = np.array([1, np.sqrt(3)/2, 1/2, 0], float)
print(coeff(x_values, y_values))
print(divided_diff(x_values, y_values)[0,:])
# Show polynomials
newton_coefficients = divided_diff(x_values, y_values)[0,:]
newton = np.poly1d(newton_coefficients[::-1])
lagrange_poly = lagrange(x_values, y_values)
print(f"Here is our Lagrange interpolating polynomial:\n{lagrange_poly}\n")
print(f"Here is our Newton interpolating polynomial:\n{newton}\n")
```
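For reference, my current reading of the problem (an addition; the divided differences themselves may be fine): they are coefficients of the Newton basis 1, (x-x0), (x-x0)(x-x1), ..., not of the plain monomials that `np.poly1d` expects, so the Newton form should be evaluated by nested multiplication:
```
def newton_eval(coef, x_data, x):
    # Horner-like evaluation of the Newton form
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - x_data[k]) + coef[k]
    return result

# newton_eval(newton_coefficients, x_values, x) should then match lagrange_poly(x)
```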
|
2021/03/26
|
[
"https://Stackoverflow.com/questions/66819423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11749578/"
] |
One object is created, of type `C`, which contains all of the members of `A`, `B`, and `C`.
|
Here is some additional information about the relation between a base-class object and a derived-class object:
Every derived object contains a base-class part to which a pointer or reference of the base-class type can be bound. That's the reason why a conversion from derived to base exists, while no implicit conversion from base to derived does.
A base-class object can exist either as an independent object or as part of a derived object.
When we initialize or assign an object of a base type from an object of a derived type, only the base-class part of the derived object is copied, moved, or assigned. The derived part of the object is ignored.
*see "C++ Primer Fifth Edition" §15.2.3*
|
66,819,423
|
I have homework where I'm asked to build Newton and Lagrange interpolation polynomials. I had no trouble with the Lagrange polynomial, but one problem arises with the Newton polynomial: while the Lagrange interpolation polynomial and the original function match completely, the Newton interpolation doesn't.
[Here is the plot.](https://i.stack.imgur.com/2couA.png)
If I remember correctly, Newton and Lagrange polynomial interpolation are different ways to represent the same polynomial and they should match the original function at the interval of interpolation completely.
I thought that the Newton coefficients were calculated wrongly, so I found another divided-difference function. I tried both functions and [they gave me the same results.](https://i.stack.imgur.com/N3YJ7.png)
I'm stuck at this point. I still think that there is something wrong with the divided-difference calculations, but I can't see the mistake.
Any suggestions, please?
Here is the code:
```
import numpy as np
from scipy.interpolate import lagrange
# func from https://pythonnumericalmethods.berkeley.edu/notebooks/chapter17.05-Newtons-Polynomial-Interpolation.html
def divided_diff(x, y):
n = len(y)
coef = np.zeros([n, n])
coef[:,0] = y
for j in range(1,n):
for i in range(n-j):
coef[i][j] = \
(coef[i+1][j-1] - coef[i][j-1]) / (x[i+j]-x[i])
return coef
def coeff(x, y):
n = len(x)
arr = y.copy()
for i in range(1, n):
arr[i:n] = (arr[i:n] - arr[i - 1]) / (x[i:n] - x[i - 1])
return arr
# 1st range
x_values = np.array([0, np.pi/6, np.pi/3, np.pi/2], float)
y_values = np.array([1, np.sqrt(3)/2, 1/2, 0], float)
print(coeff(x_values, y_values))
print(divided_diff(x_values, y_values)[0,:])
# Show polynomials
newton_coefficients = divided_diff(x_values, y_values)[0,:]
newton = np.poly1d(newton_coefficients[::-1])
lagrange_poly = lagrange(x_values, y_values)
print(f"Here is our Lagrange interpolating polynomial:\n{lagrange_poly}\n")
print(f"Here is our Newton interpolating polynomial:\n{newton}\n")
```
|
2021/03/26
|
[
"https://Stackoverflow.com/questions/66819423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11749578/"
] |
1. Only one object, of type `class C`, would be created.
2. So basically, in this case, the constructors of the base classes are called before the constructor of the derived class. In the end, the object of the derived class also contains the contents of the base classes.
|
Here is some additional information about the relation between a base-class object and a derived-class object:
Every derived object contains a base-class part to which a pointer or reference of the base-class type can be bound. That's the reason why a conversion from derived to base exists, and there is no implicit conversion from base to derived.
A base-class object can exist either as an independent object or as part of a derived object.
When we initialize or assign an object of a base type from an object of a derived type, only the base-class part of the derived object is copied, moved, or assigned. The derived part of the object is ignored.
*see "C++ Primer Fifth Edition" §15.2.3*
|
56,712,534
|
How do I append alpha values inside a nested list in Python?
```
nested_list = [['72010', 'PHARMACY', '-IV', 'FLUIDS', '7.95'], ['TOTAL',
'HOSPITAL', 'CHARGES', '6,720.92'],['PJ72010', 'WORD', 'FLUIDS',
'7.95']]
Expected_output:
[['72010', 'PHARMACY -IV FLUIDS', '7.95'], ['TOTAL HOSPITAL CHARGES', '6,720.92'],['PJ72010', 'WORD FLUIDS', '7.95']]
```
|
2019/06/22
|
[
"https://Stackoverflow.com/questions/56712534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11425561/"
] |
If you create a function that defines what you mean by "word", you can use [`itertools.groupby()`](https://docs.python.org/3/library/itertools.html#itertools.groupby) to group by this function. Then you can either append the `join()`ed results or `extend()`, depending on whether it's a group of words or numbers.
I'm inferring from your example that you are defining words as anything with no numbers, but you can adjust the function as you see fit:
```
from itertools import groupby

nested_list = [['72010', 'PHARMACY', '-IV', 'FLUIDS', '7.95'],
               ['TOTAL', 'HOSPITAL', 'CHARGES', '6,720.92'],
               ['PJ72010', 'WORD', 'FLUIDS', '7.95']]

# it's a word if it has no numerics
def word(s):
    return not any(c.isnumeric() for c in s)

def groupwords(s):
    sub = []
    for isword, v in groupby(s, key=word):
        if isword:
            sub.append(" ".join(v))  # merge consecutive words into one string
        else:
            sub.extend(v)            # keep numeric tokens as separate items
    return sub

res = [groupwords(l) for l in nested_list]
res
```
**Results:**
```
[['72010', 'PHARMACY -IV FLUIDS', '7.95'],
['TOTAL HOSPITAL CHARGES', '6,720.92'],
['PJ72010', 'WORD FLUIDS', '7.95']]
```
|
Go through each nested list and check each element of that list. If it is fully alphabetic, append it to a temporary variable. Once you find a numeric element, append both the temporary string and the numeric element.
Code:
```
nested_list = [['72010', 'PHARMACY', '-IV', 'FLUIDS', '7.95'],
               ['TOTAL', 'HOSPITAL', 'CHARGES', '6,720.92'],
               ['PJ72010', 'WORD', 'FLUIDS', '7.95']]

def is_alpha(token):
    # a token counts as "alpha" if it contains no digits
    for c in token:
        if '0' <= c <= '9':
            return False
    return True

answer_list = []
for row in nested_list:
    new_row = []
    temp = ""
    for token in row:
        if is_alpha(token):
            # accumulate consecutive alphabetic tokens into one string
            if temp:
                temp += " "
            temp += token
        else:
            # flush the accumulated words, then keep the numeric token
            if temp:
                new_row.append(temp)
                temp = ""
            new_row.append(token)
    if temp:
        new_row.append(temp)
    answer_list.append(new_row)

for row in answer_list:
    print(row)
```
Output will be:
```
['72010', 'PHARMACY -IV FLUIDS', '7.95']
['TOTAL HOSPITAL CHARGES', '6,720.92']
['PJ72010', 'WORD FLUIDS', '7.95']
```
|
34,902,307
|
I have a python script that processes an XML file each day (it is transferred via SFTP to a remote directory, then temporarily copied to a local directory) and stores its information in a MySQL database.
One of my parameters for the file is set to "date=today" so that the correct file is processed each day. This works fine and each day I successfully store new file information into the database.
What I need help on is passing a Linux command line argument to run a file for a specific day (in case a previous day's file needs to be rerun). I can manually edit my code to make this work but this will not be an option once the project is in production.
In addition, I need to be able to pass in a command line argument for "date=\*" and have the script run every file in my remote directory. Currently, this parameter will successfully process only a single file based on alphabetic priority.
If my two questions should be asked separately, my mistake, and I'll edit this question to just cover one of them. Example of my code below:
```
today = datetime.datetime.now().strftime('%Y%m%d')
file_var = local_file_path + connect_to_sftp.sftp_get_file(
local_file_path=local_file_path,
sftp_host=sftp_host,
sftp_username=sftp_username,
sftp_directory=sftp_directory,
date=today)
ET = xml.etree.ElementTree.parse(file_var).getroot()
def parse_file():
for node in ET.findall(.......)
```
In another module:
```
def sftp_get_file(local_file_path, sftp_host, sftp_username, sftp_directory, date):
pysftp.Connection(sftp_host, sftp_username)
# find file in remote directory with given suffix
remote_file = glob.glob(sftp_directory + '/' + date + '_file_suffix.xml')
# strip directory name from full file name
file_name_only = remote_file[0][len(sftp_directory):]
# set local path to hold new file
local_path = local_file_path
# combine local path with filename that was loaded
local_file = local_path + file_name_only
# pull file from remote directory and send to local directory
shutil.copyfile(remote_file[0], local_file)
return file_name_only
```
So the SFTP module reads the file, transfers it to the local directory, and returns the file name to be used in the parsing module. The parsing module passes in the parameters and does the rest of the work.
What I need to be able to do, on certain occasions, is override the parameter that says "date=today" and instead say "date=20151225", for example, but I must do this through a **Linux** command line argument.
In addition, if I currently enter the parameter of "date=\*" it only runs the script for the first file that matches that parameter. I need the script to run for ALL files that match that parameter. Any help is much appreciated. Happy to answer any questions to improve clarity.
|
2016/01/20
|
[
"https://Stackoverflow.com/questions/34902307",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5451573/"
] |
You can use the `sys` module and pass the date as a command line argument.
That would be:
```
import sys
today = str(sys.argv[1]) if len(sys.argv) > 1 else datetime.datetime.now().strftime('%Y%m%d')
```
If a value is given as the first argument, the `today` variable will be the value from the command line; otherwise, if no argument is given, it will be the current date from `datetime`.
For second question,
```
file_name_only = remote_file[0][len(sftp_directory):]
```
You are only accessing the first element, but glob might return several files when you use the `*` wildcard. You must iterate over the `remote_file` list and copy all of them, e.g. as in the sketch below.
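For instance, a minimal sketch of that loop, reusing the names from the `sftp_get_file` function above but returning a list of file names instead of a single one (a sketch only, not a drop-in replacement):
```
import glob
import shutil

def sftp_get_files(local_file_path, sftp_directory, date):
    # glob may match several remote files when date is "*"
    remote_files = glob.glob(sftp_directory + '/' + date + '_file_suffix.xml')
    copied = []
    for remote_file in remote_files:
        # strip the directory name and build the local destination path
        file_name_only = remote_file[len(sftp_directory):]
        shutil.copyfile(remote_file, local_file_path + file_name_only)
        copied.append(file_name_only)
    return copied
```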
|
You can use [argparse](https://docs.python.org/3/library/argparse.html) to consume command line arguments. You will have to check whether a specific date was passed and use it instead of the current date:
```
if args.date_to_run:
today = args.date_to_run
else:
today = datetime.datetime.now().strftime('%Y%m%d')
```
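For completeness, a minimal sketch of the parser setup that would produce `args.date_to_run` (the option name is just an illustration):
```
import argparse
import datetime

parser = argparse.ArgumentParser()
parser.add_argument('--date-to-run', dest='date_to_run',
                    help='date in YYYYMMDD format, e.g. 20151225')
args = parser.parse_args()

today = args.date_to_run if args.date_to_run else datetime.datetime.now().strftime('%Y%m%d')
```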
For the second part of your question you can use something like <https://docs.python.org/2/library/fnmatch.html> to match multiple files based on a pattern.
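As a rough illustration of that, `fnmatch` can filter a directory listing against the same kind of pattern (the directory and suffix here are placeholders):
```
import fnmatch
import os

date = '*'  # or a specific date such as '20151225'
pattern = date + '_file_suffix.xml'
matches = [f for f in os.listdir('/path/to/remote/dir')
           if fnmatch.fnmatch(f, pattern)]
for name in matches:
    print(name)
```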
|
73,061,728
|
I'm trying to get a better understanding of python imports and namespaces. Take the example of a module testImport.py:
```py
# testImport.py
var1 = 1
def fn1():
return var1
```
if I import this like this:
```py
from testImport import *
```
I can inspect `var1`
```py
>>> var1
1
```
and `fn1()` returns
```py
>>> fn1()
1
```
Now, I can change the value of var1:
```py
>>> var1 = 2
>>> var1
2
```
but `fn1()` still returns 1:
```py
>>> fn1()
1
```
This is different to if I import properly:
```py
>>> import testImport as t
>>> t.var1 = 2
>>> t.fn1()
2
```
What's going on here? How does python 'remember' the original version of `var1` in `fn1` when using `import *`?
|
2022/07/21
|
[
"https://Stackoverflow.com/questions/73061728",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13258525/"
] |
I'm not sure that you would be able to modify the root node as you are trying to do here. The namespace URI should be `http://www.w3.org/2000/xmlns/p` if it is `p` that you are trying to use as the prefix, and you would use `p:root` as the qualifiedName, but that would result in the root node being modified like this: `<root xmlns:p="http://www.w3.org/2000/xmlns/p" p:root="p"/>`
A possible alternative approach to get the output you seek might be to add a new node - append all the child nodes from the original root to it and delete the original. Admittedly it is `hacky` and a better solution might well exist...
Given the source XML as:
```
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<root>
<domain>cdiinvest.com.br</domain>
<domain>savba.sk</domain>
<domain>laboratoriodiseno.cl</domain>
<domain>upc.ro</domain>
<domain>nickel.twosuperior.co.uk</domain>
<domain>hemenhosting.org</domain>
<domain>hostslim.eu</domain>
</root>
```
Then, to modify the source XML:
```
# local XML source file
$file='xml/netblock.xml';
# new node namespace properties
$nsuri='http://www.w3.org/2000/xmlns/p';
$nsname='p:root';
$nsvalue=null;
# create the DOMDocument and load the XML
libxml_use_internal_errors( true );
$dom=new DOMDocument;
$dom->formatOutput=true;
$dom->validateOnParse=true;
$dom->preserveWhiteSpace=false;
$dom->strictErrorChecking=false;
$dom->recover=true;
$dom->load( $file );
libxml_clear_errors();
# create the new root node
$new=$dom->createElementNS( $nsuri, $nsname, $nsvalue );
$dom->appendChild( $new );
# find the original Root
$root=$dom->documentElement;
# iterate backwards through the childNodes - inserting into the new root container in the original order
for( $i=$root->childNodes->length-1; $i >= 0; $i-- ){
$new->insertBefore( $root->childNodes[ $i ],$new->firstChild );
}
# remove original root
$root->parentNode->removeChild( $root );
# save the XML ( or, as here, print out modified XML )
printf('<textarea cols=80 rows=10>%s</textarea>', $dom->saveXML() );
```
Which results in the following output:
```
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<p:root xmlns:p="http://www.w3.org/2000/xmlns/p">
<domain>cdiinvest.com.br</domain>
<domain>savba.sk</domain>
<domain>laboratoriodiseno.cl</domain>
<domain>upc.ro</domain>
<domain>nickel.twosuperior.co.uk</domain>
<domain>hemenhosting.org</domain>
<domain>hostslim.eu</domain>
</p:root>
```
|
The root element is the root element. You change it by creating a new document with the root element of your choice.
If the job that's left is to import all the root element's children from another document, then so be it (SimpleXML is not that fitting for that; DOMDocument is, see [`DOMDocument::importNode`](https://www.php.net/manual/en/domdocument.importnode.php) and mind the `$deep` flag).
If you don't actually want to change the root element but rewrite the XML then [`XMLReader`](https://www.php.net/XMLReader) / [`XMLWriter`](https://www.php.net/XMLWriter) is likely what you're looking for.
(This is not entirely clear from your question, as it is missing the example's input document.)
Compare as well with the reference Q&A [How do you parse and process HTML/XML in PHP?](https://stackoverflow.com/q/3577641/367456).
|
51,943,359
|
I have this part in the `docker-compose` file that mounts the `angular-cli` frontend and a `scripts` directory as volumes:
```
www:
build: ./www
volumes:
- ./www/frontend:/app
- ./www/scripts:/scripts
command: /scripts/init-dev.sh
ports:
- "4200:4200"
- "49153:49153"
```
The `Dockerfile` inside the `www` directory sets up node and runs `npm install`.
```
FROM node:10-alpine
RUN apk add --no-cache git make curl gcc g++ python2
RUN npm config set user 0
RUN npm config set unsafe-perm true
RUN npm install
```
And the script:
```
#!/bin/bash
echo "Serving.."
cd /app && npm run-script start
```
I'm running the `docker-compose down && docker-compose build && docker-compose up` command:
```
www_1 | standard_init_linux.go:190: exec user process caused "no such file or directory"
```
I assume the error comes from the `command` line.
|
2018/08/21
|
[
"https://Stackoverflow.com/questions/51943359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2111400/"
] |
Two options:
1. also start your services with the InProcessServerBuilder, and use the InProcessChannelBuilder to communicate with it, or
2. just contact the server over "localhost"
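For option 2, a minimal sketch of the localhost approach using Python's `grpcio` (the port and the commented-out stub name are illustrative assumptions, not part of any particular API):
```
import grpc

# connect to the locally running gRPC server over the loopback interface
channel = grpc.insecure_channel("localhost:50051")
# a stub generated from your .proto would then be built on this channel:
# stub = my_service_pb2_grpc.MyServiceStub(channel)
```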
|
If you want a CLI type of call to your local gRPC server, you could check out <https://github.com/fullstorydev/grpcurl>
|
42,990,994
|
I have this code
```
python_data_struct = {
'amazon_app_id': amazon_app_id,
'librivox_rest_url': librivox_rest_url,
'librivox_id': librivox_id,
'top': top,
'pacakge': 'junk',
'version': 'junk',
'password': password,
'description': description,
'long_description': long_description,
'make_audiobook::sftp_script::appname': '\"%{::appname}\"'
}
try:
myyaml = yaml.dump(python_data_struct)
except:
e = sys.exc_info()[0]
print "Error on %s Error [%s]" % ( librivox_rest_url, e )
sys.exit(5)
write_file( myyaml, hiera_dir + '/' + appname + '.yaml' );
```
It outputs yaml that looks like this:
```
{amazon_app_id: junk, description: !!python/unicode ' Riley was an American writer
known as the "Hoosier poet", and made a start writing newspaper verse in Hoosier
dialect for the Indianapolis Journal in 1875. His favorite authors were Burns
and Dickens. This collection of poems is a romanticized and mostly boy-centered
paean to a 19th century rural American working-class childhood. (Summary by Val
Grimm)', librivox_id: '1000', librivox_rest_url: 'https://librivox.org/api/feed/audiobooks/id/1000/extended/1/format/json',
long_description: !!python/unicode "\n Riley was an American writer known as the\
\ \"Hoosier poet\", and made a start writing newspaper verse in Hoosier dialect\
\ for the Indianapolis Journal in 1875. His favorite authors were Burns and Dickens.\
\ This collection of poems is a romanticized and mostly boy-centered paean to\
\ a 19th century rural American working-class childhood. (Summary by Val Grimm)\n\
\nThe \"Selected Riley Child-Rhymes\" App will not use up your data plan's minutes\
\ by downloading the audio files (mp3) over and over. You will download load all\
\ the data when you install the App. The \"Selected Riley Child-Rhymes\" App works\
\ even in airplane mode and in places without Wi-Fi or phone network access! So\
\ you can listen to your audio book anywhere anytime! The \"Selected Riley Child-Rhymes\"\
\ App automatically pauses when you make or receive a phone call and automatically\
\ resumes after the call ends. The \"Selected Riley Child-Rhymes\" App will continue\
\ to play until you exit the App or pause the reading so it is perfect for listening\
\ in bed, at the gym, or while driving into work.\" \n", 'make_audiobook::sftp_script::appname': '"%{::appname}"',
pacakge: junk, password: junk, top: junk, version: junk}
```
It is hard to see but the particular key/value pair that is a problem is this one:
```
'make_audiobook::sftp_script::appname': '"%{::appname}"',
```
I need it to be this:
```
'make_audiobook::sftp_script::appname': "%{::appname}",
```
Just double quotes around `%{::appname}`. I cannot figure out what to do in my Python code. Please help :)
|
2017/03/24
|
[
"https://Stackoverflow.com/questions/42990994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/759991/"
] |
You can make function calls on `ngSubmit`:
`<form class="well" (ngSubmit)="addUserModal.hide(); addUser(model); userForm.reset()" #userForm="ngForm">`
|
I haven't used AngularJS, but the following works for me in Angular 2, if you're able to use jQuery:
`$(".modal").modal("hide")`
|
42,990,994
|
I have this code
```
python_data_struct = {
'amazon_app_id': amazon_app_id,
'librivox_rest_url': librivox_rest_url,
'librivox_id': librivox_id,
'top': top,
'pacakge': 'junk',
'version': 'junk',
'password': password,
'description': description,
'long_description': long_description,
'make_audiobook::sftp_script::appname': '\"%{::appname}\"'
}
try:
myyaml = yaml.dump(python_data_struct)
except:
e = sys.exc_info()[0]
print "Error on %s Error [%s]" % ( librivox_rest_url, e )
sys.exit(5)
write_file( myyaml, hiera_dir + '/' + appname + '.yaml' );
```
It outputs yaml that looks like this:
```
{amazon_app_id: junk, description: !!python/unicode ' Riley was an American writer
known as the "Hoosier poet", and made a start writing newspaper verse in Hoosier
dialect for the Indianapolis Journal in 1875. His favorite authors were Burns
and Dickens. This collection of poems is a romanticized and mostly boy-centered
paean to a 19th century rural American working-class childhood. (Summary by Val
Grimm)', librivox_id: '1000', librivox_rest_url: 'https://librivox.org/api/feed/audiobooks/id/1000/extended/1/format/json',
long_description: !!python/unicode "\n Riley was an American writer known as the\
\ \"Hoosier poet\", and made a start writing newspaper verse in Hoosier dialect\
\ for the Indianapolis Journal in 1875. His favorite authors were Burns and Dickens.\
\ This collection of poems is a romanticized and mostly boy-centered paean to\
\ a 19th century rural American working-class childhood. (Summary by Val Grimm)\n\
\nThe \"Selected Riley Child-Rhymes\" App will not use up your data plan's minutes\
\ by downloading the audio files (mp3) over and over. You will download load all\
\ the data when you install the App. The \"Selected Riley Child-Rhymes\" App works\
\ even in airplane mode and in places without Wi-Fi or phone network access! So\
\ you can listen to your audio book anywhere anytime! The \"Selected Riley Child-Rhymes\"\
\ App automatically pauses when you make or receive a phone call and automatically\
\ resumes after the call ends. The \"Selected Riley Child-Rhymes\" App will continue\
\ to play until you exit the App or pause the reading so it is perfect for listening\
\ in bed, at the gym, or while driving into work.\" \n", 'make_audiobook::sftp_script::appname': '"%{::appname}"',
pacakge: junk, password: junk, top: junk, version: junk}
```
It is hard to see but the particular key/value pair that is a problem is this one:
```
'make_audiobook::sftp_script::appname': '"%{::appname}"',
```
I need it to be this:
```
'make_audiobook::sftp_script::appname': "%{::appname}",
```
Just double quotes around `%{::appname}`. I cannot figure out what to do in my Python code. Please help :)
|
2017/03/24
|
[
"https://Stackoverflow.com/questions/42990994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/759991/"
] |
I haven't used AngularJS, but the following works for me in Angular 2, if you're able to use jQuery:
`$(".modal").modal("hide")`
|
Fire a button click on the submit event in AngularJS.
```
<input id="quemodalcancel" type="submit" value="Cancel" class="btn blue_btn" data-dismiss="modal" aria-hidden="true">
$scope.editProduct = function(item){
// submit button code
document.querySelector('#quemodalcancel').click();
}
```
|
42,990,994
|
I have this code
```
python_data_struct = {
'amazon_app_id': amazon_app_id,
'librivox_rest_url': librivox_rest_url,
'librivox_id': librivox_id,
'top': top,
'pacakge': 'junk',
'version': 'junk',
'password': password,
'description': description,
'long_description': long_description,
'make_audiobook::sftp_script::appname': '\"%{::appname}\"'
}
try:
myyaml = yaml.dump(python_data_struct)
except:
e = sys.exc_info()[0]
print "Error on %s Error [%s]" % ( librivox_rest_url, e )
sys.exit(5)
write_file( myyaml, hiera_dir + '/' + appname + '.yaml' );
```
It outputs yaml that looks like this:
```
{amazon_app_id: junk, description: !!python/unicode ' Riley was an American writer
known as the "Hoosier poet", and made a start writing newspaper verse in Hoosier
dialect for the Indianapolis Journal in 1875. His favorite authors were Burns
and Dickens. This collection of poems is a romanticized and mostly boy-centered
paean to a 19th century rural American working-class childhood. (Summary by Val
Grimm)', librivox_id: '1000', librivox_rest_url: 'https://librivox.org/api/feed/audiobooks/id/1000/extended/1/format/json',
long_description: !!python/unicode "\n Riley was an American writer known as the\
\ \"Hoosier poet\", and made a start writing newspaper verse in Hoosier dialect\
\ for the Indianapolis Journal in 1875. His favorite authors were Burns and Dickens.\
\ This collection of poems is a romanticized and mostly boy-centered paean to\
\ a 19th century rural American working-class childhood. (Summary by Val Grimm)\n\
\nThe \"Selected Riley Child-Rhymes\" App will not use up your data plan's minutes\
\ by downloading the audio files (mp3) over and over. You will download load all\
\ the data when you install the App. The \"Selected Riley Child-Rhymes\" App works\
\ even in airplane mode and in places without Wi-Fi or phone network access! So\
\ you can listen to your audio book anywhere anytime! The \"Selected Riley Child-Rhymes\"\
\ App automatically pauses when you make or receive a phone call and automatically\
\ resumes after the call ends. The \"Selected Riley Child-Rhymes\" App will continue\
\ to play until you exit the App or pause the reading so it is perfect for listening\
\ in bed, at the gym, or while driving into work.\" \n", 'make_audiobook::sftp_script::appname': '"%{::appname}"',
pacakge: junk, password: junk, top: junk, version: junk}
```
It is hard to see but the particular key/value pair that is a problem is this one:
```
'make_audiobook::sftp_script::appname': '"%{::appname}"',
```
I need it to be this:
```
'make_audiobook::sftp_script::appname': "%{::appname}",
```
Just double quotes around `%{::appname}`. I cannot figure out what to do in my Python code. Please help :)
|
2017/03/24
|
[
"https://Stackoverflow.com/questions/42990994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/759991/"
] |
You can make function calls on `ngSubmit`:
`<form class="well" (ngSubmit)="addUserModal.hide(); addUser(model); userForm.reset()" #userForm="ngForm">`
|
Fire a button click on the submit event in AngularJS.
```
<input id="quemodalcancel" type="submit" value="Cancel" class="btn blue_btn" data-dismiss="modal" aria-hidden="true">
$scope.editProduct = function(item){
// submit button code
document.querySelector('#quemodalcancel').click();
}
```
|
59,641,747
|
I cannot tell whether it's bash-related or a python subprocess issue, but the results are different:
```
>>> subprocess.Popen("echo $HOME", shell=True, stdout=subprocess.PIPE).communicate()
(b'/Users/mac\n', None)
>>> subprocess.Popen(["echo", "$HOME"], shell=True, stdout=subprocess.PIPE).communicate()
(b'\n', None)
```
Why is it just a newline the second time? Where are the arguments falling off?
|
2020/01/08
|
[
"https://Stackoverflow.com/questions/59641747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6519078/"
] |
Use fifo format, which can reconnect.
My working example:
```
-f fifo -fifo_format flv \
-drop_pkts_on_overflow 1 -attempt_recovery 1 -recover_any_error 1 \
rtmp://bla.bla/bla
```
|
+1 for the accepted answer, but:
In FFmpeg, an encoder auto-detects its parameters based on the selected output format. Here the output format is unknown (that's correct for formats like "fifo" and "tee"), so the encoder won't have all parameters set up the same as if using the "flv" output format.
For example: Wowza Streaming Engine will report an error while trying to publish that RTMP stream:
```
H264Utils.decodeAVCC : java.lang.ArrayIndexOutOfBoundsException: 9
```
To overcome that, you should add the `-flags +global_header` option.
This should help for @Matt's problem.
|
58,469,032
|
I am starting to learn python and I don't know much about programming right now. In the future, I want to build android applications. Python looks interesting to me.
Can I use python for building Android applications?
|
2019/10/20
|
[
"https://Stackoverflow.com/questions/58469032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11109516/"
] |
You can build Android apps in Python, but it's not as powerful as Android Studio, and apps made with Python take more space and are less memory-efficient. But the apps work well...
There are several ways to use Python on Android:
* BeeWare. BeeWare is a collection of tools for building native user interfaces. ...
* Chaquopy. Chaquopy is a plugin for Android Studio's Gradle-based build system. ...
* Kivy. Kivy is a cross-platform OpenGL-based user interface toolkit. ...
* Pyqtdeploy. ...
* QPython. ...
* SL4A. ...
* PySide.
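As a taste of one of these, here is a minimal Kivy "hello world" (a sketch only; packaging it for Android additionally requires a tool such as buildozer):
```
from kivy.app import App
from kivy.uix.label import Label

class HelloApp(App):
    def build(self):
        # the widget returned here becomes the root of the UI
        return Label(text="Hello from Kivy")

if __name__ == "__main__":
    HelloApp().run()
```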
|
As of now, the official Android development languages are **Kotlin** and **Java** in the `Android Studio` IDE, with the addition of **Dart** for cross-platform development with the `Flutter` SDK. I would advise you to stick to whichever you find easiest for your needs. That said, Python is more widely accepted in the domains of data science/machine learning, so your knowledge of it is still a big plus.
|
58,469,032
|
I am starting to learn python and I don't know much about programming right now. In the future, I want to build android applications. Python looks interesting to me.
Can I use python for building Android applications?
|
2019/10/20
|
[
"https://Stackoverflow.com/questions/58469032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11109516/"
] |
You can build Android apps in Python, but it's not as powerful as Android Studio, and apps made with Python take more space and are less memory-efficient. But the apps work well...
There are several ways to use Python on Android:
* BeeWare. BeeWare is a collection of tools for building native user interfaces. ...
* Chaquopy. Chaquopy is a plugin for Android Studio's Gradle-based build system. ...
* Kivy. Kivy is a cross-platform OpenGL-based user interface toolkit. ...
* Pyqtdeploy. ...
* QPython. ...
* SL4A. ...
* PySide.
|
You could use Python as the back end and Vue Native as the front end. Vue Native can be exported as a native Android application and put in the app store. The front end will call APIs written in the back end (in your case, written in Python). Check out Vue Native plus a Flask/Django framework.
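As a minimal sketch of such a back end in Flask (the endpoint and data are illustrative), the front end would simply fetch this JSON over HTTP:
```
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/items")
def items():
    # the Vue Native front end calls this endpoint and renders the result
    return jsonify(["item1", "item2"])

if __name__ == "__main__":
    app.run()
```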
|
74,054,668
|
How to convert a `.csv` file to `.npy` efficiently?
--------------------------------------------------
I've tried:
```
import numpy as np
filename = "myfile.csv"
vec =np.loadtxt(filename, delimiter=",")
np.save(f"{filename}.npy", vec)
```
While the above works for smallish files, the actual `.csv` file I'm working on has ~12 million lines with 1024 columns, so it takes quite a while to load everything into RAM before converting it into the `.npy` format.
### Q (Part 1): Is there some way to load/convert a `.csv` to `.npy` efficiently for large CSV file?
The above code snippet is similar to the answer from [Convert CSV to numpy](https://stackoverflow.com/questions/34932210/convert-csv-to-numpy) but that won't work for ~12M x 1024 matrix.
### Q (Part 2): If there isn't any way to load/convert a `.csv` to `.npy` efficiently, is there some way to iteratively read the `.csv` file into `.npy` efficiently?
Also, there's an answer here <https://stackoverflow.com/a/53558856/610569> to save the csv file as a numpy array iteratively. But it seems like `np.vstack` isn't the best solution when reading the file. The accepted answer there suggests hdf5, but that format is not the main objective of this question, and the hdf5 format isn't desired in my use-case since I have to read it back into a numpy array afterwards.
### Q (Part 3): If part 1 and part 2 are not possible, are there other efficient storage formats (e.g. tensorstore) that can store the data and efficiently convert it to a numpy array when loading?
There is another library, `tensorstore`, that seems to handle arrays efficiently, with support for conversion to a numpy array when read: <https://google.github.io/tensorstore/python/tutorial.html>. But somehow there isn't any information on how to save the `tensor`/array without the exact dimensions; all of the examples seem to include configurations like `'dimensions': [1000, 20000],`.
Unlike the HDF5, the tensorstore doesn't seem to have reading overhead issues when converting to numpy, from docs:
>
> Conversion to an numpy.ndarray also implicitly performs a synchronous read (which hits the in-memory cache since the same region was just retrieved)
>
>
>
|
2022/10/13
|
[
"https://Stackoverflow.com/questions/74054668",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
Nice question; informative in itself.
I understand you want to have the whole data set/array in memory, eventually, as a NumPy array. I assume, then, that you have enough (RAM) memory to host such an array -- 12M x 1K.
I don't specifically know how `np.loadtxt` (`genfromtxt`) operates behind the scenes, so I will tell you how I *would* do it (after trying what you did).
### Reasoning about memory...
Notice that a simple boolean array will cost ~12 GBytes of memory:
```py
>>> print("{:.1E} bytes".format(
np.array([True]).itemsize * 12E6 * 1024
))
1.2E+10 bytes
```
And this is for a *Boolean* data type. Most likely, you have -- what -- a dataset of Integer, Float? The size may increase quite significantly:
```py
>>> np.array([1], dtype=bool).itemsize
1
>>> np.array([1], dtype=int).itemsize
8
>>> np.array([1], dtype=float).itemsize
8
```
>
> *It's a lot of memory* (which you know, just want to emphasize).
>
>
>
At this point, I would like to point out possible *swapping* of the working memory. You may have enough physical (RAM) memory in your machine, but if there is not enough *free* memory, your system will use *swap* memory (i.e., disk) to keep your system stable & have the work done. The cost you pay is clear: reading/writing from/to the disk is very slow.
**My point so far is**: check the data type of your dataset, estimate the size of your future array, and guarantee you have that minimum amount of RAM memory available.
### I/O text
Considering you do have all the (RAM) memory necessary to host the whole numpy array: I would then loop over the whole (~12M lines) text file, filling the pre-existing array row-by-row.
More precisely, I would have the (big) array already instantiated before starting to read the file. Only then, I would read each line, split the columns, give them to [`np.asarray`](https://numpy.org/devdocs/reference/generated/numpy.asarray.html) and assign those (1024) values to each respective row of the *output* array.
>
> The looping over the file is slow, yes. The thing here is that you limit (and control) the amount of memory being used. Roughly speaking, the big objects consuming your memory are the "output" (big) array and the "line" (1024) array. Sure, there is quite a considerable amount of memory consumed in each loop by the temporary objects created while reading (text!) values, splitting them into list elements and casting them to an array. Still, it's something that will remain largely constant during the whole ~12M lines.
>
>
>
So, **the steps I would go through are**:
```
0) estimate and guarantee enough RAM memory available
1) instantiate (np.empty or np.zeros) the "output" array
2) loop over "input.txt" file, create a 1D array from each line "i"
3) assign the line values/array to row "i" of "output" array
```
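Concretely, a minimal sketch of those steps (file names are placeholders, and it assumes the row/column counts are known up front, as they are here):
```py
import numpy as np

n_rows, n_cols = 12_000_000, 1024
out = np.empty((n_rows, n_cols), dtype=np.float64)   # step 1

with open("input.txt") as f:                         # step 2
    for i, line in enumerate(f):
        # step 3: one comma-separated line becomes one row of the output
        out[i] = np.asarray(line.split(","), dtype=np.float64)

np.save("output.npy", out)
```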
Sure enough, you can even make it parallel: while text files cannot be randomly accessed for reading/writing, you can easily split them (see [How can I split one text file into multiple \*.txt files?](https://stackoverflow.com/q/19031144/687896)) and -- if *fun* is at the table -- read the pieces in parallel, if that time is critical.
Hope that helps.
|
TL;DR
=====
Exporting to a format other than `.npy` seems inevitable unless your machine is able to handle the size of the data in-memory, as described in [@Brandt's answer](https://stackoverflow.com/a/74055562/610569).
---
Reading the data, then processing it (Kinda answering Q part 2)
===============================================================
To handle data sizes larger than what the RAM can hold, one would often resort to libraries that perform "**out-of-core**" computation, e.g. [`turicreate.SFrame`](https://apple.github.io/turicreate/docs/api/generated/turicreate.SFrame.html), [`vaex`](https://vaex.readthedocs.io/en/docs/example_io.html#In-memory-data-representations) or [`dask`](https://dask.pydata.org/en/latest/dataframe.html). These libraries are able to lazily load the `.csv` files into dataframes and process them by chunks when evaluated.
```
from turicreate import SFrame
filename = "myfile.csv"
sf = SFrame.read_csv(filename)
sf.apply(...) # Trying to process the data
```
or
```
import vaex
filename = "myfile.csv"
df = vaex.from_csv(filename,
convert=True,
chunk_size=50_000_000)
df.apply(...)
```
Converting the read data into numpy array (kinda answering Q part 1)
====================================================================
While out-of-core libraries can read and process the data efficiently, converting into numpy is an "**in-memory**" operation; the machine needs to have enough RAM to fit all the data.
The `turicreate.SFrame.to_numpy` documentation writes:
>
> Converts this SFrame to a numpy array
>
>
> This operation will construct a numpy array in memory. Care must be taken when size of the returned object is big.
>
>
>
And the `vaex` documentation writes:
>
> In-memory data representations
>
>
> One can construct a Vaex DataFrame from a variety of in-memory data representations.
>
>
>
And `dask` best practices actually reimplement their own array objects, which are simpler than numpy arrays, see <https://docs.dask.org/en/stable/array-best-practices.html>. But when going through the docs, it seems like the formats the dask array gets saved in are not `.npy` but various other formats.
Writing the file into non-`.npy` versions (answering Q Part 3)
==============================================================
Given that numpy arrays are inevitably in-memory, trying to save the data into one single `.npy` isn't the most viable option.
Different libraries seem to have different solutions for storage. E.g.
* `vaex` saves the data into `hdf5` by default if the `convert=True` argument is set when data is read through `vaex.from_csv()`
* `sframe` saves the data into their [own binary format](https://apple.github.io/turicreate/docs/api/generated/turicreate.SFrame.save.html#turicreate.SFrame.save)
* `dask` [export functions](https://docs.dask.org/en/stable/dataframe-create.html) save `to_hdf()` and `to_parquet()` format
|
74,054,668
|
How to convert a `.csv` file to `.npy` efficiently?
--------------------------------------------------
I've tried:
```
import numpy as np
filename = "myfile.csv"
vec =np.loadtxt(filename, delimiter=",")
np.save(f"{filename}.npy", vec)
```
While the above works for smallish files, the actual `.csv` file I'm working on has ~12 million lines with 1024 columns, so it takes quite a while to load everything into RAM before converting it into the `.npy` format.
### Q (Part 1): Is there some way to load/convert a `.csv` to `.npy` efficiently for large CSV file?
The above code snippet is similar to the answer from [Convert CSV to numpy](https://stackoverflow.com/questions/34932210/convert-csv-to-numpy) but that won't work for ~12M x 1024 matrix.
### Q (Part 2): If there isn't any way to load/convert a `.csv` to `.npy` efficiently, is there some way to iteratively read the `.csv` file into `.npy` efficiently?
Also, there's an answer here <https://stackoverflow.com/a/53558856/610569> to save the csv file as a numpy array iteratively. But it seems like `np.vstack` isn't the best solution when reading the file. The accepted answer there suggests hdf5, but that format is not the main objective of this question, and the hdf5 format isn't desired in my use-case since I have to read it back into a numpy array afterwards.
### Q (Part 3): If part 1 and part 2 are not possible, are there other efficient storage formats (e.g. tensorstore) that can store the data and efficiently convert it to a numpy array when loading?
There is another library, `tensorstore`, that seems to handle arrays efficiently, with support for conversion to a numpy array when read: <https://google.github.io/tensorstore/python/tutorial.html>. But somehow there isn't any information on how to save the `tensor`/array without the exact dimensions; all of the examples seem to include configurations like `'dimensions': [1000, 20000],`.
Unlike the HDF5, the tensorstore doesn't seem to have reading overhead issues when converting to numpy, from docs:
>
> Conversion to an numpy.ndarray also implicitly performs a synchronous read (which hits the in-memory cache since the same region was just retrieved)
>
>
>
|
2022/10/13
|
[
"https://Stackoverflow.com/questions/74054668",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
Nice question; informative in itself.
I understand you want to have the whole data set/array in memory, eventually, as a NumPy array. I assume, then, that you have enough (RAM) memory to host such an array -- 12M x 1K.
I don't specifically know how `np.loadtxt` (`genfromtxt`) operates behind the scenes, so I will tell you how I *would* do it (after trying what you did).
### Reasoning about memory...
Notice that a simple boolean array will cost ~12 GBytes of memory:
```py
>>> print("{:.1E} bytes".format(
np.array([True]).itemsize * 12E6 * 1024
))
1.2E+10 bytes
```
And this is for a *Boolean* data type. Most likely, you have -- what -- a dataset of Integer, Float? The size may increase quite significantly:
```py
>>> np.array([1], dtype=bool).itemsize
1
>>> np.array([1], dtype=int).itemsize
8
>>> np.array([1], dtype=float).itemsize
8
```
>
> *It's a lot of memory* (which you know, just want to emphasize).
>
>
>
At this point, I would like to point out possible *swapping* of the working memory. You may have enough physical (RAM) memory in your machine, but if there is not enough *free* memory, your system will use *swap* memory (i.e., disk) to keep your system stable & have the work done. The cost you pay is clear: reading/writing from/to the disk is very slow.
**My point so far is**: check the data type of your dataset, estimate the size of your future array, and guarantee you have that minimum amount of RAM memory available.
### I/O text
Considering you do have all the (RAM) memory necessary to host the whole numpy array: I would then loop over the whole (~12M lines) text file, filling the pre-existing array row-by-row.
More precisely, I would have the (big) array already instantiated before starting to read the file. Only then, I would read each line, split the columns, give them to [`np.asarray`](https://numpy.org/devdocs/reference/generated/numpy.asarray.html) and assign those (1024) values to each respective row of the *output* array.
>
> The looping over the file is slow, yes. The thing here is that you limit (and control) the amount of memory being used. Roughly speaking, the big objects consuming your memory are the "output" (big) array and the "line" (1024) array. Sure, there is quite a considerable amount of memory consumed in each loop by the temporary objects created while reading (text!) values, splitting them into list elements and casting them to an array. Still, it's something that will remain largely constant during the whole ~12M lines.
>
>
>
So, **the steps I would go through are**:
```
0) estimate and guarantee enough RAM memory available
1) instantiate (np.empty or np.zeros) the "output" array
2) loop over "input.txt" file, create a 1D array from each line "i"
3) assign the line values/array to row "i" of "output" array
```
Sure enough, you can even make it parallel: while text files cannot be randomly accessed for reading/writing, you can easily split them (see [How can I split one text file into multiple \*.txt files?](https://stackoverflow.com/q/19031144/687896)) and -- if *fun* is at the table -- read the pieces in parallel, if that time is critical.
Hope that helps.
|
In its latest version (4.14), vaex supports "streaming", i.e. lazy loading of CSV files. It uses pyarrow under the hood, so it is super fast. Try something like
```py
df = vaex.open("my_file.csv")
# or
df = vaex.from_csv_arrow("my_file.csv", lazy=True)
```
Then you can export to a bunch of formats as needed, or keep working with it like that (it is surprisingly fast). Of course, it is better to convert to some kind of binary format.
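Building on that, a hedged sketch of converting once to a binary format via vaex's export API (the file names are placeholders):
```py
import vaex

df = vaex.open("my_file.csv")     # lazy open
df.export_hdf5("my_file.hdf5")    # one-off conversion to a binary format
```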
|
74,054,668
|
How to convert a `.csv` file to `.npy` efficiently?
--------------------------------------------------
I've tried:
```
import numpy as np
filename = "myfile.csv"
vec =np.loadtxt(filename, delimiter=",")
np.save(f"{filename}.npy", vec)
```
While the above works for smallish files, the actual `.csv` file I'm working on has ~12 million lines with 1024 columns, so it takes quite a while to load everything into RAM before converting it into the `.npy` format.
### Q (Part 1): Is there some way to load/convert a `.csv` to `.npy` efficiently for large CSV file?
The above code snippet is similar to the answer from [Convert CSV to numpy](https://stackoverflow.com/questions/34932210/convert-csv-to-numpy) but that won't work for ~12M x 1024 matrix.
### Q (Part 2): If there isn't any way to load/convert a `.csv` to `.npy` efficiently, is there some way to iteratively read the `.csv` file into `.npy` efficiently?
Also, there's an answer here <https://stackoverflow.com/a/53558856/610569> to save the csv file as a numpy array iteratively. But it seems like `np.vstack` isn't the best solution when reading the file. The accepted answer there suggests hdf5, but that format is not the main objective of this question, and the hdf5 format isn't desired in my use-case since I have to read it back into a numpy array afterwards.
### Q (Part 3): If part 1 and part 2 are not possible, are there other efficient storage formats (e.g. tensorstore) that can store the data and efficiently convert it to a numpy array when loading?
There is another library, `tensorstore`, that seems to handle arrays efficiently, with support for conversion to a numpy array when read: <https://google.github.io/tensorstore/python/tutorial.html>. But somehow there isn't any information on how to save the `tensor`/array without the exact dimensions; all of the examples seem to include configurations like `'dimensions': [1000, 20000],`.
Unlike the HDF5, the tensorstore doesn't seem to have reading overhead issues when converting to numpy, from docs:
>
> Conversion to an numpy.ndarray also implicitly performs a synchronous read (which hits the in-memory cache since the same region was just retrieved)
>
>
>
|
2022/10/13
|
[
"https://Stackoverflow.com/questions/74054668",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
Nice question; informative in itself.
I understand you want to have the whole data set/array in memory, eventually, as a NumPy array. I assume, then, that you have enough (RAM) memory to host such an array -- 12M x 1K.
I don't specifically know how `np.loadtxt` (`genfromtxt`) operates behind the scenes, so I will tell you how I *would* do it (after trying what you did).
### Reasoning about memory...
Notice that a simple boolean array will cost ~12 GBytes of memory:
```py
>>> print("{:.1E} bytes".format(
np.array([True]).itemsize * 12E6 * 1024
))
1.2E+10 bytes
```
And this is for a *Boolean* data type. Most likely, you have -- what -- a dataset of Integer, Float? The size may increase quite significantly:
```py
>>> np.array([1], dtype=bool).itemsize
1
>>> np.array([1], dtype=int).itemsize
8
>>> np.array([1], dtype=float).itemsize
8
```
>
> *It's a lot of memory* (which you know, just want to emphasize).
>
>
>
At this point, I would like to point out possible *swapping* of the working memory. You may have enough physical (RAM) memory in your machine, but if there is not enough *free* memory, your system will use *swap* memory (i.e., disk) to keep your system stable & have the work done. The cost you pay is clear: reading/writing from/to the disk is very slow.
**My point so far is**: check the data type of your dataset, estimate the size of your future array, and guarantee you have that minimum amount of RAM memory available.
### I/O text
Considering you do have all the (RAM) memory necessary to host the whole numpy array: I would then loop over the whole (~12M lines) text file, filling the pre-existing array row-by-row.
More precisely, I would have the (big) array already instantiated before starting to read the file. Only then, I would read each line, split the columns, give them to [`np.asarray`](https://numpy.org/devdocs/reference/generated/numpy.asarray.html) and assign those (1024) values to each respective row of the *output* array.
>
> The looping over the file is slow, yes. The thing here is that you limit (and control) the amount of memory being used. Roughly speaking, the big objects consuming your memory are the "output" (big) array and the "line" (1024) array. Sure, there is quite a considerable amount of memory consumed in each loop by the temporary objects created while reading (text!) values, splitting them into list elements and casting them to an array. Still, it's something that will remain largely constant during the whole ~12M lines.
>
>
>
So, **the steps I would go through are**:
```
0) estimate and guarantee enough RAM memory available
1) instantiate (np.empty or np.zeros) the "output" array
2) loop over "input.txt" file, create a 1D array from each line "i"
3) assign the line values/array to row "i" of "output" array
```
Sure enough, you can even make it parallel: while text files cannot be randomly accessed for reading/writing, you can easily split them (see [How can I split one text file into multiple \*.txt files?](https://stackoverflow.com/q/19031144/687896)) and -- if *fun* is at the table -- read the pieces in parallel, if that time is critical.
Hope that helps.
|
I'm not aware of any existing function or utility that directly and efficiently converts csv files into npy files. By "efficient" I guess you primarily mean low memory requirements.
Writing a npy file iteratively is indeed possible, with some extra effort. There's already a question on SO that addresses this, see:
[save numpy array in append mode](https://stackoverflow.com/questions/30376581/save-numpy-array-in-append-mode)
For example using the `NpyAppendArray` class from [Michael's answer](https://stackoverflow.com/a/64403144) you can do:
```
import numpy as np
from npy_append_array import NpyAppendArray  # assuming the pip package npy-append-array

with open('data.csv') as csv, NpyAppendArray('data.npy') as npy:
    for line in csv:
        row = np.fromstring(line, sep=',')
        npy.append(row[np.newaxis, :])
```
The `NpyAppendArray` class updates the npy file header on every call to `append`, which is a bit much for your 12M rows. Maybe you could update the class to (optionally) only write the header on `close`. Or you could easily batch the writes:
```
batch_lines = 128

with open('data.csv') as csv, NpyAppendArray('data.npy') as npy:
    done = False
    while not done:
        batch = []
        for count, line in enumerate(csv):
            row = np.fromstring(line, sep=',')
            batch.append(row)
            if count + 1 >= batch_lines:
                break
        else:
            done = True  # the for loop exhausted the file without a break
        if batch:  # guard against an empty final batch
            npy.append(np.array(batch))
```
(code is not tested)
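Once written this way, the array can be read back lazily with numpy's memory mapping, which avoids pulling all ~12M rows into RAM at once:
```
import numpy as np

# memory-map the file instead of reading it fully into memory
arr = np.load('data.npy', mmap_mode='r')
print(arr.shape, arr.dtype)
```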
|
74,054,668
|
How to convert a `.csv` file to `.npy` efficiently?
--------------------------------------------------
I've tried:
```
import numpy as np
filename = "myfile.csv"
vec =np.loadtxt(filename, delimiter=",")
np.save(f"{filename}.npy", vec)
```
While the above works for smallish files, the actual `.csv` file I'm working on has ~12 million lines with 1024 columns, so it takes quite a while to load everything into RAM before converting it into the `.npy` format.
### Q (Part 1): Is there some way to load/convert a `.csv` to `.npy` efficiently for large CSV file?
The above code snippet is similar to the answer from [Convert CSV to numpy](https://stackoverflow.com/questions/34932210/convert-csv-to-numpy) but that won't work for ~12M x 1024 matrix.
### Q (Part 2): If there isn't any way to load/convert a `.csv` to `.npy` efficiently, is there some way to iteratively read the `.csv` file into `.npy` efficiently?
Also, there's an answer here <https://stackoverflow.com/a/53558856/610569> to save the csv file as a numpy array iteratively. But it seems like `np.vstack` isn't the best solution when reading the file. The accepted answer there suggests hdf5, but that format is not the main objective of this question, and the hdf5 format isn't desired in my use-case since I have to read it back into a numpy array afterwards.
### Q (Part 3): If part 1 and part 2 are not possible, are there other efficient storage formats (e.g. tensorstore) that can store the data and efficiently convert it to a numpy array when loading?
There is another library, `tensorstore`, that seems to handle arrays efficiently, with support for conversion to a numpy array when read: <https://google.github.io/tensorstore/python/tutorial.html>. But somehow there isn't any information on how to save the `tensor`/array without the exact dimensions; all of the examples seem to include configurations like `'dimensions': [1000, 20000],`.
Unlike the HDF5, the tensorstore doesn't seem to have reading overhead issues when converting to numpy, from docs:
>
> Conversion to an numpy.ndarray also implicitly performs a synchronous read (which hits the in-memory cache since the same region was just retrieved)
>
>
>
|
2022/10/13
|
[
"https://Stackoverflow.com/questions/74054668",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
Nice question; informative in itself.
I understand you want to have the whole data set/array in memory, eventually, as a NumPy array. I assume, then, that you have enough (RAM) memory to host such an array -- 12M x 1K.
I don't specifically know how `np.loadtxt` (`genfromtxt`) operates behind the scenes, so I will tell you how I *would* do it (after trying what you did).
### Reasoning about memory...
Notice that a simple boolean array will cost ~12 GBytes of memory:
```py
>>> print("{:.1E} bytes".format(
np.array([True]).itemsize * 12E6 * 1024
))
1.2E+10 bytes
```
And this is for a *Boolean* data type. Most likely, you have -- what -- a dataset of Integer, Float? The size may increase quite significantly:
```py
>>> np.array([1], dtype=bool).itemsize
1
>>> np.array([1], dtype=int).itemsize
8
>>> np.array([1], dtype=float).itemsize
8
```
>
> *It's a lot of memory* (which you know, just want to emphasize).
>
>
>
At this point, I would like to point out possible *swapping* of the working memory. You may have enough physical (RAM) memory in your machine, but if there is not enough *free* memory, your system will use *swap* memory (i.e., disk) to keep your system stable & have the work done. The cost you pay is clear: reading/writing from/to the disk is very slow.
**My point so far is**: check the data type of your dataset, estimate the size of your future array, and guarantee you have that minimum amount of RAM memory available.
### I/O text
Considering you do have all the (RAM) memory necessary to host the whole numpy array: I would then loop over the whole (~12M lines) text file, filling the pre-existing array row-by-row.
More precisely, I would have the (big) array already instantiated before starting to read the file. Only then, I would read each line, split the columns, give them to [`np.asarray`](https://numpy.org/devdocs/reference/generated/numpy.asarray.html) and assign those (1024) values to each respective row of the *output* array.
>
> The looping over the file is slow, yes. The thing here is that you limit (and control) the amount of memory being used. Roughly speaking, the big objects consuming your memory are the "output" (big) array and the "line" (1024) array. Sure, there is quite a considerable amount of memory consumed in each loop by the temporary objects created while reading (text!) values, splitting them into list elements and casting them to an array. Still, it's something that will remain largely constant during the whole ~12M lines.
>
>
>
So, **the steps I would go through are**:
```
0) estimate and guarantee enough RAM memory available
1) instantiate (np.empty or np.zeros) the "output" array
2) loop over "input.txt" file, create a 1D array from each line "i"
3) assign the line values/array to row "i" of "output" array
```
Sure enough, you can even make it parallel: while text files cannot be randomly accessed for reading/writing, you can easily split them (see [How can I split one text file into multiple \*.txt files?](https://stackoverflow.com/q/19031144/687896)) and -- if *fun* is at the table -- read the pieces in parallel, if that time is critical.
Hope that helps.
|
```
import numpy as np
import pandas as pd
# Define the input and output file names
csv_file = 'data.csv'
npy_file = 'data.npy'
# Create dummy data
data = np.random.rand(10000, 100)
df = pd.DataFrame(data)
df.to_csv(csv_file, index=False)
# Define the chunk size
chunk_size = 1000
# Read the header row and get the number of columns
header = pd.read_csv(csv_file, nrows=0)
num_cols = len(header.columns)
# Initialize an empty array to store the data
data = np.empty((0, num_cols))
# Loop over the chunks of the csv file
for chunk in pd.read_csv(csv_file, chunksize=chunk_size):
# Convert the chunk to a numpy array
chunk_array = chunk.to_numpy()
# Append the chunk to the data array
data = np.append(data, chunk_array, axis=0)
np.save(npy_file, data)
# Load the npy file and check the shape
npy_data = np.load(npy_file)
print('Shape of data before conversion:', data.shape)
print('Shape of data after conversion:', npy_data.shape)
```
|
74,054,668
|
How to convert a `.csv` file to `.npy` efficiently?
--------------------------------------------------
I've tried:
```
import numpy as np
filename = "myfile.csv"
vec =np.loadtxt(filename, delimiter=",")
np.save(f"{filename}.npy", vec)
```
While the above works for smallish files, the actual `.csv` file I'm working on has ~12 million lines with 1024 columns, so it takes quite a while to load everything into RAM before converting it into the `.npy` format.
### Q (Part 1): Is there some way to load/convert a `.csv` to `.npy` efficiently for large CSV file?
The above code snippet is similar to the answer from [Convert CSV to numpy](https://stackoverflow.com/questions/34932210/convert-csv-to-numpy) but that won't work for ~12M x 1024 matrix.
### Q (Part 2): If there isn't any way to to load/convert a `.csv` to `.npy` efficiently, is there some way to iteratively read the `.csv` file into `.npy` efficiently?
Also, there's an answer here <https://stackoverflow.com/a/53558856/610569> to save the csv file as numpy array iteratively. But seems like the `np.vstack` isn't the best solution when reading the file. The accepted answer there suggests hdf5 but the format is not the main objective of this question and the hdf5 format isn't desired in my use-case since I've to read it back into a numpy array afterwards.
### Q (Part 3): If part 1 and part2 are not possible, are there other efficient storage (e.g. tensorstore) that can store and efficiently convert to numpy array when loading the saved storage format?
There is another library, `tensorstore`, that seems to handle arrays efficiently and supports conversion to a numpy array when read, <https://google.github.io/tensorstore/python/tutorial.html>. But somehow there isn't any information on how to save the `tensor`/array without the exact dimensions; all of the examples seem to include configurations like `'dimensions': [1000, 20000],`.
Unlike HDF5, tensorstore doesn't seem to have reading-overhead issues when converting to numpy; from the docs:
>
> Conversion to an numpy.ndarray also implicitly performs a synchronous read (which hits the in-memory cache since the same region was just retrieved)
>
>
>
|
2022/10/13
|
[
"https://Stackoverflow.com/questions/74054668",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
TL;DR
=====
Exporting to a format other than `.npy` seems inevitable unless your machine is able to handle the size of the data in memory, as described in [@Brandt answer](https://stackoverflow.com/a/74055562/610569).
---
Reading the data, then processing it (Kinda answering Q part 2)
===============================================================
To handle data sizes larger than what the RAM can hold, one would often resort to libraries that perform "**out-of-core**" computation, e.g. [`turicreate.SFrame`](https://apple.github.io/turicreate/docs/api/generated/turicreate.SFrame.html), [`vaex`](https://vaex.readthedocs.io/en/docs/example_io.html#In-memory-data-representations) or [`dask`](https://dask.pydata.org/en/latest/dataframe.html). These libraries are able to lazily load the `.csv` files into dataframes and process them in chunks when evaluated.
```
from turicreate import SFrame
filename = "myfile.csv"
sf = SFrame.read_csv(filename)
sf.apply(...) # Trying to process the data
```
or
```
import vaex
filename = "myfile.csv"
df = vaex.from_csv(filename,
convert=True,
chunk_size=50_000_000)
df.apply(...)
```
Converting the read data into numpy array (kinda answering Q part 1)
====================================================================
While out-of-core libraries can read and process the data efficiently, converting into numpy is an "**in-memory**" operation: the machine needs to have enough RAM to fit all the data.
The `turicreate.SFrame.to_numpy` documentation states:
>
> Converts this SFrame to a numpy array
>
>
> This operation will construct a numpy array in memory. Care must be taken when size of the returned object is big.
>
>
>
And the `vaex` documentation states:
>
> In-memory data representations
>
>
> One can construct a Vaex DataFrame from a variety of in-memory data representations.
>
>
>
And `dask` actually implements its own array objects, simpler than numpy arrays (see its best practices: <https://docs.dask.org/en/stable/array-best-practices.html>). But going through the docs, it seems the formats dask arrays are saved in are not `.npy` but various other formats.
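As an illustration (a minimal sketch of my own, assuming `dask[dataframe]` and a parquet engine such as `pyarrow` are installed; file names are placeholders):
```
import dask.dataframe as dd

df = dd.read_csv('myfile.csv')    # lazy, chunked read -- nothing is loaded yet
df.to_parquet('myfile.parquet')   # computed and written chunk by chunk
```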
Writing the file into non-`.npy` versions (answering Q Part 3)
==============================================================
Given the numpy arrays are inevitably in-memory, trying to save the data into one single `.npy` isn't the most viable option.
Different libraries seem to have different solutions for storage. E.g.
* `vaex` saves the data into `hdf5` by default if the `convert=True` argument is set when data is read through `vaex.from_csv()`
* `sframe` saves the data into their [own binary format](https://apple.github.io/turicreate/docs/api/generated/turicreate.SFrame.save.html#turicreate.SFrame.save)
* `dask` [export functions](https://docs.dask.org/en/stable/dataframe-create.html) include `to_hdf()` and `to_parquet()`
|
I'm not aware of any existing function or utility that directly and efficiently converts csv files into npy files. By efficient, I guess you primarily mean with low memory requirements.
Writing an npy file iteratively is indeed possible, with some extra effort. There's already a question on SO that addresses this, see:
[save numpy array in append mode](https://stackoverflow.com/questions/30376581/save-numpy-array-in-append-mode)
For example using the `NpyAppendArray` class from [Michael's answer](https://stackoverflow.com/a/64403144) you can do:
```
import numpy as np
from npy_append_array import NpyAppendArray  # pip install npy-append-array

with open('data.csv') as csv, NpyAppendArray('data.npy') as npy:
    for line in csv:
        row = np.fromstring(line, sep=',')
        npy.append(row[np.newaxis, :])
```
The `NpyAppendArray` class updates the npy file header on every call to `append`, which is a bit much for your 12M rows. Maybe you could update the class to (optionally) only write the header on `close`. Or you could easily batch the writes:
```
import numpy as np
from npy_append_array import NpyAppendArray

batch_lines = 128
with open('data.csv') as csv, NpyAppendArray('data.npy') as npy:
    done = False
    while not done:
        batch = []
        for count, line in enumerate(csv):
            row = np.fromstring(line, sep=',')
            batch.append(row)
            if count + 1 >= batch_lines:
                break
        else:
            done = True
        if batch:  # the final batch can be empty; skip it
            npy.append(np.array(batch))
```
(code is not tested)
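A note on reading the result back (my addition, not from the original answer): since the output is a regular `.npy` file, it can be memory-mapped instead of loaded whole:
```
import numpy as np

arr = np.load('data.npy', mmap_mode='r')  # maps the file; rows are read on access
print(arr.shape, arr.dtype)
```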
|
74,054,668
|
How to convert a `.csv` file to `.npy` efficiently?
--------------------------------------------------
I've tried:
```
import numpy as np
filename = "myfile.csv"
vec =np.loadtxt(filename, delimiter=",")
np.save(f"{filename}.npy", vec)
```
While the above works for smallish files, the actual `.csv` file I'm working on has ~12 million lines with 1024 columns, so loading everything into RAM before converting to the `.npy` format takes quite a lot of time and memory.
### Q (Part 1): Is there some way to load/convert a `.csv` to `.npy` efficiently for large CSV file?
The above code snippet is similar to the answer from [Convert CSV to numpy](https://stackoverflow.com/questions/34932210/convert-csv-to-numpy) but that won't work for ~12M x 1024 matrix.
### Q (Part 2): If there isn't any way to load/convert a `.csv` to `.npy` efficiently, is there some way to iteratively read the `.csv` file into `.npy` efficiently?
Also, there's an answer here <https://stackoverflow.com/a/53558856/610569> to save the csv file as a numpy array iteratively. But it seems like `np.vstack` isn't the best solution when reading the file. The accepted answer there suggests hdf5, but that format is not the main objective of this question and isn't desired in my use-case, since I have to read it back into a numpy array afterwards.
### Q (Part 3): If part 1 and part2 are not possible, are there other efficient storage (e.g. tensorstore) that can store and efficiently convert to numpy array when loading the saved storage format?
There is another library, `tensorstore`, that seems to handle arrays efficiently and supports conversion to a numpy array when read, <https://google.github.io/tensorstore/python/tutorial.html>. But somehow there isn't any information on how to save the `tensor`/array without the exact dimensions; all of the examples seem to include configurations like `'dimensions': [1000, 20000],`.
Unlike HDF5, tensorstore doesn't seem to have reading-overhead issues when converting to numpy; from the docs:
>
> Conversion to an numpy.ndarray also implicitly performs a synchronous read (which hits the in-memory cache since the same region was just retrieved)
>
>
>
|
2022/10/13
|
[
"https://Stackoverflow.com/questions/74054668",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
In its latest version (4.14), vaex supports "streaming", i.e. lazy loading of CSV files. It uses pyarrow under the hood, so it is super fast. Try something like
```py
import vaex

df = vaex.open('my_file.csv')
# or
df = vaex.from_csv_arrow('my_file.csv', lazy=True)
```
Then you can export to a bunch of formats as needed, or keep working with it like that (it is surprisingly fast). Of course, it is better to convert to some kind of binary format.
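For instance (a sketch; `export` dispatches on the file extension):
```py
df.export('my_file.hdf5')  # binary, memory-mappable; reopen instantly with vaex.open
```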
|
I'm not aware of any existing function or utility that directly and efficiently converts csv files into npy files. By efficient, I guess you primarily mean with low memory requirements.
Writing an npy file iteratively is indeed possible, with some extra effort. There's already a question on SO that addresses this, see:
[save numpy array in append mode](https://stackoverflow.com/questions/30376581/save-numpy-array-in-append-mode)
For example using the `NpyAppendArray` class from [Michael's answer](https://stackoverflow.com/a/64403144) you can do:
```
import numpy as np
from npy_append_array import NpyAppendArray  # pip install npy-append-array

with open('data.csv') as csv, NpyAppendArray('data.npy') as npy:
    for line in csv:
        row = np.fromstring(line, sep=',')
        npy.append(row[np.newaxis, :])
```
The `NpyAppendArray` class updates the npy file header on every call to `append`, which is a bit much for your 12M rows. Maybe you could update the class to (optionally) only write the header on `close`. Or you could easily batch the writes:
```
import numpy as np
from npy_append_array import NpyAppendArray

batch_lines = 128
with open('data.csv') as csv, NpyAppendArray('data.npy') as npy:
    done = False
    while not done:
        batch = []
        for count, line in enumerate(csv):
            row = np.fromstring(line, sep=',')
            batch.append(row)
            if count + 1 >= batch_lines:
                break
        else:
            done = True
        if batch:  # the final batch can be empty; skip it
            npy.append(np.array(batch))
```
(code is not tested)
|
74,054,668
|
How to convert a `.csv` file to `.npy` efficiently?
--------------------------------------------------
I've tried:
```
import numpy as np
filename = "myfile.csv"
vec =np.loadtxt(filename, delimiter=",")
np.save(f"{filename}.npy", vec)
```
While the above works for smallish files, the actual `.csv` file I'm working on has ~12 million lines with 1024 columns, so loading everything into RAM before converting to the `.npy` format takes quite a lot of time and memory.
### Q (Part 1): Is there some way to load/convert a `.csv` to `.npy` efficiently for large CSV file?
The above code snippet is similar to the answer from [Convert CSV to numpy](https://stackoverflow.com/questions/34932210/convert-csv-to-numpy) but that won't work for ~12M x 1024 matrix.
### Q (Part 2): If there isn't any way to load/convert a `.csv` to `.npy` efficiently, is there some way to iteratively read the `.csv` file into `.npy` efficiently?
Also, there's an answer here <https://stackoverflow.com/a/53558856/610569> to save the csv file as a numpy array iteratively. But it seems like `np.vstack` isn't the best solution when reading the file. The accepted answer there suggests hdf5, but that format is not the main objective of this question and isn't desired in my use-case, since I have to read it back into a numpy array afterwards.
### Q (Part 3): If part 1 and part2 are not possible, are there other efficient storage (e.g. tensorstore) that can store and efficiently convert to numpy array when loading the saved storage format?
There is another library, `tensorstore`, that seems to handle arrays efficiently and supports conversion to a numpy array when read, <https://google.github.io/tensorstore/python/tutorial.html>. But somehow there isn't any information on how to save the `tensor`/array without the exact dimensions; all of the examples seem to include configurations like `'dimensions': [1000, 20000],`.
Unlike HDF5, tensorstore doesn't seem to have reading-overhead issues when converting to numpy; from the docs:
>
> Conversion to an numpy.ndarray also implicitly performs a synchronous read (which hits the in-memory cache since the same region was just retrieved)
>
>
>
|
2022/10/13
|
[
"https://Stackoverflow.com/questions/74054668",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] |
```
import numpy as np
import pandas as pd
# Define the input and output file names
csv_file = 'data.csv'
npy_file = 'data.npy'
# Create dummy data
data = np.random.rand(10000, 100)
df = pd.DataFrame(data)
df.to_csv(csv_file, index=False)
# Define the chunk size
chunk_size = 1000
# Read the header row and get the number of columns
header = pd.read_csv(csv_file, nrows=0)
num_cols = len(header.columns)
# Initialize an empty array to store the data
data = np.empty((0, num_cols))
# Loop over the chunks of the csv file
for chunk in pd.read_csv(csv_file, chunksize=chunk_size):
# Convert the chunk to a numpy array
chunk_array = chunk.to_numpy()
# Append the chunk to the data array
data = np.append(data, chunk_array, axis=0)
np.save(npy_file, data)
# Load the npy file and check the shape
npy_data = np.load(npy_file)
print('Shape of data before conversion:', data.shape)
print('Shape of data after conversion:', npy_data.shape)
```
|
I'm not aware of any existing function or utility that directly and efficiently converts csv files into npy files. By efficient, I guess you primarily mean with low memory requirements.
Writing an npy file iteratively is indeed possible, with some extra effort. There's already a question on SO that addresses this, see:
[save numpy array in append mode](https://stackoverflow.com/questions/30376581/save-numpy-array-in-append-mode)
For example using the `NpyAppendArray` class from [Michael's answer](https://stackoverflow.com/a/64403144) you can do:
```
import numpy as np
from npy_append_array import NpyAppendArray  # pip install npy-append-array

with open('data.csv') as csv, NpyAppendArray('data.npy') as npy:
    for line in csv:
        row = np.fromstring(line, sep=',')
        npy.append(row[np.newaxis, :])
```
The `NpyAppendArray` class updates the npy file header on every call to `append`, which is a bit much for your 12M rows. Maybe you could update the class to (optionally) only write the header on `close`. Or you could easily batch the writes:
```
import numpy as np
from npy_append_array import NpyAppendArray

batch_lines = 128
with open('data.csv') as csv, NpyAppendArray('data.npy') as npy:
    done = False
    while not done:
        batch = []
        for count, line in enumerate(csv):
            row = np.fromstring(line, sep=',')
            batch.append(row)
            if count + 1 >= batch_lines:
                break
        else:
            done = True
        if batch:  # the final batch can be empty; skip it
            npy.append(np.array(batch))
```
(code is not tested)
|
8,575,062
|
I am sure the configuration of `matplotlib` for python is correct since I have used it to plot some figures.
But today it just stopped working for some reason. I tested it with really simple code like:
```
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 5, 0.1)
y = np.sin(x)
plt.plot(x, y)
```
There's no error, but no figure shows up.
I am using python 2.6, Eclipse in Ubuntu
|
2011/12/20
|
[
"https://Stackoverflow.com/questions/8575062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504283/"
] |
You must use `plt.show()` at the end in order to see the plot
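For example, a minimal sketch based on the snippet from the question:
```
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(0, 5, 0.1)
y = np.sin(x)
plt.plot(x, y)
plt.show()  # opens the figure window and blocks until it is closed
```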
|
`plt.plot(X,y)` function just draws the plot on the canvas. In order to view the plot, you have to specify `plt.show()` after `plt.plot(X,y)`. So,
```
import matplotlib.pyplot as plt
X = [1, 2, 3]  # your x data (example values)
y = [4, 5, 6]  # your y data (example values)
plt.plot(X,y)
plt.show()
```
|
8,575,062
|
I am sure the configuration of `matplotlib` for python is correct since I have used it to plot some figures.
But today it just stopped working for some reason. I tested it with really simple code like:
```
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 5, 0.1)
y = np.sin(x)
plt.plot(x, y)
```
There's no error, but no figure shows up.
I am using python 2.6, Eclipse in Ubuntu
|
2011/12/20
|
[
"https://Stackoverflow.com/questions/8575062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504283/"
] |
In matplotlib you have two main options:
1. Create your plots and draw them at the end:
```
import matplotlib.pyplot as plt
plt.plot(x, y)
plt.plot(z, t)
plt.show()
```
2. Create your plots and draw them as soon as they are created:
```
import matplotlib.pyplot as plt
from matplotlib import interactive
interactive(True)
plt.plot(x, y)
raw_input('press return to continue')
plt.plot(z, t)
raw_input('press return to end')
```
|
Save the plot as png
```
plt.savefig("temp.png")
```
|
8,575,062
|
I am sure the configuration of `matplotlib` for python is correct since I have used it to plot some figures.
But today it just stopped working for some reason. I tested it with really simple code like:
```
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 5, 0.1)
y = np.sin(x)
plt.plot(x, y)
```
There's no error, but no figure shows up.
I am using python 2.6, Eclipse in Ubuntu
|
2011/12/20
|
[
"https://Stackoverflow.com/questions/8575062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504283/"
] |
In case anyone else ends up here using Jupyter Notebooks, you just need
```
%matplotlib inline
```
[Purpose of "%matplotlib inline"](https://stackoverflow.com/questions/43027980/purpose-of-matplotlib-inline)
|
You have to use the `show()` method once you have done all the initialisations in your code, in order to see the complete plot:
```
import matplotlib.pyplot as plt
plt.plot(x, y)
# ... any further plotting / formatting calls ...
plt.show()
```
|
8,575,062
|
I am sure the configuration of `matplotlib` for python is correct since I have used it to plot some figures.
But today it just stopped working for some reason. I tested it with really simple code like:
```
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 5, 0.1)
y = np.sin(x)
plt.plot(x, y)
```
There's no error, but no figure shows up.
I am using python 2.6, Eclipse in Ubuntu
|
2011/12/20
|
[
"https://Stackoverflow.com/questions/8575062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504283/"
] |
In matplotlib you have two main options:
1. Create your plots and draw them at the end:
```
import matplotlib.pyplot as plt
plt.plot(x, y)
plt.plot(z, t)
plt.show()
```
2. Create your plots and draw them as soon as they are created:
```
import matplotlib.pyplot as plt
from matplotlib import interactive
interactive(True)
plt.plot(x, y)
raw_input('press return to continue')
plt.plot(z, t)
raw_input('press return to end')
```
|
You have to use the `show()` method once you have done all the initialisations in your code, in order to see the complete plot:
```
import matplotlib.pyplot as plt
plt.plot(x, y)
# ... any further plotting / formatting calls ...
plt.show()
```
|
8,575,062
|
I am sure the configuration of `matplotlib` for python is correct since I have used it to plot some figures.
But today it just stopped working for some reason. I tested it with really simple code like:
```
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 5, 0.1)
y = np.sin(x)
plt.plot(x, y)
```
There's no error, but no figure shows up.
I am using python 2.6, Eclipse in Ubuntu
|
2011/12/20
|
[
"https://Stackoverflow.com/questions/8575062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504283/"
] |
In matplotlib you have two main options:
1. Create your plots and draw them at the end:
```
import matplotlib.pyplot as plt
plt.plot(x, y)
plt.plot(z, t)
plt.show()
```
2. Create your plots and draw them as soon as they are created:
```
import matplotlib.pyplot as plt
from matplotlib import interactive
interactive(True)
plt.plot(x, y)
raw_input('press return to continue')
plt.plot(z, t)
raw_input('press return to end')
```
|
`plt.plot(X,y)` function just draws the plot on the canvas. In order to view the plot, you have to specify `plt.show()` after `plt.plot(X,y)`. So,
```
import matplotlib.pyplot as plt
X = [1, 2, 3]  # your x data (example values)
y = [4, 5, 6]  # your y data (example values)
plt.plot(X,y)
plt.show()
```
|
8,575,062
|
I am sure the configuration of `matplotlib` for python is correct since I have used it to plot some figures.
But today it just stopped working for some reason. I tested it with really simple code like:
```
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 5, 0.1)
y = np.sin(x)
plt.plot(x, y)
```
There's no error, but no figure shows up.
I am using python 2.6, Eclipse in Ubuntu
|
2011/12/20
|
[
"https://Stackoverflow.com/questions/8575062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504283/"
] |
In matplotlib you have two main options:
1. Create your plots and draw them at the end:
```
import matplotlib.pyplot as plt
plt.plot(x, y)
plt.plot(z, t)
plt.show()
```
2. Create your plots and draw them as soon as they are created:
```
import matplotlib.pyplot as plt
from matplotlib import interactive
interactive(True)
plt.plot(x, y)
raw_input('press return to continue')
plt.plot(z, t)
raw_input('press return to end')
```
|
In case anyone else ends up here using Jupyter Notebooks, you just need
```
%matplotlib inline
```
[Purpose of "%matplotlib inline"](https://stackoverflow.com/questions/43027980/purpose-of-matplotlib-inline)
|
8,575,062
|
I am sure the configuration of `matplotlib` for python is correct since I have used it to plot some figures.
But today it just stopped working for some reason. I tested it with really simple code like:
```
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 5, 0.1)
y = np.sin(x)
plt.plot(x, y)
```
There's no error, but no figure shows up.
I am using python 2.6, Eclipse in Ubuntu
|
2011/12/20
|
[
"https://Stackoverflow.com/questions/8575062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504283/"
] |
You must use `plt.show()` at the end in order to see the plot
|
You have to use the `show()` method once you have done all the initialisations in your code, in order to see the complete plot:
```
import matplotlib.pyplot as plt
plt.plot(x, y)
# ... any further plotting / formatting calls ...
plt.show()
```
|
8,575,062
|
I am sure the configuration of `matplotlib` for python is correct since I have used it to plot some figures.
But today it just stopped working for some reason. I tested it with really simple code like:
```
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 5, 0.1)
y = np.sin(x)
plt.plot(x, y)
```
There's no error, but no figure shows up.
I am using python 2.6, Eclipse in Ubuntu
|
2011/12/20
|
[
"https://Stackoverflow.com/questions/8575062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504283/"
] |
Save the plot as png
```
plt.savefig("temp.png")
```
|
You have to use the `show()` method once you have done all the initialisations in your code, in order to see the complete plot:
```
import matplotlib.pyplot as plt
plt.plot(x, y)
# ... any further plotting / formatting calls ...
plt.show()
```
|
8,575,062
|
I am sure the configuration of `matplotlib` for python is correct since I have used it to plot some figures.
But today it just stopped working for some reason. I tested it with really simple code like:
```
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 5, 0.1)
y = np.sin(x)
plt.plot(x, y)
```
There's no error, but no figure shows up.
I am using python 2.6, Eclipse in Ubuntu
|
2011/12/20
|
[
"https://Stackoverflow.com/questions/8575062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504283/"
] |
In case anyone else ends up here using Jupyter Notebooks, you just need
```
%matplotlib inline
```
[Purpose of "%matplotlib inline"](https://stackoverflow.com/questions/43027980/purpose-of-matplotlib-inline)
|
`plt.plot(X,y)` function just draws the plot on the canvas. In order to view the plot, you have to specify `plt.show()` after `plt.plot(X,y)`. So,
```
import matplotlib.pyplot as plt
X = [1, 2, 3]  # your x data (example values)
y = [4, 5, 6]  # your y data (example values)
plt.plot(X,y)
plt.show()
```
|
8,575,062
|
I am sure the configuration of `matplotlib` for python is correct since I have used it to plot some figures.
But today it just stopped working for some reason. I tested it with really simple code like:
```
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(0, 5, 0.1)
y = np.sin(x)
plt.plot(x, y)
```
There's no error, but no figure shows up.
I am using python 2.6, Eclipse in Ubuntu
|
2011/12/20
|
[
"https://Stackoverflow.com/questions/8575062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504283/"
] |
You must use `plt.show()` at the end in order to see the plot
|
Save the plot as png
```
plt.savefig("temp.png")
```
|
15,128,225
|
I have developed a Python script with a settings window that has options to select the installation paths for the software. When the OK button of the settings window is clicked, I want to write all the selected paths to the registry, and read them back when the settings window is opened again.
My code looks as below.
```
def OnOk(self, event):
data1=self.field1.GetValue() #path selected in setting window
aReg = ConnectRegistry(None,HKEY_LOCAL_MACHINE)
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
try:
SetValueEx(keyVal,"Log file",0,REG_SZ,data1)
except EnvironmentError:
pass
CloseKey(keyVal)
CloseKey(aReg)
```
I get an error like the one below:
```
Traceback (most recent call last):
File "D:\PROJECT\project.py", line 305, in OnOk
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
WindowsError: [Error 5] Access is denied
```
And when reading from the registry, the saved values have to show up in the settings window. I tried the code below. Though it works, I'm not satisfied with the way I programmed it. Help me out with a better solution.
```
key = OpenKey(HKEY_CURRENT_USER, r'Software\my path to\Registry', 0, KEY_READ)
for i in range(4):
try:
n,v,t = EnumValue(key,i)
if i==0:
self.field2.SetValue(v)
elif i==1:
self.field3.SetValue(v)
elif i==2:
self.field4.SetValue(v)
elif i==3:
self.field1.SetValue(v)
except EnvironmentError:
pass
CloseKey(key)
```
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15128225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2016950/"
] |
Reading registry keys:
```
import os
from contextlib import suppress
from winreg import HKEY_CURRENT_USER, OpenKey, QueryValueEx, SetValueEx, KEY_READ, KEY_WRITE, REG_SZ, REG_DWORD

def read(path, root=HKEY_CURRENT_USER):
    path, name = os.path.split(path)
    with suppress(FileNotFoundError), OpenKey(root, path) as key:
        return QueryValueEx(key, name)[0]
```
And writing:
```
def write(path, value, root=HKEY_CURRENT_USER):
path, name = os.path.split(path)
with OpenKey(root, path, 0, KEY_WRITE) as key:
SetValueEx(key, name, 0, REG_SZ, value)
```
Extended for type handling. Provide type as argument, match current type in registry or python value type.
```
def write(path, value, root=HKEY_CURRENT_USER, regtype=None):
path, name = os.path.split(path)
with OpenKey(root, path, 0, KEY_WRITE|KEY_READ) as key:
with suppress(FileNotFoundError):
regtype = regtype or QueryValueEx(key, name)[1]
SetValueEx(key, name, 0, regtype or REG_DWORD if isinstance(value, int) else REG_SZ, str(value) if regtype==REG_SZ else value)
```
>
> **NOTE:** Use of [contextlib.suppress()](https://docs.python.org/3/library/contextlib.html#contextlib.suppress) (available since `python 3.4`) can be replaced by try..except..pass for older versions. The [context manager](https://docs.python.org/2/library/_winreg.html#_winreg.PyHKEY.__enter__) interface for winreg was introduced in `python 2.6`.
>
>
>
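For example (the registry path is hypothetical, and the key must already exist for `write`, since `OpenKey` does not create it):
```
write(r"Software\my path to\Registry\Log file", r"C:\logs\app.log")
print(read(r"Software\my path to\Registry\Log file"))
```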
|
For creating / writing values in a registry key:
```
from winreg import *
import winreg
keyVal = r'SOFTWARE\\python'
try:
key = OpenKey(HKEY_LOCAL_MACHINE, keyVal, 0, KEY_ALL_ACCESS)
except:
key = CreateKey(HKEY_LOCAL_MACHINE, keyVal)
SetValueEx(key, "Start Page", 0, REG_SZ, "snakes")
CloseKey(key)
```
If access is denied, try running the command (CMD or your IDE) in administrative mode.
For reading a value in a registry key:
```
from winreg import *
Registry = ConnectRegistry(None, HKEY_LOCAL_MACHINE)
RawKey = OpenKey(Registry, "SOFTWARE\\python")
try:
i = 0
while 1:
name, value, type = EnumValue(RawKey, i)
print("name:",name,"value:", value,"i:", i)
i += 1
except WindowsError:
print("")
```
|
15,128,225
|
I have developed a Python script with a settings window that has options to select the installation paths for the software. When the OK button of the settings window is clicked, I want to write all the selected paths to the registry, and read them back when the settings window is opened again.
My code looks as below.
```
def OnOk(self, event):
data1=self.field1.GetValue() #path selected in setting window
aReg = ConnectRegistry(None,HKEY_LOCAL_MACHINE)
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
try:
SetValueEx(keyVal,"Log file",0,REG_SZ,data1)
except EnvironmentError:
pass
CloseKey(keyVal)
CloseKey(aReg)
```
I get an error like the one below:
```
Traceback (most recent call last):
File "D:\PROJECT\project.py", line 305, in OnOk
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
WindowsError: [Error 5] Access is denied
```
And when reading from the registry, the saved values have to show up in the settings window. I tried the code below. Though it works, I'm not satisfied with the way I programmed it. Help me out with a better solution.
```
key = OpenKey(HKEY_CURRENT_USER, r'Software\my path to\Registry', 0, KEY_READ)
for i in range(4):
try:
n,v,t = EnumValue(key,i)
if i==0:
self.field2.SetValue(v)
elif i==1:
self.field3.SetValue(v)
elif i==2:
self.field4.SetValue(v)
elif i==3:
self.field1.SetValue(v)
except EnvironmentError:
pass
CloseKey(key)
```
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15128225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2016950/"
] |
Same as @Aramanethota's answer, but PEP 8 compliant and with function definitions for easy use.
```
REG_PATH = r"SOFTWARE\my_program\Settings"
def set_reg(name, value):
try:
_winreg.CreateKey(_winreg.HKEY_CURRENT_USER, REG_PATH)
registry_key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, REG_PATH, 0,
_winreg.KEY_WRITE)
_winreg.SetValueEx(registry_key, name, 0, _winreg.REG_SZ, value)
_winreg.CloseKey(registry_key)
return True
except WindowsError:
return False
def get_reg(name):
try:
registry_key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, REG_PATH, 0,
_winreg.KEY_READ)
value, regtype = _winreg.QueryValueEx(registry_key, name)
_winreg.CloseKey(registry_key)
return value
except WindowsError:
return None
```
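Usage then looks like this (the value name and data are just examples):
```
set_reg('Log file', r'C:\logs\app.log')
print(get_reg('Log file'))
```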
|
The 'winreg' module is a rather awkward module to work with, so I wrote a class called 'WindowsRegistry' to make working with the Windows registry and the 'winreg' module easier. I hope it will be useful:
```
import winreg
import re
class WindowsRegistry:
"""Class WindowsRegistry is using for easy manipulating Windows registry.
Methods
-------
query_value(full_path : str)
Check value for existing.
get_value(full_path : str)
Get value's data.
set_value(full_path : str, value : str, value_type='REG_SZ' : str)
Create a new value with data or set data to an existing value.
delete_value(full_path : str)
Delete an existing value.
query_key(full_path : str)
Check key for existing.
delete_key(full_path : str)
Delete an existing key(only without subkeys).
Examples:
WindowsRegistry.set_value('HKCU/Software/Microsoft/Windows/CurrentVersion/Run', 'Program', r'"c:\Dir1\program.exe"')
WindowsRegistry.delete_value('HKEY_CURRENT_USER/Software/Microsoft/Windows/CurrentVersion/Run/Program')
"""
@staticmethod
def __parse_data(full_path):
full_path = re.sub(r'/', r'\\', full_path)
hive = re.sub(r'\\.*$', '', full_path)
if not hive:
raise ValueError('Invalid \'full_path\' param.')
if len(hive) <= 4:
if hive == 'HKLM':
hive = 'HKEY_LOCAL_MACHINE'
elif hive == 'HKCU':
hive = 'HKEY_CURRENT_USER'
elif hive == 'HKCR':
hive = 'HKEY_CLASSES_ROOT'
elif hive == 'HKU':
hive = 'HKEY_USERS'
reg_key = re.sub(r'^[A-Z_]*\\', '', full_path)
reg_key = re.sub(r'\\[^\\]+$', '', reg_key)
reg_value = re.sub(r'^.*\\', '', full_path)
return hive, reg_key, reg_value
@staticmethod
def query_value(full_path):
value_list = WindowsRegistry.__parse_data(full_path)
try:
opened_key = winreg.OpenKey(getattr(winreg, value_list[0]), value_list[1], 0, winreg.KEY_READ)
winreg.QueryValueEx(opened_key, value_list[2])
winreg.CloseKey(opened_key)
return True
except WindowsError:
return False
@staticmethod
def get_value(full_path):
value_list = WindowsRegistry.__parse_data(full_path)
try:
opened_key = winreg.OpenKey(getattr(winreg, value_list[0]), value_list[1], 0, winreg.KEY_READ)
value_of_value, value_type = winreg.QueryValueEx(opened_key, value_list[2])
winreg.CloseKey(opened_key)
return value_of_value
except WindowsError:
return None
@staticmethod
def set_value(full_path, value, value_type='REG_SZ'):
value_list = WindowsRegistry.__parse_data(full_path)
try:
winreg.CreateKey(getattr(winreg, value_list[0]), value_list[1])
opened_key = winreg.OpenKey(getattr(winreg, value_list[0]), value_list[1], 0, winreg.KEY_WRITE)
winreg.SetValueEx(opened_key, value_list[2], 0, getattr(winreg, value_type), value)
winreg.CloseKey(opened_key)
return True
except WindowsError:
return False
@staticmethod
def delete_value(full_path):
value_list = WindowsRegistry.__parse_data(full_path)
try:
opened_key = winreg.OpenKey(getattr(winreg, value_list[0]), value_list[1], 0, winreg.KEY_WRITE)
winreg.DeleteValue(opened_key, value_list[2])
winreg.CloseKey(opened_key)
return True
except WindowsError:
return False
@staticmethod
def query_key(full_path):
value_list = WindowsRegistry.__parse_data(full_path)
try:
opened_key = winreg.OpenKey(getattr(winreg, value_list[0]), value_list[1] + '\\' + value_list[2], 0, winreg.KEY_READ)  # '\\' is a single backslash
winreg.CloseKey(opened_key)
return True
except WindowsError:
return False
@staticmethod
def delete_key(full_path):
value_list = WindowsRegistry.__parse_data(full_path)
try:
winreg.DeleteKey(getattr(winreg, value_list[0]), value_list[1] + '\\' + value_list[2])  # '\\' is a single backslash
return True
except WindowsError:
return False
```
|
15,128,225
|
I have developed a Python script with a settings window that has options to select the installation paths for the software. When the OK button of the settings window is clicked, I want to write all the selected paths to the registry, and read them back when the settings window is opened again.
My code looks as below.
```
def OnOk(self, event):
data1=self.field1.GetValue() #path selected in setting window
aReg = ConnectRegistry(None,HKEY_LOCAL_MACHINE)
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
try:
SetValueEx(keyVal,"Log file",0,REG_SZ,data1)
except EnvironmentError:
pass
CloseKey(keyVal)
CloseKey(aReg)
```
I get an error like the one below:
```
Traceback (most recent call last):
File "D:\PROJECT\project.py", line 305, in OnOk
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
WindowsError: [Error 5] Access is denied
```
And when reading from the registry, the saved values have to show up in the settings window. I tried the code below. Though it works, I'm not satisfied with the way I programmed it. Help me out with a better solution.
```
key = OpenKey(HKEY_CURRENT_USER, r'Software\my path to\Registry', 0, KEY_READ)
for i in range(4):
try:
n,v,t = EnumValue(key,i)
if i==0:
self.field2.SetValue(v)
elif i==1:
self.field3.SetValue(v)
elif i==2:
self.field4.SetValue(v)
elif i==3:
self.field1.SetValue(v)
except EnvironmentError:
pass
CloseKey(key)
```
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15128225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2016950/"
] |
Looks like you don't have permission to edit the registry. In case you are an Admin, please run this script in an elevated state.
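As an alternative (my addition, not part of the original answer): writing under `HKEY_CURRENT_USER` instead of `HKEY_LOCAL_MACHINE` usually does not require elevation. A minimal sketch with a hypothetical path:
```
from winreg import HKEY_CURRENT_USER, CreateKey, SetValueEx, CloseKey, REG_SZ

key = CreateKey(HKEY_CURRENT_USER, r"SOFTWARE\my path to\Registry")  # creates or opens the key
SetValueEx(key, "Log file", 0, REG_SZ, r"C:\logs\app.log")
CloseKey(key)
```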
|
My solution:
```
import winreg

def add_key(name, pathh):
    try:
        keyval = r"System\my path\Register"
        # CreateKey opens the key if it already exists, so no separate existence check is needed
        winreg.CreateKey(winreg.HKEY_CURRENT_USER, keyval)
        Registrykey = winreg.OpenKey(winreg.HKEY_CURRENT_USER, keyval, 0, winreg.KEY_WRITE)
        winreg.SetValueEx(Registrykey, name, 0, winreg.REG_SZ, pathh)
        winreg.CloseKey(Registrykey)
        return True
    except WindowsError:
        return False
```
|
15,128,225
|
I have developed a Python script with a settings window that has options to select the installation paths for the software. When the OK button of the settings window is clicked, I want to write all the selected paths to the registry, and read them back when the settings window is opened again.
My code looks as below.
```
def OnOk(self, event):
data1=self.field1.GetValue() #path selected in setting window
aReg = ConnectRegistry(None,HKEY_LOCAL_MACHINE)
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
try:
SetValueEx(keyVal,"Log file",0,REG_SZ,data1)
except EnvironmentError:
pass
CloseKey(keyVal)
CloseKey(aReg)
```
I get an error like the one below:
```
Traceback (most recent call last):
File "D:\PROJECT\project.py", line 305, in OnOk
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
WindowsError: [Error 5] Access is denied
```
And when reading from the registry, the saved values have to show up in the settings window. I tried the code below. Though it works, I'm not satisfied with the way I programmed it. Help me out with a better solution.
```
key = OpenKey(HKEY_CURRENT_USER, r'Software\my path to\Registry', 0, KEY_READ)
for i in range(4):
try:
n,v,t = EnumValue(key,i)
if i==0:
self.field2.SetValue(v)
elif i==1:
self.field3.SetValue(v)
elif i==2:
self.field4.SetValue(v)
elif i==3:
self.field1.SetValue(v)
except EnvironmentError:
pass
CloseKey(key)
```
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15128225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2016950/"
] |
Reading registry keys:
```
import os
from contextlib import suppress
from winreg import HKEY_CURRENT_USER, OpenKey, QueryValueEx, SetValueEx, KEY_READ, KEY_WRITE, REG_SZ, REG_DWORD

def read(path, root=HKEY_CURRENT_USER):
    path, name = os.path.split(path)
    with suppress(FileNotFoundError), OpenKey(root, path) as key:
        return QueryValueEx(key, name)[0]
```
And writing:
```
def write(path, value, root=HKEY_CURRENT_USER):
path, name = os.path.split(path)
with OpenKey(root, path, 0, KEY_WRITE) as key:
SetValueEx(key, name, 0, REG_SZ, value)
```
Extended for type handling. Provide type as argument, match current type in registry or python value type.
```
def write(path, value, root=HKEY_CURRENT_USER, regtype=None):
path, name = os.path.split(path)
with OpenKey(root, path, 0, KEY_WRITE|KEY_READ) as key:
with suppress(FileNotFoundError):
regtype = regtype or QueryValueEx(key, name)[1]
SetValueEx(key, name, 0, regtype or REG_DWORD if isinstance(value, int) else REG_SZ, str(value) if regtype==REG_SZ else value)
```
>
> **NOTE:** Use of [contextlib.suppress()](https://docs.python.org/3/library/contextlib.html#contextlib.suppress) (available since `python 3.4`) can be replaced by try..except..pass for older versions. The [context manager](https://docs.python.org/2/library/_winreg.html#_winreg.PyHKEY.__enter__) interface for winreg was introduced in `python 2.6`.
>
>
>
|
Looks like you don't have permission to edit the registry. In case you are an Admin, please run this script in an elevated state.
|
15,128,225
|
I have developed a Python script with a settings window that has options to select the installation paths for the software. When the OK button of the settings window is clicked, I want to write all the selected paths to the registry, and read them back when the settings window is opened again.
My code looks as below.
```
def OnOk(self, event):
data1=self.field1.GetValue() #path selected in setting window
aReg = ConnectRegistry(None,HKEY_LOCAL_MACHINE)
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
try:
SetValueEx(keyVal,"Log file",0,REG_SZ,data1)
except EnvironmentError:
pass
CloseKey(keyVal)
CloseKey(aReg)
```
I get an error like the one below:
```
Traceback (most recent call last):
File "D:\PROJECT\project.py", line 305, in OnOk
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
WindowsError: [Error 5] Access is denied
```
And when reading from the registry, the saved values have to show up in the settings window. I tried the code below. Though it works, I'm not satisfied with the way I programmed it. Help me out with a better solution.
```
key = OpenKey(HKEY_CURRENT_USER, r'Software\my path to\Registry', 0, KEY_READ)
for i in range(4):
try:
n,v,t = EnumValue(key,i)
if i==0:
self.field2.SetValue(v)
elif i==1:
self.field3.SetValue(v)
elif i==2:
self.field4.SetValue(v)
elif i==3:
self.field1.SetValue(v)
except EnvironmentError:
pass
CloseKey(key)
```
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15128225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2016950/"
] |
Python script to read from registry is as follows:
```
# (runs inside a function that returns the stored path)
try:
    root_key = OpenKey(HKEY_CURRENT_USER, r'SOFTWARE\my path to\Registry', 0, KEY_READ)
    Pathname, regtype = QueryValueEx(root_key, "Pathname")
    CloseKey(root_key)
    if Pathname == "":
        raise WindowsError
except WindowsError:
    return [""]
```
Python script to write to the registry is:
```
try:
    keyval = r"SOFTWARE\my path to\Registry"
    # CreateKey opens the key if it already exists, so no existence check is needed
    CreateKey(HKEY_CURRENT_USER, keyval)
    Registrykey = OpenKey(HKEY_CURRENT_USER, keyval, 0, KEY_WRITE)
    SetValueEx(Registrykey, "Pathname", 0, REG_SZ, Pathname)
    CloseKey(Registrykey)
    return True
except WindowsError:
    return False
```
Hope it helps you all. Cheers :)
|
Reading registry keys:
```
import os
from contextlib import suppress
from winreg import HKEY_CURRENT_USER, OpenKey, QueryValueEx, SetValueEx, KEY_READ, KEY_WRITE, REG_SZ, REG_DWORD

def read(path, root=HKEY_CURRENT_USER):
    path, name = os.path.split(path)
    with suppress(FileNotFoundError), OpenKey(root, path) as key:
        return QueryValueEx(key, name)[0]
```
And writing:
```
def write(path, value, root=HKEY_CURRENT_USER):
path, name = os.path.split(path)
with OpenKey(root, path, 0, KEY_WRITE) as key:
SetValueEx(key, name, 0, REG_SZ, value)
```
Extended for type handling. Provide type as argument, match current type in registry or python value type.
```
def write(path, value, root=HKEY_CURRENT_USER, regtype=None):
path, name = os.path.split(path)
with OpenKey(root, path, 0, KEY_WRITE|KEY_READ) as key:
with suppress(FileNotFoundError):
regtype = regtype or QueryValueEx(key, name)[1]
SetValueEx(key, name, 0, regtype or REG_DWORD if isinstance(value, int) else REG_SZ, str(value) if regtype==REG_SZ else value)
```
>
> **NOTE:** Use of [contextlib.suppress()](https://docs.python.org/3/library/contextlib.html#contextlib.suppress) (available since `python 3.4`) can be replaced by try..except..pass for older versions. The [context manager](https://docs.python.org/2/library/_winreg.html#_winreg.PyHKEY.__enter__) interface for winreg was introduced in `python 2.6`.
>
>
>
|
15,128,225
|
I have developed a Python script with a settings window that has options to select the installation paths for the software. When the OK button of the settings window is clicked, I want to write all the selected paths to the registry, and read them back when the settings window is opened again.
My code looks as below.
```
def OnOk(self, event):
data1=self.field1.GetValue() #path selected in setting window
aReg = ConnectRegistry(None,HKEY_LOCAL_MACHINE)
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
try:
SetValueEx(keyVal,"Log file",0,REG_SZ,data1)
except EnvironmentError:
pass
CloseKey(keyVal)
CloseKey(aReg)
```
I get an error like the one below:
```
Traceback (most recent call last):
File "D:\PROJECT\project.py", line 305, in OnOk
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
WindowsError: [Error 5] Access is denied
```
And when reading from the registry, the saved values have to show up in the settings window. I tried the code below. Though it works, I'm not satisfied with the way I programmed it. Help me out with a better solution.
```
key = OpenKey(HKEY_CURRENT_USER, r'Software\my path to\Registry', 0, KEY_READ)
for i in range(4):
try:
n,v,t = EnumValue(key,i)
if i==0:
self.field2.SetValue(v)
elif i==1:
self.field3.SetValue(v)
elif i==2:
self.field4.SetValue(v)
elif i==3:
self.field1.SetValue(v)
except EnvironmentError:
pass
CloseKey(key)
```
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15128225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2016950/"
] |
Same as @Aramanethota's answer, but PEP 8 compliant and with function definitions for easy use.
```
import _winreg

REG_PATH = r"SOFTWARE\my_program\Settings"
def set_reg(name, value):
try:
_winreg.CreateKey(_winreg.HKEY_CURRENT_USER, REG_PATH)
registry_key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, REG_PATH, 0,
_winreg.KEY_WRITE)
_winreg.SetValueEx(registry_key, name, 0, _winreg.REG_SZ, value)
_winreg.CloseKey(registry_key)
return True
except WindowsError:
return False
def get_reg(name):
try:
registry_key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, REG_PATH, 0,
_winreg.KEY_READ)
value, regtype = _winreg.QueryValueEx(registry_key, name)
_winreg.CloseKey(registry_key)
return value
except WindowsError:
return None
```
|
Here is a class I wrote (python 2) which has the ability to restore state when you finish manipulating the registry. The class was not tested properly so it may contain some bugs:
```
import _winreg as winreg
class Registry(object):
def __init__(self, restore_state=False):
self.m_backup = {}
self.m_restore_state = restore_state
def get_key(self, hkey, subkey, access, create_if_doesnt_exist=True):
created_key = False
registry_key = None
try:
registry_key = winreg.OpenKey(hkey, subkey, 0, access)
except WindowsError:
try:
if create_if_doesnt_exist:
registry_key = winreg.CreateKey(hkey, subkey)
if registry_key not in self.m_backup:
self.m_backup[registry_key] = ({}, (hkey, subkey))
else:
registry_key = None
except WindowsError:
if registry_key:
self.close_key(registry_key)
raise Exception('Registry does not exist and could not be created.')
return registry_key
def close_key(self, registry_key):
closed = False
if registry_key:
try:
winreg.CloseKey(registry_key)
closed = True
except:
closed = False
return closed
def get_reg_value(self, hkey, subkey, name):
value = None
registry_key = self.get_key(hkey, subkey, winreg.KEY_READ, False)
if registry_key:
try:
value, _ = winreg.QueryValueEx(registry_key, name)
except WindowsError:
value = None
finally:
self.close_key(registry_key)
return value
def set_reg_value(self, hkey, subkey, name, type, value):
registry_key = self.get_key(hkey, subkey, winreg.KEY_WRITE, True)
backed_up = False
was_set = False
if registry_key:
if self.m_restore_state:
if registry_key not in self.m_backup:
self.m_backup[registry_key] = ({}, None)
existing_value = self.get_reg_value(hkey, subkey, name)
if existing_value:
self.m_backup[registry_key][0][name] = (existing_value, type, False)
else:
self.m_backup[registry_key][0][name] = (None, None, True)
backed_up = True
try:
winreg.SetValueEx(registry_key, name, 0, type, value)
was_set = True
except WindowsError:
was_set = False
finally:
if not backed_up:
self.close_key(registry_key)
return was_set
def restore_state(self):
if self.m_restore_state:
for registry_key, data in self.m_backup.iteritems():
backup_dict, key_info = data
try:
for name, backup_data in backup_dict.iteritems():
value, type, was_created = backup_data
if was_created:
print registry_key, name
winreg.DeleteValue(registry_key, name)
else:
winreg.SetValueEx(registry_key, name, 0, type, value)
if key_info:
hkey, subkey = key_info
winreg.DeleteKey(hkey, subkey)
except:
raise Exception('Could not restore value')
self.close_key(registry_key)
def __del__(self):
if self.m_restore_state:
self.restore_state()
```
|
15,128,225
|
I have developed a Python script with a settings window that has options to select the installation paths for the software. When the OK button of the settings window is clicked, I want to write all the selected paths to the registry, and read them back when the settings window is opened again.
My code looks as below.
```
def OnOk(self, event):
data1=self.field1.GetValue() #path selected in setting window
aReg = ConnectRegistry(None,HKEY_LOCAL_MACHINE)
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
try:
SetValueEx(keyVal,"Log file",0,REG_SZ,data1)
except EnvironmentError:
pass
CloseKey(keyVal)
CloseKey(aReg)
```
I get an error like the one below:
```
Traceback (most recent call last):
File "D:\PROJECT\project.py", line 305, in OnOk
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
WindowsError: [Error 5] Access is denied
```
And when reading from the registry, the saved values have to show up in the settings window. I tried the code below. Though it works, I'm not satisfied with the way I programmed it. Help me out with a better solution.
```
key = OpenKey(HKEY_CURRENT_USER, r'Software\my path to\Registry', 0, KEY_READ)
for i in range(4):
try:
n,v,t = EnumValue(key,i)
if i==0:
self.field2.SetValue(v)
elif i==1:
self.field3.SetValue(v)
elif i==2:
self.field4.SetValue(v)
elif i==3:
self.field1.SetValue(v)
except EnvironmentError:
pass
CloseKey(key)
```
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15128225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2016950/"
] |
```
#Python3 version of hugo24's snippet
import winreg
REG_PATH = r"Control Panel\Mouse"
def set_reg(name, value):
try:
winreg.CreateKey(winreg.HKEY_CURRENT_USER, REG_PATH)
registry_key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, REG_PATH, 0,
winreg.KEY_WRITE)
winreg.SetValueEx(registry_key, name, 0, winreg.REG_SZ, value)
winreg.CloseKey(registry_key)
return True
except WindowsError:
return False
def get_reg(name):
try:
registry_key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, REG_PATH, 0,
winreg.KEY_READ)
value, regtype = winreg.QueryValueEx(registry_key, name)
winreg.CloseKey(registry_key)
return value
except WindowsError:
return None
#Example MouseSensitivity
#Read value
print (get_reg('MouseSensitivity'))
#Set Value 1/20 (will just write the value to reg, the changed mouse val requires a win re-log to apply*)
set_reg('MouseSensitivity', str(10))
#*For instant apply of SystemParameters like the mouse speed on-write, you can use win32gui/SPI
#http://docs.activestate.com/activepython/3.4/pywin32/win32gui__SystemParametersInfo_meth.html
```
|
Here is a class I wrote (python 2) which has the ability to restore state when you finish manipulating the registry. The class was not tested properly so it may contain some bugs:
```
import _winreg as winreg
class Registry(object):
def __init__(self, restore_state=False):
self.m_backup = {}
self.m_restore_state = restore_state
def get_key(self, hkey, subkey, access, create_if_doesnt_exist=True):
created_key = False
registry_key = None
try:
registry_key = winreg.OpenKey(hkey, subkey, 0, access)
except WindowsError:
try:
if create_if_doesnt_exist:
registry_key = winreg.CreateKey(hkey, subkey)
if registry_key not in self.m_backup:
self.m_backup[registry_key] = ({}, (hkey, subkey))
else:
registry_key = None
except WindowsError:
if registry_key:
self.close_key(registry_key)
raise Exception('Registry does not exist and could not be created.')
return registry_key
def close_key(self, registry_key):
closed = False
if registry_key:
try:
winreg.CloseKey(registry_key)
closed = True
except:
closed = False
return closed
def get_reg_value(self, hkey, subkey, name):
value = None
registry_key = self.get_key(hkey, subkey, winreg.KEY_READ, False)
if registry_key:
try:
value, _ = winreg.QueryValueEx(registry_key, name)
except WindowsError:
value = None
finally:
self.close_key(registry_key)
return value
def set_reg_value(self, hkey, subkey, name, type, value):
registry_key = self.get_key(hkey, subkey, winreg.KEY_WRITE, True)
backed_up = False
was_set = False
if registry_key:
if self.m_restore_state:
if registry_key not in self.m_backup:
self.m_backup[registry_key] = ({}, None)
existing_value = self.get_reg_value(hkey, subkey, name)
if existing_value:
self.m_backup[registry_key][0][name] = (existing_value, type, False)
else:
self.m_backup[registry_key][0][name] = (None, None, True)
backed_up = True
try:
winreg.SetValueEx(registry_key, name, 0, type, value)
was_set = True
except WindowsError:
was_set = False
finally:
if not backed_up:
self.close_key(registry_key)
return was_set
def restore_state(self):
if self.m_restore_state:
for registry_key, data in self.m_backup.iteritems():
backup_dict, key_info = data
try:
for name, backup_data in backup_dict.iteritems():
value, type, was_created = backup_data
if was_created:
print registry_key, name
winreg.DeleteValue(registry_key, name)
else:
winreg.SetValueEx(registry_key, name, 0, type, value)
if key_info:
hkey, subkey = key_info
winreg.DeleteKey(hkey, subkey)
except:
raise Exception('Could not restore value')
self.close_key(registry_key)
def __del__(self):
if self.m_restore_state:
self.restore_state()
```
|
15,128,225
|
I have developed a Python script with a settings window that has options to select the installation paths for the software. When the OK button of the settings window is clicked, I want to write all the selected paths to the registry, and read them back when the settings window is opened again.
My code looks as below.
```
def OnOk(self, event):
data1=self.field1.GetValue() #path selected in setting window
aReg = ConnectRegistry(None,HKEY_LOCAL_MACHINE)
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
try:
SetValueEx(keyVal,"Log file",0,REG_SZ,data1)
except EnvironmentError:
pass
CloseKey(keyVal)
CloseKey(aReg)
```
I get an error like the one below:
```
Traceback (most recent call last):
File "D:\PROJECT\project.py", line 305, in OnOk
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
WindowsError: [Error 5] Access is denied
```
And when reading from the registry, the saved values have to show up in the settings window. I tried the code below. Though it works, I'm not satisfied with the way I programmed it. Help me out with a better solution.
```
key = OpenKey(HKEY_CURRENT_USER, r'Software\my path to\Registry', 0, KEY_READ)
for i in range(4):
try:
n,v,t = EnumValue(key,i)
if i==0:
self.field2.SetValue(v)
elif i==1:
self.field3.SetValue(v)
elif i==2:
self.field4.SetValue(v)
elif i==3:
self.field1.SetValue(v)
except EnvironmentError:
pass
CloseKey(key)
```
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15128225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2016950/"
] |
Same as @Aramanethota's answer, but PEP 8 compliant and with function definitions for easy use.
```
import _winreg

REG_PATH = r"SOFTWARE\my_program\Settings"
def set_reg(name, value):
try:
_winreg.CreateKey(_winreg.HKEY_CURRENT_USER, REG_PATH)
registry_key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, REG_PATH, 0,
_winreg.KEY_WRITE)
_winreg.SetValueEx(registry_key, name, 0, _winreg.REG_SZ, value)
_winreg.CloseKey(registry_key)
return True
except WindowsError:
return False
def get_reg(name):
try:
registry_key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, REG_PATH, 0,
_winreg.KEY_READ)
value, regtype = _winreg.QueryValueEx(registry_key, name)
_winreg.CloseKey(registry_key)
return value
except WindowsError:
return None
```
|
Reading registry keys:
```
import os
from contextlib import suppress
from winreg import HKEY_CURRENT_USER, OpenKey, QueryValueEx, SetValueEx, KEY_READ, KEY_WRITE, REG_SZ, REG_DWORD

def read(path, root=HKEY_CURRENT_USER):
    path, name = os.path.split(path)
    with suppress(FileNotFoundError), OpenKey(root, path) as key:
        return QueryValueEx(key, name)[0]
```
And writing:
```
def write(path, value, root=HKEY_CURRENT_USER):
path, name = os.path.split(path)
with OpenKey(root, path, 0, KEY_WRITE) as key:
SetValueEx(key, name, 0, REG_SZ, value)
```
Extended for type handling. Provide type as argument, match current type in registry or python value type.
```
def write(path, value, root=HKEY_CURRENT_USER, regtype=None):
path, name = os.path.split(path)
with OpenKey(root, path, 0, KEY_WRITE|KEY_READ) as key:
with suppress(FileNotFoundError):
regtype = regtype or QueryValueEx(key, name)[1]
SetValueEx(key, name, 0, regtype or REG_DWORD if isinstance(value, int) else REG_SZ, str(value) if regtype==REG_SZ else value)
```
>
> **NOTE:** Use of [contextlib.suppress()](https://docs.python.org/3/library/contextlib.html#contextlib.suppress) (available since `python 3.4`) can be replaced by try..except..pass for older versions. The [context manager](https://docs.python.org/2/library/_winreg.html#_winreg.PyHKEY.__enter__) interface for winreg was introduced in `python 2.6`.
>
>
>
|
15,128,225
|
I have developed a Python script with a settings window that has options to select the installation paths for the software. When the OK button of the settings window is clicked, I want to write all the selected paths to the registry, and read them back when the settings window is opened again.
My code looks as below.
```
def OnOk(self, event):
data1=self.field1.GetValue() #path selected in setting window
aReg = ConnectRegistry(None,HKEY_LOCAL_MACHINE)
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
try:
SetValueEx(keyVal,"Log file",0,REG_SZ,data1)
except EnvironmentError:
pass
CloseKey(keyVal)
CloseKey(aReg)
```
I get an error like the one below:
```
Traceback (most recent call last):
File "D:\PROJECT\project.py", line 305, in OnOk
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
WindowsError: [Error 5] Access is denied
```
And when reading from the registry, the saved values have to show up in the settings window. I tried the code below. Though it works, I'm not satisfied with the way I programmed it. Help me out with a better solution.
```
key = OpenKey(HKEY_CURRENT_USER, r'Software\my path to\Registry', 0, KEY_READ)
for i in range(4):
try:
n,v,t = EnumValue(key,i)
if i==0:
self.field2.SetValue(v)
elif i==1:
self.field3.SetValue(v)
elif i==2:
self.field4.SetValue(v)
elif i==3:
self.field1.SetValue(v)
except EnvironmentError:
pass
CloseKey(key)
```
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15128225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2016950/"
] |
For creating / writing values in a registry key:
```
from winreg import *
import winreg
keyVal = r'SOFTWARE\\python'
try:
key = OpenKey(HKEY_LOCAL_MACHINE, keyVal, 0, KEY_ALL_ACCESS)
except:
key = CreateKey(HKEY_LOCAL_MACHINE, keyVal)
SetValueEx(key, "Start Page", 0, REG_SZ, "snakes")
CloseKey(key)
```
If access is denied, try running the command (CMD or your IDE) in administrative mode.
For reading a value in a registry key:
```
from winreg import *
Registry = ConnectRegistry(None, HKEY_LOCAL_MACHINE)
RawKey = OpenKey(Registry, "SOFTWARE\\python")
try:
i = 0
while 1:
name, value, type = EnumValue(RawKey, i)
print("name:",name,"value:", value,"i:", i)
i += 1
except WindowsError:
print("")
```
|
The 'winreg' module is rather awkward to work with, so I wrote a class called 'WindowsRegistry' to make manipulating the Windows registry through 'winreg' easier. I hope it will be more useful:
```
import winreg
import re
class WindowsRegistry:
"""Class WindowsRegistry is using for easy manipulating Windows registry.
Methods
-------
query_value(full_path : str)
Check value for existing.
get_value(full_path : str)
Get value's data.
set_value(full_path : str, value : str, value_type='REG_SZ' : str)
Create a new value with data or set data to an existing value.
delete_value(full_path : str)
Delete an existing value.
query_key(full_path : str)
Check key for existing.
delete_key(full_path : str)
Delete an existing key(only without subkeys).
Examples:
WindowsRegistry.set_value('HKCU/Software/Microsoft/Windows/CurrentVersion/Run', 'Program', r'"c:\Dir1\program.exe"')
WindowsRegistry.delete_value('HKEY_CURRENT_USER/Software/Microsoft/Windows/CurrentVersion/Run/Program')
"""
@staticmethod
def __parse_data(full_path):
full_path = re.sub(r'/', r'\\', full_path)
hive = re.sub(r'\\.*$', '', full_path)
if not hive:
raise ValueError('Invalid \'full_path\' param.')
if len(hive) <= 4:
if hive == 'HKLM':
hive = 'HKEY_LOCAL_MACHINE'
elif hive == 'HKCU':
hive = 'HKEY_CURRENT_USER'
elif hive == 'HKCR':
hive = 'HKEY_CLASSES_ROOT'
elif hive == 'HKU':
hive = 'HKEY_USERS'
reg_key = re.sub(r'^[A-Z_]*\\', '', full_path)
reg_key = re.sub(r'\\[^\\]+$', '', reg_key)
reg_value = re.sub(r'^.*\\', '', full_path)
return hive, reg_key, reg_value
@staticmethod
def query_value(full_path):
value_list = WindowsRegistry.__parse_data(full_path)
try:
opened_key = winreg.OpenKey(getattr(winreg, value_list[0]), value_list[1], 0, winreg.KEY_READ)
winreg.QueryValueEx(opened_key, value_list[2])
winreg.CloseKey(opened_key)
return True
except WindowsError:
return False
@staticmethod
def get_value(full_path):
value_list = WindowsRegistry.__parse_data(full_path)
try:
opened_key = winreg.OpenKey(getattr(winreg, value_list[0]), value_list[1], 0, winreg.KEY_READ)
value_of_value, value_type = winreg.QueryValueEx(opened_key, value_list[2])
winreg.CloseKey(opened_key)
return value_of_value
except WindowsError:
return None
@staticmethod
def set_value(full_path, value, value_type='REG_SZ'):
value_list = WindowsRegistry.__parse_data(full_path)
try:
winreg.CreateKey(getattr(winreg, value_list[0]), value_list[1])
opened_key = winreg.OpenKey(getattr(winreg, value_list[0]), value_list[1], 0, winreg.KEY_WRITE)
winreg.SetValueEx(opened_key, value_list[2], 0, getattr(winreg, value_type), value)
winreg.CloseKey(opened_key)
return True
except WindowsError:
return False
@staticmethod
def delete_value(full_path):
value_list = WindowsRegistry.__parse_data(full_path)
try:
opened_key = winreg.OpenKey(getattr(winreg, value_list[0]), value_list[1], 0, winreg.KEY_WRITE)
winreg.DeleteValue(opened_key, value_list[2])
winreg.CloseKey(opened_key)
return True
except WindowsError:
return False
@staticmethod
def query_key(full_path):
value_list = WindowsRegistry.__parse_data(full_path)
try:
            opened_key = winreg.OpenKey(getattr(winreg, value_list[0]), value_list[1] + '\\' + value_list[2], 0, winreg.KEY_READ)
winreg.CloseKey(opened_key)
return True
except WindowsError:
return False
@staticmethod
def delete_key(full_path):
value_list = WindowsRegistry.__parse_data(full_path)
try:
            winreg.DeleteKey(getattr(winreg, value_list[0]), value_list[1] + '\\' + value_list[2])
return True
except WindowsError:
return False
```
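A minimal usage sketch of the class above (the key path is hypothetical):
```
# hypothetical autorun value under HKCU
path = 'HKCU/Software/Microsoft/Windows/CurrentVersion/Run/MyProgram'
WindowsRegistry.set_value(path, r'"c:\Dir1\program.exe"')
if WindowsRegistry.query_value(path):
    print(WindowsRegistry.get_value(path))
WindowsRegistry.delete_value(path)
```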
|
15,128,225
|
I have developed a python script with a settings window that has options to select the paths for the installation of software. When the OK button of the settings window is clicked, I want to write all the selected paths to the registry and read them back when the settings window is opened again.
My code looks as below.
```
def OnOk(self, event):
data1=self.field1.GetValue() #path selected in setting window
aReg = ConnectRegistry(None,HKEY_LOCAL_MACHINE)
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
try:
SetValueEx(keyVal,"Log file",0,REG_SZ,data1)
except EnvironmentError:
pass
CloseKey(keyVal)
CloseKey(aReg)
```
I get an error like the one below:
```
Traceback (most recent call last):
File "D:\PROJECT\project.py", line 305, in OnOk
keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE)
WindowsError: [Error 5] Access is denied
```
And when reading from the registry, the saved values have to show up in the settings window. I tried the code below. Though it works, I am not satisfied with the way I programmed it. Help me out with a better solution.
```
key = OpenKey(HKEY_CURRENT_USER, r'Software\my path to\Registry', 0, KEY_READ)
for i in range(4):
try:
n,v,t = EnumValue(key,i)
if i==0:
self.field2.SetValue(v)
elif i==1:
self.field3.SetValue(v)
elif i==2:
self.field4.SetValue(v)
elif i==3:
self.field1.SetValue(v)
except EnvironmentError:
pass
CloseKey(key)
```
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15128225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2016950/"
] |
The Python script to read from the registry is as follows:
```
try:
    root_key = OpenKey(HKEY_CURRENT_USER, r'SOFTWARE\my path to\Registry', 0, KEY_READ)
    Pathname, regtype = QueryValueEx(root_key, "Pathname")
    CloseKey(root_key)
    if Pathname == "":
        raise WindowsError
except WindowsError:
    return [""]
```
The Python script to write to the registry is:
```
try:
    keyval = r"SOFTWARE\my path to\Registry"
    # CreateKey opens the key if it already exists, so no existence check is needed
    # (the original os.path.exists("keyval") checked the filesystem, not the registry)
    CreateKey(HKEY_CURRENT_USER, keyval)
    Registrykey = OpenKey(HKEY_CURRENT_USER, keyval, 0, KEY_WRITE)
    SetValueEx(Registrykey, "Pathname", 0, REG_SZ, Pathname)
    CloseKey(Registrykey)
    return True
except WindowsError:
    return False
```
Hope it helps you all. Cheers :)
|
Same as @Aramanethota's answer, but PEP 8 compliant and wrapped in function definitions for easy use.
```
import _winreg

REG_PATH = r"SOFTWARE\my_program\Settings"
def set_reg(name, value):
try:
_winreg.CreateKey(_winreg.HKEY_CURRENT_USER, REG_PATH)
registry_key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, REG_PATH, 0,
_winreg.KEY_WRITE)
_winreg.SetValueEx(registry_key, name, 0, _winreg.REG_SZ, value)
_winreg.CloseKey(registry_key)
return True
except WindowsError:
return False
def get_reg(name):
try:
registry_key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, REG_PATH, 0,
_winreg.KEY_READ)
value, regtype = _winreg.QueryValueEx(registry_key, name)
_winreg.CloseKey(registry_key)
return value
except WindowsError:
return None
```
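A quick usage sketch of the helpers above (the value name and data are hypothetical):
```
# hypothetical setting stored under HKCU\SOFTWARE\my_program\Settings
if set_reg('install_path', r'C:\Program Files\my_program'):
    print(get_reg('install_path'))  # C:\Program Files\my_program
```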
|
15,576,693
|
I am currently working on a project for one of my classes where I have to implement an AI opponent to play tic-tac-toe using minmax and Alpha-Beta minmax algorithms to determine the moves.
The problem I am having however is attempting to generate a list of possible moves for a board.
My problem code is as follows
```
def genMoves(genBoard, turnNumber):
moveList = []
print "inMovesList"
#Figure out if X or O go now
if turnNumber % 2 == 0:
moveChar = "O"
else:
moveChar = "X"
i = 0;
while i < 9:
tempBoard = genBoard
if tempBoard[i] == "*":
#set tempBoard[i] to X or O
tempBoard[i] = moveChar
#append move, new board
moveList.append((i, tempBoard))
i+=1
print "MovesList: "
print moveList
return moveList
```
My board is represented as a list of 9 strings initialized to `["*", "*", "*", "*", "*", "*", "*", "*", "*"]`.
My goal is to have move list return a list of tuples with the first element of the tuple being i (Where the X or O was inserted) and the second element being the resulting board.
The problem I have is that I will receive a list with the correct number of possible moves (e.g. if I manually play the first 4 moves for both sides it will only give me 5 possible moves), however it will place the same move in each location that contains a \*. (So it ends up generating something like X,O,O,O,O,O,O,O,O for possible second moves)
This is not the first time I have had to use minmax but it is the first time I've had to do it in python.
Any suggestions for how to get around this issue would be helpful!
Thanks!
|
2013/03/22
|
[
"https://Stackoverflow.com/questions/15576693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/967929/"
] |
This line is a problem:
```
tempBoard = genBoard
```
After this line, you seem to think that you have *two* lists -- the original, still referenced
by `genBoard`, and a new one, now referenced by `tempBoard`. This is not the case.
This line does **not** create a copy of the list. Instead, it binds the name `tempBoard` to refer to *the same object* that `genBoard` is bound to.
Consequently, subsequent references to `tempBoard[i]` also affect `genBoard[i]`.
Try one of these instead:
```
tempBoard = list(genBoard)
tempBoard = genBoard[:]
tempBoard = copy.copy(genBoard)  # requires `import copy`
```
Each of these lines creates a new list, the initial contents of which are the same as `genBoard`. `tempBoard` is bound to this new list, while `genBoard` remains bound to the old list.
If the object in question were more complicated than a list of strings, you might need to do this:
```
tempBoard = copy.deepcopy(genBoard)
```
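A minimal sketch demonstrating the aliasing versus copying behaviour:
```
board = ["*"] * 9
alias = board          # same list object, two names
alias[0] = "X"
print(board[0])        # X -- the "original" changed too

copy_of_board = board[:]   # a new list with the same contents
copy_of_board[1] = "O"
print(board[1])        # * -- the original is untouched
```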
|
I believe Python does not create a copy of your board but just points to the original version. Therefore your printout is:
*"MovesList: "
[[0,(X,O,O,O,O,O,O,O,O)],[1,(X,O,O,O,O,O,O,O,O)],[2,(X,O,O,O,O,O,O,O,O)],* etc.
and your `genBoard` variable is changed.
To test that, add
*print genBoard*
directly before the end of your method.
If that was indeed the problem, try to google how to create a copy of your board instead of a reference to it.
|
47,801
|
I am trying to create a web application using Pylons and the resources on the web point to the [PylonsBook](http://pylonsbook.com/alpha1/authentication_and_authorization) page, which isn't of much help. I want authentication and authorisation: is there any way to set up AuthKit to work easily with Pylons?
I tried downloading the [SimpleSiteTemplate](http://pypi.python.org/pypi/SimpleSiteTemplate/) from the cheeseshop but wasn't able to run the setup-app command. It throws up an error:
```
File "/home/cnu/env/lib/python2.5/site-packages/SQLAlchemy-0.4.7-py2.5.egg/sqlalchemy/schema.py", line 96, in __call__
table = metadata.tables[key]
AttributeError: 'module' object has no attribute 'tables'
```
I use Pylons 0.9.7rc1, SQLAlchemy 0.4.7, Authkit 0.4.
|
2008/09/06
|
[
"https://Stackoverflow.com/questions/47801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448/"
] |
I don't think AuthKit is actively maintained anymore. It does use the Paste (<http://pythonpaste.org>) libs though for things like HTTP Basic/Digest authentication. I would probably go ahead and take a look at the source for some inspiration and then use the Paste tools if you want to use HTTP authentication.
There is also OpenID which is very easy to setup. The python-openid libs have an excellent example that is easy to translate to WSGI for wrapping a Pylons app. You can look at an example:
<http://ionrock.org/hg/brightcontent-main/file/d87b7dcc606c/brightcontent/plugins/openidauth.py>
|
This actually got me interested: [Check out this mailing on the pylons list](http://groups.google.com/group/pylons-discuss/browse_thread/thread/644deb53612af362?hl=en). So AuthKit is being developed, and I will follow the book and report back on the results.
|
47,801
|
I am trying to create a web application using Pylons and the resources on the web point to the [PylonsBook](http://pylonsbook.com/alpha1/authentication_and_authorization) page, which isn't of much help. I want authentication and authorisation: is there any way to set up AuthKit to work easily with Pylons?
I tried downloading the [SimpleSiteTemplate](http://pypi.python.org/pypi/SimpleSiteTemplate/) from the cheeseshop but wasn't able to run the setup-app command. It throws up an error:
```
File "/home/cnu/env/lib/python2.5/site-packages/SQLAlchemy-0.4.7-py2.5.egg/sqlalchemy/schema.py", line 96, in __call__
table = metadata.tables[key]
AttributeError: 'module' object has no attribute 'tables'
```
I use Pylons 0.9.7rc1, SQLAlchemy 0.4.7, Authkit 0.4.
|
2008/09/06
|
[
"https://Stackoverflow.com/questions/47801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448/"
] |
OK, another update on the subject. It seems that the cheeseshop template is broken. I've followed the chapter you linked in the post and it seems that AuthKit is working fine. There are some caveats:
1. SQLAlchemy has to be version 0.5
2. AuthKit has to be the dev version from svn (easy\_install authkit==dev)
I managed to get it working fine.
|
I don't think AuthKit is actively maintained anymore. It does use the Paste (<http://pythonpaste.org>) libs though for things like HTTP Basic/Digest authentication. I would probably go ahead and take a look at the source for some inspiration and then use the Paste tools if you want to use HTTP authentication.
There is also OpenID which is very easy to setup. The python-openid libs have an excellent example that is easy to translate to WSGI for wrapping a Pylons app. You can look at an example:
<http://ionrock.org/hg/brightcontent-main/file/d87b7dcc606c/brightcontent/plugins/openidauth.py>
|
47,801
|
I am trying to create a web application using Pylons and the resources on the web point to the [PylonsBook](http://pylonsbook.com/alpha1/authentication_and_authorization) page, which isn't of much help. I want authentication and authorisation: is there any way to set up AuthKit to work easily with Pylons?
I tried downloading the [SimpleSiteTemplate](http://pypi.python.org/pypi/SimpleSiteTemplate/) from the cheeseshop but wasn't able to run the setup-app command. It throws up an error:
```
File "/home/cnu/env/lib/python2.5/site-packages/SQLAlchemy-0.4.7-py2.5.egg/sqlalchemy/schema.py", line 96, in __call__
table = metadata.tables[key]
AttributeError: 'module' object has no attribute 'tables'
```
I use Pylons 0.9.7rc1, SQLAlchemy 0.4.7, Authkit 0.4.
|
2008/09/06
|
[
"https://Stackoverflow.com/questions/47801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448/"
] |
I gave up on authkit and rolled my own:
<http://tonylandis.com/openid-db-authentication-in-pylons-is-easy-with-rpx/>
|
I don't think AuthKit is actively maintained anymore. It does use the Paste (<http://pythonpaste.org>) libs though for things like HTTP Basic/Digest authentication. I would probably go ahead and take a look at the source for some inspiration and then use the Paste tools if you want to use HTTP authentication.
There is also OpenID which is very easy to setup. The python-openid libs have an excellent example that is easy to translate to WSGI for wrapping a Pylons app. You can look at an example:
<http://ionrock.org/hg/brightcontent-main/file/d87b7dcc606c/brightcontent/plugins/openidauth.py>
|
47,801
|
I am trying to create a web application using Pylons and the resources on the web point to the [PylonsBook](http://pylonsbook.com/alpha1/authentication_and_authorization) page, which isn't of much help. I want authentication and authorisation: is there any way to set up AuthKit to work easily with Pylons?
I tried downloading the [SimpleSiteTemplate](http://pypi.python.org/pypi/SimpleSiteTemplate/) from the cheeseshop but wasn't able to run the setup-app command. It throws up an error:
```
File "/home/cnu/env/lib/python2.5/site-packages/SQLAlchemy-0.4.7-py2.5.egg/sqlalchemy/schema.py", line 96, in __call__
table = metadata.tables[key]
AttributeError: 'module' object has no attribute 'tables'
```
I use Pylons 0.9.7rc1, SQLAlchemy 0.4.7, Authkit 0.4.
|
2008/09/06
|
[
"https://Stackoverflow.com/questions/47801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448/"
] |
OK, another update on the subject. It seems that the cheeseshop template is broken. I've followed the chapter you linked in the post and it seems that AuthKit is working fine. There are some caveats:
1. SQLAlchemy has to be version 0.5
2. AuthKit has to be the dev version from svn (easy\_install authkit==dev)
I managed to get it working fine.
|
This actually got me interested: [Check out this mailing on the pylons list](http://groups.google.com/group/pylons-discuss/browse_thread/thread/644deb53612af362?hl=en). So AuthKit is being developed, and I will follow the book and report back on the results.
|
47,801
|
I am trying to create a web application using Pylons and the resources on the web point to the [PylonsBook](http://pylonsbook.com/alpha1/authentication_and_authorization) page, which isn't of much help. I want authentication and authorisation: is there any way to set up AuthKit to work easily with Pylons?
I tried downloading the [SimpleSiteTemplate](http://pypi.python.org/pypi/SimpleSiteTemplate/) from the cheeseshop but wasn't able to run the setup-app command. It throws up an error:
```
File "/home/cnu/env/lib/python2.5/site-packages/SQLAlchemy-0.4.7-py2.5.egg/sqlalchemy/schema.py", line 96, in __call__
table = metadata.tables[key]
AttributeError: 'module' object has no attribute 'tables'
```
I use Pylons 0.9.7rc1, SQLAlchemy 0.4.7, Authkit 0.4.
|
2008/09/06
|
[
"https://Stackoverflow.com/questions/47801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448/"
] |
I gave up on authkit and rolled my own:
<http://tonylandis.com/openid-db-authentication-in-pylons-is-easy-with-rpx/>
|
This actually got me interested: [Check out this mailing on the pylons list](http://groups.google.com/group/pylons-discuss/browse_thread/thread/644deb53612af362?hl=en). So AuthKit is being developed, and I will follow the book and report back on the results.
|
44,532,041
|
I want to strip all python docstrings out of a file using simple search and replace, and the following (extremely) simplistic regex does the job for one line doc strings:
[Regex101.com](https://regex101.com/r/PiExji/1)
```
""".*"""
```
How can I extend that to work with multi-liners?
Tried to include `\s` in a number of places to no avail.
|
2017/06/13
|
[
"https://Stackoverflow.com/questions/44532041",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2171758/"
] |
As you cannot use an inline `s` (DOTALL) modifier, the usual workaround to match any char is using a character class with opposite shorthand character classes:
```
"""[\s\S]*?"""
```
or
```
"""[\d\D]*?"""
```
or
```
"""[\w\W]*?"""
```
will match `"""`, then any 0+ chars, as few as possible since `*?` is a lazy quantifier, and then the trailing `"""`.
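A quick usage sketch with `re.sub`:
```
import re

src = 'def f():\n    """A\n    multi-line docstring."""\n    return 1\n'
# removes the docstring, leaving the rest of the function intact
print(re.sub(r'"""[\s\S]*?"""', '', src))
```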
|
Sometimes there are multiline strings that are not docstrings. For example, you may have a complicated SQL query that extends across multiple lines. The following attempts to look for multiline strings that appear before class definitions and after function definitions.
```
import re
input_str = """'''
This is a class level docstring
'''
class Article:
def print_it(self):
'''
method level docstring
'''
print('Article')
sql = '''
SELECT * FROM mytable
WHERE DATE(purchased) >= '2020-01-01'
'''
"""
doc_reg_1 = r'("""|\'\'\')([\s\S]*?)(\1\s*)(?=class)'
doc_reg_2 = r'(\s+def\s+.*:\s*)\n(\s*"""|\s*\'\'\')([\s\S]*?)(\2[^\n\S]*)'
input_str = re.sub(doc_reg_1, '', input_str)
input_str = re.sub(doc_reg_2, r'\1', input_str)
print(input_str)
```
Prints:
```
class Article:
def print_it(self):
print('Article')
sql = '''
SELECT * FROM mytable
WHERE DATE(purchased) >= '2020-01-01'
'''
```
|
43,970,466
|
I tried using the below python code to find the websites of the companies, but after trying a few times, I get a **Service Unavailable** error.
I have finished the first level of finding the possible domains of the companies. For example :
CompanyExample [u'<http://www.examples.com/>', u'<https://www.example.com/quote/CGL:SP>', u'<http://example2.sgx.com/FileOpen/China%20Great%20Land.ashx?App=Prospectus&FileID=3813>', u'<https://www.example3.com/php/company-profile/SG/en_2036109.html>']
```
from google import search
for link in links:
parsed_uri = urlparse(link)
domain = '{uri.scheme}://{uri.netloc}/'.format(uri=parsed_uri)
for url in search(domain,stop = 4):
print url
```
Kindly help me with:
1. Why do I suddenly get a **urllib2.HTTPError: HTTP Error 503: Service Unavailable** error?
2. Is there any other method (e.g. Python requests) to find the websites of the list of companies?
|
2017/05/15
|
[
"https://Stackoverflow.com/questions/43970466",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7664428/"
] |
If you need to check a specific permission, try:
```
{% if user.hasPermission('administer nodes') %}
<div class="foo"><a href="/">Shorthand link for admins</a></div>
{% endif %}
```
|
If you are trying to check whether the current user is an administrator in a Twig file, you can use:
```
{% if is_admin %}
<div class="foo"><a href="/">Shorthand link for admins</a></div>
{% endif %}
```
|
2,731,871
|
I have a web application in python wherein the user submits their email and password. These values are compared to values stored in a MySQL database. If successful, the script generates a session id, stores it next to the email in the database, and sets a cookie with the session id, which allows the user to interact with other parts of the site.
When the user clicks logout, the script erases the session id from the database and deletes the cookie. The cookie expires after 5 hours. My concern is that if the user doesn't log out and the cookie expires, the script will force him to log in, but if he has copied the session id from before, it can still be validated.
How do I automatically delete the session id from the MySQL database after 5 hours?
|
2010/04/28
|
[
"https://Stackoverflow.com/questions/2731871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/301860/"
] |
You can encode the expiration time as part of your session id.
Then when you validate the session id, you can also check if it has expired, and if so force the user to log-in again.
You can also clean your database periodically, removing expired sessions.
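For the periodic cleanup, a minimal sketch assuming a `sessions` table with a `created_at` timestamp column and an existing MySQLdb connection `conn` (run it from cron or a scheduler):
```
def purge_expired_sessions(conn, max_age_hours=5):
    # delete any session older than the allowed lifetime
    cur = conn.cursor()
    cur.execute(
        "DELETE FROM sessions WHERE created_at < NOW() - INTERVAL %s HOUR",
        (max_age_hours,),
    )
    conn.commit()
```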
|
You'd have to add a timestamp to the session ID in the database to know when it ran out.
Better, make the timestamp part of the session ID itself, eg.:
```
Set-Cookie: session=1272672000-(random_number);expires=Sat, 01-May-2010 00:00:00 GMT
```
so that your script can see just by looking at the number at the front whether it's still a valid session ID.
Better still, make both the expiry timestamp *and* the user ID part of the session ID, and make the bit at the end a cryptographic hash of the user ID, expiry time and a secret key. Then you can validate the ID *and* know which user it belongs to without having to store anything in the database. ([Related PHP example](https://stackoverflow.com/questions/2695153/php-csrf-how-to-make-it-works-in-all-tabs/2695291#2695291).)
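A minimal sketch of such a self-validating session id, assuming a server-side secret key and user ids without a `-` in them:
```
import hashlib
import hmac
import time

SECRET_KEY = b'change-me'  # hypothetical server-side secret

def make_session_id(user_id, lifetime=5 * 3600):
    expires = int(time.time()) + lifetime
    payload = '%s-%s' % (user_id, expires)
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return '%s-%s' % (payload, sig)

def validate_session_id(session_id):
    try:
        user_id, expires, sig = session_id.rsplit('-', 2)
        # recompute the signature and reject tampered or expired ids
        payload = '%s-%s' % (user_id, expires)
        expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(sig, expected) and time.time() < int(expires):
            return user_id
    except ValueError:
        pass
    return None
```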
|
47,412,723
|
I'm new to python and I am trying to build a script that takes a URL as an argument, sends a GET request to the website, and then prints the details from the response as well as the HTML body.
The code I wrote:
```
import sys
import urllib.request
url = 'https://%s' % (sys.argv[1])
res = urllib.request.urlopen(url)
print('Response from the server:'+'\n', res.info())
print('HTML web page:'+'\n', res.read())
```
Now, I want to check whether an argument is passed through the command line or not, and if not, alert about it.
And also to print the status code and the reason.
What exactly does Reason mean?
This is what I have got so far, and I do not understand why I don't get any result.
My current code is:
```
import sys
import urllib.request, urllib.error
try:
url = 'https://%s' % (sys.argv[1])
res = urllib.request.urlopen(url)
except IndexError:
if len(sys.argv) < 1:
print(res.getcode())
print(res.getreason())
elif len(sys.argv) > 1:
print('Response from the server:'+'\n', res.info())
print('HTML web page:'+'\n', res.read())
print('Status Code:', res.getcode())
```
Thank you.
|
2017/11/21
|
[
"https://Stackoverflow.com/questions/47412723",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You have to handle arrays of elements separately:
```
var formToJSON = elements => [].reduce.call(elements, (data, element) => {
var isArray = element.name.endsWith('[]');
var name = element.name.replace('[]', '');
data[name] = isArray ? (data[name] || []).concat(element.value) : element.value;
return data;
}, {});
```
|
If you want to convert form data into a JSON object, try the following:
```
var formData = JSON.parse(JSON.stringify($('#frm').serializeArray()));
```
|
5,939,850
|
I'm writing some shell scripts to check if I can manipulate information on a web server.
I use a bash shell script as a wrapper script to kick off multiple python scripts in sequential order.
The wrapper script takes the web server's host name, user name and password, and after performing some tasks, hands them to the python scripts.
I have no problem getting the information passed to the python scripts
```
host% ./bash_script.sh host user passwd
```
Inside the wrapper script I do the following:
```
/usr/dir/python_script.py $1 $2 $3
print "Parameters: "
print "Host: ", sys.argv[1]
print "User: ", sys.argv[2]
print "Passwd: ", sys.argv[3]
```
The values are printed correctly
Now, I try to pass the values to open a connection to the web server:
(I successfully opened a connection to the web server using the literal host name)
```
f_handler = urlopen("https://sys.argv[1]/path/menu.php")
```
Trying the variable substitution format throws the following error:
```
HTTP Response Arguments: (gaierror(8, 'nodename nor servname provided, or not known'),)
```
I tried different variations,
```
'single_quotes'
"double_quotes"
http://%s/path/file.php % value
```
for the variable substitution, but I always get the same error.
I assume that the urlopen function does not convert the substituted variable correctly.
Does anybody know a fix for this? Do I need to convert the host name variable first?
Roland
|
2011/05/09
|
[
"https://Stackoverflow.com/questions/5939850",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/745512/"
] |
Please try the following code.
```
<?php
$thumbnail_id = get_post_thumbnail_id(get_the_ID());
if (!empty($thumbnail_id))
{
$thumbnail = wp_get_attachment_image_src($thumbnail_id, 'full');
if (count ($thumbnail) >= 3)
{
$thumbnail_url = $thumbnail[0];
$thumbnail_width = $thumbnail[1];
$thumbnail_height = $thumbnail[2];
$thumbnail_w = 620;
$thumbnail_h = floor($thumbnail_height * $thumbnail_w / $thumbnail_width);
}
}
if (!empty ($thumbnail_url)): ?>
<img class="thumbnail" src="<?php echo $thumbnail_url; ?>" alt="<?php the_title_attribute(); ?>"
width="<?php echo $thumbnail_w; ?>" height="<?php echo $thumbnail_h; ?>" />
<?php endif; ?>
```
<http://www.boxoft.net/2011/10/display-the-wordpress-featured-image-without-stretching-it/>
|
What about putting a div around that, setting the above code inside it, and then using CSS to set the width and height? Something like this:
**CSS**
```
.deals img {
width: 620px;
max-height: 295px;
}
```
**HTML / PHP**
```
<div class="deals"><?php the_post_thumbnail( array(620,295) ); ?></div>
```
|
49,809,680
|
I'm extremely new to python, I've just got it set up on Visual Studio 2017 CE version 15.6.6 on Windows 7 SP1 x64 Home Premium. I went through a couple of walk-through tutorials and can verify that at least Python is installed and working.
I'm trying to follow instructions from the MPIR documentations on building the needed libraries for (c/c++) to run in Visual Studio. I have the tools needed: I have Python, VYASM, and MPIR, MPFR and MPFRC++. I have all the newest versions of the libraries from the websites directly (no third party). These are default distributions.
While reading through the documentation for MPIR, it mentions that I should run the Python script (mpir\_config.py), where N is the version of Visual Studio for which you will be building the libraries (static/DLL, debug/release versions). It states that I should run the Python script first, and it also states that, if available, you should choose a custom build for your specific platform and architecture according to your CPU.
Here is the list that is generated from running Python script (module) in the Python Shell without any arguments.
```
1. gc
2. p3 (win32)
3. p3_p3mmx (win32)
4. p4 (win32)
5. p4_mmx (win32)
6. p4_sse2 (win32)
7. p6 (win32)
8. p6_mmx (win32)
9. p6_p3mmx (win32)
10. pentium4 (win32)
11. pentium4_mmx (win32)
12. pentium4_sse2 (win32)
13. atom (x64)
14. bobcat (x64)
15. bulldozer (x64)
16. bulldozer_piledriver (x64)
17. core2 (x64)
18. core2_penryn (x64)
19. haswell (x64)
20. haswell_avx (x64)
21. k8 (x64)
22. k8_k10 (x64)
23. k8_k10_k102 (x64)
24. nehalem (x64)
25. nehalem_westmere (x64)
26. netburst (x64)
27. sandybridge (x64)
28. sandybridge_ivybridge (x64)
29. skylake (x64)
30. skylake_avx (x64)
Space separated list of builds (1..30, 0 to exit)?
```
My system is a Intel DP45SG motherboard with chip set P45 running a QuadCore Intel Core 2 Quad Q9650, 3.0Ghz (9x333).
The Alias or code names are Intel Skyburg for the mother board. Intel Eaglelake for the chipset, and Yorkfield for the processor.
I don't know what selection I should choose, if any... That's the first half of the problem. The other half is, if I am to choose one that is appropriate, how do I run the mpir\_config.py file to set this? Does it accept an argument when you call it? Or do you run it in the shell and then give it a value? Or would the actual code within the script have to be changed? I'm a Python noobie... you can call me (worm), I haven't reached the status of a snake yet. Being new to Python, I have no idea what to do next.
Now as for setting up projects in Visual Studio to actually build out the (c/c++) libraries from their solutions, setting the configurations and even setting environment variables is not a problem for me. Any and all help would be appreciated.
*All of this hassle because boost's multi precision library uses GMP which does not really support windows...*
|
2018/04/13
|
[
"https://Stackoverflow.com/questions/49809680",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1757805/"
] |
As Intel Core 2 Quad Q9650 is a Yorkfield core from the Penryn family,
18. core2\_penryn (x64)
should be fine.
|
And for the second part of your question, mpir\_config.py will generate 2 projects in the mpir-3.0.0\build.vc15 solution directory: one for the dynamic lib, and one for the static lib.
Just open mpir.sln and build the desired one.
|
53,311,249
|
I am reading a utf8 file with normal python text encoding. I also need to get rid of all the quotes in the file. However, the utf8 code has multiple types of quotes and I can't figure out how to get rid of all of them. The code below serves as an example of what I've been trying to do.
```
def change_things(string, remove):
for thing in remove:
string = string.replace(thing, remove[thing])
return string
```
where
```
remove = {
'\'': '',
'\"': '',
}
```
Unfortunately, this code only removes normal quotes, not left or right facing quotes. Is there any way to remove all such quotes using a similar format to what I have done (I recognize that there are other, more efficient ways of removing items from strings but given the overall context of the code this makes more sense for my specific project)?
|
2018/11/15
|
[
"https://Stackoverflow.com/questions/53311249",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10655038/"
] |
You can just type those sorts of quotes into your file, and replace them the same as any other character.
```
utf8_quotes = "“”‘’‹›«»"
mystr = 'Text with “quotes”'
mystr = mystr.replace('“', '"').replace('”', '"')  # str.replace returns a new string
```
There are a few different single quote variants too.
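A minimal sketch that strips every listed quote in one pass with `str.translate`:
```
quotes = "\"'“”‘’‹›«»"
table = str.maketrans('', '', quotes)  # map each listed quote to None (delete it)
print('Text with “curly” and \'straight\' quotes'.translate(table))
# Text with curly and straight quotes
```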
|
There's a list of unicode quote marks at <https://gist.github.com/goodmami/98b0a6e2237ced0025dd>. That should allow you to remove any type of quotes.
|
53,311,249
|
I am reading a utf8 file with normal python text encoding. I also need to get rid of all the quotes in the file. However, the utf8 code has multiple types of quotes and I can't figure out how to get rid of all of them. The code below serves as an example of what I've been trying to do.
```
def change_things(string, remove):
for thing in remove:
string = string.replace(thing, remove[thing])
return string
```
where
```
remove = {
'\'': '',
'\"': '',
}
```
Unfortunately, this code only removes normal quotes, not left or right facing quotes. Is there any way to remove all such quotes using a similar format to what I have done (I recognize that there are other, more efficient ways of removing items from strings but given the overall context of the code this makes more sense for my specific project)?
|
2018/11/15
|
[
"https://Stackoverflow.com/questions/53311249",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10655038/"
] |
You can just type those sorts of quotes into your file, and replace them the same as any other character.
```
utf8_quotes = "“”‘’‹›«»"
mystr = 'Text with “quotes”'
mystr = mystr.replace('“', '"').replace('”', '"')  # str.replace returns a new string
```
There are a few different single quote variants too.
|
There are multiple ways to do this, regex is one:
```
import re
newstr = re.sub(u'[\u201c\u201d\u2018\u2019]', '', oldstr)
```
Another clean way to do it is to use the [`Unidecode` package](https://pypi.python.org/pypi/Unidecode). This doesn't remove the quotes directly, but converts them to neutral quotes. It also converts any non-ASCII character to its closest ASCII equivalent:
```
from unidecode import unidecode
newstr = unidecode(oldstr)
```
Then, you can remove the quotes with your code.
|
72,148,172
|
Is there a website or git repository where there is a clear listing of all the python packages and other dependencies installed on Google CoLaboratory? Is there a docker image that corresponds to a Google CoLab environment?
|
2022/05/06
|
[
"https://Stackoverflow.com/questions/72148172",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3659451/"
] |
I see your question is about viewing the already-installed packages in Google Colab.
You can run this command to view a specific package, e.g. Keras:
```
!pip list -v | grep [Kk]eras
```
You can also view all the packages with this command:
```
!pip list -v
```
For more info, visit [this link](https://colab.research.google.com/github/aviadr1/learn-advanced-python/blob/master/content/10_package_management/10-package_management.ipynb#scrollTo=HecBSOomHiF1).
|
You can find out about all the packages installed in Colab using
```
!pip list -v
```
This shows you the packages' names, versions, locations, and installers.
|
72,148,172
|
Is there a website or git repository where there is a clear listing of all the python packages and other dependencies installed on Google CoLaboratory? Is there a docker image that corresponds to a Google CoLab environment?
|
2022/05/06
|
[
"https://Stackoverflow.com/questions/72148172",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3659451/"
] |
I see your question is about viewing the already-installed packages in Google Colab.
You can run this command to view a specific package, e.g. Keras:
```
!pip list -v | grep [Kk]eras
```
You can also view all the packages with this command:
```
!pip list -v
```
For more info, visit [this link](https://colab.research.google.com/github/aviadr1/learn-advanced-python/blob/master/content/10_package_management/10-package_management.ipynb#scrollTo=HecBSOomHiF1).
|
There is no clear listing of all the packages and dependencies installed on Google CoLaboratory. However, you can use the !pip freeze command to list all the packages that are currently installed in your environment.
The github repo for the Google CoLaboratory environment is here: [link](https://github.com/googlecolab/colabtools) and you should be able to see the imported package dependencies in this file [link](https://github.com/googlecolab/colabtools/blob/190c19930ce6fc38859874a9b359ba769e726403/setup.py)
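A minimal sketch, for example, snapshotting the current Colab environment to a file so it can be diffed or reinstalled later (the output filename is hypothetical):
```
import subprocess

# capture `pip freeze` output as text
frozen = subprocess.run(['pip', 'freeze'], capture_output=True, text=True).stdout
with open('colab_requirements.txt', 'w') as f:
    f.write(frozen)
```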
|