Columns: qid (int64), question (string), date (string), metadata (list), response_j (string), response_k (string), __index_level_0__ (int64)
16,158,221
I got this error: ``` [Mon Apr 22 23:45:42 2013] [error] [client 192.168.1.88] mod_wsgi (pid=19481): Exception occurred processing WSGI script '/home/projects/treeio/treeio.wsgi'. [Mon Apr 22 23:45:42 2013] [error] [client 192.168.1.88] Traceback (most recent call last): [Mon Apr 22 23:45:42 2013] [error] [client 192.168.1.88] File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/wsgi.py", line 236, in __call__ [Mon Apr 22 23:45:42 2013] [error] [client 192.168.1.88] self.load_middleware() [Mon Apr 22 23:45:42 2013] [error] [client 192.168.1.88] File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 53, in load_middleware [Mon Apr 22 23:45:42 2013] [error] [client 192.168.1.88] raise exceptions.ImproperlyConfigured('Error importing middleware %s: "%s"' % (mw_module, e)) [Mon Apr 22 23:45:42 2013] [error] [client 192.168.1.88] ImproperlyConfigured: Error importing middleware treeio.core.middleware.user: "No module named csrf.middleware" ``` I have Django 1.5.1 and Python 2.7.3. I am trying to install Tree.io. Any suggestions? EDIT: ``` MIDDLEWARE_CLASSES = ( 'johnny.middleware.LocalStoreClearMiddleware', 'johnny.middleware.QueryCacheMiddleware', 'django.middleware.gzip.GZipMiddleware', 'treeio.core.middleware.domain.DomainMiddleware', 'treeio.core.middleware.user.SSLMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'treeio.core.middleware.user.AuthMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'treeio.core.middleware.chat.ChatAjaxMiddleware', "django.contrib.messages.middleware.MessageMiddleware", "treeio.core.middleware.modules.ModuleDetect", "minidetector.Middleware", "treeio.core.middleware.user.CommonMiddleware", "treeio.core.middleware.user.PopupMiddleware", "treeio.core.middleware.user.LanguageMiddleware",) ``` The OS: Ubuntu 12.04.2 LTS
2013/04/22
[ "https://Stackoverflow.com/questions/16158221", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2309182/" ]
try `GROUP BY` with `GROUP_CONCAT` <http://www.mysqlperformanceblog.com/2006/09/04/group_concat-useful-group-by-extension/>
You could do this all in your query instead of relying on PHP. ``` SELECT item, GROUP_CONCAT(category) FROM yourtable GROUP BY item ```
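If you want to try the idea locally, SQLite ships a compatible `group_concat`, so the query can be sketched with the Python standard library (the table name and sample rows here are made up for illustration):

```python
import sqlite3

# In-memory database with a placeholder item/category table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (item TEXT, category TEXT)")
conn.executemany(
    "INSERT INTO yourtable VALUES (?, ?)",
    [("pen", "office"), ("pen", "school"), ("mug", "kitchen")],
)

# One row per item, with its categories collapsed into one string.
rows = conn.execute(
    "SELECT item, GROUP_CONCAT(category) FROM yourtable GROUP BY item"
).fetchall()
print(rows)
```

Note that without an `ORDER BY` inside `GROUP_CONCAT`, the order of the concatenated values is not guaranteed.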
6,443
12,482,819
I have been working with the Beaglebone lately and have a question. I have worked with TI microcontrollers before, setting the registers as I needed to. From what I understand, the Angstrom distro (the one that comes with the board) lets you set the registers of the processor as you want (through the kernel and class folders from /sys). How can I relate the files in Angstrom with the registers of the TI microprocessor? Also, how can I set the clock/timer for the PWM signals? I want to do it through a program in C. I have found libraries and programs written in python, but they do not help me to understand what is really being set. I appreciate the help you could provide. Thanks in advance. gus
2012/09/18
[ "https://Stackoverflow.com/questions/12482819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1420553/" ]
After trying out a few variations this worked: ``` -:System.Diagnostics.CodeAnalysis.ExcludeFromCodeCoverageAttribute ```
Make sure you add this filter inside **Attribute Filter**: ``` -:System.Diagnostics.CodeAnalysis.ExcludeFromCodeCoverageAttribute ``` ![enter image description here](https://i.stack.imgur.com/Gw7tA.png)
6,451
41,009,009
I'm trying to create a function ``` rotate_character(char, rot) ``` that receives a character, "char" (a string with a length of 1), and an integer "rot". The function should return a new string with a length of 1, which is the result of rotating char by rot number of places to the right. So an input of "A" for char and 13 for rot would return ``` N ``` (with A having an initial value of 0, and B having an initial value of 1, etc). Capitalization should be maintained during rotation. I already created a function that returns the position of a letter in the alphabet by using a dictionary: ``` letter = input("Enter a letter: ") def alphabet_position(letter): alphabet_pos = {'A':0, 'a':0, 'B':1, 'b':1, 'C':2, 'c':2, 'D':3, 'd':3, 'E':4, 'e':4, 'F':5, 'f':5, 'G':6, 'g':6, 'H':7, 'h':7, 'I':8, 'i':8, 'J':9, 'j':9, 'K':10, 'k':10, 'L':11, 'l':11, 'M':12, 'm':12, 'N': 13, 'n':13, 'O':14, 'o':14, 'P':15, 'p':15, 'Q':16, 'q':16, 'R':17, 'r':17, 'S':18, 's':18, 'T':19, 't':19, 'U':20, 'u':20, 'V':21, 'v':21, 'W':22, 'w':22, 'X':23, 'x':23, 'Y':24, 'y':24, 'Z':25, 'z':25 } pos = alphabet_pos[letter] return pos ``` I figure that I can use this function to get the initial value of (char) before rotation. ``` def rotate_character(char, rot): initial_char = alphabet_position(char) final_char = initial_char + rot ``` But my problem is that, if initial\_char + rot is greater than 25, I need to wrap back to the beginning of the alphabet and continue counting. So an input of "w" (initial value of 22) + an input of 8 for rot should return ``` e ``` How do I say this using python? ``` if final_char > 25, start at the beginning of the list and continue counting ``` And do I necessarily need to use the dictionary that I created in the alphabet\_position function? 
[It was also suggested](https://stackoverflow.com/questions/41007646/python-function-that-receives-letter-returns-0-based-numerical-position-within) that I find the character number by using Python's built-in list of letters, like this: ``` import string letter = input('enter a letter: ') def alphabet_position(letter): letter = letter.lower() return list(string.ascii_lowercase).index(letter) return(alphabet_position(letter)) ``` I'm not sure which one of these is the better option to go with when you have to wrap while you're counting. Thanks for your help / suggestions! **EDIT**: Now my code looks like this: ``` letter = input("enter a letter") rotate = input("enter a number") def rotate(letter, rotate): letter = letter.lower() return chr((ord(letter) + rotate - 97) % 26 + 97) print(rotate(letter)) ``` **EDIT 2**: ``` def rotate(letter, number): letter = letter.lower() shift = 97 if letter.islower() else 65 return chr((ord(letter) + number - shift) % 26 + shift) letter = input('Enter a letter: ') number = int(eval(input('Enter a number: ') print(rotate(letter, number)) ``` gave me a ParseError: "ParseError: bad input on line 8" (the print line)
2016/12/07
[ "https://Stackoverflow.com/questions/41009009", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3546086/" ]
``` def rotate(letter, rot): shift = 97 if letter.islower() else 65 return chr((ord(letter) + rot - shift) % 26 + shift) letter = input('Enter a letter: ') rot = int(input('Enter a number: ')) print(rotate(letter, rot)) ```
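A quick check of this against the examples in the question (A rotated by 13 should give N, and w rotated by 8 should give e). The function is restated here so the snippet is self-contained and does not need interactive input:

```python
# Restating the answer's function so the example runs on its own.
def rotate(letter, rot):
    # 97 is ord('a'), 65 is ord('A'); pick the base that matches case.
    shift = 97 if letter.islower() else 65
    return chr((ord(letter) + rot - shift) % 26 + shift)

print(rotate("A", 13))  # N
print(rotate("w", 8))   # e
```

The `% 26` is what performs the wrap-around the question asks about, and picking `shift` by case is what preserves capitalization.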
You can use the `string` module and then use the modulo operator to "wrap around" the end of the alphabet (the modulus must be 26, the number of letters): ``` from string import ascii_lowercase def rotate_char(char, rot): i = ascii_lowercase.index(char) return ascii_lowercase[(i + rot) % 26] ```
6,452
39,606,112
I'm a beginner trying to write a program that will read in .exe files, .class files, or .pyc files and get the percentage of alphanumeric characters (a-z,A-Z,0-9). Here's what I have right now (I'm just trying to see if I can identify anything at the moment, not looking to count stuff yet): ``` chars_total = 0 chars_alphnum = 0 iterate = 1 with open("pythonfile.pyc", "rb") as f: byte = f.read(iterate) while byte != b"": chars_total += 1 print (byte) iterate +=1 byte = f.read(iterate) ``` This code prints out various bytes such as ``` b'\xe1WQ\x00' b'\x00\x00c\x00\x00' ``` but I'm having trouble with translating the bytes themselves. I've also tried `print (binascii.hexlify(byte))` after importing binascii which converts everything into alphanumeric characters, which seems to not quite be what I'm looking for. So am I just getting something severely mistaken or am I at least on the right track? Full disclaimer, this is related in small part to a homework assignment, but we have permission to use this site because neither the in class material nor the reading covers any coding at all. And yes, I have been trying to figure this out before I came on here.
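For what it's worth, once the bytes are read, the counting step the question is building toward can be sketched like this (an in-memory blob stands in for the file contents, since the filenames in the question are placeholders):

```python
# Sketch: percentage of alphanumeric bytes in a binary blob.
data = b"\xe1WQ\x00abc123\x00\x00"

chars_total = len(data)
# In Python 3, iterating over bytes yields ints, so compare against
# the ASCII ranges for 0-9, A-Z, a-z directly.  (chr(b).isalnum()
# would also count non-ASCII "letters" like 0xe1.)
chars_alphnum = sum(
    1 for b in data
    if 48 <= b <= 57 or 65 <= b <= 90 or 97 <= b <= 122
)
print(chars_alphnum / chars_total)
```

With a real file, `data = open(path, "rb").read()` gives the same kind of bytes object; reading the whole file at once avoids the growing-chunk issue of incrementing the `read()` size in a loop.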
2016/09/21
[ "https://Stackoverflow.com/questions/39606112", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6856008/" ]
`viewWillLayoutSubviews` is called when the view controller's view's bounds change (usually when the view loads, when the orientation changes, or, for a child view controller, when its view is changed by the parent view controller), but before its subviews' bounds or positions change. You can override this method to make changes to the subviews' bounds or positions before the view lays them out. `layoutSubviews`, from Apple's [documentation](https://developer.apple.com/reference/uikit/uiview/1622482-layoutsubviews): > > You should override this method only if the autoresizing and constraint-based behaviors of the subviews do not offer the behavior you want > > > This method gets called when a layout update happens, either by changing the view's bounds explicitly or by calling `setNeedsLayout` or `layoutIfNeeded` on the view to force a layout update. Please remember that it will be called automatically by the OS, and you should never call it directly. It's quite rare that you need to override this method, because usually autoresizing or constraints will do the job for you.
You can call `layoutSubviews()` on a UIView when you are changing a constraint value inside the UIView and more than one element is affected by the constraint change. When you are performing some task by changing a constraint at runtime through an outlet of that constraint, you can call this. But this is not best practice; I suggest calling `layoutIfNeeded()` instead of `layoutSubviews()`. You need to use the `viewWillLayoutSubviews()` function when you want to perform some task just before any element changes its size or position in the view controller. In a typical application, those 2 functions are rarely needed outside of corner cases. This is as per my understanding. Thanks.
6,453
52,072,784
I am working on a positioning system. The input I have is a dict which will give us circles of radius d1 from point (x1,y1) and so on. The output I want is an array (similar to a 2D coordinate system) in which the intersecting area is marked 1 and the rest is 0. I tried this: ``` import math import numpy as np def distance(p1,p2): return math.sqrt((p1[0]-p2[0])**2+(p1[1]-p2[1])**2) xsize=3000 ysize=2000 lis={(x1,y1):d1,(x2,y2):d2,(x3,y3):d3} array=np.zeros((xsize,ysize)) for i in range(xsize-1): for j in range(ysize-1): for element in lis: if distance((i,j),element)<=(lis[element]): array[i][j]=1 else: array[i][j]=0 break ``` The only problem is that the array is large and this takes way too long (the number of loop iterations is in the tens of millions), especially on a Raspberry Pi; otherwise this works. Is there any way to do it using OpenCV and an image, and then draw circles to get the intersecting area faster? It has to be Python 2.x.
2018/08/29
[ "https://Stackoverflow.com/questions/52072784", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10269207/" ]
As you are already using numpy, try to rewrite your operations in a vectorized fashion, instead of using loops. ``` # choose appropriate dtype for better perf dtype = np.float32 # all grid indices, arranged so that indices[i, j] == (i, j) indices = np.stack(np.indices((xsize, ysize), dtype=dtype), axis=-1) points = np.array(list(lis.keys()), dtype=dtype) # squared distance from all indices to all points dist = (indices[..., np.newaxis] - points.T) ** 2 dist = dist.sum(axis=-2) # squared circle radii dist_thresh = np.array(list(lis.values()), dtype=dtype) ** 2 intersect = np.all(dist <= dist_thresh, axis=-1) ``` That's around 60x faster on my machine than the for loop version. It is still a brute-force version, doing possibly many needless computations for all coordinates. The circles are not given in the question, so it's hard to reason about them. If they cover a relatively small area, the problem will be solved much faster (still computationally, not analytically) if a smaller area is considered. For example, instead of testing all coordinates, the intersection of the bounding boxes of the circles could be used, which may reduce the computational load considerably.
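A tiny cross-check of the vectorized idea against the question's brute-force loop. Note the index bookkeeping: this sketch builds the coordinate array so that `coords[i, j] == (i, j)`, matching the loop's convention (the circles here are made up to keep the grid small):

```python
import math
import numpy as np

# Tiny instance: two overlapping circles, easy to brute-force.
lis = {(3.0, 3.0): 3.0, (5.0, 3.0): 3.0}
xsize, ysize = 8, 6

# Brute force, as in the question.
brute = np.zeros((xsize, ysize), dtype=bool)
for i in range(xsize):
    for j in range(ysize):
        brute[i, j] = all(
            math.hypot(i - cx, j - cy) <= r
            for (cx, cy), r in lis.items()
        )

# Vectorized: coords[i, j] == (i, j), then broadcast against centers.
coords = np.stack(np.indices((xsize, ysize), dtype=np.float32), axis=-1)
points = np.array(list(lis.keys()), dtype=np.float32)
dist = ((coords[..., np.newaxis] - points.T) ** 2).sum(axis=-2)
dist_thresh = np.array(list(lis.values()), dtype=np.float32) ** 2
intersect = np.all(dist <= dist_thresh, axis=-1)

print(np.array_equal(brute, intersect))  # → True
```

Comparing squared distances against squared radii avoids the `sqrt` entirely, which is part of the speedup.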
Thanks for the answers! I also found this: ``` pos=np.ones((xsize,ysize)) xx,yy=np.mgrid[:xsize,:ysize] for element in lis: circle=(xx-element[0])**2+(yy-element[1])**2 pos=np.logical_and(pos,(circle<(lis[element]**2))) #pos&circle<(lis[element]**2) doesn't work (I read somewhere it does) ``` I needed this array for marking whether or not I reached my destination. ``` if pos[dest[0]][dest[1]]==1: #Reached ```
6,456
42,835,809
Are there any tutorials available about `export_savedmodel`? I have gone through [this article](https://www.tensorflow.org/versions/master/api_docs/python/contrib.learn/estimators) on tensorflow.org and the [unittest code](https://github.com/tensorflow/tensorflow/blob/05d7f793ec5f04cd6b362abfef620a78fefdb35f/tensorflow/python/estimator/estimator_test.py) on github.com, and I still have no idea how to construct the `serving_input_fn` parameter of the `export_savedmodel` function.
2017/03/16
[ "https://Stackoverflow.com/questions/42835809", "https://Stackoverflow.com", "https://Stackoverflow.com/users/456105/" ]
If you are using tensorflow straight from the master branch, there's a module `tensorflow.python.estimator.export` that provides a function for that: ``` from tensorflow.python.estimator.export import export feature_spec = {'MY_FEATURE': tf.constant(2.0, shape=[1, 1])} serving_input_fn = export.build_raw_serving_input_receiver_fn(feature_spec) ``` Unfortunately, at least for me, it will not go further than that, but I'm not sure if my model is really correct, so maybe you'll have more luck than I do. Alternatively, there are the following functions for the current version installed from pypi: ``` serving_input_fn = tf.contrib.learn.utils.build_parsing_serving_input_fn(feature_spec) serving_input_fn = tf.contrib.learn.utils.build_default_serving_input_fn(feature_spec) ``` But I couldn't get them to work either. Probably I'm not understanding this correctly, so I hope you'll have more luck. chris
You need to build a `tf.train.Example` out of `tf.train.Feature`s, pass the input to the input receiver function, and invoke the model. You can take a look at this example: <https://github.com/tettusud/tensorflow-examples/tree/master/estimators>
6,457
70,068,198
I have an API like this: ![api](https://i.stack.imgur.com/dvcZ5.png) I want to call this API in Python. This is my code: ``` def get_province(): headers = { 'Content-type': 'application/json', 'x-api-key': api_key } response = requests.get(url, headers=headers) return response.json() ``` But I've got > > error 500 : Internal Server Error. > > > I think there's something wrong with the header. Can anyone help me?
2021/11/22
[ "https://Stackoverflow.com/questions/70068198", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17480369/" ]
**No Need for a LOOP** Here is a little technique Gordon Linoff demonstrated some time ago. 1. Expand 2. Eliminate 3. Restore You can substitute any odd (unlikely-to-occur) pair of characters/strings, like `§§` and `||` **Example** ``` Select replace(replace(replace('my string to split',' ','><'),'<>',''),'><',' ') ``` or more unique strings ``` Select replace(replace(replace('my string to split',' ','§§||'),'||§§',''),'§§||',' ') ``` **Results** ``` my string to split ```
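The expand/eliminate/restore trick is easy to convince yourself of outside SQL; the same three replacements in Python collapse any run of spaces to a single space (it assumes `<` and `>` don't occur in the input, which is exactly why the answer also shows more unique marker strings):

```python
def collapse_spaces(s):
    # 1. Expand: every space becomes the pair '><'.
    # 2. Eliminate: '<>' only appears between two adjacent expanded
    #    spaces, so removing it shrinks any run to a single '><'.
    # 3. Restore: the surviving '><' becomes one space.
    return s.replace(" ", "><").replace("<>", "").replace("><", " ")

print(collapse_spaces("my   string  to split"))  # my string to split
```

A run of n spaces expands to n back-to-back pairs, and each removal of `<>` merges two neighbouring pairs, so only one pair survives per run.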
Use `CHARINDEX` (<https://www.w3schools.com/sql/func_sqlserver_charindex.asp>) in a looping structure, and use a variable to keep track of the index position.
6,462
14,444,012
I am writing a bit of `python` code where I had to check if all values in `list2` were present in `list1`. I did that by using `set(list2).difference(list1)`, but that function was too slow with many items in the list. So I was thinking that `list1` could be a dictionary for fast lookup... So I would like to find a fast way to determine if a list has an item that isn't part of a dict. Performance-wise, is there any difference between ``` d = {1: 1, 2:2, 3:3} l = [3, 4, 5] for n in l: if not n in d: do_stuff ``` vs ``` for n in l: if not d[n]: do_stuff ``` and please, if both of these are rubbish and you know something much quicker, tell me. Edit1: list1 or d can contain elements not in list2, but not the other way around.
2013/01/21
[ "https://Stackoverflow.com/questions/14444012", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1376883/" ]
A fast way to achieve what you want will be using `all` and a generator expression. ``` s_list2 = set(list2) all_present = all(l in s_list2 for l in list1) ``` This will be advantageous in the case that some elements of list1 are not present in list2. Some timing. In the case where all values in the first list are contained in the second: ``` In [4]: l1 = range(100) In [5]: l2 = range(1000) In [6]: random.shuffle(l1) In [9]: random.shuffle(l2) In [20]: %timeit s2 = set(l2); all(l in s2 for l in l1) 10000 loops, best of 3: 26.4 us per loop In [21]: %timeit s1 = set(l1); s2 = set(l2); s1.issubset(s2) 10000 loops, best of 3: 25.3 us per loop ``` If we look at the case where some values in the first list are **not** present in the second: ``` In [2]: l1 = range(1000) In [3]: l2 = range(100) In [4]: random.shuffle(l1) In [5]: random.shuffle(l2) In [6]: sl2 = set(l2) In [8]: %timeit ss = set(l2); set(l1) & ss == ss 10000 loops, best of 3: 27.8 us per loop In [10]: %timeit s1 = set(l1); s2 = set(l2); s2.issubset(s1) 10000 loops, best of 3: 24.7 us per loop In [11]: %timeit sl2 = set(l2); all(l in sl2 for l in l1) 100000 loops, best of 3: 3.58 us per loop ``` You can see that this method is equivalent in performance to `issubset` in the first case and is faster in the second case, as it will short circuit and obviates the need to construct 2 intermediate sets (only requiring one). Having one large list and one small list demonstrates the benefit of the genexp method: ``` In [7]: l1 = range(10) In [8]: l2 = range(10000) In [9]: %timeit sl2 = set(l2); all(l in sl2 for l in l1) 1000 loops, best of 3: 230 us per loop In [10]: %timeit sl1 = set(l1); all(l in sl1 for l in l2) 1000000 loops, best of 3: 1.45 us per loop In [11]: %timeit s1 = set(l1); s2 = set(l2); s1.issubset(s2) 1000 loops, best of 3: 228 us per loop In [12]: %timeit s1 = set(l1); s2 = set(l2); s2.issubset(s1) 1000 loops, best of 3: 228 us per loop ```
You can convert the lists to sets and then use the method `issubset()` to check whether one is a subset of another set or not. ``` In [78]: import random In [79]: lis2=range(100) In [80]: random.shuffle(lis2) In [81]: lis1=range(1000) In [82]: random.shuffle(lis1) In [83]: s1=set(lis1) In [84]: all(l in s1 for l in lis2) Out[84]: True In [85]: %timeit all(l in s1 for l in lis2) 10000 loops, best of 3: 28.6 us per loop In [86]: %timeit s2=set(lis2);s2.issubset(s1) 100000 loops, best of 3: 12 us per loop In [87]: s2.issubset(s1) Out[87]: True ```
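Regarding the dict idea from the question: membership tests with `in` on a dict are O(1) hash lookups on its keys, and dict key views support set operations, so no separate set needs to be built from the dict — a small sketch using the question's own data:

```python
d = {1: 1, 2: 2, 3: 3}
l = [3, 4, 5]

# `n in d` is a hash lookup on the dict's keys (never raises KeyError,
# unlike indexing with d[n]).
missing = [n for n in l if n not in d]
print(missing)  # [4, 5]

# As a single subset test: iterating a dict yields its keys,
# and dict key views behave like sets.
print(set(l).issubset(d))   # False
print(d.keys() >= {1, 2})   # True
```

This also shows why the question's second snippet (`if not d[n]:`) is different in kind: it raises `KeyError` for keys that are absent, rather than reporting them.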
6,463
11,962,123
I am trying to make a query which I haven't been able to get right yet. My permanent view function is the following: ``` function(doc) { if('llweb_result' in doc){ for(i in doc.llweb_result){ emit(doc.llweb_result[i].llweb_result, doc); } } } ``` Depending on the key, I filter the result. So, I need this key. Secondly, as you see, there is a for loop. This causes identical tuples in the result. However, I also need to do this for loop to check everything. Here, I just want to know how to eliminate identical tuples. I am using couchdb-python. My related code is: ``` result = {} result['0'] = self.dns_db.view('llweb/llweb_filter', None, key=0, limit = amount, startkey_docid = '000000052130') result['1'] = self.dns_db.view('llweb/llweb_filter', None, key=1, limit=amount) result['2'] = self.dns_db.view('llweb/llweb_filter', None, key=2, limit=amount) ``` As is understood from the key values, there are three different types of keys. I thought that I could extend the 'key' with [doc.\_id, llweb\_result]. I need a key like [\*, 2], but I don't know if that is possible. Then, use a reduce function to group them. This will definitely work, but then the problem is how to make a selection query by using only the values [0,1,2]. Edited on 16.08.12 Example of the 'llweb\_result' property of a couchdb record: ``` "llweb_result": { "1": { "ip": "66.233.123.15", "domain": "domain.com", "llweb_result": 1 }, "0": { "ip": "66.235.132.118", "domain": "domain.com", "llweb_result": 1 } } ``` There is only one domain name in one record, but there could be multiple ips for it. You can consider the record as a dns packet. I want to group records depending on llweb\_result (0,1,2). I will do a selection query for them (e.g. I fetch records which contain '1'). But for the example above, there will be two identical tuples in the result. Any help will be appreciated.
2012/08/14
[ "https://Stackoverflow.com/questions/11962123", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1277280/" ]
If you get duplicate pairs in the query results, it means that you have the duplicate `doc.llweb_result[i].llweb_result` values in each document. You can change the view function to emit only one of these values (as the key). One way to do so would be: ``` function(doc) { if ('llweb_result' in doc) { distinct_values = {}; for (var i in doc.llweb_result) { distinct_values[doc.llweb_result[i].llweb_result] = true; } for(var dv in distinct_values) { emit(dv, doc); } } } ```
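If changing the view is not an option, the duplicates can also be collapsed client-side in Python. A hedged sketch — plain `(key, doc_id)` tuples stand in for view result rows here, not the actual couchdb-python row objects:

```python
# Hypothetical rows as a view with duplicate emits might return them.
rows = [(1, "doc_a"), (1, "doc_a"), (2, "doc_b"), (1, "doc_a")]

seen = set()
unique_rows = []
for key, doc_id in rows:
    pair = (key, doc_id)
    if pair not in seen:      # drop identical (key, id) tuples,
        seen.add(pair)        # keeping the first occurrence in order
        unique_rows.append(pair)
print(unique_rows)  # [(1, 'doc_a'), (2, 'doc_b')]
```

De-duplicating in the view (as above) is still preferable, since it avoids shipping the duplicate documents over the wire in the first place.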
I don't know anything about `couchdb-python` but CouchDB supports either a single `key` or multiple `keys` in an array. So, take a look in your `couchdb-python` docs for how to supply `keys=[0,1,2]` as a parameter. Regarding getting just the unique values, take a look [at this section of *CouchDB The Definitive Guide*](http://guide.couchdb.org/draft/cookbook.html#unique) which explains how to add basically a NOOP reduce, so you can use `group=true`
6,466
7,045,371
I recently learned I could run a server with this command: ``` sudo python -m SimpleHTTPServer ``` **My question: how do I terminate this server when done with it?**
2011/08/12
[ "https://Stackoverflow.com/questions/7045371", "https://Stackoverflow.com", "https://Stackoverflow.com/users/873392/" ]
Type Control-C. Simple as that.
You might want to check the HttpServer class in [this servlet module](http://code.google.com/p/verse-quiz/source/browse/trunk/servlet.py) for a modification that allows the server to be quit. If the handler raises a SystemExit exception, the server will break from its serving. --- ``` class HttpServer(socketserver.ThreadingMixIn, http.server.HTTPServer): """Create a server with specified address and handler. A generic web server can be instantiated with this class. It will listen on the address given to its constructor and will use the handler class to process all incoming traffic. Running a server is greatly simplified.""" # We should not be binding to an # address that is already in use. allow_reuse_address = False @classmethod def main(cls, RequestHandlerClass, port=80): """Start server with handler on given port. This static method provides an easy way to start, run, and exit a HttpServer instance. The server will be executed if possible, and the computer's web browser will be directed to the address.""" try: server = cls(('', port), RequestHandlerClass) active = True except socket.error: active = False else: addr, port = server.socket.getsockname() print('Serving HTTP on', addr, 'port', port, '...') finally: port = '' if port == 80 else ':' + str(port) addr = 'http://localhost' + port + '/' webbrowser.open(addr) if active: try: server.serve_forever() except KeyboardInterrupt: print('Keyboard interrupt received: EXITING') finally: server.server_close() def handle_error(self, request, client_address): """Process exceptions raised by the RequestHandlerClass. 
Overriding this method is necessary for two different reasons: (1) SystemExit exceptions are incorrectly caught otherwise and (2) Socket errors should be silently passed in the server code""" klass, value = sys.exc_info()[:2] if klass is SystemExit: self.__exit = value self._BaseServer__serving = None elif issubclass(klass, socket.error): pass else: super().handle_error(request, client_address) def serve_forever(self, poll_interval=0.5): """Handle all incoming client requests forever. This method has been overridden so that SystemExit exceptions raised in the RequestHandlerClass can be re-raised after being caught in the handle_error method above. This allows servlet code to terminate server execution if so desired or required.""" super().serve_forever(poll_interval) if self._BaseServer__serving is None: raise self.__exit ```
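As a side note, with the Python 3 standard library (`http.server`, the successor of `SimpleHTTPServer`), a server run via `serve_forever` can also be stopped cleanly by calling `shutdown()` from another thread — a minimal sketch:

```python
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Bind to an ephemeral port on localhost and serve in a worker thread.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
thread = threading.Thread(target=server.serve_forever)
thread.start()

# ... handle requests ...

server.shutdown()      # unblocks serve_forever in the worker thread
thread.join()
server.server_close()  # release the listening socket
print("stopped")
```

`shutdown()` must be called from a different thread than the one running `serve_forever`, otherwise it deadlocks.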
6,467
2,706,129
I'm trying to speed up a python routine by writing it in C++, then using it using ctypes or cython. I'm brand new to c++. I'm using Microsoft Visual C++ Express as it's free. I plan to implement an expression tree, and a method to evaluate it in postfix order. The problem I run into right away is: ``` class Node { char *cargo; Node left; Node right; }; ``` I can't declare `left` or `right` as `Node` types.
2010/04/24
[ "https://Stackoverflow.com/questions/2706129", "https://Stackoverflow.com", "https://Stackoverflow.com/users/169415/" ]
No, because the object would be infinitely large (because every `Node` has as members two other `Node` objects, which each have as members two other `Node` objects, which each... well, you get the point). You can, however, have a pointer to the class type as a member variable: ``` class Node { char *cargo; Node* left; // I'm not a Node; I'm just a pointer to a Node Node* right; // Same here }; ```
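Since the question comes from porting Python code: in Python this problem never arises, because attributes hold references (much like pointers), which is why the original class "just worked" there. A rough Python analogue, including the postfix walk the question mentions (the names here are illustrative):

```python
class Node:
    def __init__(self, cargo, left=None, right=None):
        self.cargo = cargo
        self.left = left    # behaves like Node*: a reference or None
        self.right = right

def postfix(node):
    # Postfix order: left subtree, right subtree, then the node itself.
    if node is None:
        return []
    return postfix(node.left) + postfix(node.right) + [node.cargo]

# Tiny expression tree for "1 + 2".
tree = Node("+", Node("1"), Node("2"))
print(postfix(tree))  # ['1', '2', '+']
```

The C++ version with `Node*` members mirrors this directly, with `nullptr` playing the role of `None`.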
No, but it can have a reference or a pointer to itself: ``` class Node { Node *pnode; Node &rnode; // note: a reference member must be initialized in a constructor }; ```
6,468
48,490,382
Gnome desktop has 2 clipboards, the X.org one (saves every selection) and the legacy one (CTRL+C). I am writing a simple python script to clear both clipboards, securely preferably, since it may be done after copy-pasting a password. The code that I have seen over here is this: ``` # empty X.org clipboard os.system("xclip -i /dev/null") # empty GNOME clipboard os.system("touch blank") os.system("xclip -selection clipboard blank") ``` Unfortunately this code creates a file named `blank` for some reason, so we have to remove it: ``` os.remove("blank") ``` However, the main problem is that calling both of these leaves the `xclip` process open, even after I close the terminal. So we have 2 problems with this option: 1) it creates a blank file, which seems like a flawed method to me; 2) it leaves a process open, which could be a security hole. I also know about this method: ``` os.system("echo "" | xclip -selection clipboard") # empty clipboard ``` However, this one leaves a `\n` newline character in the clipboard, so I would not call this method effective either. So how do I do it properly then?
2018/01/28
[ "https://Stackoverflow.com/questions/48490382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9213435/" ]
I know three ways to clear the clipboard from Python. First, using tkinter: ``` try: from Tkinter import Tk except ImportError: from tkinter import Tk r = Tk() r.withdraw() r.clipboard_clear() r.destroy() ``` Second, with xclip, but I use xclip like this (`echo -n` suppresses the trailing newline): ``` echo -n | xclip -selection clipboard ``` Does it create a new line? Finally, it's possible to use xsel: ``` xsel -bc ```
I have figured it out: ``` #CLIPBOARD cleaner subprocess.run(["xsel","-bc"]) #PRIMARY cleaner subprocess.run(["xsel","-c"]) ``` This cleans both buffers and leaves no zombie processes at all. Thanks to everyone who made suggestions.
6,474
40,942,338
I'm working on a python AWS Cognito implementation using boto3. `jwt.decode` on the IdToken yields a payload that's in the form of a dictionary, like so: ```py { "sub": "a uuid", "email_verified": True, "iss": "https://cognito-idp....", "phone_number_verified": False, "cognito:username": "19407ea0-a79d-11e6-9ce4-09487ca06884", "given_name": "Aron Filbert", "aud": "my client app", "token_use": "id", "auth_time": 1480547504, "nickname": "Aron Filbert", "phone_number": "+14025555555", "exp": 1480551104, "iat": 1480547504, "email": "my@email.com" } ``` So I designed a User class that consumes that dictionary. Works great, until I need to hit Cognito again and grab fresh user details to make sure nothing changed (say, from another device). My return payload from the `get_user()` call ends up looking like a list of dictionaries: ```py [ { "Name": "sub", "Value": "a uuid" }, { "Name": "email_verified", "Value": "true" }, { "Name": "phone_number_verified", "Value": "false" }, { "Name": "phone_number", "Value": "+114025555555" }, { "Name": "given_name", "Value": "Aron Filbert" }, { "Name": "email", "Value": "my@email.com" } ] ``` Since I might be hitting that `get_user()` Cognito endpoint a lot, I'm looking for an efficient way to grab JUST the values of each dictionary in the list and use them to form the keys:values of a new dictionary. Example: ```py { "sub": "a uuid", # From first list item "email_verified": True, # From next list item ... } ``` Being new to Python, I'm struggling with how to accomplish this elegantly and efficiently.
2016/12/02
[ "https://Stackoverflow.com/questions/40942338", "https://Stackoverflow.com", "https://Stackoverflow.com/users/119041/" ]
As I noted in a comment, the bulk of your work can be done by a dict comprehension: ``` lst = get_user() # or something similar, lst is a list of dicts parsed_res = {k["Name"]:k["Value"] for k in lst} ``` This only differs from your expected output in that it contains `'true'` and `'false'` whereas you want bools in your final result. Well, the simplest solution is to define a function that does this conversion for you: ``` def boolify(inp): if inp=='true': return True elif inp=='false': return False else: return inp parsed_res = {k["Name"]:boolify(k["Value"]) for k in lst} ``` The same thing *could* be done in the comprehension itself, but it wouldn't be any clearer, nor any more efficient. This way you can do additional manipulations on your keys if you later realize that there is other stuff you want to do with your payload before storing it.
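Applied to an abbreviated version of the `get_user()` payload from the question, the comprehension plus `boolify` produces the flat dict shape of the decoded token:

```python
def boolify(inp):
    # Convert the Cognito string booleans; pass everything else through.
    if inp == "true":
        return True
    elif inp == "false":
        return False
    return inp

# Abbreviated version of the payload shown in the question.
lst = [
    {"Name": "sub", "Value": "a uuid"},
    {"Name": "email_verified", "Value": "true"},
    {"Name": "phone_number_verified", "Value": "false"},
    {"Name": "email", "Value": "my@email.com"},
]

parsed_res = {k["Name"]: boolify(k["Value"]) for k in lst}
print(parsed_res["email_verified"])         # True
print(parsed_res["phone_number_verified"])  # False
```

The result has real `bool` values, matching the shape of the JWT payload the question's User class already consumes.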
A dictionary comprehension, as Andras answered above, is a simple, Pythonic one-liner for your case. Some style guidelines ([such as Google's](https://google.github.io/styleguide/pyguide.html?showone=List_Comprehensions#List_Comprehensions)), however, recommend against them if they introduce complex logic or take up more than two or three lines: > > Okay to use for simple cases. Each portion must fit on one line: > mapping expression, for clause, filter expression. Multiple for > clauses or filter expressions are not permitted. Use loops instead > when things get more complicated. > > > **Yes:** > > > > ```py > result = [] > for x in range(10): > for y in range(5): > if x * y > 10: > result.append((x, y)) > > for x in xrange(5): > for y in xrange(5): > if x != y: > for z in xrange(5): > if y != z: > yield (x, y, z) > > return ((x, complicated_transform(x)) > for x in long_generator_function(parameter) > if x is not None) > > squares = [x * x for x in range(10)] > > eat(jelly_bean for jelly_bean in jelly_beans > if jelly_bean.color == 'black') > > ``` > > **No:** > > > > ```py > result = [(x, y) for x in range(10) for y in range(5) if x * y > 10] > > return ((x, y, z) > for x in xrange(5) > for y in xrange(5) > if x != y > for z in xrange(5) > if y != z) > > ``` > > Dictionary comprehension is perfectly appropriate in your instance, but for the sake of completeness, this is a general pattern for performing operations with plain for-loops if you decide to do anything fancier: ```py for <dict> in <list>: for <key>, <value> in <dict>.items(): # Perform any applicable operations. <new_dict>[<key>] = <value> ``` which comes out to... ```py user = get_user() user_info = {} for info in user: n, v = info['Name'], info['Value'] if v.lower() == 'true': v = True elif v.lower() == 'false': v = False user_info[n] = v ```
6,476
67,448,604
I have a pandas DataFrame containing rows of nodes that I ultimately would like to *connect* and turn into a graph like object. For this, I first thought of converting this DataFrame to something that resembles an adjacency list, to later on easily create a graph from this. I have the following:

A pandas Dataframe:

```
df = pd.DataFrame({"id": [0, 1, 2, 3, 4, 5, 6],
                   "start": ["A", "B", "D", "A", "X", "F", "B"],
                   "end": ["B", "C", "F", "G", "X", "X", "E"],
                   "cases": [["c1", "c2", "c44"], ["c2", "c1", "c3"], ["c4"], ["c1", ], ["c1", "c7"], ["c4"], ["c44", "c7"]]})
```

which looks like this:

```
   id start end          cases
0   0     A   B  [c1, c2, c44]
1   1     B   C   [c2, c1, c3]
2   2     D   F           [c4]
3   3     A   G           [c1]
4   4     X   X       [c1, c7]
5   5     F   X           [c4]
6   6     B   E      [c44, c7]
```

A function `directly_follows(i, j)` that returns true if the node in row `i` is followed by the node in row `j` (this will later be a directed edge in a graph from node `i` to node `j`):

```
def directly_follows(row1, row2):
    return close(row1, row2) and case_overlap(row1, row2)

def close(row1, row2):
    return row1["end"] == row2["start"]

def case_overlap(row1, row2):
    return not set(row1["cases"]).isdisjoint(row2["cases"])
```

Shortly, node `i` is followed by node `j` if the `end` value of node `i` is the same as the `start` value of node `j` and if their `cases` overlap.

Based on this `directly_follows` function, I want to create an extra column in my DataFrame `df` which acts as an adjacency list, containing for node `i` a list with the `id` values of nodes that follow `i`.

My desired result would thus be:

```
   id start end          cases adjacency_list
0   0     A   B  [c1, c2, c44]         [1, 6]
1   1     B   C   [c2, c1, c3]             []
2   2     D   F           [c4]            [5]
3   3     A   G           [c1]             []
4   4     X   X       [c1, c7]             []
5   5     F   X           [c4]             []
6   6     B   E      [c44, c7]             []
```

Basically I thought of first creating the column adjacency\_list as empty lists, and then looping through the rows of the Dataframe and if for row `i` and `j` directly\_follows(row\_i, row\_j) returns True, add the id of `j` to the adjacency list of `i`.
I did it like this:

```
def connect(data):
    data["adjacency_list"] = np.empty((len(data), 0)).tolist()
    for i in range(len(data)):
        for j in range(len(data)):
            if i != j:
                if directly_follows(data.iloc[i], data.iloc[j]):
                    data.iloc[i]["adjacency_list"] = data.iloc[i]["adjacency_list"].append(data.iloc[i]["id"])
```

Now first, this returns an error

```
SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame
```

And secondly, I highly doubt this is the most pythonic and efficient way to solve this problem, since my actual DataFrame consists of about 9000 rows, which would give around 81 million comparisons. How to create the adjacency list in the least time consuming way? Is there maybe a faster or more elegant solution than mine?
2021/05/08
[ "https://Stackoverflow.com/questions/67448604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5560529/" ]
One option would be to apply the following function - it's not completely vectorised because Dataframes don't particularly like embedding mutable objects like lists, and I don't think you can apply set operations in a vectorised way. It does cut down the number of comparisons needed though.

```
def f(x):
    check = df[(x["end"] == df["start"])]
    return [
        row["id"]
        for i, row in check.iterrows()
        if not set(row["cases"]).isdisjoint(x["cases"])
    ]

df["adjacency_list"] = df.apply(f, axis=1)
```

Or, as a big lambda function:

```
df["adjacency_list"] = df.apply(
    lambda x: [
        row["id"]
        for i, row in df[(x["end"] == df["start"])].iterrows()
        if not set(row["cases"]).isdisjoint(x["cases"])
    ],
    axis=1,
)
```

Output
------

```
   id start end          cases adjacency_list
0   0     A   B  [c1, c2, c44]         [1, 6]
1   1     B   C   [c2, c1, c3]             []
2   2     D   F           [c4]            [5]
3   3     A   G           [c1]             []
4   4     X   X       [c1, c7]            [4]
5   5     F   X           [c4]             []
6   6     B   E      [c44, c7]             []
```
TRY:

```
k = 0

def test(x):
    global k
    k += 1
    test_df = df[k:]
    return list(test_df[test_df['start'] == x].index)

df['adjancy_matrix'] = df.end.apply(test, 1)
```

**OUTPUT:**

```
   id start end        cases adjancy_matrix
0   0     A   B  [c1,c2,c44]         [1, 6]
1   1     B   C   [c2,c1,c3]             []
2   2     D   F         [c4]            [5]
3   3     A   G         [c1]             []
4   4     X   X      [c1,c7]             []
5   5     F   X         [c4]             []
6   6     B   E     [c44,c7]             []
```
6,477
40,041,463
I installed OpenCV 3.1.0 and CUDA 8.0 in Ubuntu 16.04. When I check "nvcc --version" to check the CUDA version, it is 8.0. But when I try to compile a C++ OpenCV program I get the following error:

```
Could NOT find CUDA: Found unsuitable version "7.5", but required is exact version "8.0" (found /usr/local/cuda)
```

So OpenCV says it found version 7.5 when the only installed one is 8.0. Both CUDA and OpenCV work well together in Python with no error. Any idea about what is happening?
2016/10/14
[ "https://Stackoverflow.com/questions/40041463", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4136920/" ]
I had a similar issue after upgrading from CUDA 8.0 to 9.1. When I compiled my code I got the error "found unsuitable version (CUDA 8.0)". In my case, it was a problem with the files left over from a previous CMake run. I just deleted the files generated by cmake and then it worked fine.
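Concretely, clearing the cached CMake state amounts to removing `CMakeCache.txt` and the `CMakeFiles` directory from the build tree before reconfiguring; a small sketch of that cleanup (the `build` path and the `cmake` invocation are illustrative, not from the answer):

```python
import os
import shutil

# Wipe CMake's cached state in the build directory so the CUDA toolkit
# is re-detected on the next configure. "build" is an illustrative path.
build_dir = "build"
cache_file = os.path.join(build_dir, "CMakeCache.txt")
cmake_files = os.path.join(build_dir, "CMakeFiles")

if os.path.isfile(cache_file):
    os.remove(cache_file)
if os.path.isdir(cmake_files):
    shutil.rmtree(cmake_files)
# then re-run e.g. `cmake -D WITH_CUDA=ON ..` from the build directory
```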
Environment Variables

As part of the CUDA environment, you should add the following in the .bashrc file of your home folder.

```
export CUDA_HOME=/usr/local/cuda-7.5
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64

PATH=${CUDA_HOME}/bin:${PATH}
export PATH
```
6,479
39,656,433
I need to download only newly incoming attachments, not past attachments, from mail using a Python script. For example: if anyone sends a mail right now, download only that attachment to the local drive, not past attachments. Please can anyone help me download attachments using a Python script or Java.
2016/09/23
[ "https://Stackoverflow.com/questions/39656433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5693776/" ]
```
import email
import imaplib
import os

class FetchEmail():

    connection = None
    error = None
    mail_server = "host_name"
    username = "outlook_username"
    password = "password"

    def __init__(self, mail_server, username, password):
        self.connection = imaplib.IMAP4_SSL(mail_server)
        self.connection.login(username, password)
        self.connection.select(readonly=False) # so we can mark mails as read

    def close_connection(self):
        """
        Close the connection to the IMAP server
        """
        self.connection.close()

    def save_attachment(self, msg, download_folder="/tmp"):
        """
        Given a message, save its attachments to the specified
        download folder (default is /tmp)

        return: file path to attachment
        """
        att_path = "No attachment found."
        for part in msg.walk():
            if part.get_content_maintype() == 'multipart':
                continue
            if part.get('Content-Disposition') is None:
                continue

            filename = part.get_filename()
            att_path = os.path.join(download_folder, filename)

            if not os.path.isfile(att_path):
                fp = open(att_path, 'wb')
                fp.write(part.get_payload(decode=True))
                fp.close()
        return att_path

    def fetch_unread_messages(self):
        """
        Retrieve unread messages
        """
        emails = []
        (result, messages) = self.connection.search(None, 'UnSeen')
        if result == "OK":
            for message in messages[0].split(' '):
                try:
                    ret, data = self.connection.fetch(message, '(RFC822)')
                except:
                    print "No new emails to read."
                    self.close_connection()
                    exit()

                msg = email.message_from_string(data[0][1])
                if isinstance(msg, str) == False:
                    emails.append(msg)
                response, data = self.connection.store(message, '+FLAGS', '\\Seen')

            return emails

        self.error = "Failed to retrieve emails."
        return emails
```

The above code works for me to download attachments: call `save_attachment` with each message returned by `fetch_unread_messages`. Hope this is really helpful for anyone.
```
import win32com.client # pip install pypiwin32 to work with the Windows operating system
import datetime
import os

# To get today's date in 'day-month-year' format (01-12-2017).
dateToday = datetime.datetime.today()
FormatedDate = ('{:02d}'.format(dateToday.day)+'-'+'{:02d}'.format(dateToday.month)+'-'+'{:04d}'.format(dateToday.year))

# Creating an object for the outlook application.
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
# Creating an object to access Inbox of the outlook.
inbox = outlook.GetDefaultFolder(6)
# Creating an object to access items inside the inbox of outlook.
messages = inbox.Items

def save_attachments(subject, which_item, file_name):
    # To iterate through inbox emails using inbox.Items object.
    for message in messages:
        if (message.Subject == subject):
            body_content = message.body
            # Creating an object for the message.Attachments.
            attachment = message.Attachments
            # To check which item is selected among the attachments.
            print (message.Attachments.Item(which_item))
            # To iterate through email items using message.Attachments object.
            for attachment in message.Attachments:
                # To save the particular attachment at the desired location in your hard disk.
                attachment.SaveAsFile(os.path.join(r"D:\Script\Monitoring", file_name))
                break
```
6,481
70,580,711
How can I change my slurm script below so that each python job gets a unique GPU? The node has 4 GPUs; I would like to run 1 python job per GPU. The problem is that all jobs use the first GPU and the other GPUs are idle.

```
#!/bin/bash
#SBATCH --qos=maxjobs
#SBATCH -N 1
#SBATCH --exclusive

for i in `seq 0 3`; do
    cd ${i}
    srun python gpu_code.py &
    cd ..
done
wait
```
2022/01/04
[ "https://Stackoverflow.com/questions/70580711", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7242276/" ]
If the solution proposed by @Iagows doesn't work for you, have a look at this: [flutter\_launcher\_icons-issues](https://github.com/fluttercommunity/flutter_launcher_icons/issues/324#issuecomment-1005736502)
The issue is explained in the readme of the plugin, section "Dependency incompatible". It says

```
Because flutter_launcher_icons >=0.9.0 depends on args 2.0.0 and flutter_native_splash 1.2.0 depends on args ^2.1.1, flutter_launcher_icons >=0.9.0 is incompatible with flutter_native_splash 1.2.0.
And because no versions of flutter_native_splash match >1.2.0 <2.0.0, flutter_launcher_icons >=0.9.0 is incompatible with flutter_native_splash ^1.2.0.
So, because enstack depends on both flutter_native_splash ^1.2.0 and flutter_launcher_icons ^0.9.0, version solving failed.
pub get failed
```

The solution is given in as cryptic a manner as the description above, but the simple hack given by @deffo worked for me: <https://github.com/fluttercommunity/flutter_launcher_icons/issues/324#issuecomment-1013611137>

What you do is that you skip the version solving done when the plugin reads `build.gradle` and you force `minSdkVersion` to be whatever version you prefer. It's a hack, but since you generate the app icons automatically only once and for all, who cares? :-)

BTW, the following solution seems cleaner but I didn't test it: <https://github.com/fluttercommunity/flutter_launcher_icons/issues/262#issuecomment-877653847>
6,486
60,548,289
I don't know why I am getting this error. Below is the code I am using.

**settings.py**

```
TEMPLATE_DIRS = (os.path.join(os.path.dirname(BASE_DIR), "mysite", "static", "templates"),)
```

**urls.py**

```
from django.urls import path
from django.conf.urls import include, url
from django.contrib.auth import views as auth_views
from notes import views as notes_views

urlpatterns = [
    url(r'^$', notes_views.home, name='home'),
    url(r'^admin/', admin.site.urls),
]
```

**views.py**

```
def home(request):
    notes = Note.objects
    template = loader.get_template('note.html')
    context = {'notes': notes}
    return render(request, 'templates/note.html', context)
```

NOTE : I am following this tutorial - https://pythonspot.com/django-tutorial-building-a-note-taking-app/
2020/03/05
[ "https://Stackoverflow.com/questions/60548289", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5227269/" ]
The problem comes from the "this" scope. Either you have to bind the function you're using in the class:

```
constructor( props ){
    super( props );
    this.resetTimer = this.resetTimer.bind(this);
}
```

A second option is to use arrow functions when you declare your functions in order to maintain the scope of "this" on the class:

```
resetTimer = () => {
    this.setState(initialState);
}
```
instead of writing

```
resetTimer() {
    this.setState(initialState);
}
```

use an arrow function:

```
const resetTimer = () => {
    this.setState(initialState);
}
```

this will work
6,496
64,965,247
I was developing a bot on discord, and I want to log when user roles change. I tried the code below and that was just a start.

```py
import discord

TOKEN = ""

client = discord.Client()

@client.event
async def on_ready():
    print(f'{client.user} has connected to Discord!')

@client.event
async def on_message(message):
    print(message)

@client.event
async def on_member_update(before, after):
    print(before)
    print(after)

client.run(TOKEN)
```

When I type a message to a channel, it prints the message to the python console. However, when I add a role to myself in the same guild, it does not print anything.

**Note: I enabled `presence intent` and `server member intent` in discord developer portal**
2020/11/23
[ "https://Stackoverflow.com/questions/64965247", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14041512/" ]
Your intents should be enabled both on the portal and in the code itself. Here is how you do it in the code:

```py
intents = discord.Intents().all()
client = discord.Client(intents=intents)
```

And according to the [docs of on\_member\_update](https://discordpy.readthedocs.io/en/latest/api.html#discord.on_member_update), `This requires Intents.members to be enabled.` That is why it did not work.
You should activate that from your code like:

```py
intents = discord.Intents.default()
intents.members = True
intents.presences = True

client = discord.Client(intents=intents)
```

[reference for more information](https://discordpy.readthedocs.io/en/latest/intents.html)
6,501
61,296,763
I have trained a CNN in Matlab 2019b that classifies images between three classes. When this CNN was tested in Matlab it was functioning fine and only took 10-15 seconds to classify an image. I used the exportONNXNetwork function in Matlab so that I can implement my CNN in Tensorflow. This is the code I am using to use the ONNX file in python:

```py
import onnx
from onnx_tf.backend import prepare
import numpy as np
from PIL import Image

onnx_model = onnx.load('trainednet.onnx')
tf_rep = prepare(onnx_model)
filepath = 'filepath.png'

img = Image.open(filepath).resize((224, 224)).convert("RGB")
img = np.array(img).transpose((2, 0, 1))
img = np.expand_dims(img, 0)
img = img.astype(np.uint8)

probabilities = tf_rep.run(img)
print(probabilities)
```

When trying to use this code to classify the same test set, it seems to be classifying the images correctly, but it is very slow and freezes my computer as it reaches high memory usage of up to 95+% at some points. I also noticed that the command prompt prints this while classifying:

```
2020-04-18 18:26:39.214286: W tensorflow/core/grappler/optimizers/meta_optimizer.cc:530] constant_folding failed: Deadline exceeded: constant_folding exceeded deadline., time = 486776.938ms.
```

Is there any way I can make this python code classify faster?
2020/04/18
[ "https://Stackoverflow.com/questions/61296763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Maybe you could try to understand what part of the code takes a long time this way:

```
import onnx
from onnx_tf.backend import prepare
import numpy as np
from PIL import Image
import datetime

now = datetime.datetime.now()
onnx_model = onnx.load('trainednet.onnx')
tf_rep = prepare(onnx_model)
filepath = 'filepath.png'
later = datetime.datetime.now()
difference = later - now
print("Loading time : %f ms" % (difference.total_seconds() * 1000))

img = Image.open(filepath).resize((224, 224)).convert("RGB")
img = np.array(img).transpose((2, 0, 1))
img = np.expand_dims(img, 0)
img = img.astype(np.uint8)

now = datetime.datetime.now()
probabilities = tf_rep.run(img)
later = datetime.datetime.now()
difference = later - now
print("Prediction time : %f ms" % (difference.total_seconds() * 1000))
print(probabilities)
```

Let me know what the output looks like :)
You should consider some points while working on TensorFlow with Python. A GPU will be better for this work, as it speeds up the whole processing. For that, you have to install CUDA support. Apart from this, the editor also sometimes matters: from my experience I can tell VSCode is better than Spyder. I hope this helps.
6,502
50,315,645
I have a simple script which is using [signalr-client-py](https://github.com/TargetProcess/signalr-client-py) as an external module.

```
from requests import Session
from signalr import Connection
import threading
```

When I try to run my script using `sudo python myScriptName.py` I get an error:

```
Traceback (most recent call last):
  File "buttonEventDetectSample.py", line 3, in <module>
    from signalrManager import *
  File "/home/pi/Desktop/GitRepo/DiatAssign/Main/signalrManager.py", line 2, in <module>
    from signalr import Connection
ImportError: No module named signalr
```

If I run my script typing only `python myScriptName.py` it works perfectly fine, but I need to have the **sudo** in front because later on in my other scripts (that use this one) I perform write operations on the file system. I am quite new to Python and that's why I need to know how I can handle this situation.

If I type `pydoc modules` I get a list which contains:

```
signalr
signalrManager
```

If I type `pip freeze` I can see listed:

```
signalr-client==0.0.7
```
2018/05/13
[ "https://Stackoverflow.com/questions/50315645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2128702/" ]
By default sudo runs commands in a different environment. You can ask sudo to preserve the environment with the `-E` switch:

```
sudo -E python myScriptName.py
```

This comes with its own security risks, so be careful.
You need to check where signalr is installed. sudo runs the program in the environment available to root and if signalr is not installed globally it won't be picked up. Try 'sudo pip freeze' to see what is available in the root environment.
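To act on this advice, one quick diagnostic is to compare the interpreter and module search path each invocation sees; a small sketch (purely for inspection, not part of any fix):

```python
import sys

# Run this both as `python check_env.py` and `sudo python check_env.py`.
# If the interpreter or the search path differ between the two runs, the
# module was installed somewhere only your user's environment can see.
interpreter = sys.executable
search_path = sys.path

print(interpreter)
print(search_path)
```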
6,506
16,170,268
I have written a python27 module and installed it using `python setup.py install`. Part of that module has a script which I put in my bin folder within the module before I installed it. I think the module has installed properly and works (has been added to site-packages and scripts). I have built a simple script "test.py" that just runs functions and the script from the module. The functions work fine (the expected output prints to the console) but the script does not. I tried `from [module_name] import [script_name]` in test.py which did not work. How do I run a script within the bin of a module from the command line?
2013/04/23
[ "https://Stackoverflow.com/questions/16170268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2270903/" ]
Are you using `distutils` or `setuptools`? I tested right now, and if it's distutils, it's enough to have `scripts=['bin/script_name']` in your `setup()` call If instead you're using setuptools you can avoid to have a script inside bin/ altogether and define your entry point by adding `entry_points={'console_scripts': ['script_name = module_name:main']}` inside your `setup()` call (assuming you have a `main` function inside `module_name`) are you sure that the bin/script\_name is marked as executable? what is the exact error you get when trying to run the script? what are the contents of your setup.py?
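To make the entry-point variant concrete, here is a sketch of the arguments such a `setup.py` would pass; `script_name`, `module_name` and `main` are placeholders from the answer, and a real file would end by actually calling `setuptools.setup`:

```python
# Sketch of the keyword arguments for a setup.py that installs a console
# script via an entry point. All names below are placeholders.
setup_kwargs = dict(
    name="module_name",
    version="0.1",
    py_modules=["module_name"],
    entry_points={
        "console_scripts": [
            # installs a `script_name` command that calls module_name.main()
            "script_name = module_name:main",
        ]
    },
)

# A real setup.py would end with:
#   from setuptools import setup
#   setup(**setup_kwargs)
print(setup_kwargs["entry_points"]["console_scripts"])
```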
Please check whether your installed module uses a condition checking the state of the global variable `__name__`. I mean:

```
if __name__ == "__main__":
```

The global variable `__name__` is set to the string `"__main__"` only when you start the script manually from the command line (e.g. `python sample.py`). If you use this condition and put all your code under it, the code will run when you start the module directly, but not when you import the installed module from another script.

For example (the code from the module will not run when you import it):

testImport.py:

```
import sample
...another code here...
```

sample.py:

```
if __name__ == "__main__":
    print "You will only see this message when you start this module manually from the command line"
```
6,509
36,212,431
I want to use Python to open two files at the same time, read one line from each of them, then do some operations. Then read the next line from each and do some operations, then the next line, and so on. I want to know how I can do this. It seems that a `for` loop cannot do this job.
2016/03/25
[ "https://Stackoverflow.com/questions/36212431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5362936/" ]
```
file1 = open("some_file")
file2 = open("other_file")

for some_line, other_line in zip(file1, file2):
    #do something silly

file1.close()
file2.close()
```

note that `itertools.izip` may be preferred if you don't want to store the whole file in memory ... also note that this will finish when the end of either file is reached...
Why not read each file into a list, where each element in the list holds one line? Once you have both files loaded into your lists, you can work line by line (index by index) through your lists, doing whatever comparisons/operations you require.
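A minimal sketch of this list-based approach (the file names and contents below are made up for the demo):

```python
# Create two small sample files for the demo
with open("some_file", "w") as f:
    f.write("a1\na2\n")
with open("other_file", "w") as f:
    f.write("b1\nb2\nb3\n")

# Load both files fully into memory as lists of lines
with open("some_file") as f1, open("other_file") as f2:
    lines1 = f1.readlines()
    lines2 = f2.readlines()

# Work index by index over the shorter of the two
pairs = []
for i in range(min(len(lines1), len(lines2))):
    pairs.append((lines1[i].rstrip("\n"), lines2[i].rstrip("\n")))

print(pairs)  # [('a1', 'b1'), ('a2', 'b2')]
```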
6,510
23,769,001
I would like to know if it is possible to enable gzip compression for Server-Sent Events (SSE; Content-Type: text/event-stream). It seems it is possible, according to this book: <http://chimera.labs.oreilly.com/books/1230000000545/ch16.html>

But I can't find any example of SSE with gzip compression. I tried to send gzipped messages with the response header field *Content-Encoding* set to "gzip" without success.

For experimenting around SSE, I am testing a small web application made in Python with the bottle framework + gevent; I am just running the bottle WSGI server:

```
@bottle.get('/data_stream')
def stream_data():
    bottle.response.content_type = "text/event-stream"
    bottle.response.add_header("Connection", "keep-alive")
    bottle.response.add_header("Cache-Control", "no-cache")
    bottle.response.add_header("Content-Encoding", "gzip")
    while True:
        # new_data is a gevent AsyncResult object,
        # .get() just returns a data string when new
        # data is available
        data = new_data.get()
        yield zlib.compress("data: %s\n\n" % data)
        #yield "data: %s\n\n" % data
```

The code without compression (last line, commented) and without the gzip content-encoding header field works like a charm.

**EDIT**: thanks to the reply and to this other question: [Python: Creating a streaming gzip'd file-like?](https://stackoverflow.com/questions/2192529/python-creating-a-streaming-gzipd-file-like), I managed to solve the problem:

```
@bottle.route("/stream")
def stream_data():
    compressed_stream = zlib.compressobj()
    bottle.response.content_type = "text/event-stream"
    bottle.response.add_header("Connection", "keep-alive")
    bottle.response.add_header("Cache-Control", "no-cache, must-revalidate")
    bottle.response.add_header("Content-Encoding", "deflate")
    bottle.response.add_header("Transfer-Encoding", "chunked")
    while True:
        data = new_data.get()
        yield compressed_stream.compress("data: %s\n\n" % data)
        yield compressed_stream.flush(zlib.Z_SYNC_FLUSH)
```
2014/05/20
[ "https://Stackoverflow.com/questions/23769001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2753095/" ]
TL;DR: If the requests are not cached, you likely want to use zlib and declare Content-Encoding to be 'deflate'. That change alone should make your code work.

---

If you declare Content-Encoding to be gzip, you need to actually use gzip. They are based on the same compression algorithm, but gzip has some extra framing. This works, for example:

```
import gzip
import StringIO
from bottle import response, route

@route('/')
def get_data():
    response.add_header("Content-Encoding", "gzip")
    s = StringIO.StringIO()
    with gzip.GzipFile(fileobj=s, mode='w') as f:
        f.write('Hello World')
    return s.getvalue()
```

That only really makes sense if you use an actual file as a cache, though.
There's also middleware you can use so you don't need to worry about gzipping responses for each of your methods. Here's one I used recently: <https://code.google.com/p/ibkon-wsgi-gzip-middleware/>

This is how I used it (I'm using bottle.py with the gevent server):

```
from gzip_middleware import Gzipper
import bottle

app = Gzipper(bottle.app())
run(app = app, host='0.0.0.0', port=8080, server='gevent')
```

For this particular library, you can set which types of responses you want to compress by modifying the `DEFAULT_COMPRESSABLES` variable, for example:

```
DEFAULT_COMPRESSABLES = set(['text/plain', 'text/html', 'text/css',
                             'application/json', 'application/x-javascript',
                             'text/xml', 'application/xml',
                             'application/xml+rss', 'text/javascript',
                             'image/gif'])
```

All responses go through the middleware and get gzipped without modifying your existing code. By default, it compresses responses whose content-type belongs to `DEFAULT_COMPRESSABLES` and whose content-length is greater than 200 characters.
6,513
47,310,884
I need to passively install Python in my application's package installation, so I use the following:

```
python-3.5.4-amd64.exe /passive PrependPath=1
```

According to this: [3.1.4. Installing Without UI](https://docs.python.org/3.6/using/windows.html#installing-without-ui), I use the PrependPath parameter, which should add the paths into Path in the Windows environment variables. But it seems not to work: the variables do not change at all.

If I start the installation manually and select or deselect the checkbox to add Python to Path, then everything works. The same happens with a clean installation and also when modifying the current installation. Unfortunately I do not have another PC with Win 10 Pro to test it. I have also tried it with Python 3.6.3, with the same results.

**EDIT:** Also tried with PowerShell `Start-Process python-3.5.4-amd64.exe -ArgumentList /passive , PretendPath=1` with the same results. Also tested on several PCs with Windows 10, same results, so the problem is not just on a single PC.

**EDIT:** Of course all attempts were run as administrator.
2017/11/15
[ "https://Stackoverflow.com/questions/47310884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7031374/" ]
Ok, from my point of view it seems to be a bug in the Python installer and I can not find any way to make it work. I have found the following workaround:

Use py.exe, which is a wrapper for all versions of Python on the local machine, located in C:\Windows, so you can run it directly from CMD anywhere, because C:\Windows is standard content of the Path variable.

```
py -3.5 -c "import sys; print(sys.executable[:-10])"
```

This gives me the directory of the Python 3.5 installation. And then I set it into Path manually by:

```
setx Path "%Path%;PythonLocFromPreviousCommand"
```
Try PowerShell to do that:

```
Start-Process -NoNewWindow .\python.exe /passive
```
6,514
34,076,773
So I have an empty main frame called `MainWindow` and a `WelcomeWidget` that gets called immediately on program startup and loads inside the main frame. Then I want the button `next_btn` inside `WelcomeWidget` to call the `LicenseWidget` QWidget inside the `MainWindow` class. How do I do that? Here is my code:

**Main.py**

```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
#  Main.py
#
#  Copyright 2015 Ognjen Galic <gala@thinkpad>
#
#  This program is free software; you can redistribute it and/or modify
#  it under the terms of the GNU General Public License as published by
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.
#
#  This program is distributed in the hope that it will be useful,
#  but WITHOUT ANY WARRANTY; without even the implied warranty of
#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#  GNU General Public License for more details.
#
#  You should have received a copy of the GNU General Public License
#  along with this program; if not, write to the Free Software
#  Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
#  MA 02110-1301, USA.
#
#

from PyQt4 import QtGui, QtCore
from MainWindow import Ui_MainWindow
from WelcomeWidget import Ui_welcome_widget
from LicenseWidget import Ui_license_widget
import sys

class WelcomeWidget(QtGui.QWidget, Ui_welcome_widget):

    def __init__(self, parent=None):
        super(WelcomeWidget, self).__init__()
        self.setupUi(self)
        self.cancel_btn.pressed.connect(self.close)
        self.next_btn.pressed.connect(self.license_show)

    def close(self):
        sys.exit(0)

    def license_show(self):
        mainWindow.cw = LicenseWidget(self)
        mainWindow.setCentralWidget(self.cw)

class LicenseWidget(QtGui.QWidget, Ui_license_widget):

    def __init__(self, parent=None):
        super(LicenseWidget, self).__init__()
        self.setupUi(self)

class mainWindow(QtGui.QMainWindow, Ui_MainWindow):

    def __init__(self):
        super(mainWindow, self).__init__()
        self.setupUi(self)
        mainWindow.cw = WelcomeWidget(self)
        self.setCentralWidget(self.cw)

def main():
    app = QtGui.QApplication(sys.argv)
    ui = mainWindow()
    ui.show()
    sys.exit(app.exec_())

main()
```

**LicenseWidget.py**

```
# -*- coding: utf-8 -*-

# Form implementation generated from reading ui file 'LicenseWidget.ui'
#
# Created by: PyQt4 UI code generator 4.11.4
#
# WARNING! All changes made in this file will be lost!
from PyQt4 import QtCore, QtGui try: _fromUtf8 = QtCore.QString.fromUtf8 except AttributeError: def _fromUtf8(s): return s try: _encoding = QtGui.QApplication.UnicodeUTF8 def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig, _encoding) except AttributeError: def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig) class Ui_license_widget(object): def setupUi(self, license_widget): license_widget.setObjectName(_fromUtf8("license_widget")) license_widget.resize(640, 420) license_widget.setMinimumSize(QtCore.QSize(640, 420)) license_widget.setMaximumSize(QtCore.QSize(640, 420)) self.frame_btn = QtGui.QFrame(license_widget) self.frame_btn.setGeometry(QtCore.QRect(0, 365, 641, 56)) self.frame_btn.setFrameShape(QtGui.QFrame.StyledPanel) self.frame_btn.setFrameShadow(QtGui.QFrame.Raised) self.frame_btn.setObjectName(_fromUtf8("frame_btn")) self.no_btn = QtGui.QPushButton(self.frame_btn) self.no_btn.setGeometry(QtCore.QRect(540, 15, 87, 26)) self.no_btn.setObjectName(_fromUtf8("no_btn")) self.yes_btn = QtGui.QPushButton(self.frame_btn) self.yes_btn.setGeometry(QtCore.QRect(430, 15, 87, 26)) self.yes_btn.setObjectName(_fromUtf8("yes_btn")) self.back_btn = QtGui.QPushButton(self.frame_btn) self.back_btn.setEnabled(True) self.back_btn.setGeometry(QtCore.QRect(346, 15, 87, 26)) self.back_btn.setCheckable(False) self.back_btn.setObjectName(_fromUtf8("back_btn")) self.main_frame = QtGui.QFrame(license_widget) self.main_frame.setGeometry(QtCore.QRect(0, 0, 640, 75)) self.main_frame.setMinimumSize(QtCore.QSize(8, 0)) self.main_frame.setFrameShape(QtGui.QFrame.StyledPanel) self.main_frame.setFrameShadow(QtGui.QFrame.Raised) self.main_frame.setObjectName(_fromUtf8("main_frame")) self.Title = QtGui.QLabel(self.main_frame) self.Title.setGeometry(QtCore.QRect(10, 5, 311, 61)) self.Title.setObjectName(_fromUtf8("Title")) self.license_cont = QtGui.QTextEdit(license_widget) 
self.license_cont.setGeometry(QtCore.QRect(0, 74, 640, 260)) self.license_cont.setObjectName(_fromUtf8("license_cont")) self.agree_or_not = QtGui.QLabel(license_widget) self.agree_or_not.setGeometry(QtCore.QRect(10, 340, 621, 17)) self.agree_or_not.setObjectName(_fromUtf8("agree_or_not")) self.retranslateUi(license_widget) QtCore.QMetaObject.connectSlotsByName(license_widget) def retranslateUi(self, license_widget): license_widget.setWindowTitle(_translate("license_widget", "Form", None)) self.no_btn.setText(_translate("license_widget", "No", None)) self.yes_btn.setText(_translate("license_widget", "Yes", None)) self.back_btn.setText(_translate("license_widget", "Back", None)) self.Title.setText(_translate("license_widget", "<html><head/><body><p><span style=\" font-size:11pt; font-weight:600;\">Program License</span></p><p>Please read the license carefully</p></body></html>", None)) self.license_cont.setHtml(_translate("license_widget", "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0//EN\" \"http://www.w3.org/TR/REC-html40/strict.dtd\">\n" "<html><head><meta name=\"qrichtext\" content=\"1\" /><style type=\"text/css\">\n" "p, li { white-space: pre-wrap; }\n" "</style></head><body style=\" font-family:\'Droid Sans\'; font-size:10pt; font-weight:400; font-style:normal;\">\n" "<p style=\" margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\">example license</p></body></html>", None)) self.agree_or_not.setText(_translate("license_widget", "Do you agree to the license? If you click \"No\", the installer will close.", None)) ``` **WelcomeWidget.py** ``` # -*- coding: utf-8 -*- # Form implementation generated from reading ui file 'WelcomeWidget.ui' # # Created by: PyQt4 UI code generator 4.11.4 # # WARNING! All changes made in this file will be lost! 
from PyQt4 import QtCore, QtGui

try:
    _fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
    def _fromUtf8(s):
        return s

try:
    _encoding = QtGui.QApplication.UnicodeUTF8
    def _translate(context, text, disambig):
        return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
    def _translate(context, text, disambig):
        return QtGui.QApplication.translate(context, text, disambig)

class Ui_welcome_widget(object):
    def setupUi(self, welcome_widget):
        welcome_widget.setObjectName(_fromUtf8("welcome_widget"))
        welcome_widget.resize(640, 420)
        welcome_widget.setMinimumSize(QtCore.QSize(640, 420))
        welcome_widget.setMaximumSize(QtCore.QSize(640, 420))
        self.side_pixmap = QtGui.QLabel(welcome_widget)
        self.side_pixmap.setGeometry(QtCore.QRect(0, 0, 220, 365))
        self.side_pixmap.setText(_fromUtf8(""))
        self.side_pixmap.setPixmap(QtGui.QPixmap(_fromUtf8("media/InstallShield.png")))
        self.side_pixmap.setObjectName(_fromUtf8("side_pixmap"))
        self.welcome_frame = QtGui.QFrame(welcome_widget)
        self.welcome_frame.setGeometry(QtCore.QRect(0, 365, 641, 56))
        self.welcome_frame.setFrameShape(QtGui.QFrame.StyledPanel)
        self.welcome_frame.setFrameShadow(QtGui.QFrame.Raised)
        self.welcome_frame.setObjectName(_fromUtf8("welcome_frame"))
        self.cancel_btn = QtGui.QPushButton(self.welcome_frame)
        self.cancel_btn.setGeometry(QtCore.QRect(540, 15, 87, 26))
        self.cancel_btn.setObjectName(_fromUtf8("cancel_btn"))
        self.next_btn = QtGui.QPushButton(self.welcome_frame)
        self.next_btn.setGeometry(QtCore.QRect(430, 15, 87, 26))
        self.next_btn.setObjectName(_fromUtf8("next_btn"))
        self.back_btn = QtGui.QPushButton(self.welcome_frame)
        self.back_btn.setEnabled(False)
        self.back_btn.setGeometry(QtCore.QRect(346, 15, 87, 26))
        self.back_btn.setObjectName(_fromUtf8("back_btn"))
        self.welcome_header = QtGui.QLabel(welcome_widget)
        self.welcome_header.setEnabled(True)
        self.welcome_header.setGeometry(QtCore.QRect(240, 10, 361, 91))
        font = QtGui.QFont()
        font.setPointSize(20)
        self.welcome_header.setFont(font)
        self.welcome_header.setWordWrap(True)
        self.welcome_header.setObjectName(_fromUtf8("welcome_header"))
        self.welcome_desc = QtGui.QLabel(welcome_widget)
        self.welcome_desc.setGeometry(QtCore.QRect(240, 120, 391, 51))
        self.welcome_desc.setWordWrap(True)
        self.welcome_desc.setObjectName(_fromUtf8("welcome_desc"))

        self.retranslateUi(welcome_widget)
        QtCore.QMetaObject.connectSlotsByName(welcome_widget)

    def retranslateUi(self, welcome_widget):
        welcome_widget.setWindowTitle(_translate("welcome_widget", "Form", None))
        self.cancel_btn.setText(_translate("welcome_widget", "Cancel", None))
        self.next_btn.setText(_translate("welcome_widget", "Next", None))
        self.back_btn.setText(_translate("welcome_widget", "Back", None))
        self.welcome_header.setText(_translate("welcome_widget", "<html><head/><body><p><span style=\" font-size:16pt;\">Welcome to the InstallShield wizard for Google Chrome.</span></p></body></html>", None))
        self.welcome_desc.setText(_translate("welcome_widget", "<html><head/><body><p>This install wizard will install Google Chrome to your computer. To continue press Next.</p></body></html>", None))
```

**MainWindow.py**

```
# -*- coding: utf-8 -*-

# Form implementation generated from reading ui file 'MainWindow.ui'
#
# Created by: PyQt4 UI code generator 4.11.4
#
# WARNING! All changes made in this file will be lost!
from PyQt4 import QtCore, QtGui

try:
    _fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
    def _fromUtf8(s):
        return s

try:
    _encoding = QtGui.QApplication.UnicodeUTF8
    def _translate(context, text, disambig):
        return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
    def _translate(context, text, disambig):
        return QtGui.QApplication.translate(context, text, disambig)

class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName(_fromUtf8("MainWindow"))
        MainWindow.resize(640, 420)
        MainWindow.setMinimumSize(QtCore.QSize(640, 420))
        MainWindow.setMaximumSize(QtCore.QSize(640, 420))
        MainWindow.setToolButtonStyle(QtCore.Qt.ToolButtonIconOnly)
        MainWindow.setAnimated(False)
        self.main_widget = QtGui.QWidget(MainWindow)
        self.main_widget.setObjectName(_fromUtf8("main_widget"))
        MainWindow.setCentralWidget(self.main_widget)

        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        MainWindow.setWindowTitle(_translate("MainWindow", "InstallShield Wizard", None))
```

If this worked straight out of the WelcomeWidget class and told the MainWindow class, that would be awesome.

```
class WelcomeWidget(QtGui.QWidget, Ui_welcome_widget):
    [ ... ]
    def license_show(self):
        mainWindow.cw = LicenseWidget(self)
        mainWindow.setCentralWidget(self.cw)
```

Someone with an answer gets an e-cookie!
2015/12/03
[ "https://Stackoverflow.com/questions/34076773", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3855967/" ]
`QWizard` might be of use. Another way would be to lay out both widgets in a vertical layout (`QVBoxLayout`) and hide the one you are not interested in; the visible one then takes up all the space. It can even be constructed completely in Qt Creator: just `hide()` what you don't want to see and `show()` what you do. It is possible to build complex layouts with lots of widgets in Qt Creator and show only what's necessary for the task at hand.
``` #!/usr/bin/env python # -*- coding: utf-8 -*- # # Main.py # # Copyright 2015 Ognjen Galic <gala@thinkpad> # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, # MA 02110-1301, USA. # # from PyQt4 import QtGui, QtCore from MainWindow import Ui_MainWindow from MainWidget import Ui_main_widget from WelcomeWidget import Ui_welcome_widget from LicenseWidget import Ui_license_widget import sys class mainWindow(QtGui.QMainWindow, Ui_MainWindow): def __init__(self): super(mainWindow, self).__init__() self.setupUi(self) mainWindow.welcomeWidget = WelcomeWidget(self) mainWindow.licenseWidget = LicenseWidget(self) mainWindow.mainWidget = MainWidget(self) self.setCentralWidget(self.mainWidget) self.mainWidget.addWidget(self.welcomeWidget) self.welcomeWidget.next_btn.pressed.connect(self.license_show) self.welcomeWidget.cancel_btn.pressed.connect(self.close_cancel) self.licenseWidget.cancel_btn.pressed.connect(self.close_cancel) self.licenseWidget.back_btn.pressed.connect(self.go_back) self.licenseWidget.i_do.toggled.connect(self.accept_license) self.licenseWidget.i_dont.toggled.connect(self.dont_accept_license) def go_back(self): self.mainWidget.removeWidget(self.licenseWidget) self.mainWidget.addWidget(self.welcomeWidget) def close_cancel(self): sys.exit(0) def license_show(self): self.mainWidget.removeWidget(self.welcomeWidget) 
self.mainWidget.addWidget(self.licenseWidget) def accept_license(self): self.licenseWidget.next_btn.setEnabled(True) def dont_accept_license(self): self.licenseWidget.next_btn.setEnabled(False) class WelcomeWidget(QtGui.QStackedWidget, Ui_welcome_widget): def __init__(self, parent=None): super(WelcomeWidget, self).__init__() self.setupUi(self, "Linux 2.6.32.68") class LicenseWidget(QtGui.QStackedWidget, Ui_license_widget): def __init__(self, parent=None): super(LicenseWidget, self).__init__() license_file = open("license.html") license_text_file = license_file.read() self.setupUi(self, license_text_file) class MainWidget(QtGui.QStackedWidget, Ui_main_widget): def __init__(self, parent=None): super(MainWidget, self).__init__() self.setupUi(self) def main(): app = QtGui.QApplication(sys.argv) ui = mainWindow() ui.show() sys.exit(app.exec_()) main() ``` This works. Thanks to anyone who helped.
6,518
56,914,224
I have a dataframe as shown in the picture: [problem dataframe: attdf](https://i.stack.imgur.com/9e5y4.png) I would like to group the data by Source class and Destination class, count the number of rows in each group and sum up Attention values. While trying to achieve that, I am unable to get past this type error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-100-6f2c8b3de8f2> in <module>() ----> 1 attdf.groupby(['Source Class', 'Destination Class']).count() 8 frames pandas/_libs/properties.pyx in pandas._libs.properties.CachedProperty.__get__() /usr/local/lib/python3.6/dist-packages/pandas/core/algorithms.py in _factorize_array(values, na_sentinel, size_hint, na_value) 458 table = hash_klass(size_hint or len(values)) 459 uniques, labels = table.factorize(values, na_sentinel=na_sentinel, --> 460 na_value=na_value) 461 462 labels = ensure_platform_int(labels) pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.factorize() pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable._unique() TypeError: unhashable type: 'numpy.ndarray' ``` ``` attdf.groupby(['Source Class', 'Destination Class']) ``` gives me a `<pandas.core.groupby.generic.DataFrameGroupBy object at 0x7f1e720f2080>` which I'm not sure how to use to get what I want. Dataframe attdf can be imported from : <https://drive.google.com/open?id=1t_h4b8FQd9soVgYeiXQasY-EbnhfOEYi> Please advise.
2019/07/06
[ "https://Stackoverflow.com/questions/56914224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9003184/" ]
@Adam.Er8 and @jezarael helped me with their inputs. The unhashable type error in my case was caused by the datatypes of the columns in my dataframe. [Original df and df imported from csv](https://i.stack.imgur.com/soqF0.png) It turned out that the original dataframe had two object columns that I was trying to use in the groupby, hence the unhashable type error. Importing the data into a fresh dataframe straight from the csv fixed the datatypes, and consequently no more type errors were raised.
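The dtype explanation can be reproduced on a toy frame (an illustrative sketch, not the poster's data): a column whose cells hold numpy arrays makes the groupby keys unhashable.

```python
import numpy as np
import pandas as pd

# Cells holding numpy arrays give an object column whose values cannot
# be hashed, so factorizing the groupby keys fails with TypeError:
df = pd.DataFrame({"key": [np.array([1]), np.array([2])], "val": [1, 2]})
try:
    df.groupby("key").count()
    failed = False
except TypeError as exc:
    failed = "unhashable" in str(exc)
```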
try using [`.agg`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.core.groupby.DataFrameGroupBy.agg.html) as follows: ```py import pandas as pd attdf = pd.read_csv("attdf.csv") print(attdf.groupby(['Source Class', 'Destination Class']).agg({"Attention": ['sum', 'count']})) ``` Output: ``` Attention sum count Source Class Destination Class 0 0 282.368908 1419 1 7.251101 32 2 3.361009 23 3 22.482438 161 4 14.020189 88 5 10.138409 75 6 11.377947 80 1 0 6.172269 32 1 181.582437 1035 2 9.440956 62 3 12.007303 67 4 3.025752 20 5 4.491725 28 6 0.279559 2 2 0 3.349921 23 1 8.521828 62 2 391.116034 2072 3 9.937170 53 4 0.412747 2 5 4.441985 30 6 0.220316 2 3 0 33.156251 161 1 11.944373 67 2 9.176584 53 3 722.685180 3168 4 29.776050 137 5 8.827215 54 6 2.434347 16 4 0 17.431855 88 1 4.195519 20 2 0.457089 2 3 20.401789 137 4 378.802604 1746 5 3.616083 19 6 1.095061 6 5 0 13.525333 75 1 4.289306 28 2 6.424412 30 3 10.911705 54 4 3.896328 19 5 250.309764 1132 6 8.643153 46 6 0 15.249959 80 1 0.150240 2 2 0.413639 2 3 3.108417 16 4 0.850280 6 5 8.655959 46 6 151.571505 686 ```
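A variant of the same groupby using named aggregation (available in newer pandas) gives flat column names instead of the two-level header. This is a sketch on made-up numbers, not the real `attdf`:

```python
import pandas as pd

df = pd.DataFrame({
    "Source Class": [0, 0, 1, 1],
    "Destination Class": [0, 0, 0, 1],
    "Attention": [1.0, 2.0, 3.0, 4.0],
})
# Named aggregation: output column name = (input column, aggregation)
out = df.groupby(["Source Class", "Destination Class"]).agg(
    attention_sum=("Attention", "sum"),
    row_count=("Attention", "count"),
)
```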
6,519
5,475,259
I run a small VPS with 512M memory of memory that currently hosts 3 very low traffic PHP sites and a personal email account. I have been teaching myself Django over the last few weeks and am starting to think about deploying a project. There seem to be a very large number of methods for deploying a Django site. Given the limited resources I have available, what would be the most appropriate option? Will the VPS be suitable to host both python and PHP sites or would it be worth getting a separate server? Any advice appreciated. Thanks.
2011/03/29
[ "https://Stackoverflow.com/questions/5475259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/570068/" ]
There aren't really a great number of ways to do it. In fact, there's the recommended way - via Apache/mod\_wsgi - and all the other ways. The recommended way is fully documented [here](http://docs.djangoproject.com/en/1.3/howto/deployment/modwsgi/). For a low-traffic site, you should have no trouble fitting it in your 512MB VPS along with your PHP sites.
Django has documentation describing possible [server arrangements](http://code.djangoproject.com/wiki/ServerArrangements). For light weight, yet very robust set up, I'd recommend Nginx setup. It's much lighter than Apache.
6,520
50,552,404
So far `pandas` has read through all my CSV files without any problem, but now there seems to be an issue. When doing:

```
df = pd.read_csv(r'path to file', sep=';')
```

I get:

```
OSError                                   Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 df = pd.read_csv(r'path Übersicht\Input\test\test.csv', sep=';')

c:\program files\python36\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
    703                     skip_blank_lines=skip_blank_lines)
    704
--> 705         return _read(filepath_or_buffer, kwds)
    706
    707     parser_f.__name__ = name

c:\program files\python36\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
    443
    444     # Create the parser.
--> 445     parser = TextFileReader(filepath_or_buffer, **kwds)
    446
    447     if chunksize or iterator:

c:\program files\python36\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds)
    812             self.options['has_index_names'] = kwds['has_index_names']
    813
--> 814         self._make_engine(self.engine)
    815
    816     def close(self):

c:\program files\python36\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine)
   1043     def _make_engine(self, engine='c'):
   1044         if engine == 'c':
-> 1045             self._engine = CParserWrapper(self.f, **self.options)
   1046         else:
   1047             if engine == 'python':

c:\program files\python36\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds)
   1682         kwds['allow_leading_cols'] = self.index_col is not False
   1683
-> 1684         self._reader = parsers.TextReader(src, **kwds)
   1685
   1686         # XXX

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._setup_parser_source()

OSError: Initializing from file failed
```

Other files in the same folder that are XLS files can be accessed without an issue.

When using the Python `csv` library like so:

```
import csv
file = csv.reader(open(r'pathtofile'))
for row in file:
    print(row)
    break

df = pd.read_csv(file, sep=';')
```

the file is loaded and the first line is printed. However, I then get:

```
ValueError: Invalid file path or buffer object type:
```

probably because `read_csv` cannot be used on a `csv.reader` object. How do I get the first `pandas` call to work? The csv does not contain any special characters except German ones. The filesize is 10 MB.
2018/05/27
[ "https://Stackoverflow.com/questions/50552404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2252633/" ]
I ran into a similar problem. It turned out the CSV I had downloaded had no permissions at all. The error message from pandas did not point this out, making it hard to debug. Check that your file has read permissions.
For pandas `read_csv` raising `OSError: Initializing from file failed`, we could try `chmod 600 file.csv` to give the file read permissions.
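Before calling `read_csv`, the permission theory can be checked from Python itself. This is a sketch using a temporary file; note that `os.access` may still report a mode-0 file as readable when running as root:

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp(suffix=".csv")
os.close(fd)
os.chmod(path, 0)                            # simulate a file with no permissions
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # the chmod 600 fix: owner read/write
readable_after = os.access(path, os.R_OK)
os.remove(path)
```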
6,522
65,645,999
For some context, I am coding some geometric transformations into a python class, adding some matrix multiplication methods. There can be many 3D objects inside of a 3D "scene". In order to allow users to switch between applying transformations to the entire scene or to one object in the scene, I'm computing the geometric center of the object's bounding box (cuboid?) in order to allow that geometric center to function as the "origin" in the objects Euclidean space, and then to apply transformation matrix multiplications to that object alone. My specific question occurs when mapping points from scene space to local object space, I subtract the geometric center from the points. Then after the transformation, to convert back, I add the geometric center to the points. Is there a pythonic way to change my function from adding to subtracting via keyword argument? I don't like what I have now, it doesn't seem very pythonic. ```py def apply_centroid_transform(point, centroid, reverse=False): reverse_mult = 1 if reverse: reverse_mult = -1 new_point = [ point[0] - (reverse_mult * centroid["x"]), point[1] - (reverse_mult * centroid["y"]), point[2] - (reverse_mult * centroid["z"]), ] return new_point ``` I don't want to have the keyword argument be `multiply_factor=1` and then make the user know to type `-1` there, because that seems unintuitive. I hope my question makes sense. Thanks for any guidance you may have.
2021/01/09
[ "https://Stackoverflow.com/questions/65645999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10151432/" ]
One line: ```py def apply_centroid_transform(point, centroid, reverse=False): return [point[i] - [1, -1][reverse]*centroid[j] for i, j in enumerate('xyz')] ``` It is not very readable, but it is very concise :)
Well, if you like, you can pick the operation once up front: ``` import operator def apply_centroid_transform(point, centroid, reverse=False): op = operator.add if reverse else operator.sub return [op(p_i, centroid[axis]) for p_i, axis in zip(point, "xyz")] ``` `operator.sub`/`operator.add` work for both ints and floats, and indexing the centroid by axis name matches the `{"x": ..., "y": ..., "z": ...}` dict from the question.
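Whichever variant you pick, the sign convention is easy to verify with a round-trip check. The numbers below are hypothetical; the forward transform subtracts the centroid and `reverse=True` adds it back:

```python
def apply_centroid_transform(point, centroid, reverse=False):
    # Compact sign-based variant: subtract the centroid going into local
    # space, add it back when mapping out to scene space again.
    sign = 1 if reverse else -1
    return [p + sign * centroid[axis] for p, axis in zip(point, "xyz")]

centroid = {"x": 1.0, "y": 2.0, "z": 3.0}
local = apply_centroid_transform([5.0, 5.0, 5.0], centroid)
back = apply_centroid_transform(local, centroid, reverse=True)
```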
6,532
26,199,343
I have a bunch of `Album` objects in a `list` (code for objects posted below). 5570 to be exact. However, when looking at unique objects, I should have 385. Because of the way that the objects are created (I don't know if I can explain it properly), I thought it would be best to add all the objects into the list, and then delete the ones that are similar afterwards. Certain objects have the same strings for each argument(`artist`, `title`, `tracks`) and I would like to get rid of them. However, I know I cannot simply remove the duplicates, since they are stored in separate memory locations, and therefore aren't exactly identical. Can anyone help me with removing the duplicates? As you can probably tell, I am quite new to python. Thanks in advance! ``` class Album(object) : def __init__(self, artist, title, tracks = None) : tracks = [] self.artist = artist self.title = title self.tracks = tracks def add_track(self, track) : self.track = track (self.tracks).append(track) print "The track %s was added." % (track) def __str__(self) : return "Artist: %s, Album: %s [" % (self.artist, self.title) + str(len(self.tracks)) + " Tracks]" ```
2014/10/05
[ "https://Stackoverflow.com/questions/26199343", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4027606/" ]
When you do MVVM and want to use a button, you should use a `DelegateCommand` or `RelayCommand`. If you implement `ICommand` properly (including `CanExecute`), the command binding on the button will handle `IsEnabled` for you.

```
<Button Command="{Binding MyRemoveCommand}"></Button>
```

cs.

```
public ICommand MyRemoveCommand { get; set; }

this.MyRemoveCommand = new DelegateCommand(this.RemoveCommandExecute, this.CanRemoveCommandExecute);

private bool CanRemoveCommandExecute()
{
    return this.CanRemove;
}

private void RemoveCommandExecute()
{
    if (!this.CanRemoveCommandExecute())
        return;
    // execution logic here
}
```
As far as i can see in your MVVM there is a bool "CanRemove". You can bind this to your buttons visibility with the already [BooleanToVisibilityConverter](http://msdn.microsoft.com/en-us/library/system.windows.controls.booleantovisibilityconverter%28v=vs.110%29.aspx) which is provided by .NET
6,535
3,904,033
Is there an easy way to get a python code segment to run every 5 minutes? I know I could do it using time.sleep() but was there any other way? For example I want to run this every 5 minutes: ``` x = 0 def run_5(): print "5 minutes later" global x += 5 print x, "minutes since start" ``` That's only a fake example but the idea is there. Any ideas? I am on linux and would happily use cron but was just wondering if there was a python alternative?
2010/10/11
[ "https://Stackoverflow.com/questions/3904033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/472006/" ]
you can do it with the threading module ``` >>> import threading >>> END = False >>> def run(x=0): ... x += 5 ... print x ... if not END: ... threading.Timer(1.0, run, [x]).start() ... >>> threading.Timer(1.0, run, [x]).start() >>> 5 10 15 20 25 30 35 40 ``` Then when you want it to stop, set `END = True`.
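A tidier variant of the same idea uses `threading.Event` instead of a module-level flag, so the loop can be stopped promptly. This sketch uses a short interval for demonstration; for the question's case the interval would be 300 seconds:

```python
import threading
import time

def run_every(interval, func, stop_event):
    # Event.wait doubles as a cancellable sleep: it returns True as soon
    # as stop_event is set, which ends the loop without waiting out the
    # full interval.
    while not stop_event.wait(interval):
        func()

ticks = []
stop = threading.Event()
worker = threading.Thread(target=run_every, args=(0.02, lambda: ticks.append(time.time()), stop))
worker.start()
time.sleep(0.1)
stop.set()
worker.join()
```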
You might want to have a look at `cron` if you are running a \*nix type OS. You could easily have it run you program every 5 minutes <http://www.unixgeeks.org/security/newbie/unix/cron-1.html> <https://help.ubuntu.com/community/CronHowto>
6,538
3,113,002
After doing web development (PHP/JS) for the last few years I thought it is about time to also have a look at something different. I thought it is always good to look at different areas of programming to understand different approaches better, so I now want to have a look at GUI development. As the programming language I chose Python, where I am now slowly getting the basics, and I also found this question: [How to learn python](https://stackoverflow.com/questions/17988/how-to-learn-python), which already contains good links and book proposals. So I am now mainly looking for some info about PyQt: * Tutorials * Books * General tips for GUI development I already looked at some tutorials, but didn't find any really good ones. Most were pretty short and didn't really explain anything. Thanks in advance for any advice.
2010/06/24
[ "https://Stackoverflow.com/questions/3113002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/276382/" ]
The first thing to realize is that you'll get more mileage out of understanding Qt than understanding PyQt. Most of the good documentation discusses Qt, not PyQt, so getting conversant with them (and how to convert that code to PyQt code) is a lifesaver. Note, I don't actually recommend *programming* Qt in C++; Python is a fantastic language for Qt programming, since it takes care of a lot of gruntwork, leaving you to actually code application logic. The best book I've found for working with PyQt is [Rapid GUI Programming with Python and Qt](http://www.qtrac.eu/pyqtbook.html). It's got a nice small Python tutorial in the front, then takes you through the basics of building a Qt application. By the end of the book you should have a good idea of how to build an application, and some basic idea of where to start for more advanced topics. The other critical reference is the [bindings documentation for PyQt](http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/pyqt4ref.html). Pay particular attention to the "New-style Signal and Slot Support"; it's a *huge* improvement over the old style. Once you really understand that document (and it's pretty short) you'll be able to navigate the Qt docs pretty easily.
I had this bookmark saved: <http://www.harshj.com/2009/04/26/the-pyqt-intro/>
6,541
18,905,026
When trying to unit test a method that returns a tuple and I am trying to see if the code accesses the correct tuple index, python tries to evaluate the expected call and turns it into a string. `call().methodA().__getitem__(0)` ends up getting converted into `'().methodA'` in my `expected_calls` list for the assertion. The example code provided, results in the output and traceback: ``` expected_calls=[call().methodA(), '().methodA'] result_calls=[call().methodA(), call().methodA().__getitem__(0)] ====================================================================== ERROR: test_methodB (badMockCalls.Test_UsingToBeMocked_methods) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\dev\workspace\TestCode\src\badMockCalls.py", line 43, in test_methodB self.assertListEqual(expected_calls, self.result_calls) File "C:\Python33\lib\unittest\case.py", line 844, in assertListEqual self.assertSequenceEqual(list1, list2, msg, seq_type=list) File "C:\Python33\lib\unittest\case.py", line 764, in assertSequenceEqual if seq1 == seq2: File "C:\Python33\lib\unittest\mock.py", line 1927, in __eq__ first, second = other ValueError: too many values to unpack (expected 2) ---------------------------------------------------------------------- Ran 1 test in 0.006s FAILED (errors=1) ``` How do I go about asserting that methodB is calling self.tbm.methodA()[0] properly? 
Example code (Python 3.3.2): ``` import unittest from unittest.mock import call, patch import logging log = logging.getLogger(__name__) log.setLevel(logging.DEBUG) _ch = logging.StreamHandler() _ch.setLevel(logging.DEBUG) log.addHandler(_ch) class ToBeMocked(): # external resource that can't be changed def methodA(self): return (1,) class UsingToBeMocked(): # project code def __init__(self): self.tbm = ToBeMocked() def methodB(self): value = self.tbm.methodA()[0] return value class Test_UsingToBeMocked_methods(unittest.TestCase): def setUp(self): self.patcher = patch(__name__ + '.ToBeMocked') self.mock_ToBeMocked = self.patcher.start() self.utbm = UsingToBeMocked() # clear out the mock_calls list from the constructor calls self.mock_ToBeMocked.mock_calls = [] # set result to always point to the mock_calls that we are testing self.result_calls = self.mock_ToBeMocked.mock_calls def tearDown(self): self.patcher.stop() def test_methodB(self): self.utbm.methodB() # make sure the correct sequence of calls is made with correct parameters expected_calls = [call().methodA(), call().methodA().__getitem__(0)] log.debug('expected_calls=' + str(expected_calls)) log.debug(' result_calls=' + str(self.result_calls)) self.assertListEqual(expected_calls, self.result_calls) if __name__ == "__main__": unittest.main() ```
2013/09/19
[ "https://Stackoverflow.com/questions/18905026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/373628/" ]
To test for `mock_object.account['xxx1'].patch(body={'status': 'active'})` I had to use the test: ``` mock_object.account.__getitem__.assert_has_calls([ call('xxx1'), call().patch(body={'status': 'active'}), ]) ``` I can't explain why this works; it looks like weird behaviour, possibly a bug in mock, but I consistently get these results.
I've just stumbled upon the same problem. I've used the solution/work-around from here: <http://www.voidspace.org.uk/python/mock/examples.html#mocking-a-dictionary-with-magicmock> namely: ``` >> mock.__getitem__.call_args_list [call('a'), call('c'), call('d'), call('b'), call('d')] ``` You can skip the magic function name misinterpretation and check against its arguments.
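Both answers boil down to the same trick: assert on the magic-method mock directly instead of comparing whole `mock_calls` lists. A minimal sketch:

```python
from unittest.mock import MagicMock, call

m = MagicMock()
value = m.methodA()[0]

# The subscript is recorded on the __getitem__ mock of methodA's return
# value, so we can compare its call list without touching mock_calls
# (and without triggering the _Call.__eq__ unpacking error):
getitem_calls = m.methodA.return_value.__getitem__.call_args_list
matches = getitem_calls == [call(0)]
```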
6,546
14,366,668
I've been working on learning Python and somehow came up with the following code: ``` for item in list: while list.count(item)!=1: list.remove(item) ``` I was wondering if this kind of coding can be done in C++ (using the list's length for the loop while decreasing its size). If not, can anyone tell me why? Thanks!
2013/01/16
[ "https://Stackoverflow.com/questions/14366668", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1948847/" ]
I am not a big Python programmer, but it seems like the above code removes duplicates from a list. Here is a C++ equivalent: ``` list.sort(); list.unique(); ``` As for modifying the list while iterating over it, you can do that as well. Here is an example: ``` for (auto it = list.begin(), eit = list.end(); it != eit; ) { if (std::count(it, eit, *it) > 1) it = list.erase(it); else ++it; } ``` Hope it helps.
In C++, you can compose something like this from various algorithms of the standard library; check out `remove()` and `find()`. However, the way your algorithm is written, it has O(n^2) complexity. Sorting the list and then scanning over it to put one of each value into a new list has O(n log n) complexity, but ruins the order. In general, both in Python and in C++, it is often better to copy or move elements to a temporary container and then swap with the original than to modify the original in place. This is easier to get right since you don't step on your own feet (see delnan's comment), and it is faster because it avoids repeated reallocation and copying of objects.
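For reference, this is the Python idiom from the question run on concrete data. It does dedupe in place, but each step rescans the list, which is where the O(n^2) comes from:

```python
# Mutating the list while iterating over it: count() and remove() rescan
# the list on every pass, keeping exactly one copy of each value.
items = [1, 1, 2, 3, 3, 3]
for item in items:
    while items.count(item) != 1:
        items.remove(item)
# items is now [1, 2, 3]
```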
6,547
49,889,153
I am running an RNN on a signal in fixed-size segments. The following code allows me to preserve the final state of the previous batch to initialize the initial state of the next batch. ``` rnn_outputs, final_state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=init_state) ``` This works when the batches are non-overlapping. For example, my first batch processes samples 0:124 and `final_state` is the state after this processing. Then, the next batch processes samples 124:256, setting `init_state` to `final_state`. My question is how to retrieve an intermediary state when the batches are overlapping. First, I process samples 0:124, then 10:134, 20:144, so the hop size is 10. I would like to retrieve not the `final_state` but the state after processing 10 samples. Is it possible in TF to keep the intermediary state? The [documentation](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/contrib/rnn/static_rnn) shows that the return value consists only of the final state. The image shows the issue I am facing due to state discontinuity. In my program, the RNN segment length is 215 and the hop length is 20. [![Sample results](https://i.stack.imgur.com/G9owu.png)](https://i.stack.imgur.com/G9owu.png) Update: the easiest turned out to be what [David Parks](https://stackoverflow.com/a/49890068/4008884) described: ``` rnn_outputs_one, mid_state = tf.contrib.rnn.static_rnn(cell, rnn_inputs_one, initial_state=rnn_tuple_state) rnn_outputs_two, final_state = tf.contrib.rnn.static_rnn(cell, rnn_inputs_two, initial_state=mid_state) rnn_outputs = rnn_outputs_one + rnn_outputs_two ``` and ``` prev_state = sess.run(mid_state) ``` Now, after just a few iterations, the results look much better. [![enter image description here](https://i.stack.imgur.com/4EU6N.png)](https://i.stack.imgur.com/4EU6N.png)
2018/04/17
[ "https://Stackoverflow.com/questions/49889153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4008884/" ]
In tensorflow the only thing that is kept after returning from a call to `sess.run` is variables. You should create a variable for the state, then use `tf.assign` to assign the result from your RNN cell to that variable. You can then use that variable in the same way as any other tensor. If you need to initialize the variable to something other than `0`, you can call `sess.run` once with a placeholder and `tf.assign` specifically to set up the variable. --- Added detail: If you need an intermediate state, let's say you ran for timesteps 0:124 and you want the state at step 10, you should split that up into 2 RNN cells, one that processes the first 10 timesteps and a second that continues processing the next 114 timesteps. This shouldn't affect training and back propagation as long as you use the same cell (LSTM or other cell) in both `static_rnn` functions. The cell is where your weights are defined, and that has to remain constant. Your gradient will flow backwards through the second cell and then finally through the first appropriately.
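The split-rollout idea is framework-agnostic and can be sanity-checked without TensorFlow. Below is a toy tanh RNN in numpy with made-up weights, not the poster's model: running all 20 steps at once gives the same final state as running 10 steps, saving the intermediate state, and continuing from it.

```python
import numpy as np

def rnn_rollout(x_seq, h, W_x, W_h):
    # Minimal tanh RNN: feeds each timestep through the same weights and
    # returns the final hidden state.
    for x in x_seq:
        h = np.tanh(x @ W_x + h @ W_h)
    return h

rng = np.random.RandomState(0)
W_x, W_h = rng.randn(3, 4), 0.5 * rng.randn(4, 4)
xs = rng.randn(20, 1, 3)
h0 = np.zeros((1, 4))

h_full = rnn_rollout(xs, h0, W_x, W_h)         # one pass over all 20 steps
h_mid = rnn_rollout(xs[:10], h0, W_x, W_h)     # intermediate state after step 10
h_two = rnn_rollout(xs[10:], h_mid, W_x, W_h)  # continue from the saved state
```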
So, I came here looking for an answer earlier, but I ended up creating one. Similar to the above posters' idea about making it assignable... When you build your graph, make a list to hold the state at each step, like: ``` my_states = [None] * int(sequence_length + 1) my_states[0] = cell.zero_state(batch_size, tf.float32) for step in steps: cell_out, my_states[step + 1] = cell(rnn_inputs[step], my_states[step]) ``` Then outside of your graph, after the sess.run(), you say ``` new_states = my_states[1:] model.my_states = new_states ``` This situation is for stepping 1 timestep at a time, but it could easily be adapted for steps of 10. Just slice the list of states after sess.run() and make those the initial states. Good luck!
6,552
38,552,688
I am trying to filter all the `#` keywords from the tweet text. I am using `str.extractall()` to extract all the keywords starting with `#`. This is the first time I am working on filtering keywords from the tweetText using pandas. Inputs, code, expected output and error are given below. Input: ``` userID,tweetText 01, home #sweet home 01, #happy #life 02, #world peace 03, #all are one 04, world tour ``` and so on... the total datafile is GBs of scraped tweets with several other columns, but I am interested in only these two columns. Code: ``` import re import pandas as pd data = pd.read_csv('Text.csv', index_col=0, header=None, names=['userID', 'tweetText']) fout = data['tweetText'].str.extractall('#') print fout ``` Expected Output: ``` userID,tweetText 01,#sweet 01,#happy 01,#life 02,#world 03,#all ``` Error: ``` Traceback (most recent call last): File "keyword_split.py", line 7, in <module> fout = data['tweetText'].str.extractall('#') File "/usr/local/lib/python2.7/dist-packages/pandas/core/strings.py", line 1621, in extractall return str_extractall(self._orig, pat, flags=flags) File "/usr/local/lib/python2.7/dist-packages/pandas/core/strings.py", line 694, in str_extractall raise ValueError("pattern contains no capture groups") ValueError: pattern contains no capture groups ``` Thanks in advance for the help. What should be the simplest way to filter keywords with respect to userID? Output update: when only this is used: `s.name = "tweetText" data_1 = data[~data['tweetText'].isnull()]`, the output still has empty `[]` entries, the userIDs are still listed, and rows that do have keywords hold an array of keywords rather than one keyword per row.
When only this is used, the output is what is needed, but with `NaN`: ``` s.name = "tweetText" data_2 = data_1.drop('tweetText', axis=1).join(s) ``` The output here is in the correct format, but userIDs with no keywords are still included and show `NaN`. If possible, such userIDs should be dropped and not shown in the output at all. In the next stages I am trying to calculate the frequency of keywords, where the `NaN` or empty `[]` entries would also be counted, and that frequency may compromise the classification further down the line. [![enter image description here](https://i.stack.imgur.com/VYHY4.png)](https://i.stack.imgur.com/VYHY4.png)
2016/07/24
[ "https://Stackoverflow.com/questions/38552688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5056548/" ]
If you are not too tied to using `extractall`, you can try the following to get your final output: ``` from io import StringIO import pandas as pd import re data_text = """userID,tweetText 01, home #sweet home 01, #happy #life 02, #world peace 03, #all are one """ data = pd.read_csv(StringIO(data_text),header=0) data['tweetText'] = data.tweetText.apply(lambda x: re.findall('#(?=\w+)\w+',x)) s = data.apply(lambda x: pd.Series(x['tweetText']),axis=1).stack().reset_index(level=1, drop=True) s.name = "tweetText" data = data.drop('tweetText', axis=1).join(s) userID tweetText 0 1 #sweet 1 1 #happy 1 1 #life 2 2 #world 3 3 #all 4 4 NaN ``` You can drop the rows where the tweetText column is `NaN` by doing the following: ``` data = data[~data['tweetText'].isnull()] ``` This should return: ``` userID tweetText 0 1 #sweet 1 1 #happy 1 1 #life 2 2 #world 3 3 #all ``` I hope this helps.
The `extractall` function requires a regex pattern **with capturing groups** as the first argument, for which you have provided `#`. A possible argument could be `(#\S+)`. The parentheses indicate a capture group, in other words what the `extractall` function needs to extract from each string. Example: ``` data="""01, home #sweet home 01, #happy #life 02, #world peace 03, #all are one """ import pandas as pd from io import StringIO df = pd.read_csv(StringIO(data), header=None, names=['col1', 'col2'], index_col=0) df['col2'].str.extractall('(#\S+)') ``` The error `ValueError: pattern contains no capture groups` doesn't appear anymore with the above code (meaning the issue in the question is solved), but this hits a bug in the current version of pandas (I'm using `'0.18.1'`). The error returned is: ``` AssertionError: 1 columns passed, passed data had 6 columns ``` The issue is described [here](https://github.com/pydata/pandas/issues/13382). If you try `df['col2'].str.extractall('#(\S)')` (which will give you the first letter of every hashtag), you'll see that the `extractall` function works as long as the captured group only contains a single character (which matches the issue description). As the issue is closed, it should be fixed in an upcoming pandas release.
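The capture-group behaviour can also be seen with plain `re`, independent of pandas. This is a small illustration (the sample text is made up) of why the pattern needs parentheses:

```python
import re

text = "home #sweet home #happy #life"

# With no capturing group, findall returns the full match for each hit:
full = re.findall(r'#\w+', text)
print(full)     # ['#sweet', '#happy', '#life']

# With a capturing group, only the group's content is returned.
# pandas' extractall goes one step further and *requires* at least one group.
grouped = re.findall(r'#(\w+)', text)
print(grouped)  # ['sweet', 'happy', 'life']
```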
6,553
22,714,864
I'm trying to craft a regex able to match anything up to a specific pattern. The regex then will continue looking for other patterns until the end of the string, but in some cases the pattern will not be present and the match will fail. Right now I'm stuck at: ``` .*?PATTERN ``` The problem is that, in cases where the pattern is not present, this takes too much time due to backtracking. In order to shorten this, I tried mimicking atomic grouping using positive lookahead as explained in this thread (btw, I'm using the re module in python-2.7): [Do Python regular expressions have an equivalent to Ruby's atomic grouping?](https://stackoverflow.com/questions/13577372/do-python-regular-expressions-have-an-equivalent-to-rubys-atomic-grouping) So I wrote: ``` (?=(?P<aux1>.*?))(?P=aux1)PATTERN ``` Of course, this is faster than the previous version when PATTERN is not present, but the trouble is that it doesn't match PATTERN anymore, as the . matches everything to the end of the string and the previous states are discarded after the lookahead. So the question is, is there a way to do a match like `.*?PATTERN` and also be able to fail faster when the match is not present?
2014/03/28
[ "https://Stackoverflow.com/questions/22714864", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3472731/" ]
You could try using `split`. If the result has length 1, you got no match. If you get two or more elements, you know that the first one is everything up to the first match. If you limit the split to one occurrence, you'll short-circuit the later matching: ``` "HI THERE THEO".split("TH", 1) # ['HI ', 'ERE THEO'] ``` The first element of the result is everything up to the match.
The Python documentation includes a brief outline of the differences between the `re.search()` and `re.match()` functions <http://docs.python.org/2/library/re.html#search-vs-match>. In particular, the following quote is relevant: > > Sometimes you’ll be tempted to keep using re.match(), and just add .\* to the front of your RE. Resist this temptation and use re.search() instead. The regular expression compiler does some analysis of REs in order to speed up the process of looking for a match. One such analysis figures out what the first character of a match must be; for example, a pattern starting with Crow must match starting with a 'C'. The analysis lets the engine quickly scan through the string looking for the starting character, only trying the full match if a 'C' is found. > > > Adding .\* defeats this optimization, requiring scanning to the end of the string and then backtracking to find a match for the rest of the RE. Use re.search() instead. > > > In your case, it would be preferable to define your pattern simply as: ``` pattern = re.compile("PATTERN") ``` And then call `pattern.search(...)`, which will not backtrack when the pattern is not found.
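A short sketch of the suggestion (the sample strings are made up): searching for the bare pattern still tells you everything the leading `.*?` would have consumed, via the match object's `start()`, and fails quickly when the pattern is absent:

```python
import re

pattern = re.compile("PATTERN")

text = "some long prefix PATTERN and more"
m = pattern.search(text)
if m is not None:
    prefix = text[:m.start()]  # what the leading .*? would have matched
    print(repr(prefix))        # 'some long prefix '

# When the pattern is absent, search fails without catastrophic backtracking,
# because the engine scans for the pattern directly instead of expanding .*?
assert pattern.search("no match here") is None
```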
6,558
62,461,709
Currently, I'm trying to execute Python code that extracts information from Snowflake. When I run my code on my PC it executes well, but if I try to run the code in a VM it shows me this error: [![enter image description here](https://i.stack.imgur.com/DTXuD.png)](https://i.stack.imgur.com/DTXuD.png) The VM is new, and I have just executed these commands: -pip install virtualenv (inside of the env) -pip install snowflake-connector-python[pandas] -pip install azure.eventhub (I need this package) Thanks for the help
2020/06/19
[ "https://Stackoverflow.com/questions/62461709", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6153466/" ]
The pandas Python library requires some extra native libraries (DLLs) to load certain submodules due to its use of C extensions. Very recent pandas versions, after 1.0.1, [are facing a build distribution issue](https://github.com/pandas-dev/pandas/issues/32857) currently, where their published packages are not carrying the necessary Microsoft Visual C++ redistributable DLL files to allow these modules to load. You can try to get around this issue in two ways: [Install the Microsoft Visual C++ Redistributable package](https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads) in your Windows VM directly, so that its DLLs are available for pandas to load dynamically. Or, switch to using a slightly older release of pandas (1.0.1) [which distributed the necessary DLLs properly](https://github.com/pandas-dev/pandas/pull/21321), until they resolve the issue with their binary packaging in the future: ``` C:\> pip install pandas==1.0.1 snowflake-connector-python ```
Please make sure to have the [Snowflake Python Connector prerequisites](https://docs.snowflake.com/en/user-guide/python-connector-install.html#prerequisites) installed. You can try the following commands: ``` // Install Python sudo yum install python36 // Install pip curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py sudo python get-pip.py // Install Snowflake Connector sudo yum install -y libffi-devel openssl-devel; pip install --upgrade snowflake-connector-python; ```
6,560
36,676,629
I use the following trick in some of my Python scripts to drop into an interactive Python REPL session: ``` import code; code.InteractiveConsole(locals=globals()).interact() ``` This usually works well on various RHEL machines at work, but on my laptop (OS X 10.11.4) it starts the REPL seemingly without readline support. You can see I get the `^[[A^C` garbage characters. ``` My-MBP:~ complex$ cat repl.py a = 'alpha' import code; code.InteractiveConsole(locals=globals()).interact() b = 'beta' ``` ``` My-MBP:~ complex$ python repl.py Python 2.7.10 (default, Oct 23 2015, 19:19:21) [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> a 'alpha' >>> ^[[A^C ``` If I call `python` directly, up arrow command history in the REPL works fine. I tried inspecting `globals()` to look for some clues, but in each case they appear to be the same. Any ideas on how I can fix this, or where to look? Edit: And to show the correct behavior: ``` My-MBP:~ complex$ python Python 2.7.10 (default, Oct 23 2015, 19:19:21) [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> 'a' 'a' >>> 'a' ```
2016/04/17
[ "https://Stackoverflow.com/questions/36676629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6215905/" ]
Just `import readline`, either in the script or at the console.
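For example, a guarded version of the one-liner from the question. The `try/except` is a defensive assumption for platforms where readline is unavailable (such as a vanilla Windows Python); the helper name is illustrative:

```python
import code

def debug_repl(local_vars):
    """Drop into an interactive console with line editing and history."""
    try:
        import readline  # noqa: F401 -- activates arrow keys/history in interact()
    except ImportError:
        pass  # readline is unavailable on some platforms; the REPL still works
    code.InteractiveConsole(locals=local_vars).interact()
```

Calling `debug_repl(globals())` at any point in a script then behaves like the bare interpreter, with working command history.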
The program `rlwrap` solves this problem in general, not just for Python but also for other programs in need of this feature such as `telnet`. You can install it with `brew install rlwrap` if you have Homebrew (which you should) and then use it by inserting it at the beginning of a command, i.e. `rlwrap python repl.py`.
6,561
21,421,987
Somewhat a python/programming newbie here. I am trying to access a specified range of tuples from a list of tuples, but I only want to access the first element from the range of tuples. The specified range is based on a pattern I am looking for in a string of text that has been tokenized and tagged by nltk. My code: ``` from nltk.tokenize import word_tokenize from nltk.tag import pos_tag text = "It is pretty good as far as driveway size is concerned, otherwise I would skip it" tokenized = word_tokenize(text) tagged = pos_tag(tokenized) def find_phrase(): counter = -1 for tag in tagged: counter += 1 if tag[0] == "as" and tagged[counter+6][0] == "concerned": print tagged[counter:counter+7] find_phrase() ``` Printed output: `[('as', 'IN'), ('far', 'RB'), ('as', 'IN'), ('driveway', 'NN'), ('size', 'NN'), ('is', 'VBZ'), ('concerned', 'VBN')]` What I actually want: `['as', 'far', 'as', 'driveway', 'size', 'is', 'concerned']` Is it possible to modify the my line of code `print tagged[counter:counter+7]` to get my desired printed output?
2014/01/29
[ "https://Stackoverflow.com/questions/21421987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2680443/" ]
Probably the simplest method uses a [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions). This statement creates a list from the first element of every tuple in your list: ``` print [tup[0] for tup in tagged[counter:counter+7]] ``` Or just for fun, if the tuples are always pairs, you could flatten the list (using any method you like) and then print every second element with the *step* notation of python's [slice](https://stackoverflow.com/questions/509211/pythons-slice-notation) notation: ``` print list(sum(tagged[counter:counter+7], ()))[::2] ``` Or use `map` with the [`itemgetter`](http://docs.python.org/2/library/operator.html?highlight=itemgetter#operator.itemgetter) function, which calls the `__getitem__()` method to retrieve the 0th index of every tuple in your list: ``` from operator import itemgetter print map(itemgetter(0), tagged[counter:counter+7]) ``` Anything else? I'm sure there are more.
Have you tried `zip`? In Python 2, `zip(*tagged[counter:counter+7])[0]` gives a tuple of the first elements. A list comprehension works too: `[item[0] for item in tagged[counter:counter+7]]`.
6,562
22,026,177
I'm getting a strange error from the Django tests, I get this error when I test Django or when I unit test my story app. It's complaining about multiple block tags with the name "content" but I've renamed all the tags so there should be zero block tags with the name content. The test never even hits my app code, and fails when I run django's test suite too. The application runs fine, but I'm trying to write unit tests, and this is really getting in the way. Here's the test from story/tests.py: ``` class StoryViewsTests(TestCase): def test_root_url_shows_home_page_content(self): response = self.client.get(reverse('index')) ... ``` Here's the view from story/views.py: ``` class FrontpageView(DetailView): template_name = "welcome_content.html" def get_object(self): return get_object_or_404(Article, slug="front-page") def get_context_data(self, **kwargs): context = super(FrontpageView, self).get_context_data(**kwargs) context['slug'] = "front-page" queryset = UserProfile.objects.filter(user_type="Reporter") context['reporter_list'] = queryset return context ``` Here's the url from urls.py: ``` urlpatterns = patterns('', url(r'^$', FrontpageView.as_view(), name='index'), ... 
``` Here's the template: ``` {% extends "welcome.html" %} {% block frontpagecontent %} <div> {{ object.text|safe}} <div class="span12"> <a href="/accounts/register/"> <div class="well"> <h3>Register for the Nebraska News Service today.</h3> </div> <!-- well --> </a> </div> </div> <div class="row-fluid"> <div class="span8"> <div class="well" align="center"> <img src="{{STATIC_URL}}{{ object.docfile }}" /> </div> <!-- well --> </div> <!-- span8 --> <div class="span4"> <div class="well"> <h3>NNS Staff:</h3> {% for r in reporter_list %} <p>{{ r.user.get_full_name }}</p> {% endfor %} </div> <!-- well --> </div> <!-- span4 --> </div> {% endblock %} ``` And here's the trace: ``` ERROR: test_root_url_shows_home_page_content (story.tests.StoryViewsTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/vagrant/webapps/nns/settings/../apps/story/tests.py", line 11, in test_root_url_shows_home_page_content response = self.client.get(reverse('about')) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 473, in get response = super(Client, self).get(path, data=data, **extra) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 280, in get return self.request(**r) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 444, in request six.reraise(*exc_info) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 152, in get_response response = callback(request, **param_dict) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/utils/decorators.py", line 99, in _wrapped_view response = view_func(request, *args, **kwargs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/views/defaults.py", line 23, in page_not_found template = loader.get_template(template_name) File 
"/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 138, in get_template template, origin = find_template(template_name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 127, in find_template source, display_name = loader(name, dirs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 43, in __call__ return self.load_template(template_name, template_dirs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 49, in load_template template = get_template_from_string(source, origin, template_name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 149, in get_template_from_string return Template(source, origin, name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 125, in __init__ self.nodelist = compile_string(template_string, origin) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 153, in compile_string return parser.parse() File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 215, in do_extends nodelist = parser.parse() File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 190, in do_block nodelist = parser.parse(('endblock',)) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 186, in 
do_block raise TemplateSyntaxError("'%s' tag with name '%s' appears more than once" % (bits[0], block_name)) TemplateSyntaxError: 'block' tag with name 'content' appears more than once ```
2014/02/25
[ "https://Stackoverflow.com/questions/22026177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1319434/" ]
You can use `while` and `each` like this: ``` while (my ($key1, $inner_hash) = each %foo) { while (my ($key2, $inner_inner_hash) = each %$inner_hash) { while (my ($key3, $value) = each %$inner_inner_hash) { print $value; } } } ``` This approach uses less memory than `foreach keys %hash`, which constructs a list of all the keys in the hash before you begin iterating. The drawback with `each` is that you cannot specify a sort order. See the [documentation](http://perldoc.perl.org/functions/each.html) for details.
You're looking for something like this: ``` for my $key1 ( keys %foo ) { my $subhash = $foo{$key1}; for my $key2 ( keys %$subhash ) { my $subsubhash = $subhash->{$key2}; for my $key3 ( keys %$subsubhash ) { print $subsubhash->{$key3}; } } } ```
6,565
74,259,497
``` n: 8 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 ``` How to print a number table like this in python with n that can be any number? I am using a very stupid way to print it but the result is not the one expected: ``` n = int(input('n: ')) if n == 4: print(' 0 1 2 3\n4 5 6 7\n8 9 10 11\n12 13 14 15') if n == 5: print(' 0 1 2 3 4\n5 6 7 8 9\n10 11 12 13 14\n15 16 17 18 19\n20 21 22 23 24') if n == 6: print(' 0 1 2 3 4 5\n6 7 8 9 10 11\n12 13 14 15 16 17\n18 19 20 21 22 23\n24 25 26 27 28 29\n30 31 32 33 34 35') if n == 7: print(' 0 1 2 3 4 5 6\n7 8 9 10 11 12 13\n14 15 16 17 18 19 20\n21 22 23 24 25 26 27\n28 29 30 31 32 33 34\n35 36 37 38 39 40 41\n42 43 44 45 46 47 48') if n == 8: print(' 0 1 2 3 4 5 6 7\n8 9 10 11 12 13 14 15\n16 17 18 19 20 21 22 23\n24 25 26 27 28 29 30 31\n32 33 34 35 36 37 38 39\n40 41 42 43 44 45 46 47\n48 49 50 51 52 53 54 55\n56 57 58 59 60 61 62 63') if n == 9: print(' 0 1 2 3 4 5 6 7 8\n9 10 11 12 13 14 15 16 17\n18 19 20 21 22 23 24 25 26\n27 28 29 30 31 32 33 34 35\n36 37 38 39 40 41 42 43 44\n45 46 47 48 49 50 51 52 53\n54 55 56 57 58 59 60 61 62\n63 64 65 66 67 68 69 70 71\n72 73 74 75 76 77 78 79 80') if n == 10: print(' 0 1 2 3 4 5 6 7 8 9\n10 11 12 13 14 15 16 17 18 19\n20 21 22 23 24 25 26 27 28 29\n30 31 32 33 34 35 36 37 38 39\n40 41 42 43 44 45 46 47 48 49\n50 51 52 53 54 55 56 57 58 59\n60 61 62 63 64 65 66 67 68 69\n70 71 72 73 74 75 76 77 78 79\n80 81 82 83 84 85 86 87 88 89\n90 91 92 93 94 95 96 97 98 99') ``` here is the result: ``` n: 8 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 ```
2022/10/31
[ "https://Stackoverflow.com/questions/74259497", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20376552/" ]
If you have more than one reference to a list, then `.clear()` clears the list and preserves the references, but the assignment creates a new list and does not affect the original list. ``` a = [1,2,3] b = a # make an additional reference b.clear() print(a, b) # [] [] a = [1,2,3] b = a # make an additional reference b = [] print(a, b) #[1, 2, 3] [] ``` Interestingly, you can clear the contents through an assignment to a full list slice: ``` a = [1,2,3] b = a # make an additional reference b[:] = [] print(a, b) #[] [] ```
When you do `array.clear()`, that tells that existing object to clear itself. When you do `array = []`, that creates a brand-new object and replaces the one it had before. The new `array` object is unrelated to the one you stored in `self.array`.
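The difference is visible by checking object identity with `is`, which both answers describe in words; a short sketch:

```python
a = [1, 2, 3]
b = a            # b is another name for the same list object
b.clear()        # mutates the shared object in place
assert b is a    # still the same object
assert a == []   # so a sees the change too

a = [1, 2, 3]
b = a
b = []               # rebinds b to a brand-new list; a is untouched
assert b is not a    # two distinct objects now
assert a == [1, 2, 3]
```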
6,570
33,617,551
I'm dealing with large raster stacks and I need to re-sample and clip them. I read list of Tiff files and create stack: ``` files <- list.files(path=".", pattern="tif", all.files=FALSE, full.names=TRUE) s <- stack(files) r <- raster("raster.tif") s_re <- resample(s, r,method='bilinear') e <- extent(-180, 180, -60, 90) s_crop <- crop(s_re, e) ``` This process takes days to complete! However, it's much faster using ArcPy and python. My question is: why the process is so slow in R and if there is a way to speed up the process? (I used snow package for parallel processing, but that didn't help either). These are `r` and `s` layers: ``` > r class : RasterLayer dimensions : 3000, 7200, 21600000 (nrow, ncol, ncell) resolution : 0.05, 0.05 (x, y) extent : -180, 180, -59.99999, 90.00001 (xmin, xmax, ymin, ymax) coord. ref. : +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0 > s class : RasterStack dimensions : 2160, 4320, 9331200, 365 (nrow, ncol, ncell, nlayers) resolution : 0.08333333, 0.08333333 (x, y) extent : -180, 180, -90, 90 (xmin, xmax, ymin, ymax) coord. ref. : +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0 ```
2015/11/09
[ "https://Stackoverflow.com/questions/33617551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2124725/" ]
I second @JoshO'Brien's suggestion to use GDAL directly, and `gdalUtils` makes this straightforward. Here's an example using double precision grids of the same dimensions as yours. For 10 files, it takes ~55 sec on my system. It scales linearly, so you'd be looking at about 33 minutes for 365 files. ``` library(gdalUtils) library(raster) # Create 10 rasters with random values, and write to temp files ff <- replicate(10, tempfile(fileext='.tif')) writeRaster(stack(replicate(10, { raster(matrix(runif(2160*4320), 2160), xmn=-180, xmx=180, ymn=-90, ymx=90) })), ff, bylayer=TRUE) # Define clipping extent and res e <- bbox(extent(-180, 180, -60, 90)) tr <- c(0.05, 0.05) # Use gdalwarp to resample and clip # Here, ff is my vector of tempfiles, but you'll use your `files` vector # The clipped files are written to the same file names as your `files` # vector, but with the suffix "_clipped". Change as desired. system.time(sapply(ff, function(f) { gdalwarp(f, sub('\\.tif', '_clipped.tif', f), tr=tr, r='bilinear', te=c(e), multi=TRUE) })) ## user system elapsed ## 0.34 0.64 54.45 ``` You can parallelise further with, e.g., `parLapply`: ``` library(parallel) cl <- makeCluster(4) clusterEvalQ(cl, library(gdalUtils)) clusterExport(cl, c('tr', 'e')) system.time(parLapply(cl, ff, function(f) { gdalwarp(f, sub('\\.tif', '_clipped.tif', f), tr=tr, r='bilinear', te=c(e), multi=TRUE) })) ## user system elapsed ## 0.05 0.00 31.92 stopCluster(cl) ``` At 32 sec for 10 grids (using 4 simultaneous processes), you're looking at about 20 minutes for 365 files. Actually, it should be faster than that, since two threads were probably doing nothing at the end (10 is not a multiple of 4).
For comparison, this is what I get: ``` library(raster) r <- raster(nrow=3000, ncol=7200, ymn=-60, ymx=90) s <- raster(nrow=2160, ncol=4320) values(s) <- 1:ncell(s) s <- writeRaster(s, 'test.tif') x <- system.time(resample(s, r, method='bilinear')) # user system elapsed # 15.26 2.56 17.83 ``` 10 files takes 150 seconds. So I would expect that 365 files would take 1.5 hr --- but I did not try that.
6,571
71,344,145
So I'm making a very simple battle mechanic in python where the player will be able to attack, defend or inspect the enemy : ``` print("------Welcome To The Game------") player_hp=5 player_attack=3 enemy_hp=10 enemy_attack=2 while player_hp !=0 and enemy_hp !=0: Choice=input("""What will you do: A.Attack B.Defend C.Inspect (A/B/C): """) if Choice=="A": enemy_hp-player_attack print("You dealt",str(player_attack), " damage" if Choice=="B": dice_roll=set(("1","2","3","4","5")) dice_list=list(dice_roll) value=dice_list[0] if value =="1": player_hp-1 elif value=="2": player_hp-2 elif value=="3": player_hp-3 elif value=="4": player_hp-4 elif value=="5": player_hp-5 if Choice=="C": print("""Enemy hp is,enemy_hp Enemy attack is ,enemy_attack""") else: if player_hp ==0: print("you lost") if enemy_hp==0: print("you won") ``` Problem that I'm having is that the **value** number doesn't reset after the loop finishes , if let's say the value first was 2 damage it will remain 2 everytime you press B until your hp finishes , so how can I make it that everytime you press the **defend** option the value is different?
2022/03/03
[ "https://Stackoverflow.com/questions/71344145", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18367808/" ]
You should not rely on sets to provide random arrangements of values. In this case you should use the `random.randint` function. Example: ``` import random if Choice == "B": player_hp -= random.randint(1, 5) ``` Also, as Shayan pointed out, you are not modifying `player_hp` by doing `player_hp - ...`; you should use `player_hp -= ...` instead.
> > how can I make it that everytime you press the defend option the value is different > > > You should look at using the `random` module Ignoring the game logic, here's a simple example ``` import random dice_values = list(range(1, 7)) # example for six-sided die alive = True while alive: value = random.choice(dice_values) print(f'you picked {value}') ```
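Combining both answers, a minimal sketch of the defend branch (the hp value and dice range come from the question; the rest of the structure is illustrative):

```python
import random

player_hp = 5
while player_hp > 0:
    damage = random.randint(1, 5)  # a fresh roll on every pass through the loop
    player_hp -= damage            # -= actually updates the variable
    print("you took", damage, "damage,", max(player_hp, 0), "hp left")
print("you lost")
```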
6,572
48,515,581
I have seen two ways of visualizing transposed convolutions from credible sources, and as far as I can see they conflict. My question boils down to, for each application of the kernel, do we go from many (e.g. `3x3`) elements with input padding to one, or do we go from one element to many (e.g. `3x3`)? *Related question:* Which version does [tf.nn.conv2d\_transpose](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose) implement? **The sources of my confusion are:** --- [A guide to convolution arithmetic for deep learning](https://arxiv.org/abs/1603.07285) has probably the most famous visualization out there, but it isn't peer reviewed (Arxiv). ![](https://github.com/vdumoulin/conv_arithmetic/raw/master/gif/padding_strides_transposed.gif) --- The second is from [Deconvolution and Checkerboard Artifacts](https://distill.pub/2016/deconv-checkerboard/), which technically isn't peer reviewed either (Distil), but it is from a much more reputable source. (The term deconvolution is used in the article, but it is stated that this is the same as transposed conv.) [![enter image description here](https://i.stack.imgur.com/KvNi1.png)](https://i.stack.imgur.com/KvNi1.png) --- Due to the nature of this question it is hard to look for results online, e.g. this [SO](https://datascience.stackexchange.com/questions/6107/what-are-deconvolutional-layers) posts takes the first position, but I am not sure to what extent I can trust it.
2018/01/30
[ "https://Stackoverflow.com/questions/48515581", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3747801/" ]
Strided convolutions, deconvolutions and transposed convolutions all mean the same thing. Both papers are correct and you don't need to be doubtful as both of them are [cited](https://scholar.google.com/) a lot. But the Distill image is from a different perspective as it is trying to show the artifacts problem. The first visualisation is transposed convolutions with stride 2 and padding 1. If it was stride 1, there wouldn't be any padding in between inputs. The padding on the borders depends on the dimension of the output. By deconvolution, we generally go from a smaller dimension to a higher dimension, and input data is generally padded to achieve the desired output dimensions. I believe the confusion arises from the padding patterns. Take a look at this formula ``` output = (input - 1) * stride + kernel_size - 2 * padding ``` It's a rearrangement of the general convolution output formula. Output here refers to the output of the deconvolution operation. To best understand deconvolution, I suggest thinking in terms of the equation, i.e., flipping what a convolution does. It's asking: how do I reverse what a convolution operation does? Hope that helps.
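To make the "one element scatters to many" view concrete, here is a minimal pure-Python 1-D transposed convolution sketch (no padding, so the output length reduces to `(input - 1) * stride + kernel_size`):

```python
def conv1d_transpose(x, kernel, stride):
    """Each input element scatters a scaled copy of the kernel into the output."""
    out = [0.0] * ((len(x) - 1) * stride + len(kernel))
    for i, v in enumerate(x):            # one input element...
        for j, w in enumerate(kernel):   # ...writes to kernel_size output slots
            out[i * stride + j] += v * w
    return out

# With stride 2 and kernel size 3, neighbouring kernels overlap only at
# every other position, which is exactly the uneven-overlap pattern behind
# checkerboard artifacts:
print(conv1d_transpose([1.0, 2.0], [1.0, 1.0, 1.0], 2))  # [1.0, 1.0, 3.0, 2.0, 2.0]
```

Reading the same loop the other way (gather: each output element sums over kernel-size inputs of a padded, dilated input) gives the first visualisation; the two views compute the same thing.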
Good explanation from Justin Johnson (part of the Stanford cs231n mooc): <https://youtu.be/ByjaPdWXKJ4?t=1221> (starts at 20:21) He reviews strided conv and then he explains transposed convolutions. ![](https://i.stack.imgur.com/h0xMp.png)
6,578
12,760,271
I have a fasta file as shown below. I would like to convert the [three letter codes](https://en.wikipedia.org/wiki/Amino_acid#Table_of_standard_amino_acid_abbreviations_and_properties) to one letter code. How can I do this with python or R? ``` >2ppo ARGHISLEULEULYS >3oot METHISARGARGMET ``` desired output ``` >2ppo RHLLK >3oot MHRRM ``` your suggestions would be appreciated!!
2012/10/06
[ "https://Stackoverflow.com/questions/12760271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1725152/" ]
Here is a way to do it in R: ``` # Variables: foo <- c("ARGHISLEULEULYS","METHISARGARGMET") # Code maps: code3 <- c("Ala", "Arg", "Asn", "Asp", "Cys", "Glu", "Gln", "Gly", "His", "Ile", "Leu", "Lys", "Met", "Phe", "Pro", "Ser", "Thr", "Trp", "Tyr", "Val") code1 <- c("A", "R", "N", "D", "C", "E", "Q", "G", "H", "I", "L", "K", "M", "F", "P", "S", "T", "W", "Y", "V") # For each code replace 3letter code by 1letter code: for (i in 1:length(code3)) { foo <- gsub(code3[i],code1[i],foo,ignore.case=TRUE) } ``` Results in : ``` > foo [1] "RHLLK" "MHRRM" ``` Note that I changed the variable name as variable names are not allowed to start with a number in R.
Python 3 solutions. In my work, the annoying part is that the amino acid codes can refer to modified ones which often appear in PDB/mmCIF files, like

> 
> 'Tih'-->'A'.
> 
> 

So the mapping can be more than 22 pairs. Third-party tools in Python like

> 
> Bio.SeqUtils.IUPACData.protein\_letters\_3to1
> 
> 

cannot handle it. My easiest solution is to use <http://www.ebi.ac.uk/pdbe-srv/pdbechem> to find the mapping and add the unusual pairs to the dict in my own functions whenever I encounter them.

```
def three_to_one(three_letter_code): 
    mapping = {'Aba':'A','Ace':'X','Acr':'X','Ala':'A','Aly':'K','Arg':'R','Asn':'N','Asp':'D','Cas':'C',
               'Ccs':'C','Cme':'C','Csd':'C','Cso':'C','Csx':'C','Cys':'C','Dal':'A','Dbb':'T','Dbu':'T',
               'Dha':'S','Gln':'Q','Glu':'E','Gly':'G','Glz':'G','His':'H','Hse':'S','Ile':'I','Leu':'L',
               'Llp':'K','Lys':'K','Men':'N','Met':'M','Mly':'K','Mse':'M','Nh2':'X','Nle':'L','Ocs':'C',
               'Pca':'E','Phe':'F','Pro':'P','Ptr':'Y','Sep':'S','Ser':'S','Thr':'T','Tih':'A','Tpo':'T',
               'Trp':'W','Tyr':'Y','Unk':'X','Val':'V','Ycm':'C','Sec':'U','Pyl':'O'}  # you can add more
    return mapping[three_letter_code[0].upper() + three_letter_code[1:].lower()]
```

The other solution is to retrieve the mapping online (but the URL and the HTML pattern may change over time):

```
import re
import urllib.request

def three_to_one_online(three_letter_code):
    url = "http://www.ebi.ac.uk/pdbe-srv/pdbechem/chemicalCompound/show/" + three_letter_code
    with urllib.request.urlopen(url) as response:
        single_letter_code = re.search(r'\s*<td\s*>\s*<h3>One-letter code.*</h3>\s*</td>\s*<td>\s*([A-Z])\s*</td>', response.read().decode('utf-8')).group(1)
    return single_letter_code
```

Here I directly use `re` instead of an HTML parser for simplicity. Hope these can help.
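To map a whole concatenated sequence like the ones in the question, the lookup can be applied in chunks of three; a small self-contained sketch (standard 20-residue table only, assuming the sequence length is a multiple of three):

```python
CODE3_TO_1 = {'Ala': 'A', 'Arg': 'R', 'Asn': 'N', 'Asp': 'D', 'Cys': 'C',
              'Glu': 'E', 'Gln': 'Q', 'Gly': 'G', 'His': 'H', 'Ile': 'I',
              'Leu': 'L', 'Lys': 'K', 'Met': 'M', 'Phe': 'F', 'Pro': 'P',
              'Ser': 'S', 'Thr': 'T', 'Trp': 'W', 'Tyr': 'Y', 'Val': 'V'}

def seq3_to_1(seq):
    # Split the string into consecutive 3-letter codes and map each one.
    codes = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    return ''.join(CODE3_TO_1[c.capitalize()] for c in codes)

print(seq3_to_1('ARGHISLEULEULYS'))  # RHLLK
print(seq3_to_1('METHISARGARGMET'))  # MHRRM
```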
6,580
27,451,561
I'm new to stack so this might be a very silly mistake. I'm trying to setup a one node swift configuration for a simple proof of concept. I did follow the [instructions](http://docs.openstack.org/juno/install-guide/install/apt/content/ch_swift.html). However, something is missing. I keep getting this error: ``` root@lab-srv2544:/etc/swift# swift stat Traceback (most recent call last): File "/usr/bin/swift", line 10, in <module> sys.exit(main()) File "/usr/lib/python2.7/dist-packages/swiftclient/shell.py", line 1287, in main globals()['st_%s' % args[0]](parser, argv[1:], output) File "/usr/lib/python2.7/dist-packages/swiftclient/shell.py", line 492, in st_stat stat_result = swift.stat() File "/usr/lib/python2.7/dist-packages/swiftclient/service.py", line 427, in stat raise SwiftError('Account not found', exc=err) swiftclient.service.SwiftError: 'Account not found' ``` Also, the syslog always complains about proxy-server: ``` Dec 12 12:16:37 lab-srv2544 proxy-server: Account HEAD returning 503 for [] (txn: tx9536949d19d14f1ab5d8d-00548b4d25) (client_ip: 127.0.0.1) Dec 12 12:16:37 lab-srv2544 proxy-server: 127.0.0.1 127.0.0.1 12/Dec/2014/20/16/37 HEAD /v1/AUTH_71e79a29599149099aa98d5d276eaa0b HTTP/1.0 503 - python-swiftclient-2.3.0 8d2b0748804f4b34... - - - tx9536949d19d14f1ab5d8d-00548b4d25 - 0.0013 - - 1418415397.334497929 1418415397.335824013 ``` Anyone seen this problem before?
2014/12/12
[ "https://Stackoverflow.com/questions/27451561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/224982/" ]
`JSON.parse(data)` will turn the data you showing into a JavaScript object, and there are a TON of ways to use the data from there. Example: ``` var parsedData = JSON.parse(data), obj = {}; for(var key in parsedData['model']){ obj[key] = parsedData['model'][key]['id']; } ``` Which would give you a resulting object of this: ``` {category:1, food:1} ``` This is based on the limited example of JSON you provided, the way you access it is entirely dependent on its structure. Hopefully this helps get you started, though.
You want to use JSON.parse(), but it returns the parsed object, so use it thusly: ``` var parsed = JSON.parse(data); ``` then work with parsed.
6,590
10,775,007
I want to create a python script which could be used to execute Android adb commands. I had a look at <https://github.com/rbrady/python-adb> but can't seem to make it work perfectly. Any suggestions?
2012/05/27
[ "https://Stackoverflow.com/questions/10775007", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391277/" ]
This tool should do the work. <https://pypi.python.org/pypi/pyadb/0.1.1>

I had to modify a few functions to have it operate on Python 2.7 and use subprocess instead. Here is the modified code in my version:

```
def __build_command__(self, cmd):
    if self.__devices is not None and len(self.__devices) > 1 and self.__target is None:
        self.__error = "Must set target device first"
        return None

    if type(cmd) is tuple:
        a = list(cmd)
    elif type(cmd) is list:
        a = cmd
    else:
        a = [cmd]

    a.insert(0, self.__adb_path)
    if self.__target is not None:
        # insert '-s' and the serial as two separate arguments,
        # not as a nested list
        a[1:1] = ['-s', self.__target]

    return a

def run_cmd(self, cmd):
    """
    Run a command against adb tool ($ adb <cmd>)
    """
    self.__clean__()

    if self.__adb_path is None:
        self.__error = "ADB path not set"
        return

    try:
        args = self.__build_command__(cmd)
        if args is None:
            return
        # print 'args>', args
        # args is already a list of arguments, so no shell=True
        cmdp = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        self.__output, self.__error = cmdp.communicate()
        retcode = cmdp.wait()
        # print 'stdout>', self.__output
        # print 'stderr>', self.__error
        if retcode < 0:
            print >>sys.stderr, "Child was terminated by signal", -retcode
        else:
            return
    except OSError, e:
        self.__error = str(e)
        return
```
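For a quick one-off call without the wrapper class, a plain `subprocess` sketch (my own illustration, not part of pyadb) does the same job; note that `-s <serial>` must go into the argument list as two separate elements:

```python
import subprocess

def build_adb_command(args, adb_path='adb', target=None):
    # Mirrors what pyadb's __build_command__ does: adb [-s serial] <args...>
    cmd = [adb_path]
    if target is not None:
        cmd += ['-s', target]  # two separate elements, not a nested list
    cmd += list(args)
    return cmd

def run_adb(args, **kwargs):
    """Run an adb command; returns (stdout, stderr) as bytes."""
    proc = subprocess.Popen(build_adb_command(args, **kwargs),
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return proc.communicate()

print(build_adb_command(['devices'], target='emulator-5554'))
# ['adb', '-s', 'emulator-5554', 'devices']

# Actually running it requires adb on PATH and a connected device, e.g.:
# out, err = run_adb(['shell', 'getprop', 'ro.product.model'])
```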
Use the `monkeyrunner` tool for that. From its [documentation](https://developer.android.com/studio/test/monkeyrunner/index.html):

> 
> Besides using the monkeyrunner API itself, you can use the standard Python os and subprocess modules to call Android tools such as Android Debug Bridge.
> 
> 
6,591
13,561,220
Think about this code: ``` #!/usr/bin/env python from threading import Thread count = 0 def test(): global count for i in range(10): count = count + 1 if __name__ == '__main__': for i in range(1000): Thread(target = test).start() print count ``` I use multiple threads, but the result is always correct. Does that mean I could use python threads without a lock when implementing something like a visitor counter ?
2012/11/26
[ "https://Stackoverflow.com/questions/13561220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/802585/" ]
You do need one. Although multithreading works differently in Python due to the [Global Interpreter Lock](http://wiki.python.org/moin/GlobalInterpreterLock), operations that are not atomic at the Python-bytecode level will still need locking.

In your case, you can inspect the bytecode for your function `test` (`dis.dis(test)`):

```
 3           0 SETUP_LOOP              30 (to 33)
             3 LOAD_GLOBAL              0 (range)
             6 LOAD_CONST               1 (10)
             9 CALL_FUNCTION            1
            12 GET_ITER
       >>   13 FOR_ITER                16 (to 32)
            16 STORE_FAST               0 (i)

 4          19 LOAD_GLOBAL              1 (count)   # start of increment
            22 LOAD_CONST               2 (1)
            25 BINARY_ADD
            26 STORE_GLOBAL             1 (count)   # end of increment
            29 JUMP_ABSOLUTE           13
       >>   32 POP_BLOCK
       >>   33 LOAD_CONST               0 (None)
            36 RETURN_VALUE
```

As you can see, the increment is a 2xload, update, store at the bytecode level, so this won't work. The increment is actually 4 separate operations, which you must protect to ensure they are not interrupted.

In your example the problem remains even if you use `count += 1`, as the bytecode shows:

```
 4          19 LOAD_GLOBAL              1 (count)
            22 LOAD_CONST               2 (1)
            25 INPLACE_ADD
            26 STORE_GLOBAL             1 (count)
```
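As an illustration (my own sketch, not from the original answer), guarding the increment with a `threading.Lock` serializes those four bytecode operations and makes the result deterministic:

```python
import threading

count = 0
lock = threading.Lock()

def test():
    global count
    for _ in range(10000):
        with lock:  # load + add + store can no longer interleave with other threads
            count += 1

threads = [threading.Thread(target=test) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(count)  # always 80000 with the lock; without it, the result can fall short
```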
You should definitely use a Lock. You got a correct answer only because this is such a simple case. Try it with range(100000) in `__main__`; you will see the problem. On my machine the result was 999960, and it varies from run to run. Errors will arise depending on system load and so on.
6,594
72,852,359
I would like to know if someone can answer why I can't seem to get a Python GStreamer pipeline to work without sudo in Linux. I have a very small GStreamer pipeline and it fails to open if I don't run with sudo in front of python. I have nearly depleted my options; any help would be appreciated. (Using a Jetson Orin and Ubuntu 20.05.)

```
import sys
import cv2

def read_cam():
    G_STREAM_TO_SCREEN = "videotestsrc num-buffers=50 ! videoconvert ! appsink"
    cap = cv2.VideoCapture(G_STREAM_TO_SCREEN, cv2.CAP_GSTREAMER)
    if cap.isOpened():
        cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE)
        while True:
            ret_val, img = cap.read()
            cv2.imshow('demo',img)
            cv2.waitKey(1)
    else:
        print ("camera open failed")

    cv2.destroyAllWindows()

if __name__ == '__main__':
    read_cam()
```
2022/07/04
[ "https://Stackoverflow.com/questions/72852359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8610564/" ]
accessToken is correct, just don't forget to use:

```
use Laravel\Passport\HasApiTokens;
```

instead of:

```
use Laravel\Sanctum\HasApiTokens;
```

This is correct: `$token = $user->createToken('Laravel Password Grant Client')->accessToken;`
You have to log the user in first; the token is created after the login.

`$data['email']` = the email from the request
`$data['password']` = the password from the request

```
Auth::attempt($data);
$loginUser = Auth::user();
$token = $loginUser->createToken('Laravel Password Grant Client')->accessToken;
$loginUser->accessToken = $token;
```
6,597
64,472,414
I'm using `isbnlib.meta`, which pulls metadata (book title, author, year, publisher, etc.) when you enter an ISBN. I have a dataframe with 482,000 ISBNs (column title: isbn13). When I run the function, I'll get an error like `NotValidISBNError`, which stops the code in its tracks. What I want to happen is: if there is an error, the code will simply skip that row and move on to the next one. Here is my code now:

```py
list_df[0]['publisher_isbnlib'] = list_df[0]['isbn13'].apply(lambda x: isbnlib.meta(x).get('Publisher', None))
list_df[0]['yearpublished_isbnlib'] = list_df[0]['isbn13'].apply(lambda x: isbnlib.meta(x).get('Year', None))
#list_df[0]['language_isbnlib'] = list_df[0]['isbn13'].apply(lambda x: isbnlib.meta(x).get('Language', None))

list_df[0]
```

`list_df[0]` is the first 20,000 rows, since I'm trying to chunk through the dataframe. I've just manually entered this code 24 times to handle each chunk. I attempted `try:` and `except:`, but all that ends up happening is the code stops and I don't get any metadata reported.
### Traceback: ```py --------------------------------------------------------------------------- NotValidISBNError Traceback (most recent call last) <ipython-input-39-a06c45d36355> in <module> ----> 1 df['meta'] = df.isbn.apply(isbnlib.meta) e:\Anaconda3\lib\site-packages\pandas\core\series.py in apply(self, func, convert_dtype, args, **kwds) 4198 else: 4199 values = self.astype(object)._values -> 4200 mapped = lib.map_infer(values, f, convert=convert_dtype) 4201 4202 if len(mapped) and isinstance(mapped[0], Series): pandas\_libs\lib.pyx in pandas._libs.lib.map_infer() e:\Anaconda3\lib\site-packages\isbnlib\_ext.py in meta(isbn, service) 23 def meta(isbn, service='default'): 24 """Get metadata from Google Books ('goob'), Open Library ('openl'), ...""" ---> 25 return query(isbn, service) if isbn else {} 26 27 e:\Anaconda3\lib\site-packages\isbnlib\dev\_decorators.py in memoized_func(*args, **kwargs) 22 return cch[key] 23 else: ---> 24 value = func(*args, **kwargs) 25 if value: 26 cch[key] = value e:\Anaconda3\lib\site-packages\isbnlib\_metadata.py in query(isbn, service) 18 if not ean: 19 LOGGER.critical('%s is not a valid ISBN', isbn) ---> 20 raise NotValidISBNError(isbn) 21 isbn = ean 22 # only import when needed NotValidISBNError: (abc) is not a valid ISBN ```
2020/10/21
[ "https://Stackoverflow.com/questions/64472414", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12020223/" ]
* The current implementation for extracting isbn meta data is incredibly slow and inefficient.
  + As stated, there are 482,000 unique isbn values, for which the data is being downloaded multiple times (e.g. once for each column, as the code is currently written).
* It will be better to download all the meta data at once, and then extract the data from the `dict` as a separate operation.
* A `try-except` block is used to capture the error from invalid isbn values.
  + An empty `dict`, `{}`, is returned, because `pd.json_normalize` won't work with `NaN` or `None`.
  + It will be unnecessary to chunk the isbn column.
* `pd.json_normalize` is used to expand the `dict` returned from `.meta`.
* Use `pandas.DataFrame.rename` to rename columns, and `pandas.DataFrame.drop` to delete columns.
* This implementation will be significantly faster than the current implementation, and will make far fewer requests to the API being used to get the meta data.
* To extract values from `lists`, such as the `'Authors'` column, use `df = df.explode('Authors')`; if there is more than one author, a new row will be created for each additional author in the list.
```py import pandas as pd # version 1.1.3 import isbnlib # version 3.10.3 # sample dataframe df = pd.DataFrame({'isbn': ['9780446310789', 'abc', '9781491962299', '9781449355722']}) # function with try-except, for invalid isbn values def get_meta(col: pd.Series) -> dict: try: return isbnlib.meta(col) except isbnlib.NotValidISBNError: return {} # get the meta data for each isbn or an empty dict df['meta'] = df.isbn.apply(get_meta) # df isbn meta 0 9780446310789 {'ISBN-13': '9780446310789', 'Title': 'To Kill A Mockingbird', 'Authors': ['Harper Lee'], 'Publisher': 'Grand Central Publishing', 'Year': '1988', 'Language': 'en'} 1 abc {} 2 9781491962299 {'ISBN-13': '9781491962299', 'Title': 'Hands-On Machine Learning With Scikit-Learn And TensorFlow - Techniques And Tools To Build Learning Machines', 'Authors': ['Aurélien Géron'], 'Publisher': "O'Reilly Media", 'Year': '2017', 'Language': 'en'} 3 9781449355722 {'ISBN-13': '9781449355722', 'Title': 'Learning Python', 'Authors': ['Mark Lutz'], 'Publisher': '', 'Year': '2013', 'Language': 'en'} # extract all the dicts in the meta column df = df.join(pd.json_normalize(df.meta)).drop(columns=['meta']) # extract values from the lists in the Authors column df = df.explode('Authors') # df isbn ISBN-13 Title Authors Publisher Year Language 0 9780446310789 9780446310789 To Kill A Mockingbird Harper Lee Grand Central Publishing 1988 en 1 abc NaN NaN NaN NaN NaN NaN 2 9781491962299 9781491962299 Hands-On Machine Learning With Scikit-Learn And TensorFlow - Techniques And Tools To Build Learning Machines Aurélien Géron OReilly Media 2017 en 3 9781449355722 9781449355722 Learning Python Mark Lutz 2013 en ```
Hard to answer without seeing the code, but [try/except](https://docs.python.org/3/tutorial/errors.html#handling-exceptions) should really be able to handle this. I am not an expert here, but look at this code: ``` l = [0, 1, "a", 2, 3] for item in l: try: print(item + 1) except TypeError as e: print(item, "is not integer") ``` If you try to do addition with a string, python hates that and backs out with a `TypeError`. So you capture the `TypeError` using except and maybe report something about it. When I run this code: ``` 1 2 a is not integer # exception handled! 3 4 ``` You should be able to handle your exception with `except NotValidISBNError`, and then reporting whatever metadata you like. You can get much more sophisticated with exception handling but that is the basic idea.
6,599
53,480,409
I have installed the bloomberg Python API and set the `BLPAPI_ROOT` to the VC++ folder. However, when I import `blpapi`, I got the following error. How to get rid of these errors? Thank you very much. ``` import blpapi Traceback (most recent call last): File "C:\Users\user\AppData\Roaming\Python\Python36\site-packages\blpapi\internals.py", line 39, in swig_import_helper return importlib.import_module(mname) File "C:\Program Files\WinPython-64bit-3.6.2.0Qt5\python-3.6.2.amd64\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 978, in _gcd_import File "<frozen importlib._bootstrap>", line 961, in _find_and_load File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 648, in _load_unlocked File "<frozen importlib._bootstrap>", line 560, in module_from_spec File "<frozen importlib._bootstrap_external>", line 922, in create_module File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed ImportError: DLL load failed: The specified procedure could not be found. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\user\AppData\Roaming\Python\Python36\site-packages\blpapi\__init__.py", line 4, in <module> from .internals import CorrelationId File "C:\Users\user\AppData\Roaming\Python\Python36\site-packages\blpapi\internals.py", line 42, in <module> _internals = swig_import_helper() File "C:\Users\user\AppData\Roaming\Python\Python36\site-packages\blpapi\internals.py", line 41, in swig_import_helper return importlib.import_module('_internals') File "C:\Program Files\WinPython-64bit-3.6.2.0Qt5\python-3.6.2.amd64\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ModuleNotFoundError: No module named '_internals' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\user\AppData\Roaming\Python\Python36\site-packages\blpapi\__init__.py", line 9, in <module> raise debug_load_error(error) ImportError: No module named '_versionhelper' Could not open the C++ SDK library. Download and install the latest C++ SDK from: http://www.bloomberg.com/professional/api-library If the C++ SDK is already installed, please ensure that the path to the library was added to PATH before entering the interpreter. ```
2018/11/26
[ "https://Stackoverflow.com/questions/53480409", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9087866/" ]
I did 2 things to solve an issue similar to that:

1- I installed Microsoft Visual Studio with the following components

* C++/CLI Support
* VC++ 2015.3 v14.00 (v140) toolset for desktop
* Visual C++ MFC for x86 and x64
* Visual C++ ATL for x86 and x64

2- I manually copied the .dll files in C++API\lib (blpapi3\_32.dll and blpapi3\_64.dll in my case) into C:\windows\system32, where the dll files the system uses live. Also, I copied the dll files in C++API\lib into C:\blp\DAPI, replacing the old ones with the new ones.
Please set the `BLPAPI_ROOT` environment variable to the location where the blpapi C++ SDK is located.
6,600
1,440,233
I want to intercept and transform some automated emails into a more readable format. I believe this is possible using VBA, but I would prefer to manipulate the text with Python. Can I create an ironpython client-side script to pre-process certain emails? EDIT: I believe this can be done with outlook rules. In Outlook 2007, you can do: Tools->Rules -> New Rule "check messages when they arrive" next [filter which emails to process] next "run a script" In the "run a script" it allows you to use a VBA script.
2009/09/17
[ "https://Stackoverflow.com/questions/1440233", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20712/" ]
I can only offer you a pointer. Years ago, I used a Bayesian spam filter with Outlook. It was written in Python, and provided an Outlook plug-in that would filter incoming mail. The name of the software was [*SpamBayes*](http://spambayes.sourceforge.net/), and the project is still online. As it is open source, you will probably find all necessary information how to plug a mail filter into Outlook. This should give you enough background to add code that will actually be able to transform mail content. My understanding was it was written in vanilla Python (CPython), but if you are more comfortable with IronPython, it shouldn't be hard to translate. Give it a go.
There is an answer here: [how to trigger a python script in outlook using rules?](https://stackoverflow.com/questions/13265861/how-to-trigger-a-python-script-in-outlook-using-rules) that may help you. You can make a simple VBA script to trigger the python script.
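If the VBA script triggered by the rule hands the message over to Python by saving it to disk (the hand-off mechanism here is an assumption, not something from the linked answer), the Python side can parse and reformat it with the standard-library `email` module; a minimal sketch:

```python
import email
from email import policy

def summarize_eml(raw_bytes):
    """Parse a raw RFC 822 message and return a one-line readable summary."""
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    body = msg.get_body(preferencelist=('plain',)).get_content().strip()
    return '{} | {} | {}'.format(msg['From'], msg['Subject'], body)

# Hypothetical sample message; in practice, read the file the VBA rule saved.
raw = (b"From: alerts@example.com\r\n"
       b"Subject: Disk usage\r\n"
       b"Content-Type: text/plain\r\n"
       b"\r\n"
       b"Disk at 91%\r\n")

print(summarize_eml(raw))  # alerts@example.com | Disk usage | Disk at 91%
```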
6,602
63,896,043
I am creating my first django app in django 3.1.1. There are video tutorials for old django versions and they don't always work... I want to create HTML pages for both home and about sections. I have already written some HTML files, but the ``` def home(request): return render(request, 'home.html') ``` doesn't want to work. I add my file tree for you to see the structure of files. ``` RemoveBigFile ├── RBF1module │   ├── __init__.py │   ├── admin.py │   ├── apps.py │   ├── migrations │   │   └── __init__.py │   ├── models.py │   ├── tests.py │   └── views.py ├── RemoveBigFile │   ├── __init__.py │   ├── __pycache__ │   │   ├── __init__.cpython-38.pyc │   │   ├── settings.cpython-38.pyc │   │   ├── urls.cpython-38.pyc │   │   ├── views.cpython-38.pyc │   │   └── wsgi.cpython-38.pyc │   ├── asgi.py │   ├── settings.py │   ├── urls.py │   ├── views.py │   └── wsgi.py ├── RemoveBigFile.sublime-project ├── RemoveBigFile.sublime-workspace ├── db.sqlite3 ├── manage.py └── templates ├── about.html └── home.html ``` And that is the error message I get: ``` TemplateDoesNotExist at / home.html Request Method: GET Request URL: http://127.0.0.1:8000/ Django Version: 3.1.1 Exception Type: TemplateDoesNotExist Exception Value: home.html ``` Django also asks me to put my templates in one of main django installation directories called templates and as far as I know, if I do so, I won't be able to send my app to other people (and that's what I intend to do with it after I am finished). I use my RemoveBigFile/RemoveBigFile views.py to point django to HTML templates. 
EDIT: as requested, I add my templates definition from settings.py ``` TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] ``` As I see, there is nothing in DIRS. Should I put the path to my templates in DIRS parentheses? I also have one more question - is it better to have templates in the folder where manage.py is or where settings.py is?
2020/09/15
[ "https://Stackoverflow.com/questions/63896043", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14224948/" ]
Alright, found it myself with your inspiration :) Thank you, @Selcuk and @m.arthur. Thanks for contributing @Mahmoud Ishag too :)

The answer lies in two things: I must not have created this app as a project, and there was a short string missing in TEMPLATES:

```
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [BASE_DIR / 'templates'],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
```

namely this one:

```
'DIRS': [BASE_DIR / 'templates']
```

Please compare my code from the main post to this one.
Django looks for templates in a folder called `templates`. You need to create that folder inside your app folder and put `home.html` inside it.
6,605
43,413,914
I recently found out that Scrapy is a great library for scraping, so I tried to install Scrapy on my machine, but when I tried to do `pip install scrapy` it installed for a while and then threw me this error:

```
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools
```

and

```
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools

    ----------------------------------------
Command "d:\pycharmprojects\environments\scrapyenv\scripts\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\User\\AppData\\Local\\Temp\\pip-build-arbeqlly\\Twisted\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\User\AppData\Local\Temp\pip-jdj93131-record\install-record.txt --single-version-externally-managed --compile --install-headers d:\pycharmprojects\environments\scrapyenv\include\site\python3.5\Twisted" failed with error code 1 in C:\Users\User\AppData\Local\Temp\pip-build-arbeqlly\Twisted\
```

So after that I went to [this](https://www.visualstudio.com/downloads/#build-tools-for-visual-studio-2017) website and installed the tools for Python.
but I got this at the end of the installation:

```
warning: no previously-included files matching '*.misc' found under directory 'src\twisted'
warning: no previously-included files matching '*.bugfix' found under directory 'src\twisted'
warning: no previously-included files matching '*.doc' found under directory 'src\twisted'
warning: no previously-included files matching '*.feature' found under directory 'src\twisted'
warning: no previously-included files matching '*.removal' found under directory 'src\twisted'
warning: no previously-included files matching 'NEWS' found under directory 'src\twisted'
warning: no previously-included files matching 'README' found under directory 'src\twisted'
warning: no previously-included files matching 'topfiles' found under directory 'src\twisted'
warning: no previously-included files found matching 'src\twisted\topfiles\CREDITS'
warning: no previously-included files found matching 'src\twisted\topfiles\ChangeLog.Old'
warning: no previously-included files found matching 'codecov.yml'
warning: no previously-included files found matching 'appveyor.yml'
no previously-included directories found matching 'bin'
no previously-included directories found matching 'admin'
no previously-included directories found matching '.travis'
warning: no previously-included files found matching 'docs\historic\2003'
warning: no previously-included files matching '*' found under directory 'docs\historic\2003'
error: Setup script exited with error: [WinError 3] The system cannot find the path specified: 'C:\\Program Files\\Microsoft Visual Studio\\2017\\BuildTools\\VC\\Tools\\MSVC\\14.10.25017\\PlatformSDK\\lib'
```

Any help?
2017/04/14
[ "https://Stackoverflow.com/questions/43413914", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7618421/" ]
Scrapy docs now recommend using [Conda](https://conda.io/miniconda) for Windows installation: <https://doc.scrapy.org/en/latest/intro/install.html#windows>
This error comes up when you don't have the Windows SDK installed. When you use Build Tools or Visual Studio to compile, you often need both the C++ compiler and the Windows SDK.

In the Build Tools, there is a "Custom" installation option which lets you select both C++ and the Windows SDK. In Visual Studio, you have to modify your installation to install the Windows SDK. I *believe* you need the SDK corresponding to the platform you are using (Windows 10 SDK for Windows 10, etc).
6,606
43,989,929
I'm trying to use `pytesseract` for the first time. I'm also not so comfortable with Python. I've created a new folder called `python_test` on my desktop. I'm on Mac. In this folder I have a `test.png` file and a py script:

```
from pytesseract import image_to_string
from PIL import Image

print image_to_string(Image.open('test.png'))
print image_to_string(Image.open('test-english.jpg'), lang='eng')
```

So from my terminal, I go into the `python_test` folder, run `python read.py`, and get the following error:

```
Traceback (most recent call last):
  File "read.py", line 4, in <module>
    print image_to_string(Image.open('test.png'))
  File "/anaconda/anaconda/lib/python2.7/site-packages/pytesseract/pytesseract.py", line 161, in image_to_string
    config=config)
  File "/anaconda/anaconda/lib/python2.7/site-packages/pytesseract/pytesseract.py", line 94, in run_tesseract
    stderr=subprocess.PIPE)
  File "/anaconda/anaconda/lib/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/anaconda/anaconda/lib/python2.7/subprocess.py", line 1343, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
```

What am I doing wrong?
2017/05/15
[ "https://Stackoverflow.com/questions/43989929", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6273496/" ]
I got the same error as you, installing the `tesseract` package fixed it (or `tesseract-ocr` on debian/ubuntu). It contains the native code library used under the hood by `pytesseract`. An image load error seems like an odd way for the library to fail if the underlying native library is not installed, but there you go. To install use commands (insert sudo as appropriate) macos ``` brew install tesseract ``` ubuntu ``` apt install tesseract-ocr ```
I also had the error when I used `image_to_string` for the first time. You have to change the following line in the `pytesseract.py` file:

```
tesseract_cmd = 'C:\\Tesseract-OCR\\tesseract'
```

*Note: I'm using Windows.*
6,607
4,942,305
The Windows console has been Unicode aware for at least a decade and perhaps as far back as Windows NT. However for some reason the major cross-platform scripting languages including Perl and Python only ever output various 8-bit encodings, requiring much trouble to work around. Perl gives a "wide character in print" warning, Python gives a charmap error and quits. Why on earth after all these years do they not just simply call the Win32 -W APIs that output UTF-16 Unicode instead of forcing everything through the ANSI/codepage bottleneck? Is it just that cross-platform performance is low priority? Is it that the languages use UTF-8 internally and find it too much bother to output UTF-16? Or are the -W APIs inherently broken to such a degree that they can't be used as-is? **UPDATE** It seems that the blame may need to be shared by all parties. I imagined that the scripting languages could just call `wprintf` on Windows and let the OS/runtime worry about things such as redirection. But it turns out that [even wprintf on Windows converts wide characters to ANSI and back before printing to the console](http://blog.kalmbachnet.de/?postid=23)! Please let me know if this has been fixed since the bug report link seems broken but my Visual C test code still fails for wprintf and succeeds for WriteConsoleW. **UPDATE 2** Actually you can print UTF-16 to the console from C using `wprintf` but only if you first do `_setmode(_fileno(stdout), _O_U16TEXT)`. From C you can print UTF-8 to a console whose codepage is set to codepage 65001, however Perl, Python, PHP and Ruby all have bugs which prevent this. Perl and PHP corrupt the output by adding additional blank lines following lines which contain at least one wide character. Ruby has slightly different corrupt output. Python crashes. **UPDATE 3** Node.js is the first scripting language that shipped without this problem straight out of the box. 
The Python dev team slowly came to realize this was a real problem since [it was first reported back at the end of 2007](http://bugs.python.org/issue1602) and has seen a huge flurry of activity to fully understand and fully fix the bug in 2016.
2011/02/09
[ "https://Stackoverflow.com/questions/4942305", "https://Stackoverflow.com", "https://Stackoverflow.com/users/527702/" ]
Michael Kaplan has a series of blog posts about the `cmd` console and Unicode that may be informative (while not really answering your question): * [Conventional wisdom is retarded, aka What the @#%&\* is \_O\_U16TEXT?](https://web.archive.org/web/20130101094000/http://www.siao2.com/2008/03/18/8306597.aspx) * [Anyone who says the console can't do Unicode isn't as smart as they think they are](https://web.archive.org/web/20130519074717/http://www.siao2.com/2010/04/07/9989346.aspx) * [A confluence of circumstances leaves a stone unturned...](https://web.archive.org/web/20130620152913/http://www.siao2.com/2010/09/23/10066660.aspx) PS: Thanks [@Jeff](https://stackoverflow.com/users/604049/jeff) for finding the archive.org links.
Are you sure your script would output Unicode correctly on some other platform? The "wide character in print" warning makes me very suspicious. I recommend looking over this [overview](http://en.wikibooks.org/wiki/Perl_Programming/Unicode_UTF-8)
6,608
10,474,942
I try my very best to never use PHP, and as such I am not well versed in it. I am having an issue that I don't understand at all. I have tried a number of things, including not including the table reference in /config and changing the MySQL host to and from localhost, etc. It is probably something exceedingly obvious, but I'm just not spotting it! It goes without saying that all the DB credentials are completely correct (and I can connect fine from Scala and Python). Anyway, here's the error (with the real username/password redacted, of course): > > Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[42000] [1044] Access denied for user 'username'@'10.1.1.43' to database 'a5555413\_AudioPl.users'' in /home/a5555413/public\_html/drive/auth\_handler.php:153 Stack trace: #0 /home/a5555413/public\_html/drive/auth\_handler.php(153): PDO->\_\_construct('mysql:host=mysq...', 'username...', 'passwd') #1 /home/a5555413/public\_html/drive/auth\_handler.php(74): AuthHandler->GetDbConnection() #2 /home/a5555413/public\_html/drive/index.php(47): AuthHandler->AuthHandler('webui') #3 {main} thrown in /home/a5555413/public\_html/drive/auth\_handler.php on line 153 > > > Connect code; the consts are defined in a separate config file: ``` function GetDbConnection() { $dbh = new PDO(Config::DB_PDO_CONNECT, Config::DB_PDO_USER, Config::DB_PDO_PASSWORD); $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); $dbConnection = $dbh; return $dbConnection; } ```
2012/05/06
[ "https://Stackoverflow.com/questions/10474942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1231175/" ]
Access denied means you're using the wrong username/password, and/or the account you're accessing has not been properly created in MySQL. What does `show grants for username@10.1.1.43` show? If you created the account as `username@machinename` and MySQL is unable to do a reverse DNS lookup to map your IP back to machinename, you'll also get this error.
Check the following: 1) Did you enter the right database hostname (it is not always localhost), username and password? 2) Does the user have permissions on the database? Let me know if you don't know how to check.
6,618
38,570,698
This is my first time using raw sockets (yes, I need to use them as I must modify a field inside a network header) and all the documentation or tutorials I read describe a solution to sniff packets, but that is not exactly what I need. I need to create a script which intercepts the packet, processes it and sends it on to the destination, i.e. the packets should not reach the destination unless my script decides to. In order to learn, I created a small prototype which detects pings and just prints "PING". I would expect ping not to work, since I intercept the packets and I don't include the logic to forward them to their destination. However, ping works (again, it seems it is just sniffing/mirroring packets). My goal is that the ping packets are "trapped" in my script, and I don't know how to do that. This is what I do in my current python script (I omit how I do the decoding for simplicity): ``` sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003)) sock.bind((eth0, 0)) packet = sock.recvfrom(65565) decode_eth(packet) decode_ip(packet) if (ipheader.ip_proto == 1): print("\nPING") ``` Can somebody explain how I can achieve my goal or point me to the right documentation?
2016/07/25
[ "https://Stackoverflow.com/questions/38570698", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6104632/" ]
Your description seems to be different from what your title suggests. My understanding is that you want to receive, modify and possibly drop incoming network packets, and that this is to be done on Linux. In that case I suggest you use a netfilter prerouting hook, which will make things a lot simpler (and likely more stable). Netfilter is well documented; a nice overview including information related to your requirements can be seen [here](http://www.grep.it/RMD/05-Netfilter.pdf). The important function to use is nf\_register\_hook(); read the answer to [this](https://stackoverflow.com/questions/19342252/how-to-filter-and-intercept-linux-packets-by-using-net-dev-add-api) question to get an idea of how to set things up.
I suppose that your Linux box is configured as a router (not a bridge). The packet passes through your Linux box because you have enabled **IP Forwarding**. So there are two solutions: ***Solution 1:*** Disable **IP Forwarding**, then receive the packet on one interface and do the appropriate task (forward it to another interface or drop it). ***Solution 2:*** Use `NetfilterQueue`. Install it on your Linux box (Debian/Ubuntu in my example): ``` apt-get install build-essential python-dev libnetfilter-queue-dev ``` Use `iptables` to send packets coming in on the input interface (eth0 in my example) to queue 1: ``` iptables -I INPUT -i eth0 -j NFQUEUE --queue-num 1 ``` Run this script to handle packets forwarded to Queue No. 1: ``` from netfilterqueue import NetfilterQueue def print_and_accept(pkt): print(pkt) pkt.accept() nfqueue = NetfilterQueue() nfqueue.bind(1, print_and_accept) try: nfqueue.run() except KeyboardInterrupt: print() ``` Note that `pkt.drop()` causes the packet to be dropped. Also, you should accept or drop every packet.
6,619
35,393,672
Python 2.7: Count how many numbers are entered by the user. I can't figure out how to count the raw\_input... here's what I have so far: ``` while True: datum = raw_input('enter a number: ') if datum == 'done': break count = 0 for line in datum: if datum == int(datum): count = count + 1 print 'count', count ```
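For reference, a minimal corrected sketch of that loop: `count` is initialised once before the loop, and each entry is validated by attempting the conversion rather than comparing a string to an int. The helper takes a list instead of calling `raw_input` so the logic is easy to test; that parameterisation is my own:

```python
def count_numbers(entries):
    """Count how many entries are valid integers, stopping at 'done'.

    In the original script each entry would come from raw_input();
    taking a list here is an assumption made so the logic is testable.
    """
    count = 0  # initialise once, OUTSIDE the loop
    for datum in entries:
        if datum == 'done':
            break
        try:
            int(datum)    # a string never equals an int, so convert instead
            count += 1
        except ValueError:
            pass          # ignore non-numeric input
    return count

print(count_numbers(['1', '17', 'x', '3', 'done', '9']))
```

The `try/except ValueError` pattern is the usual way to ask "is this string a number?" without crashing on bad input.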
2016/02/14
[ "https://Stackoverflow.com/questions/35393672", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5925494/" ]
The reason for the endless loop is dead simple. You are *never* updating the value of `$terms` inside the loop body, yet it is used in the loop condition. So the loop body is executed either zero times or infinitely. --- The fix seems to be to replace the `while` with an `if`, because you already handle the parent via the recursive call. However, I may be wrong, since your function doesn't return anything and seems to have no side effects...
The misunderstanding lies in the scope of the `$terms` variable. This variable has scope that is local to the function call it exists in. The loop ``` while ($terms > 0) { //do some logic $parent_id = $terms->parent; $this->parent_category_has_fiance($parent_id); } ``` is referencing the `$terms` variable, but when `parent_category_has_fiance` is called, the `$terms` variable inside that function only exists there. That is, it doesn't change `$terms` that the while loop is looking at.
6,620
55,758,883
When I tried to install OpenCV using `pip3 install opencv-python` I got this error: ``` Could not find a version that satisfies the requirement opencv-python (from versions: ) No matching distribution found for opencv-python ``` I have tried upgrading pip using ``` pip install --upgrade pip ``` and ``` curl https://bootstrap.pypa.io/get-pip.py | python ``` None of them helped me, and pip is up to date. Trying to download and compile OpenCV manually gives me a bunch of errors. ``` python version - 2.7, 3.6.2 pip version - up to date raspberry pi 2 ```
2019/04/19
[ "https://Stackoverflow.com/questions/55758883", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8923850/" ]
Here is the corrected code; there were stray `.` characters that had to be deleted. I also added `Math.round` to get a rounded result: ```html <html> <head> <script> function calcAmount() { let userAmount = document.getElementById("amount").value; let level1 = 351; if (userAmount <= level1) { document.getElementById("marginAmount").textContent = Math.round(userAmount * 7) / 100; } else { document.getElementById("marginAmount").textContent = "Amount is > 351"; } } </script> </head> <body> <div class="form"> <label for="points amount">Entrez un montant en points</label> <input type="number" class="amount-enter" id="amount"> <input type="submit" value="Montant marge additionnelle" id="submit" class="submit-enter" onclick="calcAmount()"> </div> <p id="marginAmount" class="enter-margin"></p> </body> ```
Three things: * You had `.` in your ID selectors; these shouldn't be there (they should be `#`, but only in a `querySelector` method). * You were calculating `calcLevel1` straightaway, which meant it was `0 * 7 / 100`, which is `0`. * You needed to calculate `calcLevel1` with `amount.value`, which means you can remove `calcLevel1.value` when you're setting the `textContent` of the paragraph. I also added an `else` statement so the `textContent` is emptied when the statement is false. ```js let amount = document.getElementById("amount"); let marginAmount = document.getElementById("marginAmount"); let submit = document.getElementById("submit"); submit.addEventListener('click', calcAmount); function calcAmount() { const calcLevel1 = amount.value * 7 / 100; let userAmount = Number(amount.value); let level1 = 351; if (userAmount <= level1) { marginAmount.textContent = calcLevel1; } else { marginAmount.textContent = ""; } } ``` ```html <div class="form"> <label for="points amount">Entrez un montant en points</label> <input type="number" class="amount-enter" id="amount"> <input type="submit" value="Montant marge additionnelle" id="submit" class="submit-enter"> </div> <p id="marginAmount" class="enter-margin"></p> ```
6,621
47,041,267
I'm currently trying to program and learn python, while making a project with the raspberry pi zero w. So far I'm just trying to get it to start recording video using picamera in python, as well as stream that video so I can monitor what the output is on my phone. However as it currently stands it only starts recording video once I connect to it via some sort of streaming program. What I'd like for it to do is start recording the video at the start of the program and be able to connect to it whenever I'd like to monitor it. As it stands I can connect to it no problem, but then I'm unable to reconnect to it. A basic idea of what I'm wanting kinda goes like this. ``` Start Recording Listen on port 8080 if connection is started start streaming video stream 2 (also known as splitter port) else connection ended wait for new connection ``` I realize that sounds horrible. I hope it gives the general idea of what I'm trying to do. Like I said I'm just learning python, and only have some basic knowledge in Basic. Here's my code that I'm currently working with. Like I said, it works, just only when I connect to it. 
``` #!/usr/bin/python import socket import picamera import datetime as dt import os.path filename = 'hauntvideo' save_path = '/home/pi' completed_video = os.path.join(save_path, filename) import warnings warnings.filterwarnings('error', category=DeprecationWarning) #Camera Setup with picamera.PiCamera() as camera: camera.resolution = (1920, 1080) camera.framerate = 30 camera.hflip = True camera.vflip = True #Connection Listening server_socket = socket.socket() server_socket.bind(('0.0.0.0', 8080)) server_socket.listen(5) connection = server_socket.accept()[0].makefile('wb') try: camera.start_recording(connection, format='h264', splitter_port=2, resize=(640,360)) camera.start_recording(completed_video + '{}.h264'.format( dt.datetime.now().strftime('%Y%m%d%H%M%S') ), bitrate=4500000) camera.wait_recording(7*60*60) camera.stop_recording() finally: connection.close() server_socket.close() quit() ```
2017/10/31
[ "https://Stackoverflow.com/questions/47041267", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8863714/" ]
> > However as it currently stands it only starts recording video once I connect to it via some sort of streaming program. What I'd like for it to do is start recording the video at the start of the program and be able to connect to it whenever I'd like to monitor it. > > > This is due to the following line of code: ``` connection = server_socket.accept()[0].makefile('wb') ``` The accept() function blocks, meaning that it doesn't return until the connection is established. Therefore, the code that starts recording: ``` camera.start_recording(connection, format='h264', ... ) ``` only gets executed after the connection is established (after accept() returns). In order to accomplish what you want, you would need to use threads. In one thread, start recording the video locally, i.e. ``` camera.start_recording(completed_video + '{}.h264'.format( dt.datetime.now().strftime('%Y%m%d%H%M%S') ), bitrate=4500000) ``` In another thread, accept an incoming connection, and upon connection call: ``` camera.start_recording(connection, format='h264', splitter_port=2, resize=(640,360)) ``` > > As it stands I can connect to it no problem, but then I'm unable to reconnect to it. > > > If you want to be able to connect, disconnect, and reconnect indefinitely, then you need to accept the connection in a loop (in its own thread). Perhaps something like this: ``` while(True): connection = server_socket.accept()[0].makefile('wb') camera.start_recording(connection, format='h264', splitter_port=2, resize=(640,360)) camera.wait_recording(7*60*60) #assuming this records for 7 hours? ``` > > I'm currently trying to program and learn python, while making a project with the raspberry pi zero w. > > > I don't know if you are new to programming in general, or just to python. In the former case, dealing with network programming concepts and threads might be a little bit challenging at first. However, they are necessary to use/understand in order to achieve the functionality you desire.
KillerXtreme, This should be the code you were looking for 5 years ago (namely, you can connect and reconnect to this over and over with no hiccups)! Hope this helps somebody! ``` import socket import picamera from threading import Thread def stop_recording(cam): try: cam.stop_recording() except Exception as e: pass # print('Error: {}'.format(e)) def camera_setup(): camera = picamera.PiCamera() camera.resolution = (1024, 768) camera.framerate = 24 return camera def create_socket(): server_socket = socket.socket() server_socket.bind(('0.0.0.0', 9090)) server_socket.listen(0) return server_socket def stream(server_socket, cam): connection = server_socket.accept()[0].makefile('wb') if cam.recording: stop_recording(cam) cam.start_recording(connection, format='h264', resize=(1024, 768), inline_headers=True) def stream_video_to_network(cam): server_socket = create_socket() while True: stream(server_socket, cam) def main(camera): Thread(target=stream_video_to_network, args=(camera,)).start() if __name__ == "__main__": main(camera_setup()) ```
6,622
42,630,281
I have a python script which is supposed to loop through all files in a directory and set the date of each file to the current time. It seems to have no effect, i.e. the Date column in the file explorer shows no change. I see the code looping through all files; it just appears that the call to `utime` has no effect. The problem is [not this](https://mail.python.org/pipermail/python-bugs-list/2004-November/025966.html) because most of the dates are months old. ``` # set file access time to current time #!/usr/bin/python import os import math import datetime def convertSize(size): if (size == 0): return '0B' size_name = ("B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB") i = int(math.floor(math.log(size,1024))) p = math.pow(1024,i) s = round(size/p,2) return '%s %s' % (s,size_name[i]) # see www.tutorialspoint.com/python/os_utime.htm def touch(fname, times=None): fhandle = open(fname, 'a') try: os.utime(fname, times) finally: fhandle.close() def main(): print ("*** Touch Files ***"); aml_root_directory_string = "C:\\Documents" file_count = 0 file_size = 0 # traverse root directory, and list directories as dirs and files as files for root, dirs, files in os.walk(aml_root_directory_string): path = root.split('/') #print((len(path) - 1) * '---', os.path.basename(root)) for file in files: filename, file_extension = os.path.splitext(file) print(len(path) * '---', file) touch(filename, ) # print ("\n*** Total files: " + str(file_count) + " Total file size: " + convertSize(file_size) + " ***"); print ("*** Done: Time: " + str(datetime.datetime.now()) + " - Touch Files ***"); # main ############################################################################### if __name__ == "__main__": # stuff only to run when not called via 'import' here main() ``` Edit: In case anyone reads this in the future, it is also important to note that the file explorer can [display more than 1 kind of
date](https://superuser.com/questions/212542/how-can-i-make-windows-explorer-show-file-modified-date-instead-of-created-date)
2017/03/06
[ "https://Stackoverflow.com/questions/42630281", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1978617/" ]
You've got three issues: 1. You're using the file name, not the full path, when `touch`ing, so all the `touch`ing occurs in the working directory 2. You're stripping the file extension too, so the touched files lack extensions 3. You're touching files to which you have an open file handle, and on Windows, on Python 2.7, this is a problem, because [`os.utime` opens the files with no sharing allowed](https://hg.python.org/cpython/file/2.7/Modules/posixmodule.c#l2970), which is incompatible with existing open file handles To fix #3, change your `touch` method to: ``` def touch(fname, times=None): # Open and immediately close file to force existence with open(fname, 'ab') as f: pass # Only alter times when file is closed os.utime(fname, times) ``` To fix #1 and #2, change your main method to call `touch` like so: ``` touch(os.path.join(root, file)) ``` which uses the original name and joins it with the root directory being traversed, where `touch(filename)` was touching a file without the extension, in the program's working directory (because you used an unqualified name). If you find your program's working directory (`print os.getcwd()` will tell you where to look), you'll find a bunch of random empty files there corresponding to the files found in the tree you were traversing, stripped of paths and file extensions. Side-note: If you can move to Python 3 (it's been a while, and there are a lot of improvements), you can make a slightly safer (race-free) and faster `touch` thanks to file descriptor support in `os.utime`: ``` def touch(fname, times=None): with open(fname, 'ab') as f: os.utime(f.fileno(), times) ``` Not all systems will support file descriptors, so if you need to handle such systems, define `touch` based on testing for [file descriptor support via `os.supports_fd`](https://docs.python.org/3/library/os.html#os.supports_fd).
os.utime does work on Windows but probably you are looking at the wrong date in explorer. os.utime does not modify the creation date (which it looks like is what is used in the date field in explorer). It does update the "Date modified" field. You can see this if you right click on the category bar and check the "date modified" box. Alternatively start a command line and type "dir". The date shown there should reflect the change. I tested os.utime on python 2.7 where you have to give two arguments: ``` os.utime("file.txt", None) ``` and on Python 3 where the second argument defaults to None: ``` os.utime("file.txt") ```
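The behaviour described above is easy to verify without Explorer at all; a small sketch that backdates a throwaway file and reads the modification time back with `os.stat`:

```python
import os
import tempfile
import time

# Create a throwaway file and backdate its modification time 30 days.
fd, path = tempfile.mkstemp()
os.close(fd)

past = time.time() - 30 * 24 * 3600
os.utime(path, (past, past))          # second argument is (atime, mtime)
backdated = os.stat(path).st_mtime

os.utime(path, None)                  # None resets both times to "now"
refreshed = os.stat(path).st_mtime
os.remove(path)

print(backdated, refreshed)
```

If `backdated` differs from `refreshed`, the call is working; any remaining discrepancy is purely in which date column the file manager chooses to display.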
6,623
46,742,947
I have a folder full of files that need to be modified in order to extract the true file in its real format. I need to remove a certain number of bytes from BOTH the beginning and end of each file in order to extract the data I am looking for. How can I do this in python? * I need this to work recursively on an entire folder only * I also need this to output (or modify the existing) file with the bytes removed. I would greatly appreciate any help or guidance you can provide.
2017/10/14
[ "https://Stackoverflow.com/questions/46742947", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7838844/" ]
1. Recursive iteration over files: [os.walk](https://docs.python.org/3/library/os.html#os.walk) 2. Change position in file: [f.seek](https://docs.python.org/3/tutorial/inputoutput.html?highlight=seek) 3. Get file size: [os.stat](https://docs.python.org/3/library/os.html#os.stat) 4. Remove data from current position to end of file: [f.truncate](https://docs.python.org/3/tutorial/inputoutput.html?highlight=truncate) So, the base logic: 1. Iterate over the files 2. Get the file size 3. Open the file (`'rb+'`, I suppose) 4. Seek to the position from which you want to read the file 5. Read everything except the bytes you want to drop ( `f.read(file_size - top_dropped - bottom_dropped)` ) 6. Seek(0) 7. Write the bytes you read back to the file 8. Truncate the file
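The steps above can be sketched as a single function. The function and variable names are my own, and reading the kept region in one go is an assumption that each file fits comfortably in memory:

```python
import os
import tempfile

def trim_in_place(path, drop_front, drop_back):
    """Remove drop_front bytes from the start and drop_back bytes from
    the end of the file at *path*, in place."""
    size = os.stat(path).st_size                      # 2) get file size
    with open(path, 'rb+') as f:                      # 3) open read/write
        f.seek(drop_front)                            # 4) skip the dropped head
        kept = f.read(size - drop_front - drop_back)  # 5) read the kept bytes
        f.seek(0)                                     # 6) back to the start
        f.write(kept)                                 # 7) write them back
        f.truncate()                                  # 8) cut off the leftover tail

# Demonstration on a throwaway file: strip a 4-byte header and 4-byte trailer.
fd, demo = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    f.write(b'HEADpayloadTAIL')
trim_in_place(demo, 4, 4)
with open(demo, 'rb') as f:
    result = f.read()
os.remove(demo)
print(result)
```

Combined with `os.walk`, you would call `trim_in_place(os.path.join(root, name), ...)` for each file found.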
Your question is pretty badly constructed, but as this is somewhat advanced stuff I'll provide you with code. You can then use os.walk() to recursively traverse the directory you want and apply my slicefile() function. This code does the following: 1. After checking the validity of the start and end arguments, it creates a memory map on top of an opened file. mmap() creates a memory-map object that maps, in this case, the portion of the file system over which the file is written. The object exposes both a string-like and a file-like interface, with some additional methods like move(). So you can treat the memory map either as a string or as a file, or use size(), move(), resize() or whatever additional methods you need. 2. We calculate the distance between our start and end, i.e. how many bytes we will have in the end. 3. We move the stream of bytes, end-start long, from our start position to position 0, i.e. we move it backwards by the number of bytes indicated by the starting point. 4. We discard the rest of the file, i.e. we resize it to end-start bytes. What is left is our new content. The operation takes longer the bigger the file is; unfortunately there is not much you can do about it. If a file is big, this is your best bet. The procedure is the same as when removing items from the start/middle of an in-memory array, except it has to be buffered (in chunks) so as not to fill RAM too much. If your file is smaller than about a third of your free RAM, you can load it whole into a string with f.read(), perform string slicing on the loaded content ( s = s[start:end] ), and write it back by opening the file again and just doing f.write(s). If you have enough disk space, you can open another file, seek to the starting point you want in the original file, then read it in chunks and write them into the new file, perhaps even using shutil.copyfileobj(). After that, you remove the original file and use os.rename() to put the new one in its place.
These are your 3 main options: whole file into RAM; move by buffering backwards and then resize; or copy into another file, then rename it. The second option is the most universal and won't fail you for small or big files, therefore I used it. OK, not only 3 options; there is a fourth. It could be possible to cut off N bytes from the beginning of the file by manipulating the file system itself using low-level operations, i.e. to write a kind of truncate() function that truncates the beginning instead of the end. But this would be pretty suicidal: in the end memory fragmentation would occur and a whole mess would arise. You don't need such speed anyway; you will be patient until your script finishes. :D Why did I use mmap()? Because it uses memory maps implemented in the OS rather than completely new code. This reduces the number of system calls needed to deal with the opened file. Half of the work is thrust upon the operating system, leaving Python to breathe easily. Because it is mostly written in C, which makes it a touch faster than a pure Python implementation would be. Because it implements move(), which we need. The buffering and everything is already written, so there is no need for the bulky while loop that would be the alternative (manual) solution. And so on... ``` from mmap import mmap def slicefile(path, start=0, end=None): f = open(path, "r+b") # Read and write binary f.seek(0, 2) size = f.tell() start = 0 if start is None else start end = size if end is None else end start = size+start if start<0 else start end = size+end if end<0 else end end = size if end>size else end if (end==size and start==0) or (end<=start): f.close() return # If start is 0, no need to move anything, just cut off the rest after end if start==0: f.seek(end) f.truncate() f.close() return # Modify in place using mapped memory: newsize = end-start m = mmap(f.fileno(), 0) m.move(0, start, newsize) m.flush() m.resize(newsize) m.close() f.close() ```
6,624
14,416,914
I am new to bash and shell, but I am running a Debian install and I am trying to make a script which can find a date in the past without having to install any additional packages. From tutorials I have got to this stage: ``` #!/bin/sh # # BACKUP DB TO S3 # # VARIABLES TYPE="DATABASE" DAYS="30" # GET CURRENT DATETIME CURRENTDATE="$(date +%Y%m%d%H%M%S)" # GENERATE PAST DATE FROM DAYS CONSTANT OLDERDATE=`expr $CURRENTDATE - $DAYS` # CALL PYTHON SCRIPT WITH OLDERDATE ARGUMENT python script.py $OLDERDATE ``` Where I am getting stuck is the fact that my "days" is just the number 30 and isn't datetime formatted, so when I come to subtract it from the currentdate variable it obviously isn't compatible. Would anyone be kind enough to help me find a way to get this working as it should?
2013/01/19
[ "https://Stackoverflow.com/questions/14416914", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1738522/" ]
Try ``` date -d '30 days ago' ``` should do on debian.
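Since the script already hands the cutoff to a Python script, the same value can also be computed in Python with no extra packages; a sketch assuming the `%Y%m%d%H%M%S` format from the question:

```python
import datetime

DAYS = 30  # same constant as in the shell script
cutoff = datetime.datetime.now() - datetime.timedelta(days=DAYS)
older_date = cutoff.strftime('%Y%m%d%H%M%S')
print(older_date)  # e.g. pass this to script.py as an argument
```

Using `timedelta` does proper calendar arithmetic, which is exactly what subtracting `30` from a `%Y%m%d%H%M%S` string cannot do.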
You could modify the Python script instead -- that way you would not depend on a particular implementation of `date`.
6,625
35,642,131
I am training a problem such that my output (y) could be more than one class. For example, the SVM could say, this input vector is class 1, but it could also say, this input vector is classes 1 AND 5. This is not the same as a multiclass SVM problem, where the output could be ONE of multiple classes. My output could be ONE or SEVERAL of multiple classes. Example: My training data looks like (for each training vector X) ``` X_1 [1 0 0 0] X_2 [0 1 1 0] X_3 [0 0 0 1] X_4 [1 1 0 1] ``` Where the right side is the Y to be predicted (here I am showing an example with 4 classes, where a 1 indicates class membership). I understand that I need to probably use a structural SVM such as discussed here: <http://www.robots.ox.ac.uk/~vedaldi/svmstruct.html> However, this is all very confusing to me. I do NOT want to simply do one-vs-all classifiers for each of the possible output classes, however. I need to somehow take into account class relationships, so I guess what I need is a structural SVM. My training data will have some instances tagged with single classes and other instances tagged with multiple classes. I guess I'm asking how to tackle this problem and if you know any packages to do so in MATLAB or python.
2016/02/26
[ "https://Stackoverflow.com/questions/35642131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5983697/" ]
Yes, use a for loop together with the `length()` method. You can get the character at a given position using [CharSequence#charAt](https://docs.oracle.com/javase/8/docs/api/java/lang/CharSequence.html#charAt-int-). `String` implements `CharSequence`, so you can do `a.charAt(i);`
I like the `charAt()` approach suggested by pyb; my own first instinct, however, would be a char array. I think they both will work the same, though. I didn't want to write the whole code for you, but I hope this helps... ``` char [] _a = a.toCharArray(); // note 1 //do the same for the b string here... else if (a.length() == b.length()); for(int i = 0; i < a.length; i++){ // note 2 //compare each character here.. } System.out.printf("Bye"); } ``` Here are some good resources on `toCharArray()` if you need more help or understanding: [java documentation on `tocharArray()`](https://docs.oracle.com/javase/7/docs/api/java/lang/String.html#toCharArray%28%29) [beginner's guide to `toCharArray()`](http://beginnersbook.com/2013/12/java-string-tochararray-method-example/) note 1: this line sets aside a char[] variable (an array of characters): `char[] _a`. The second part, `= a.toCharArray()`, uses a method of the String class that will instantiate a char array of the length of string `a`, then take every letter of the String `a` and place them into the char[] array.. so.... ``` String a = cat char[] _a = a.toCharArray(); ``` results in an array of characters [c] [a] [t] now we perform the same manipulation with string `b`.... ``` String b = cat char[] _b = b.toCharArray(); ``` this (since the strings are equivalent) will result in equivalent arrays.. however you must test this to be sure. To test the equivalency of these arrays we must test each index against the corresponding index... we will do so with a `for` loop. This for loop, as shown above [note 2], is set to run through as many iterations as the array is long. Then we will compare each index to the corresponding one here...
``` boolean flag = true; for(int i = 0; i < a.length; i++){ // ( or > or < or >= or <= ) however it is you wish to compare these if( _a[i] != _b[i]){ flag = false; } } ``` And then if flag == true, print out your corresponding message, else print the other message... for this answer.. ``` if( _a[i] != _b[i]){ flag = false; System.out.printf("Bye\n"); } ``` Take your System.out.print and move it outside your loop.. every time your loop passes through, it prints bye... try a statement that prints flag and bye...
6,630
73,739,734
I am currently able to use quick fix to auto import python functions from external typings such as `from typing import List`. [Python module quick fix import](https://i.stack.imgur.com/q3m7F.png) However, I am unable to detect local functions/classes for import. For example: If I have the data class `SampleDataClass` in `dataclasses.py`, and I reference it in a function in `test_file.py`, VSCode is unable to detect it and I have to manually type out the import path for the dataclass. [Definition of dataclass](https://i.stack.imgur.com/KlXuJ.png) [Reference to dataclass](https://i.stack.imgur.com/dRY6H.png) I have the following extensions enabled: * Python * Pylance * Intellicode My settings.json includes: ``` { "python.envFile": "${workspaceFolder}/.env", "python.languageServer": "Pylance", "python.analysis.indexing": true, "python.formatting.provider": "black", "python.analysis.autoImportCompletions": true, "python.analysis.autoSearchPaths": true, "python.autoComplete.extraPaths": ["~/Development/<django repo name>/server"], "python.analysis.extraPaths": ["~/Development/<django repo name>/server"], "vsintellicode.features.python.deepLearning": "enabled", } ``` I am using poetry for my virtual environment which is located at `~/Development/<django repo name>/.venv` Is there something that I'm missing?
2022/09/16
[ "https://Stackoverflow.com/questions/73739734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20008915/" ]
Turns out the latest versions for Pylance broke quick-fix imports and any extra path settings for VSCode. When I rolled back the version to `v2022.8.50` it now works again. I filed an issue here: <https://github.com/microsoft/pylance-release/issues/3353>.
According to an [issue](https://github.com/microsoft/pylance-release/issues/3324) I raised in github earlier, the developer gave a reply. Custom code will **not** be added to the autocomplete list at this time (unless it has already been imported). This is done to prevent users from having too many custom modules, which may lead to too long loading time. If necessary, you can start a discussion in github and vote for it. **Add:** [![enter image description here](https://i.stack.imgur.com/1sO0x.png)](https://i.stack.imgur.com/1sO0x.png)
6,635
72,631,459
I am trying to test an API that sends long-running jobs to a queue processed by Celery workers. I am using RabbitMQ running in a Docker container as the message queue. However, when sending a message to the queue I get the following error: `Error: [Errno 111] Connection refused`

Steps to reproduce:

* Start RabbitMQ container: `docker run -d -p 5672:5672 rabbitmq`
* Start Celery server: `celery -A celery worker --loglevel=INFO`
* Build docker image: `docker build -t fastapi .`
* Run container `docker run -it -p 8000:8000 fastapi`

Dockerfile:

```
FROM python:3.9

WORKDIR /

COPY . .

RUN pip install --no-cache-dir --upgrade -r ./requirements.txt

EXPOSE 8000

CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```

requirements.txt:

```
anyio==3.6.1
asgiref==3.5.2
celery==5.2.7
click==8.1.3
colorama==0.4.4
fastapi==0.78.0
h11==0.13.0
httptools==0.4.0
idna==3.3
pydantic==1.9.1
python-dotenv==0.20.0
PyYAML==6.0
sniffio==1.2.0
starlette==0.19.1
typing_extensions==4.2.0
uvicorn==0.17.6
watchgod==0.8.2
websockets==10.3
```

app.py:

```
from fastapi import FastAPI
import tasks

app = FastAPI()

@app.get("/{num}")
async def root(num):
    tasks.update_db.delay(num)
    return {"success": True}
```

tasks.py:

```
from celery import Celery
import time

celery = Celery('tasks', broker='amqp://')

@celery.task(name='update_db')
def update_db(num: int) -> None:
    time.sleep(30)
    return
```
2022/06/15
[ "https://Stackoverflow.com/questions/72631459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6131554/" ]
You can't connect to rabbitmq on `localhost`; it's not running in the same container as your Python app. Since you've exposed rabbit on your host, you can connect to it using the address of your host. One way of doing that is starting the app container like this: ``` docker run -it -p 8000:8000 --add-host host.docker.internal:host-gateway fastapi ``` And then modify your code like this: ``` celery = Celery('tasks', broker='amqp://host.docker.internal') ``` With that code in place, let's re-run your example: ``` $ docker run -d -p 5672:5672 rabbitmq $ docker run -d -p 8000:8000 --add-host host.docker.internal:host-gateway fastapi $ curl http://localhost:8000/1 {"success":true} ``` --- There's no reason to publish the rabbitmq ports on your host if you only need to access it from within a container. When building an application with multiple containers, using something like docker-compose can make your life easier. If you used the following `docker-compose.yaml`: ``` version: "3" services: rabbitmq: image: rabbitmq app: build: context: . ports: - "8000:8000" ``` And modified your code to connect to `rabbitmq`: ``` celery = Celery('tasks', broker='amqp://rabbitmq') ``` You could then run `docker-compose up` to bring up both containers. Your app would be exposed on host port `8000`, but rabbitmq would only be available to your app container. Incidentally, rather than hardcoding the broker uri in your code, you might want to get that from an environment variable instead: ``` celery = Celery('tasks', broker=os.getenv('APP_BROKER_URI')) ``` That allows you to use different connection strings without needing to rebuild your image every time. We'd need to modify the `docker-compose.yaml` to include the appropriate variable: ``` version: "3" services: rabbitmq: image: rabbitmq app: build: context: . environment: APP_BROKER_URI: "amqp://rabbitmq" ports: - "8000:8000" ```
Update **tasks.py**

```
from celery import Celery
import time

celery = Celery('tasks', broker='amqp://user:pass@host:port//')

@celery.task(name='update_db')
def update_db(num: int) -> None:
    time.sleep(30)
    return
```
6,636
6,999,522
I'm looking to generate, from a large Python codebase, a summary of heap usage or memory allocations over the course of a function's run. I'm familiar with [heapy](http://guppy-pe.sourceforge.net/), and it's served me well for taking "snapshots" of the heap at particular points in my code, but I've found it difficult to generate a "memory-over-time" summary with it. I've also played with [line\_profiler](http://packages.python.org/line_profiler/), but that works with run time, not memory. My fallback right now is Valgrind with [massif](http://valgrind.org/docs/manual/ms-manual.html), but that lacks a lot of the contextual Python information that both Heapy and line\_profiler give. Is there some sort of combination of the latter two that give a sense of memory usage or heap growth over the execution span of a Python program?
2011/08/09
[ "https://Stackoverflow.com/questions/6999522", "https://Stackoverflow.com", "https://Stackoverflow.com/users/104200/" ]
I would use [`sys.settrace`](http://docs.python.org/library/sys.html#sys.settrace) at program startup to register a custom trace function. The trace function is called as each function in your code is entered, so you can use it to store information gathered by heapy or [meliae](https://launchpad.net/meliae) in a file for later processing.

Here is a very simple example which logs the output of hpy.heap() each second to a plain text file:

```
import sys
import time
import atexit
from guppy import hpy

_last_log_time = time.time()
_logfile = open('logfile.txt', 'w')

def heapy_profile(frame, event, arg):
    global _last_log_time
    currtime = time.time()
    if currtime - _last_log_time < 1:
        return
    _last_log_time = currtime
    code = frame.f_code
    filename = code.co_filename
    lineno = code.co_firstlineno
    idset = hpy().heap()
    _logfile.write('%s %s:%s\n%s\n\n' % (currtime, filename, lineno, idset))
    _logfile.flush()

atexit.register(_logfile.close)
sys.settrace(heapy_profile)
```
You might be interested by [memory\_profiler](https://pypi.org/project/memory-profiler/).
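A stdlib option worth noting: `tracemalloc` (added in Python 3.4, after this question was asked) can take snapshots at successive points and diff them, which gives a memory-over-time view without third-party tools. A minimal sketch:

```python
import tracemalloc

tracemalloc.start()

data = []
snapshots = []
for step in range(3):
    data.extend(range(100_000))              # simulate a growing heap
    snapshots.append(tracemalloc.take_snapshot())

# diff the last snapshot against the first to see where memory grew
growth = snapshots[-1].compare_to(snapshots[0], "lineno")
tracemalloc.stop()
```

Each entry in `growth` is a per-line allocation delta, so logging one diff at regular intervals gives the growth curve over the run.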
6,637
35,667,105
I am an amateur python programer with 2 months of experience. I am trying to write a GUI to-do list through tkinter. The actual placement of the buttons are not important. I can play around with those after. I need some help with displaying the appended item to the list. In the program, it updates well on the digit, but it won't print onto the list. I double checked it on the console and it says "tkinter.StringVar object at 0x102fa4048" but didn't update the actual list. What I need help is how can I update the list Main\_Q on my the label column? Much appreciate some direction and coding help. Thanks. ``` Main_Q =["read","clean dishes", "wash car"] from tkinter import* root=Tk(className="total tasks in the Q") #formula def update(): global Main_Q a=len(Main_Q) num.set(a) def add2list(): Main_Q.append(name) a=len(Main_Q) num.set(a) print (Main_Q) #output num=StringVar() y=Label(root, textvariable=num).grid(row=0, column=1) #input name=StringVar() b=Entry(root, textvariable=name).grid(row=7,column=0) #buttons z=Button(root, text="update", command=update).grid(row=7, column=2) add2list=Button(root,text="add", command=add2list).grid(row=7, column=1) r = 0 for c in Main_Q: Label(text=c, relief=RIDGE,width=15).grid(row=r,column=0) r = r + 1 root.mainloop() ```
2016/02/27
[ "https://Stackoverflow.com/questions/35667105", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5989438/" ]
Just replace the package name with your package name / applicationId You do not need to publish the app first, just keep the same applicationId when you eventually upload the app. For example <https://play.google.com/store/apps/details?id=com.instagram.android> market://details?id=com.instagram.android app/build.gradle: ``` android { compileSdkVersion 19 buildToolsVersion "19.1" defaultConfig { applicationId "com.instagram.android" minSdkVersion 15 targetSdkVersion 19 versionCode 1 versionName "1.0" } ```
No need to update the app. First you can do it with a temp link with dynamic process. After your app publish, you can replace that link with playstore link.
6,638
67,883,300
I want to print all Thursdays between these date ranges ``` from datetime import date, timedelta sdate = date(2015, 1, 7) # start date edate = date(2015, 12, 31) # end date ``` What is the best pythonic way to do that?
2021/06/08
[ "https://Stackoverflow.com/questions/67883300", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778942/" ]
using your `sdate.weekday() # returns int between 0 (mon) and 6 (sun)`: ``` sdate = ... while sdate < edate: if sdate.weekday() != 3: # not thursday sdate += timedelta(days=1) continue # It is thursday print(sdate) sdate += timedelta(days=7) # next week ```
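An equivalent sketch that avoids testing every single day: jump straight to the first Thursday on or after the start date, then step a whole week at a time (the function name here is mine):

```python
from datetime import date, timedelta

def thursdays(start, end):
    # Mon=0 .. Sun=6, so Thursday is weekday 3
    d = start + timedelta(days=(3 - start.weekday()) % 7)  # first Thursday >= start
    days = []
    while d <= end:
        days.append(d)
        d += timedelta(days=7)
    return days

for d in thursdays(date(2015, 1, 7), date(2015, 12, 31)):
    print(d)
```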
```py
import datetime
import calendar

def weekday_count(start, end, day):
    start_date = datetime.datetime.strptime(start, '%d/%m/%Y')
    end_date = datetime.datetime.strptime(end, '%d/%m/%Y')
    for i in range((end_date - start_date).days):
        if calendar.day_name[(start_date + datetime.timedelta(days=i+1)).weekday()] == day:
            print(str(start_date + datetime.timedelta(days=i+1)).split()[0])

weekday_count("01/01/2017", "31/01/2017", "Thursday")

# prints result
# 2017-01-05
# 2017-01-12
# 2017-01-19
# 2017-01-26
```
6,641
64,867,992
I am using beautiful soup (BS4) with python to scrape data from the yellowpages through the waybackmachine/webarchive. I am able to return the Business name and phone number easily but when I attempt to retrieve the website url for the business, I only return the entire div tag. ``` #Import Dependencies from splinter import Browser from bs4 import BeautifulSoup import requests import pandas as pd # Path to chromedriver !which chromedriver # Set the executable path and initialize the chrome browser in splinter executable_path = {'executable_path': '/usr/local/bin/chromedriver'} browser = Browser('chrome', **executable_path) #visit Webpage url = 'https://web.archive.org/web/20171004082203/https://www.yellowpages.com/houston-tx/air-conditioning-service-repair' browser.visit(url) # Convert the browser html to a soup object and then quit the browser html = browser.html soup = BeautifulSoup(html, "html.parser") ##Scrapers #business name print(soup.find('a', class_='business-name').text) #Telephone print(soup.find('li', class_='phone primary').text) #website print(soup.find('div', class_='links')) ``` How can I return just the website URL of the company? Thanks.
2020/11/17
[ "https://Stackoverflow.com/questions/64867992", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10930010/" ]
Simple and very neat solution: *(added my comments so you can understand what I did)* ``` start = int(input('enter the starting range: ')) end = int(input('enter the ending range: ')) #create an empty list numbers = [] for d in range(start,end): if d % 2 != 0: #if number is odd, add it to the list numbers.append(d) #we use end=" " to make sure there is no new line after first print print('The odd numbers between',start,'&',end,'are:',end=" ") #* symbol is used to print the list elements in a single line with space. To print all elements in new lines or separated by space use sep=”\n” or sep=”, ” respectively print(*numbers,sep = ",") ```
Print the message before the loop then the numbers during the loop: ```py start = int(input('enter the starting range:')) end = int(input('enter the ending range:')) # use end='' to avoid printing a newline at the end print(f'The odd numbers between {start} & {end} is: ', end='') should_print_comma = False for d in range(start,end): if d%2!=0: # only start printing commas after the first element if should_print_comma: print(',', end='') print(d, end='') should_print_comma = True ```
6,650
43,247,031
I have a file named phone.py which gives me output as (in terminal):

```
+911234567890
+910123321423
```

There can be more numbers in the output. Another file named email.py produces (in terminal):

```
and@abc.com
bcd@cdc.com
```

or more. And I have a JSON file whose structure is as follows:

```
{"One":"Some data", "two":"Some more data", "three":"Even more data"}
```

There can be many more sections like this. Now I want to capture the terminal output, load the existing JSON, and finally produce an output as follows (as a JSON file):

```
{"Phone":"+911234567890,+910123321423", "Email":"and@abc.com,bcd@cdc.com", "Sections":{"One":"Some data", "two":"Some more data", "three":"Even more data"}}
```

I tried to capture the output using the subprocess module in python and now it is stored in a variable:

```
subprocess.run(['python','email.py','filename.txt'], stdout=subprocess.PIPE)
```

output:

```
CompletedProcess(args=['python', 'email_txt.py', 'upload/filename.txt'], returncode=0, stdout=b'abc@xyz.com\nbcd@dcd.com\n')
```

The captured output is now stored as a string of bytes, and I want to build the desired JSON from these components. What can I do, or what can I refer to, to tackle this problem?
2017/04/06
[ "https://Stackoverflow.com/questions/43247031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6541394/" ]
You can obtain the stdout from `subprocess.run` simply by `resp.stdout`, where `resp` is the returned object.
As already mentioned by Rishav, you need to assign the output to a variable & then use it to get the related attributes. [Sample usage](https://docs.python.org/3/library/subprocess.html#subprocess.CompletedProcess) - ``` >>> import subprocess >>> out = subprocess.run(["ls", "-l", "/dev/null"], stdout=subprocess.PIPE) >>> out.stdout b'crw-rw-rw- 1 root root 1, 3 Apr 6 2017 /dev/null\n' ```
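Putting the pieces together for the original question, a hedged sketch: decode `stdout`, split it into lines, and merge the result with the JSON sections. The inline `python -c` command just stands in for running `email.py`, and the section data is the sample from the question:

```python
import json
import subprocess
import sys

# stand-in for: subprocess.run(['python', 'email.py', 'filename.txt'], stdout=subprocess.PIPE)
out = subprocess.run(
    [sys.executable, "-c", "print('and@abc.com'); print('bcd@cdc.com')"],
    stdout=subprocess.PIPE,
)

emails = out.stdout.decode().split()   # b'and@abc.com\nbcd@cdc.com\n' -> list of lines
# the existing JSON file would be read with json.load(open(path)); sample data here
sections = {"One": "Some data", "two": "Some more data", "three": "Even more data"}

result = {"Email": ",".join(emails), "Sections": sections}
print(json.dumps(result))
```

The same pattern applies to the phone numbers: run `phone.py`, decode, join with commas, and add a `"Phone"` key before dumping.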
6,654
47,227,445
I'm trying to make a while loop in python, but the loop keeps looping infinitely. Here's what I have so far:

```
def pvalue(num):
    ans = ''
    while num > 0:
        if 1 <= num <= 9:
            ans += 'B'
            num -= 1
        if num >= 10:
            ans += 'A'
            num -= 10
    return ans
```

I want num to be returned as ans as follows: if num is 5, I want ans to be BBBBB; if num is 10, ans is A; if num is 22, I want ans to be AABB.
2017/11/10
[ "https://Stackoverflow.com/questions/47227445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5016648/" ]
You may want to learn about the [break statement](https://docs.python.org/3/reference/simple_stmts.html#break). As for your code, this version is much better; the case when `num == 9` is now handled properly:

```
def pvalue(num):
    ans = ''
    while num > 0:
        if num >= 10:
            ans += 'A'
            num -= 10
        else:
            ans += 'B'
            num -= 1
    return ans
```
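For non-negative whole numbers, the loop can even be collapsed entirely with `divmod`; a sketch:

```python
def pvalue(num):
    # 'A' per ten, 'B' per remaining unit -- same result as the loop
    tens, ones = divmod(num, 10)
    return "A" * tens + "B" * ones

print(pvalue(22))   # AABB
```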
Use the break statement to get out of the loop. ``` def pvalue(num): ans = '' while num > 0: if 1 <= num <= 9: ans += 'B' num -= 1 if num >= 10: ans += 'A' num -= 10 if num<=0: break return ans ```
6,655
68,262,836
I'm trying to use an array of dictionaries in python as an argument to a custom dash component and use it as an array of objects. In python:

```
audioList_py = [
    {
        "name": "random",
        "singer": 'waveGAN\'s music',
        "cover": 'link_1.jpg',
        "musicSrc": 'link_1.mp3',
    },
    {
        "name": "random",
        "singer": 'waveGAN\'s music',
        "cover": 'link_2.jpg',
        "musicSrc": 'link_2.mp3',
    },
    ... etc
]
```

in Javascript:

```
audioList1_js = [
    {
        name: "random",
        singer: 'waveGAN\'s music',
        cover: 'link_1.jpg',
        musicSrc: 'link_1.mp3',
    },
    {
        name: "random",
        singer: 'waveGAN\'s music',
        cover: 'link_2.jpg',
        musicSrc: 'link_2.mp3',
    },
    ... etc
]
```

Here is a snippet of the javascript code of the dash custom component:

```
export default class MusicComponent extends Component {
    render() {
        const {id, audioLists} = this.props;
        return (
            <div>
                <h1>{id}</h1>
                <ReactJkMusicPlayer audioLists={audio_list}/>,
            </div>
        );
    }
}

MusicComponent.defaultProps = {};

MusicComponent.propTypes = {
    /**
     * The ID used to identify this component in Dash callbacks.
     */
    audios: PropTypes.array,
    id: PropTypes.string,
};
```

And using the generated component in python:

```
app = dash.Dash(__name__)

app.layout = html.Div([
    music_component.MusicComponent(audios=audioList_py),
    html.Div(id='output'),
    ... etc
])
```

But I got:

```
TypeError: The `music_component.MusicComponent` component (version 0.0.1) received an unexpected keyword argument: `audios`Allowed arguments: id
```

What am I doing wrong? Any help or advice will be appreciated. Thanks a lot.
2021/07/05
[ "https://Stackoverflow.com/questions/68262836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13652942/" ]
Make sure you run `npm run build` after you make a change to your custom React component. With those proptypes you shouldn't get that error. If I remove the `audios` proptype I can reproduce that error. Besides that you pass a value to the `audios` property: ``` music_component.MusicComponent(audios=audioList_py) ``` but you try to retrieve `audioLists` from props: ``` const {id, audioLists} = this.props; ``` Change this to: ``` const {id, audios} = this.props; ``` Demo ``` export default class MusicComponent extends Component { render() { const {id, audios} = this.props; return ( <div> <h1>{id}</h1> <ReactJkMusicPlayer audioLists={audios} /> </div> ); } } MusicComponent.defaultProps = {}; MusicComponent.propTypes = { /** * The ID used to identify this component in Dash callbacks. */ id: PropTypes.string, audios: PropTypes.array, }; ```
Issue fixed, I should run : `npm run build:backends` to generate the Python, R and Julia class files for the components, but instead I was executing `npm run build:js` and this command just generate the JavaScript bundle (which didn't know about the new props). And set the audios property in the component to be like so: ``` MusicComponent.defaultProps = {audios: audioList1}; MusicComponent.propTypes = { id: PropTypes.string, audios: PropTypes.arrayOf(PropTypes.objectOf(PropTypes.string)).isRequired }; ```
6,656
61,976,835
I'm new to python and I've seen two different ways to initialize an empty list: ``` # way 1 empty_list = [] # way 2 other_empty_list = list() ``` Is there a "preferred" way to do it? Maybe one way is better for certain conditions than the other? Thank you
2020/05/23
[ "https://Stackoverflow.com/questions/61976835", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8947060/" ]
No, there isn't much of a difference; you can pick either one. Since this kind of initialization typically executes only once (or a limited number of times), any difference in execution speed won't have a noticeable impact.
Note that bare parentheses `()` create a tuple, which is immutable, while brackets `[]` create a list. (Calling `list()` also creates a list.)
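A quick check of the distinction, and of why `[]` is marginally faster: the literal compiles to a single `BUILD_LIST` instruction, while `list()` is a name lookup plus a function call. A sketch:

```python
import dis

a = []
b = list()

assert a == b == []           # both forms give the same empty list
assert isinstance((), tuple)  # bare parentheses give an (empty) tuple instead

literal_ops = [ins.opname for ins in dis.get_instructions("[]")]
call_ops = [ins.opname for ins in dis.get_instructions("list()")]
print(literal_ops)   # includes 'BUILD_LIST'
```

The name lookup also means `list` can be shadowed by a local variable, whereas the `[]` literal always means a list.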
6,657
66,062,702
I am trying to install the jupyterlab plotly extension with this command (according to <https://plotly.com/python/getting-started/>): **jupyter labextension install jupyterlab-plotly@4.14.3** I get this error: ``` An error occured. ValueError: Please install Node.js and npm before continuing installation. You may be able to install Node.js from your package manager, from conda, or directly from the Node.js website (https://nodejs.org). See the log file for details: /tmp/jupyterlab-debug-epx8b4n6.log ``` I didn't install Node.js on system level, but in a virtual environment using pip. Pip list shows both nodejs 0.1.1 and npm 0.1.1 . I am also using ipywidgets in jupyterlab, which requires nodejs and it is working fine. So I have two questions: 1. How to use plotlywidgets with pip nodejs in a virtual environment? 2. What's the difference between pip nodejs and system level Node.js
2021/02/05
[ "https://Stackoverflow.com/questions/66062702", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11904988/" ]
No, you have **not** installed node.js. You installed some kind of Python bindings for node ([python-nodejs](https://pypi.org/project/nodejs/), with its [repository](https://github.com/markfinger/python-nodejs) archived by the author) which itself require an actual nodejs. It is dangerous to install stuff from PyPI without checking what you are installing. It could have been a malicious code - you shouldn't just type a name after `pip install` and hope that it installs what you think. It's the same for your `npm` installation (package comes from the same author); both were not updated in the last 6 years and may contain some vulnerabilities so I would uninstall those quickly ;) It can be seen immediately from the version number that something is wrong because the current nodejs versions are generally >10, (with exact version depending on your JupyterLab version, i.e. either 10 or 12; 14 might work too).
First install nodejs latest version `conda install nodejs -c conda-forge --repodata-fn=repodata.json` Then install jupyterlab extension: `jupyter labextension install jupyterlab-plotly@4.14.3` Then **RESTART JUPYTER LAB**
6,660
36,227,688
I have a python program that I usually run as a part of a package: ``` python -m mymod.client ``` in order to deal with relative imports inside "mymod/client.py." How do I run this with pdb - the python debugger. The following does not work: ``` python -m pdb mymod.client ``` It yields the error: ``` Error: mymod.client does not exist ``` EDIT #1 (to address possible duplicity of question) --------------------------------------------------- My question isn't really about running two modules simultaneously python, rather it is about how to use pdb on a python script that has relative imports inside it and which one usually deals with by running the script with "python -m." Restated, my question could then be, how do I use pdb on such a script while not having to change the script itself just to have it run with pdb (ie: preserving the relative imports inside the script as much as possible). Shouldn't this be possible, or am I forced to refactor in some way if I want to use pdb? If so what would be the minimal changes to the structure of the script that I'd have to introduce to allow me to leverage pdb. In summary, I don't care *how* I run the script, just so long as I can get it working with pdb without changing it's internal structure (relative imports, etc) too much.
2016/03/25
[ "https://Stackoverflow.com/questions/36227688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3357587/" ]
I *think* I have a solution. Run it like this: ``` python -m pdb path/mymod/client.py arg1 arg2 ``` that will run it as a script, but will not treat it as a package. At the top of client.py, the first line should be: ``` import mymod ``` That will get the package itself loaded. I am still playing with this, but it seems to work so far.
This is not possible. Though unstated in the documentation, `python -m` accepts only a single module, so it will not run one module (pdb) against another this way. (Note: since Python 3.7, `pdb` supports its own `-m` flag, so `python -m pdb -m mymod.client` does work.)
6,661
56,798,328
I am just beginning to use python and could use some help! I was working on a rock, paper, scissors game and I wanted to add a restart option once either the human or computer reaches 3 wins. I have looked all over for some answers, but all the other code I looked at seemed way out of my league or extremely different from what I wrote. I haven't tried using def and classes, which I saw a lot of and which made it look very simple. I know I'm probably going about this in a really roundabout way, but I just want to finish this without completely copying someone's code.

```
import random

moves = ["Rock", "Paper", "Scissors"]
player_score = 0
computer_score = 0
draws = 0
keep_playing = True

while keep_playing == True:
    cmove = random.choice(moves)
    pmove = input("What is your move: Rock, Paper, or Scissors?")
    print("The computer chose", cmove)

    #Logic to game
    if cmove == pmove:
        print("It's a DRAW!")
    elif pmove == "Rock" and cmove == "Scissors":
        print("--Player Wins!--")
    elif pmove == "Rock" and cmove == "Paper":
        print("--Computer Wins!--")
    elif pmove == "Paper" and cmove == "Rock":
        print("--Player Wins!--")
    elif pmove == "Paper" and cmove == "Scissors":
        print("--Computer Wins!--")
    elif pmove == "Scissors" and cmove == "Paper":
        print("--Player Wins!--")
    elif pmove == "Scissors" and cmove == "Rock":
        print("--Computer Wins!--")

    #Scoreboard
    if pmove == cmove:
        draws = draws + 1
        print("Player:" + str(player_score) + ' | ' + "Computer:" + str(computer_score) + ' | ' + "Draws:" + str(draws))
    if pmove == "Rock" and cmove == "Scissors":
        player_score = player_score + 1
        print("Player:" + str(player_score) + ' | ' + "Computer:" + str(computer_score) + ' | ' + "Draws:" + str(draws))
    if pmove == "Rock" and cmove == "Paper":
        computer_score = computer_score + 1
        print("Player:" + str(player_score) + ' | ' + "Computer:" + str(computer_score) + ' | ' + "Draws:" + str(draws))
    if pmove == "Paper" and cmove == "Rock":
        player_score = player_score + 1
        print("Player:" + str(player_score) + ' | ' + "Computer:" + str(computer_score) + ' | ' + "Draws:" + str(draws))
    if pmove == "Paper" and cmove == "Scissors":
        computer_score = computer_score + 1
        print("Player:" + str(player_score) + ' | ' + "Computer:" + str(computer_score) + ' | ' + "Draws:" + str(draws))
    if pmove == "Scissors" and cmove == "Paper":
        player_score = player_score + 1
        print("Player:" + str(player_score) + ' | ' + "Computer:" + str(computer_score) + ' | ' + "Draws:" + str(draws))
    if pmove == "Scissors" and cmove == "Rock":
        computer_score = computer_score + 1
        print("Player:" + str(player_score) + ' | ' + "Computer:" + str(computer_score) + ' | ' + "Draws:" + str(draws))

    #Win/lose restart point?
    if player_score == 3:
        print("-----You Win!-----")
        break
    if computer_score == 3:
        print("-----You Lose!-----")
        break
```

I want the code to end saying "You Win!" or "You Lose!" and then ask for input as to whether or not they want to restart; if yes, it resets the scores and keeps going, and if no, it breaks.
2019/06/27
[ "https://Stackoverflow.com/questions/56798328", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11606900/" ]
You already have the loop to do this. So when you want to "restart" your game, you really just need to reset the scores. Starting from your win/lose conditions:

```
    #Win/lose restart point?
    if player_score == 3:
        print("-----You Win!-----")
        replay = input("Would you like to play again?")
        if replay.upper().startswith('Y'):
            player_score = 0
            computer_score = 0
            draws = 0
        else:
            keep_playing = False
    if computer_score == 3:
        print("-----You Lose!-----")
        replay = input("Would you like to play again?")
        if replay.upper().startswith('Y'):
            player_score = 0
            computer_score = 0
            draws = 0
        else:
            keep_playing = False
```
You need to get user input in the `while` loop and set `keep_playing` accordingly.

```
while keep_playing == True:
    cmove = random.choice(moves)
    pmove = input("What is your move: Rock, Paper, or Scissors?")
    print("The computer chose", cmove)

    #Logic to game
    if cmove == pmove:
        print("It's a DRAW!")
    elif pmove == "Rock" and cmove == "Scissors":
        print("--Player Wins!--")
    elif pmove == "Rock" and cmove == "Paper":
        print("--Computer Wins!--")
    elif pmove == "Paper" and cmove == "Rock":
        print("--Player Wins!--")
    elif pmove == "Paper" and cmove == "Scissors":
        print("--Computer Wins!--")
    elif pmove == "Scissors" and cmove == "Paper":
        print("--Player Wins!--")
    elif pmove == "Scissors" and cmove == "Rock":
        print("--Computer Wins!--")

    # new code
    play_again = input("Would you like to play again?")
    # this will be set to False if the user's input doesn't start with 'y'; the loop will then exit
    keep_playing = play_again.lower().startswith('y')
```
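Both answers hinge on interpreting the yes/no reply; that check can be factored into a tiny helper so it is easy to test on its own (the helper name is mine):

```python
def wants_replay(answer):
    # any reply starting with y/Y ("y", "Yes", "  yeah") counts as playing again
    return answer.strip().upper().startswith("Y")

print(wants_replay("Yes"))   # True
```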
6,662
57,420,171
I am writing a batch file that lies somewhere in a folder structure alongside a .venv folder (python virtual environment) ``` KnownFolderName | +-- 1.0.0 | | | +-- .venv | | | +-- folder | | | +-- folder | | | +-- batch.bat | +-- 1.0.1 | +-- .venv | +-- folder | +-- batch.bat ``` I want to be able to navigate to .venv from wherever the batch file starts. If you do this manually you can just cd .. until you reach x.y.z then cd .venv But I can't work out how to automate that in a batch file. findstr doesn't return a substring match which was going to be my way to get to KnownFolderName/x.y.z directly. Maybe a looping if would work?
2019/08/08
[ "https://Stackoverflow.com/questions/57420171", "https://Stackoverflow.com", "https://Stackoverflow.com/users/869598/" ]
Got it. Of course as soon as I post the question the answer appears! After trying all sorts of ways to extract the string from the cwd I found that just looping backwards is the way: ``` :loop IF EXIST .venv ( cd .venv\Scripts ) ELSE ( cd .. goto loop ) ``` Then you can return to the original location with the standard: ``` %~dp0 ```
What about the following approach which makes use of string manipulation features: ```cmd @echo off set "BATCH=%~f0" echo ABSOLUTE BATCH PATH: "%BATCH%" set "RELATIVE=%BATCH:*\KnownFolderName\=%" echo RELATIVE BATCH PATH: "%RELATIVE%" set "TEST=%BATCH%|" call set "ROOT=%%TEST:\%RELATIVE%|=%%" echo ROOT DIRECTORY PATH: "%ROOT%" set "VERSION=%RELATIVE:\=" & rem "%" echo VERSION NUMBER NAME: "%VERSION%" set "TARGET=%ROOT%\%VERSION%\.venv" echo DESIRED FOLDER PATH: "%TARGET%" if exist "%TARGET%\" (echo ^(WHICH EXISTS^)) else (echo ^(WHICH DOES NOT EXIST^)) pause ```
6,663
68,708,540
I want to generate a random number between 0 and 1 in **python code** but keep only 2 digits after the decimal point. Example:

```
0.22
0.25
0.9
```

I have searched many sources but have not found a solution. Can you help me?
2021/08/09
[ "https://Stackoverflow.com/questions/68708540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16462598/" ]
Want this? ``` import random round(random.random(), 2) ```
Try this `print(round(random.random(), 2))`
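One caveat: `round` keeps *at most* two decimals, so a value like 0.9 prints without a trailing zero, and rounding makes the edge values 0.00 and 1.00 less likely than the others. A sketch of an alternative that draws a two-decimal value directly, plus formatting for display:

```python
import random

x = round(random.random(), 2)      # e.g. 0.22, 0.25, 0.9
y = random.randint(0, 100) / 100   # uniform over the 101 values 0.00 .. 1.00

# when printing, fixed-point formatting keeps exactly two digits
print(f"{y:.2f}")
```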
6,664
49,544,701
I'm relatively new to python, and I am trying to build a program that can visit a website using a proxy from a list of proxies in a text file, and continue doing so with each proxy in the file until they're all used. I found some code online and tweaked it to my needs, but when I run the program, the proxies are successfully used, but they don't get used in order. For whatever reason, the first proxy gets used twice in a row, then the second proxy gets used, then the first again, then third, blah blah. It doesn't go in order one by one. The proxies in the text file are organized as such: 123.45.67.89:8080 987.65.43.21:8080 And so on. Here's the code I am using: ``` from fake_useragent import UserAgent import pyautogui import webbrowser import time import random import random import requests from selenium import webdriver import os import re proxylisttext = 'proxylistlist.txt' useragent = UserAgent() profile = webdriver.FirefoxProfile() profile.set_preference("network.proxy.type", 1) profile.set_preference("network.proxy_type", 1) def Visiter(proxy1): try: proxy = proxy1.split(":") print ('Visit using proxy :',proxy1) profile.set_preference("network.proxy.http", proxy[0]) profile.set_preference("network.proxy.http_port", int(proxy[1])) profile.set_preference("network.proxy.ssl", proxy[0]) profile.set_preference("network.proxy.ssl_port", int(proxy[1])) profile.set_preference("general.useragent.override", useragent.random) driver = webdriver.Firefox(firefox_profile=profile) driver.get('https://www.iplocation.net/find-ip-address') time.sleep(2) driver.close() except: print('Proxy failed') pass def loadproxy(): try: get_file = open(proxylisttext, "r+") proxylist = get_file.readlines() writeused = get_file.write('used') count = 0 proxy = [] while count < 10: proxy.append(proxylist[count].strip()) count += 1 for i in proxy: Visiter(i) except IOError: print ("\n[-] Error: Check your proxylist path\n") sys.exit(1) def main(): loadproxy() if __name__ == '__main__': main() 
``` And so as I said, this code successfully navigates to the ipchecker site using the proxy, but then it doesn't go line by line in order, the same proxy will get used multiple times. So I guess more specifically, how can I ensure the program iterates through the proxies one by one, without repeating? I have searched exhaustively for a solution, but I haven't been able to find one, so any help would be appreciated. Thank you.
2018/03/28
[ "https://Stackoverflow.com/questions/49544701", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9566525/" ]
Your problem is with these nested loops, which don't appear to be doing what you want:

```
proxy = []
while count < 10:
    proxy.append(proxylist[count].strip())
    count += 1
    for i in proxy:
        Visiter(i)
```

The outer loop builds up the `proxy` list, adding one value each time until there are ten. After each value has been added, the inner loop iterates over the `proxy` list that has been built so far, visiting each item. I suspect you want to unnest the loops. That way, the `for` loop will only run after the `while` loop has completed, and so it will only visit each proxy once. Try something like this:

```
proxy = []
while count < 10:
    proxy.append(proxylist[count].strip())
    count += 1

for i in proxy:
    Visiter(i)
```

You could simplify that into a single loop, if you want. For instance, using `itertools.islice` to handle the bounds checking, you could do:

```
for proxy in itertools.islice(proxylist, 10):
    Visiter(proxy.strip())
```

You could even run that directly on the file object (since files are iterable) rather than calling `readlines` first, to read it into a list. (You might then need to add a `seek` call on the file before writing `"used"`, but you may need that anyway, some OSs don't allow you to mix reads and writes without seeking in between.)
```
while count < 10:
    proxy.append(proxylist[count].strip())
    count += 1
    for i in proxy:
        Visiter(i)
```

The for loop within the while loop means that every time you hit `proxy.append` you'll call `Visiter` for *every* item already in `proxy`. That might explain why you're getting multiple hits per proxy. As far as the out of order issue, I'm not sure why `readlines()` isn't maintaining the line order of your file, but I'd try something like:

```
with open('filepath', 'r') as file:
    for line in file:
        do_stuff_with_line(line)
```

With the above you don't need to hold the whole file in memory at once either, which can be nice for big files. Good luck!
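To make the fix concrete, here is a small sketch of a loader that reads the proxy file line by line, skips blanks, and drops duplicates while keeping order. The function name, the `limit` parameter, and the deduplication step are my additions for illustration, not part of the original script:

```python
def load_proxies(path, limit=10):
    """Return up to `limit` unique, non-empty proxy strings from the file at `path`."""
    seen = set()
    proxies = []
    with open(path) as f:
        for line in f:
            proxy = line.strip()
            if proxy and proxy not in seen:
                seen.add(proxy)
                proxies.append(proxy)
            if len(proxies) >= limit:
                break
    return proxies
```

Each proxy can then be passed to `Visiter` exactly once with a plain `for` loop over the returned list.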
6,666
4,778,679
I have no way to upgrade to Python 2.7 or 3.1, so I am stuck with Python 2.6 on my Ubuntu 10.04 machine. Will I still be able to find a host that supports Python 2.6? Is using Python 2.6 still considered outdated or bad practice?
2011/01/24
[ "https://Stackoverflow.com/questions/4778679", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62617/" ]
2.6 will be around for a long time. There are many machines that still run even 2.4, so you're fine.
Python3.1 is in the repositories for 10.04

```
$ apt-cache show python3
Package: python3
Priority: optional
Section: python
Installed-Size: 76
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Matthias Klose <doko@debian.org>
Architecture: all
Source: python3-defaults
Version: 3.1.2-0ubuntu1
Depends: python3.1 (>= 3.1.2), python3-minimal (= 3.1.2-0ubuntu1)
Suggests: python3-doc (>= 3.1.2-0ubuntu1), python3-tk (>= 3.1.2-0ubuntu1), python3-profiler (>= 3.1.2-0ubuntu1)
Filename: pool/main/p/python3-defaults/python3_3.1.2-0ubuntu1_all.deb
Size: 11096
MD5sum: 81f3f3bf790f5d7756b76c8d92fcea86
SHA1: 32e12dc7f9500456e063f22645c1cfed76b8845c
SHA256: 0f541352ace2fcf1929a93320ffbe2f1de4e1d140bbe70a7c5a709403b73341c
Description: An interactive high-level object-oriented language (default python3 version)
 Python, the high-level, interactive object oriented language, includes an extensive class library with lots of goodies for network programming, system administration, sounds and graphics.
 .
 This package is a dependency package, which depends on Debian's default Python version (currently v3.1).
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Origin: Ubuntu
Supported: 3y
```
6,667
61,804,295
I am using Python to do some data cleaning, and I've used the datetime module to split date time and tried to create another column with just the time. My script works, but it just takes the last value of the data frame. Here is the code:

```
import datetime

i = 0
for index, row in df.iterrows():
    date = datetime.datetime.strptime(df.iloc[i, 0], "%Y-%m-%dT%H:%M:%SZ")
    df['minutes'] = date.minute
    i = i + 1
```

This is the dataframe: [Output](https://i.stack.imgur.com/uVQsA.png)
2020/05/14
[ "https://Stackoverflow.com/questions/61804295", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13543531/" ]
`df['minutes'] = date.minute` reassigns the entire `'minutes'` column with the scalar value `date.minute` from the last iteration. You don't need a loop, as in 99% of cases when using pandas. You can use vectorized assignment, just replace `'source_column_name'` with the name of the column with the source data.

```
df['minutes'] = pd.to_datetime(df['source_column_name'], format='%Y-%m-%dT%H:%M:%SZ').dt.minute
```

It is also most likely that you won't need to specify `format` as `pd.to_datetime` is fairly smart. Quick example:

```
df = pd.DataFrame({'a': ['2020.1.13', '2019.1.13']})
df['year'] = pd.to_datetime(df['a']).dt.year
print(df)
```

outputs

```
           a  year
0  2020.1.13  2020
1  2019.1.13  2019
```
Seems like you're trying to get the time column from the datetime which is in string format. That's what I understood from your post. Could you give this a shot?

```
from datetime import datetime
import pandas as pd

def get_time(date_cell):
    dt = datetime.strptime(date_cell, "%Y-%m-%dT%H:%M:%SZ")
    return datetime.strftime(dt, "%H:%M:%SZ")

df['time'] = df['date_time'].apply(get_time)
```
6,672
66,402,926
I am new to Python and I want to know how to create a specific number of empty columns. Let's say I want to create 20 columns. What I tried:

```
import pandas as pd

num = 20
for i in range(num):
    df = df + pd.DataFrame(columns=['col'+str(i)])
```

But I got the unwanted result:

```
Empty DataFrame
Columns: [col0, col1, col10, col11, col12, col13, col14, col15, col16, col17, col18, col19, col2, col3, col4, col5, col6, col7, col8, col9]
Index: []
```

Desired result:

```
Empty DataFrame
Columns: [col0, col1, col2, ..., col19]
Index: []
```

How to rectify it? Any help will be much appreciated!
2021/02/27
[ "https://Stackoverflow.com/questions/66402926", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13230133/" ]
Assuming you wish to create an empty dataframe, the solution is to remove the for loop, and use a list comprehension for the column names:

```
import pandas as pd

num = 20
df = pd.DataFrame(columns=['col' + str(i) for i in range(num)])
```
Instead of adding dataframes together, let's just create a dictionary with the mappings you'd like, and create one dataframe:

```
data = {"col" + str(i): [] for i in range(20)}
df = pd.DataFrame(data)

# Empty DataFrame
# Columns: [col0, col1, col2, col3, col4, col5, col6, col7, col8, col9, col10, col11, col12, col13, col14, col15, col16, col17, col18, col19]
# Index: []
```
6,673
35,328,941
In python / numpy - is there a way to build an expression containing factorials - but since in my scenario, many factorials will be duplicated or reduced, wait until I instruct the run time to compute it. Let's say `F(x) := x!` And I build an expression like `(F(6) + F(7)) / F(4)` - I can greatly accelerate this, even do it in my head by doing ``` (F(6) * (1 + 7)) / F(4) = 5 * 6 * 8 = 240 ``` Basically, I'm going to generate such expressions and would like the computer to be smart, not compute all factorials by multiplying to 1, i.e using my example not actually do ``` (6*5*4*3*2 + 7*6*5*4*3*2) / 4*3*2 ``` I've actually started developing a Factorial class, but I'm new to python and numpy and was wondering if this is a problem that's already solved.
2016/02/11
[ "https://Stackoverflow.com/questions/35328941", "https://Stackoverflow.com", "https://Stackoverflow.com/users/127257/" ]
As @Oleg has suggested, you can do this with sympy:

```
import numpy as np
import sympy as sp

# preparation
n = sp.symbols("n")
F = sp.factorial

# create the equation
f = (F(n) + F(n + 1)) / F(n - 2)
print(f)
# => (factorial(n) + factorial(n + 1))/factorial(n - 2)

# reduce it
f = f.simplify()
print(f)
# => n*(n - 1)*(n + 2)

# evaluate it in SymPy
# Note: very slow!
print(f.subs(n, 6))
# => 240

# turn it into a numpy function
# Note: much faster!
f = sp.lambdify(n, f, "numpy")
a = np.arange(2, 10)
print(f(a))
# => [  8  30  72 140 240 378 560 792]
```
Maybe you could look into increasing the efficiency using table lookups if space efficiency isn't a major concern. It would greatly reduce the number of repeated calculations. The following isn't terribly efficient, but it's the basic idea.

```
cache = {1: 1}

def cached_factorial(n):
    if (n in cache):
        return cache[n]
    else:
        result = n * cached_factorial(n - 1)
        cache[n] = result
        return result
```
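For what it's worth, the same lookup-table idea is available from the standard library on Python 3.2+ via `functools.lru_cache`; this is a sketch of an equivalent memoized factorial, not code from the question:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache, like the hand-rolled dict above
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)
```

Repeated calls then hit the cache instead of recomputing the whole product chain, and `factorial.cache_info()` lets you inspect the hit rate.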
6,674
34,394,578
I am using Python 2.7.8. I have a website which contains text written as a bullet list, which is an ordered list, aka <**ol**>. I want to extract that text, i.e.

```
Coffee
Tea
Milk
```

My html code:

```
<!DOCTYPE html>
<html>
<body>

<ol type="I">
  <li>Coffee</li>
  <li>Tea</li>
  <li>Milk</li>
</ol>

<ol type="a">
  <li>Coffee</li>
  <li>Tea</li>
  <li>Milk</li>
</ol>

<ol type="1">
  <li>Coffee</li>
  <li>Tea</li>
  <li>Milk</li>
</ol>

</body>
</html>
```

The code I keep trying is not working, because along the way I keep getting errors. Python code:

```
import urllib2
from urllib2 import Request
import re
from bs4 import BeautifulSoup

url = "http://www.sanfoundry.com/c-programming-questions-answers-variable-names-1/"
#url="http://www.sanfoundry.com/c-programming-questions-answers-variable-names-2/"
req = Request(url)
resp = urllib2.urlopen(req)
htmls = resp.read()
c = 0

soup = BeautifulSoup(htmls, 'lxml')
#skipp portion of code
res2 = soup.find('h1', attrs={"class": "entry-title"})
br = soup.find('span', attrs={'class': 'IL_ADS'})
br = soup.find('p').text

# separate title
for question in soup.find_all(text=re.compile(r"^\d+\.")):
    answers = [br.next_sibling.strip() for br in question.find_next_siblings("br")]
    #s = ''.join([i for i in question if not i.isdigit()])
    if not answers:
        break
    print question.encode('utf-8')
    ul = question.find_next_sibling("ul")
    print(ul.get_text(' ', strip=True))
```

But when I run this code I also get an error:

```
Traceback (most recent call last):
  File "C:\Users\DELL\Desktop\python\s\fyp\crawldataextraction.py", line 47, in <module>
    print(ul.get_text(' ', strip=True))
AttributeError: 'NoneType' object has no attribute 'get_text'
```
2015/12/21
[ "https://Stackoverflow.com/questions/34394578", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3440716/" ]
You can see why the BeautifulSoup call does not work for your variable `ul` by inserting this line while commenting out the line you previously had.

```
print ul
"""print(ul.get_text(' ', strip=True))"""
```

What is happening is that your variable `ul` is storing the string:

1. C99 standard guarantees uniqueness of \_\_\_\_ characters for internal names. None
2. C99 standard guarantess uniqueness of \_\_\_\_\_ characters for external names. None
3. Which of the following is not a valid variable name declaration? None
4. Which of the following is not a valid variable name declaration? None
5. Variable names beginning with underscore is not encouraged. Why? None
6. All keywords in C are in None
7. Variable name resolving (number of significant characters for uniqueness of variable) depends on None
8. Which of the following is not a valid C variable name? None
9. Which of the following is true for variable names in C? None

But since there is no ul tag for BeautifulSoup to find inside of `ul`, your `ul.get_text` method does not work. So in this case, the way I would go about stripping the spaces would be to use regex.

---

Answer about removing numbers and dots.
```
import urllib2
from urllib2 import Request
import re
from bs4 import BeautifulSoup

url = "http://www.sanfoundry.com/c-programming-questions-answers-variable-names-1/"
#url="http://www.sanfoundry.com/c-programming-questions-answers-variable-names-2/"
req = Request(url)
resp = urllib2.urlopen(req)
htmls = resp.read()
c = 0

soup = BeautifulSoup(htmls, 'lxml')
# skipp portion of code
res2 = soup.find('h1', attrs={"class": "entry-title"})
br = soup.find('span', attrs={'class': 'IL_ADS'})
br = soup.find('p').text

# separate title
for question in soup.find_all(text=re.compile(r"^\d+\.")):
    answers = [br.next_sibling.strip() for br in question.find_next_siblings("br")]
    # s = ''.join([i for i in question if not i.isdigit()])
    if not answers:
        break
    ul = question.encode('utf-8')
    ol = re.compile('[\d][.]')
    ol = ol.sub(' ', str(ul))
    print ol
    """print(ul.get_text(' ', strip=True))"""
```

Output:

```
  C99 standard guarantees uniqueness of ____ characters for internal names.
  C99 standard guarantess uniqueness of _____ characters for external names.
  Which of the following is not a valid variable name declaration?
  Which of the following is not a valid variable name declaration?
  Variable names beginning with underscore is not encouraged. Why?
  All keywords in C are in
  Variable name resolving (number of significant characters for uniqueness of variable) depends on
  Which of the following is not a valid C variable name?
  Which of the following is true for variable names in C?
```

I used regex to compile the pattern of a number followed by a dot, then used the re.sub() function to replace it with a space.
I never used the BeautifulSoup, but I do this with a regular expression:

```
import re

html = """<!DOCTYPE html>
<html>
<body>

<ol type="I">
  <li>Coffee</li>
  <li>Tea</li>
  <li>Milk</li>
</ol>

<ol type="a">
  <li>Coffee</li>
  <li>Tea</li>
  <li>Milk</li>
</ol>

<ol type="1">
  <li>Coffee</li>
  <li>Tea</li>
  <li>Milk</li>
</ol>

</body>
</html>"""

regexp = re.compile('<li>(\w+)<\/li>')
result = regexp.findall(html)
for i in result:
    print(i)
```
6,675
4,387,847
Assume I have a class that implements several methods. We want a user to choose which methods to run among the existing methods, or he can decide to add any method on the fly. For example:

```
class RemoveNoise():
    pass
```

then methods are added as wanted

```
RemoveNoise.raw = Raw()
RemoveNoise.bias = Bias()
etc
```

he can even write a new one

```
def new():
    pass
```

and also add the `new()` method

```
RemoveNoise.new = new
run(RemoveNoise)
```

`run()` is a function that evaluates such a class. I want to save the class\_with\_the\_methods\_used and link this class to the object created. Any hints on how to solve this in python?
2010/12/08
[ "https://Stackoverflow.com/questions/4387847", "https://Stackoverflow.com", "https://Stackoverflow.com/users/535019/" ]
Functions can be added to a class at runtime.

```
class Foo(object):
    pass

def bar(self):
    print 42

Foo.bar = bar
Foo().bar()
```
There is no solving needed, you just do it. Here is your code, with the small changes needed:

```
class RemoveNoise():
    pass

RemoveNoise.raw = Raw
RemoveNoise.bias = Bias

def new(self):
    pass

RemoveNoise.new = new

instance = RemoveNoise()
```

It's that simple. Python is wonderful. Why on earth you would need this is beyond me, though.
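One detail neither answer spells out: assigning the function to the class makes it visible on every instance, while `types.MethodType` can bind it to a single instance only. A small sketch (the method body and return value are invented for demonstration):

```python
import types

class RemoveNoise(object):
    pass

def new(self):
    return "new method called"

RemoveNoise.new = new  # class-level: visible on all instances
instance = RemoveNoise()
other = RemoveNoise()

# instance-level: bound to this one object only
instance.only_here = types.MethodType(new, instance)
```

Here `instance.new()`, `other.new()`, and `instance.only_here()` all work, but `other` has no `only_here` attribute, which is one way to keep track of exactly which methods a given object was run with.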
6,676
53,469,407
I got into automating tasks on the web using python. I have tried requests/urllib3/requests-html but they don't get me the right elements, because they get only the `html` (not the updated version with `javascript`). Some recommended Selenium, but it opens a browser with the `webdriver`. I need a way to get elements after they get updated, and maybe after they get updated for a second time. The reason I don't want it to open a browser is I'm running my script on a hosting-scripts service.
2018/11/25
[ "https://Stackoverflow.com/questions/53469407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9534986/" ]
Don't overthink this. It seems that everything you have in your object gets allocated there, so use smart pointers: ``` std::vector<std::unique_ptr<Object>> pointer_vector; ```
Every object that you create with `new` will have to be `delete`ed at some point. It's your responsibility to do that. In your case the easiest solution is to add it in the destructor for your `Problem` class.

```
Problem::~Problem() {
    for (auto ptr : pointer_vector)
        delete ptr;
}
```

If you ever remove objects from the vector, you have to make sure that they are `delete`d there as well. **Please note:** the proper way to do this is, however, to use smart pointers, as Matthieu already said in his answer.
6,679
58,194,852
I am trying to load a pkl file,

```
pkl_file = open(sys.argv[1], 'rb')
world = pickle.load(pkl_file)
```

but I get an error from these lines

```
Traceback (most recent call last):
  File "E:/python/test.py", line 186, in <module>
    world = pickle.load(pkl_file)
ModuleNotFoundError: No module named 'numpy.core.multiarray\r'
```

I am using Windows 10, Python 3.7, and have installed four packages (numpy 1.17.2, opencv-python 4.1.1.26, pip 19.2.3, setuptools 41.2.0). I have tried to change "rb" to "r", but still got the error. How can I fix this?
2019/10/02
[ "https://Stackoverflow.com/questions/58194852", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7548018/" ]
I think there are two problems here. First, your pickle is or contains a [NumPy](https://numpy.org/) object, which is not part of the standard library. Therefore you must ensure that NumPy is installed into your current Python environment and imported **before** you try to load the pickled object. Depending on your setup, installation may be as simple as,

```
pip install numpy
```

Then you must add the line,

```py
import numpy as np
```

to the top of your script. Second, it looks like Python is encountering [this issue](https://stackoverflow.com/questions/8527241/python-pickle-fix-r-characters-before-loading), where your binary file was erroneously saved as text on Windows, which resulted in each `'\n'` being converted to `'\r\n'`. To fix this, you must convert each `'\r\n'` back to `'\n'`. So long as the file isn't **huge**, this usually isn't very painful. Here is a relatively complete example:

```py
import pickle
import sys

import numpy as np

src = sys.argv[1]  # path to your file
data = open(src).read().replace('\r\n', '\n')  # read and replace file contents
dst = src + ".tmp"
open(dst, "w").write(data)  # save a temporary file
world = pickle.load(open(dst, "rb"), encoding='latin1')
```
Ok, I just had to figure this out for myself, and I solved it. All you have to do is change all the "\r\n" to "\n". You can do this in multiple ways. You can go into Notepad++ and change line endings from CR LF to just LF. Or programmatically you can do ``` open(newfile, 'w', newline = '\n').write(open(oldfile, 'r').read()) ```
6,680
31,575,359
I recently downloaded some software that requires one to change to the directory with python files, and run `python setup.py install --user` in the Terminal. One then checks whether the code is running correctly by trying `from [x] import [y]` This works on my Terminal. However, when I then try `from [x] import [y]` in the notebook, it never works. So, this makes me think I must install the `setup.py` file within the iPython notebook. How does one do this?
2015/07/22
[ "https://Stackoverflow.com/questions/31575359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4596596/" ]
What about using the following selector: `input[id^='something_stuff_'][id$='_work']` It will get inputs with id starting with "something\_stuff\_" and finishing with "\_work".
An approach to this problem would be to use classes instead of ids and have things that are styled the same be classed the same. For example:

```
<input id="something_stuff_01_work" class="input_class">
<input id="something_stuff_02_work" class="input_class">
<input id="something_stuff_03_work" class="input_class">
```

Then select the class instead of the id.

```
.input_class {
  sweetstyleofawesomeness;
}
```
6,681
43,994,599
Thanks to the [Python Library](https://docs.python.org/2/library/telnetlib.html "Library") I was able to use their example to telnet to Cisco switches. I am using this for learning purposes, specifically learning Python. However, although all the code seems generally easy to read, I am a bit confused as to the following:

1. Why use the if statement below?
2. Why use the "\n" after the username and password write methods?
3. Why am I not getting the output on my bash terminal when the changes are in fact committed and successful?

```
HOST = "172.16.1.76"
user = raw_input("Enter your Telnet username : ")
password = getpass.getpass()

tn = telnetlib.Telnet(HOST)
tn.read_until("Username: ")
tn.write(user + '\n')            <----- 2
if password:                     <----- 1
    tn.read_until("Password: ")
    tn.write(password + "\n")    <------ 2

tn.write("show run \n")
time.sleep(5)
output = tn.read_all()           <----- 3
print output
print "=" * 30
print "Configuration Complete."
```

I am not sure why the if statement is used above; typically once you input the username, you get the password prompt right afterward. Why can't we just type:

```
tn.read_until("Username: ")
tn.write(user + '\n')
tn.read_until("Password: ")
tn.write(password + "\n")
```

As for the second point, why use the '\n' after the password and username in the write method if we are going to hit enter after we add them anyway?
2017/05/16
[ "https://Stackoverflow.com/questions/43994599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1736707/" ]
It looks like `PyCrypto` is [not being maintained currently](https://github.com/dlitz/pycrypto/issues/168). So, it's better you switch to [PyCryptodome](https://pypi.python.org/pypi/pycryptodome).

```
pip install pycryptodome
```

If you still want to use PyCrypto you could still try:

> <https://packaging.python.org/extensions/#setting-up-a-build-environment-on-windows>
>
> <https://stackoverflow.com/a/33338523/887007>
>
> <https://stackoverflow.com/a/27327236/887007>
If doing `python -m Cryptodome.SelfTest` is giving you an error, you can do: ``` pip uninstall pycryptodome ``` and then: ``` easy_install pycryptodome ``` Even after this, running `python -m Cryptodome.SelfTest` was giving me an error, but when I re-ran the file, it worked.
6,686
18,615,524
I'm trying to get the valid python list from the response of a server like you can see below: > > window.\_\_search.list=[{"order":"1","base":"LAW","n":"148904","access":{"css":"avail\_yes","title":"\u042 > 2\u0435\u043a\u0441\u0442\u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u0430\u0434\u043e\u0441\u0442\u0443\u043f\u0435\u043d"},"title":"\"\u0410\u0440\u0431\u0438\u0442\u0440\u0430\u0436\u043d\u044b\u0439\u043f\u0440\u043e\u0446\u0435\u0441\u0441\u0443\u0430\u043b\u044c\u043d\u044b\u0439\u043a\u043e\u0434\u0435\u043a\u0441\u0420\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u0439\u0424\u0435\u0434\u0435\u0440\u0430\u0446\u0438\u0438\" \u043e\u0442 24.07.2002 N 95-\u0424\u0417 (\u0440\u0435\u0434. \u043e\u0442 02.07.2013) (\u0441 \u0438\u0437\u043c. \u0438 \u0434\u043e\u043f.,\u0432\u0441\u0442\u0443\u043f\u0430 \u044e\u0449\u0438\u043c\u0438\u0432 \u0441\u0438\u043b\u0443 \u0441 01.08.2013)"}, ... }]; > > > I did it through cutting off "window.\_\_search.list=" and ";" from the string using `data = json.loads(re.search(r"(?=\[)(.*?)\s*(?=\;)", url).group(1))` and then it was looked like standard JSON: > > [{u'access': {u'css': u'avail\_yes', u'title': u'\u0422\u0435\u043a\u0441\u0442\u0434\u043e\u043a\u04 > 43\u043c\u0435\u043d\u0442\u0430 \u0434\u043e\u0441\u0442\u0443\u043f\u0435\u043d'},u'title': u'"\u0410\u0440\u0431\u0438\u0442\u0440\u0430\u0436\u043d\u044b\u0439\u043f\u0440\u043e\u0446\u0435\u0441\u0441\u0443\u0430\u043b\u044c\u043d\u044b\u0439\u043a\u043e\u0434\u0435\u043a\u0441\u0420\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u0439\u0424\u0435\u0434\u0435\u0440\u0430\u0446\u0438\u0438" \u043e\u0442 24.07.2002 N 95-\u0424\u0417 (\u04 > 40\u0435\u0434. \u043e\u0442 02.07.2013) (\u0441 \u0438\u0437\u043c. \u0438 \u0434\u043e > \u043f.,\u0432\u0441\u0442\u0443\u043f\u0430\u044e\u0449\u0438\u043c\u0438 \u0432 \u0441 > \u0438\u043b\u0443 \u0441 01.08.2013)', u'base': u'LAW', u'order': u'1', u'n': u'148904'}, ... 
> }]

But sometimes, while iterating over other urls, I get an error like this:

```
  File "/Developer/Python/test.py", line 123, in order_search
    data = json.loads(re.search(r"(?=\[)(.*?)\s*(?=\;)", url).group(1))
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 326, in loads
    return _default_decoder.decode(s)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 366, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 382, in raw_decode
    obj, end = self.scan_once(s, idx)
ValueError: Invalid \uXXXX escape: line 1 column 20235 (char 20235)
```

How can I fix it, or maybe there's another way to get valid JSON (desirably using native libraries)?
2013/09/04
[ "https://Stackoverflow.com/questions/18615524", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1621838/" ]
Probably, your regular expression has found the char ';' somewhere in the middle of the response, and because of this you get an error: using your regular expression, you might have received an incomplete, cropped response, and that's why you could not convert it into JSON. Yes, I agree with user RickyA that sometimes, using native tools, the code will be easier to read than trying to make up a RegEx. But here, I'd still use a regular expression, something like this:

```
data = re.search(r'(?=\[)(.*?)[\;]*$', response).group(1)
```

> ```
> /(?=\[)(.*?)[\;]*$/
> (?=\[)   Positive Lookahead
>   \[     Literal [
> (.*?)    1st Capturing group
>   .      0 to infinite times [lazy] Any character (except newline)
> [\;]     Char class, 0 to infinite times [greedy], matches:
>   \;     The character ;
> $        End of string
> ```

I believe you meant that the variable '**url**' holds a response from a server, so maybe it's better to use the variable name '**response**' instead of '**url**'. And, if you have trouble with RegEx, I advise you to use a regular expression editor, like [RegEx 101](http://regex101.com). This is an online regular expression editor, which explains each block of the inputted expression.
What about:

```
response = response.strip()                #get rid of whitespaces
response = response[response.find("["):]   #trim everything before the first '['
if response[-1:] == ";":                   #if last char == ";"
    response = response[:-1]               #trim it
```

Seems like a big overkill to do this with regex.
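Putting the trimming and the parsing together, a minimal end-to-end sketch (the sample payload below is invented for illustration, it is not the real server response):

```python
import json

def extract_list(response):
    # keep everything from the first '[' on, then drop trailing whitespace and ';'
    payload = response[response.index('['):].strip().rstrip(';')
    return json.loads(payload)

sample = 'window.__search.list=[{"order": "1", "base": "LAW"}];'
```

`extract_list(sample)` then returns the plain Python list `[{"order": "1", "base": "LAW"}]`, and `json.loads` handles the `\uXXXX` escapes itself, so no manual decoding is needed.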
6,687
21,067,730
I have this piece of code in my views.py for a django app:

```
for i in range(0, 10):
    row = cursor.fetchone()
    tablestring = tablestring + "<tr><td>" + row[0] + "</td><td>" + str(row[3]) + "</td></tr>"
```

This works fine when I load the page, but if I change the range to (0,20) or anything higher, I just get a blank page. My question is: what is causing this limitation? Is it something with python or django or with the host (pythonanywhere)? Also, I'm just starting with django and I understand this may not be the best code. If you have any suggestions to make it neater or more efficient they would be appreciated. Thanks for the help.

Edit: here is my query:

```
cursor.execute("""SELECT title, movie_url, movie_id, cScore FROM movies""")
```
2014/01/11
[ "https://Stackoverflow.com/questions/21067730", "https://Stackoverflow.com", "https://Stackoverflow.com/users/754604/" ]
You should really use the [django orm](https://docs.djangoproject.com/en/dev/topics/db/queries/) and write that table markup in a [template](https://docs.djangoproject.com/en/dev/topics/templates/); follow this [tutorial](https://docs.djangoproject.com/en/dev/intro/tutorial01/) to get the basic concepts.
My first guess would be that there are fewer than 20 rows, so once you run out of them `row` will be None and your attempt to index it will throw an exception. As for improving the code: like Yossi suggested, you should probably go with an ORM. An ORM (Object Relational Mapper) lets you access a database in a more object-oriented way, which can make for cleaner code as well as avoid bugs related to argument escaping (on the other hand, it may not be ideal if you want to create really complex queries; for that SQL is still the best IMHO). I also agree with Guy in that you should be using a template system instead of concatenating HTML strings.
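Until the move to templates happens, the string building itself can at least be made robust against markup characters in the data. A hedged sketch, assuming each row is a tuple shaped like the question's `SELECT title, movie_url, movie_id, cScore` query (the helper name is mine, not part of the original view):

```python
from html import escape  # Python 3 stdlib; on Python 2 the rough equivalent is cgi.escape

def render_rows(rows):
    # rows: iterable of (title, movie_url, movie_id, cScore) tuples,
    # e.g. the result of cursor.fetchall()
    return "".join(
        "<tr><td>{}</td><td>{}</td></tr>".format(escape(str(row[0])), escape(str(row[3])))
        for row in rows
    )
```

Using `cursor.fetchall()` and iterating over the actual result also sidesteps the original bug, because the loop runs exactly once per row instead of a fixed ten or twenty times.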
6,688
29,386,310
I am downloading a compressed file from the internet:

```
with lzma.open(urllib.request.urlopen(url)) as file:
    for line in file:
        ...
```

After having downloaded and processed a large part of the file, I eventually get the error:

> File "/usr/lib/python3.4/lzma.py", line 225, in \_fill\_buffer raise EOFError("Compressed file ended before the " EOFError: Compressed file ended before the end-of-stream marker was reached

I am thinking that it might be caused by an internet connection that drops or the server not responding for some time. If that is the case, is there any way to make it keep trying until the connection is reestablished, instead of throwing an exception? I don't think it is a problem with the file, as I have manually downloaded many files like it from the same website and decompressed them. I have also been able to download and decompress some smaller files with Python. The file I am trying to download has a compressed size of about 20 GB.
2015/04/01
[ "https://Stackoverflow.com/questions/29386310", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4424589/" ]
from the [urllib.urlopen docs:](https://docs.python.org/2/library/urllib.html) > > One caveat: the read() method, if the size argument is omitted or > negative, may not read until the end of the data stream; there is no > good way to determine that the entire stream from a socket has been > read in the general case. > > > Maybe the lzma.open trips on huge size/connection errors/timeout because of the above.
Assuming you need to download a big file, it is better to use the "write and binary" mode when writing content to a file in Python. You may also try to use the [python requests](http://docs.python-requests.org/en/master/) module rather than the urllib module. Please see below a working piece of code:

```
import requests

url = "http://www.google.com"
with open("myoutputfile.ext", "wb") as f:
    f.write(requests.get(url).content)
```

Could you test that piece of code and answer back if it doesn't solve your issue? Best regards
6,689
21,310,125
I would like to know what are the advantages and disadvantages of using AWS OpsWorks vs AWS Beanstalk and AWS CloudFormation? I am interested in a system that can be auto scaled to handle any high number of simultaneous web requests (From 1000 requests per minute to 10 million rpm.), including a database layer that can be auto scalable as well. Instead of having a separate instance for each app, Ideally I would like to share some hardware resources efficiently. In the past I have used mostly an EC2 instance + RDS + Cloudfront + S3 The stack system will host some high traffic ruby on rails apps that we are migrating from Heroku, also some python/django apps and some PHP apps as well. Thanks in advance.
2014/01/23
[ "https://Stackoverflow.com/questions/21310125", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1112656/" ]
**AWS Beanstalk:**

With Elastic Beanstalk you deploy and manage applications in the AWS cloud without worrying about the infrastructure that runs your web applications. There is no need to worry about EC2 or other installations.

**AWS OpsWorks**

AWS OpsWorks is an application management service that makes it easy for new DevOps users to model and manage their entire application.
You should use OpsWorks in place of CloudFormation if you need to deploy an application that requires updates to its EC2 instances. If your application uses a lot of AWS resources and services, including EC2, use a combination of CloudFormation and OpsWorks. If your application needs other AWS resources, such as a database or storage service, use CloudFormation to deploy Elastic Beanstalk along with those resources.
6,692
46,044,003
I'm trying to write a function that guesses the type of a variable represented as a string. If I've got a variable, I can find its type with Python's `type()` function, e.g. `type(var)`. But how do I concisely and pythonically convert the output of this function into a string, so that the output would be 'int' for an integer, 'bool' for a bool, etc.? The only way I see to do this is to first use `str(type(var))` and then use a regular expression to strip out the part indicating the type. So basically I could write a simple type-guessing Python function as follows: ``` import ast import re def guess_type(var): return re.findall(r'\w+', str(type(ast.literal_eval(var))))[1] ``` where var is of type `str`. But my question is: "Is there a simpler way to get the same result?" Speaking of performance: ``` In [156]: %timeit guess_type 10000000 loops, best of 3: 28.1 ns per loop. ```
2017/09/04
[ "https://Stackoverflow.com/questions/46044003", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8243859/" ]
``` >>> type(0).__name__ 'int' >>> type('').__name__ 'str' >>> type({}).__name__ 'dict' ```
What are you actually trying to do? If you just want to get the name of the class of the object you could use: ``` type(var).__name__ ``` This will give you the name of the class of the object `var`.
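Combining this with the asker's `ast.literal_eval` approach gives a much simpler `guess_type` with no regular expressions. A sketch; note that `literal_eval` raises `ValueError` or `SyntaxError` when the string is not a valid Python literal:

```python
import ast

def guess_type(var):
    """Return the type name of the literal contained in the string var."""
    return type(ast.literal_eval(var)).__name__

print(guess_type("42"))      # int
print(guess_type("True"))    # bool
print(guess_type("[1, 2]"))  # list
```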
6,702
7,389,567
I have a webpage generated from python that works as it should, using: ``` print 'Content-type: text/html\n\n' print "" # blank line, end of headers print '<link href="default.css" rel="stylesheet" type="text/css" />' print "<html><head>" ``` I want to add images to this webpage, but when I do this: ``` sys.stdout.write( "Content-type: image/png\n\n" + file("11.png","rb").read() ) print 'Content-type: text/html\n\n' print "" # blank line, end of headers print '<link href="default.css" rel="stylesheet" type="text/css" />' ... ``` all I get is the image. If I instead place the image code below my html/text header, all I get is the raw bytes of the image as text, i.e.: ``` <Ï#·öÐδÝZºm]¾|‰k×®]žòåËÛ¶ÃgžyFK–,ÑôéÓU½zuIÒ}÷ݧ&MšH’V¯^­?üð¼1±±±zýõ×%IñññÚºu«W®¬wß}W.—K3gÎÔÌ™ÿw‹Ú””I’¹w¤¥hdÒd½q÷X•Šˆ²m¿þfïÞ½*]º´éÈs;¥¤¤Ø¿ILLÔˆ#rÊ ``` Also, if I try: ``` print "<img src='11.png'>" ``` I get a broken image in the browser, and browsing directly to the image produces a 500 internal server error, with my apache log saying: ``` (8)Exec format error: exec of './../../11.png' failed Premature end of script headers: 11.png ```
2011/09/12
[ "https://Stackoverflow.com/questions/7389567", "https://Stackoverflow.com", "https://Stackoverflow.com/users/828573/" ]
You can use this code to directly embed the image in your HTML: Python 3 ``` import base64 data_uri = base64.b64encode(open('Graph.png', 'rb').read()).decode('utf-8') img_tag = '<img src="data:image/png;base64,{0}">'.format(data_uri) print(img_tag) ``` Python 2.7 ``` data_uri = open('11.png', 'rb').read().encode('base64').replace('\n', '') img_tag = '<img src="data:image/png;base64,{0}">'.format(data_uri) print(img_tag) ``` Alternatively for Python <2.6: ``` data_uri = open('11.png', 'rb').read().encode('base64').replace('\n', '') img_tag = '<img src="data:image/png;base64,%s">' % data_uri print(img_tag) ```
Images in web pages are typically a second request to the server. The HTML page itself has no images in it, simply references to images like `<img src='the_url_to_the_image'>`. Then the browser makes a second request to the server, and gets the image data. The only option you have to serve images and HTML together is to use a `data:` url in the `img` tag.
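A minimal sketch (Python 3, not the Python 2 CGI style used in the question) of what that second request's handler could look like: a separate script, hypothetically named `serve_image.py`, that emits an `image/png` header, a blank line, then the raw bytes. The HTML page would then reference it with `<img src='serve_image.py'>` and configure Apache to execute it as CGI rather than trying to execute the `.png` itself:

```python
import sys

def build_image_response(png_bytes):
    """Build a CGI response: an image/png header, a blank line, then raw bytes."""
    return b"Content-type: image/png\n\n" + png_bytes

if __name__ == "__main__":
    with open("11.png", "rb") as f:
        # Write to stdout's binary buffer; print would mangle the image bytes
        sys.stdout.buffer.write(build_image_response(f.read()))
```

The "(8)Exec format error" in the question's Apache log is consistent with Apache trying to execute the PNG file itself as a CGI script, which is why the image needs to be served either as a plain static file or through a script like this.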
6,703
19,330,790
I'm writing Python code using Vim inside Terminal (typing the command "vim" to start up Vim). I've been trying to find a way to execute the code through the mac terminal in the same window. I'm trying to use `:!python %` but I get the following error message: `E499: Empty file name for '%' or '#', only works with ":p:h"` Anyone have any suggestions?
2013/10/12
[ "https://Stackoverflow.com/questions/19330790", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You can use splits to have both vim and a bash prompt in the same terminal window. I would highly recommend switching from the default `Terminal` app to [`iTerm2`](http://www.iterm2.com/). It's a terminal with [many nice features](http://www.iterm2.com/#/section/features), including 256 colours, tmux integration, and vertical splits. Vertical splits are much nicer for looking at code and output together in the same window than the horizontal splits available in `Terminal`. ![screenshot](https://i.stack.imgur.com/RwObu.png) You can also map shortcut keys to quickly switch between the splits.
You can execute command-line commands inside vim by starting the command with a "!" from command mode. Also, in command mode, "%" means the current file. Thus, you can execute the current file that you are editing like this: ``` :!python % ``` I should probably also add, as another option, that you can split the terminal pane in OS X by pressing Command+d. Then you can run commands in the bottom half, and edit in the top half.
6,706