Dataset schema: qid (int64, 46k–74.7M), question (string, 54–37.8k chars), date (string, 10 chars), metadata (list, 3 items), response_j (string, 29–22k chars), response_k (string, 26–13.4k chars), __index_level_0__ (int64, 0–17.8k)
58,877,657
I am learning Python and I have the project structure shown below. ``` i3cmd i3lib __init__.py i3common.py i3sound i3sound.py ``` `__init__.py` is empty. i3common.py (actual code removed to simplify the post): ``` def rangeofdata(cmd, device, index): return ["a", "b", "c"] ``` i3sound.py (actual code removed to simplify the post): ``` from i3lib import i3common def getvolume(rangedata): return rangedata if __name__ == '__main__': rangedata = i3common.rangeofdata(["pactl", "list", "sinks"], "Sink", 2) print(getvolume(rangedata)) ``` When I execute this code in PyCharm it runs and I get this output: ``` /home/vipin/Documents/python/i3cmd/venv/bin/python /home/vipin/Documents/python/i3cmd/i3sound/i3sound.py ['a', 'b', 'c'] Process finished with exit code 0 ``` But when I open a terminal, go to /home/vipin/Documents/python/i3cmd/i3sound ``` cd /home/vipin/Documents/python/i3cmd/i3sound ``` and then execute ``` python i3sound.py ``` I get the error below: ``` Traceback (most recent call last): File "i3sound.py", line 1, in <module> from i3lib import i3common ModuleNotFoundError: No module named 'i3lib' ``` What am I missing?
2019/11/15
[ "https://Stackoverflow.com/questions/58877657", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8446934/" ]
I think you are just missing the installation of Lombok in IntelliJ: double-click on `lombok.jar` and choose the IntelliJ IDE. Example config for Lombok annotation processing in your `build.gradle`: ``` dependencies { compileOnly('org.projectlombok:lombok:1.16.20') annotationProcessor 'org.projectlombok:lombok:1.16.20' // compile 'org.projectlombok:lombok:1.16.20' <-- this no longer works! // other dependencies... } ``` > @Wither is deprecated since 10.X.X. With has been promoted to the main package, so use that one instead. Please look at this [Lombok Wither](https://projectlombok.org/api/lombok/experimental/Wither.html) page; that's why you don't have the withA() function. If you downgrade your package you could certainly use it.
The line `compileOnly 'org.projectlombok:lombok:1.18.8'` shows that you're using Gradle. I think the easiest way to check whether it works is to run the Gradle build (without the IDE). Since Lombok is an annotation processor, as long as the code passes compilation it is supposed to work (and the chances are that it really does, based on that line). So you should check how your IDE (you haven't specified which IDE it actually is) integrates with Lombok. You may need to enable "annotation processing" if you compile with the Java compiler (like in IntelliJ) and configure Lombok. You can also install the Lombok plugin for your IDE. Another useful hint is to use delombok and see whether Lombok has actually generated something or not.
16,010
8,595,689
I'm trying to send a request to an API that only accepts XML. I've used `elementtree.SimpleXMLWriter` to build the XML tree and it's stored in a StringIO object. That's all fine and dandy. The problem is that I have to urlencode the StringIO object in order to send it to the API. But when I try, I get: ``` File "C:\Python27\lib\urllib.py", line 1279, in urlencode if len(query) and not isinstance(query[0], tuple): AttributeError: StringIO instance has no attribute '__len__' ``` Apparently this has been discussed as [an issue with Python](http://bugs.python.org/issue12327). I'm just wondering if there are any other built-in functions for urlencoding a string, specifically ones that don't need to call `len()` so that I can encode this StringIO object. Thanks! **PS:** I'm open to using something other than StringIO for storing the XML object, if that's an easier solution. I just need some sort of "[file](http://effbot.org/zone/xml-writer.htm)" for `SimpleXMLWriter` to store the XML in.
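For what it's worth, a minimal sketch of one way around this (assuming Python 2, as in the traceback above): pull the string out of the `StringIO` object with `getvalue()` first, so `urlencode`/`quote_plus` only ever see a plain string.

```python
import urllib
from StringIO import StringIO

buf = StringIO()
buf.write("<request><id>42</id></request>")  # stands in for the SimpleXMLWriter output

xml = buf.getvalue()                             # plain str, safe to pass to urllib
as_form_field = urllib.urlencode({"xml": xml})   # key=value form encoding
as_raw_quoted = urllib.quote_plus(xml)           # percent-encode the string itself

print as_form_field
print as_raw_quoted
```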
2011/12/21
[ "https://Stackoverflow.com/questions/8595689", "https://Stackoverflow.com", "https://Stackoverflow.com/users/625840/" ]
As the links you provided point out, PHP is not a persistent language and there is no way to have persistence across sessions (i.e. page loads). You can create a middle ground, though, by running a second PHP script as a daemon, and have your main script (i.e. the one the user hits) connect to that (yes - over a socket...) and get data from it. If you were to do that, and want to avoid the hassle of WebSockets, try the new HTML5 [EventStream API](http://www.html5rocks.com/en/tutorials/eventsource/basics/), as it gives you the best of both worlds: a Comet-like infrastructure without the hackiness of long-polling or the need for a dedicated WebSockets server.
If you need to keep the connection open, you need to keep the PHP script open. Commonly PHP is just invoked and then closed after the script has run (CGI, CLI), or it's a mixture (mod\_php in Apache, FCGI) in which the PHP interpreter sometimes stays in memory after your script has finished (so everything the OS associated with that process, such as a socket handle, would still remain). However, this is never safe. Instead you need to make PHP a daemon which can keep your PHP scripts in memory. An existing solution for that is [Appserver-In-PHP](https://github.com/indeyets/appserver-in-php). It will keep your code in memory until you restart the server. Like the code, you can also preserve variables between requests, e.g. a connection handle.
16,013
60,780,826
I am trying to write a Python function that counts a specific word in a string. My regex pattern doesn't work when the word I want to count is repeated multiple times in a row. The pattern seems to work well otherwise. Here is my function: ``` import re def word_count(word, text): return len(re.findall('(^|\s|\b)'+re.escape(word)+'(\,|\s|\b|\.|$)', text, re.IGNORECASE)) ``` When I test it with a random string: ``` >>> word_count('Linux', "Linux, Word, Linux") 2 ``` When the word I want to count is adjacent to itself: ``` >>> word_count('Linux', "Linux Linux") 1 ```
2020/03/20
[ "https://Stackoverflow.com/questions/60780826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7915157/" ]
The problem is in your regex. Your regex is using 2 capture groups, and `re.findall` will return any capture groups if available. Those need to change to non-capture groups using `(?:...)`. Besides, there is no reason to use `(^|\s|\b)`, as `\b` (a word boundary) suffices: it covers all those cases and is zero-width. The same way, `(\,|\s|\b|\.|$)` can be changed to `\b`. So you can just use: ``` def word_count(word, text): return len(re.findall(r'\b' + re.escape(word) + r'\b', text, re.I)) ``` This will give: ``` >>> word_count('Linux', "Linux, Word, Linux") 2 >>> word_count('Linux', "Linux Linux") 2 ```
I am not sure this is 100% right, because I don't understand the part about passing the function the word to search for when you are just looking for words that repeat in a string. So maybe consider... ``` import re pattern = r'\b(\w+)( \1\b)+' def word_count(text): split_words = text.split(' ') count = 0 for split_word in split_words: count = count + len(re.findall(pattern, text, re.IGNORECASE)) return count word_count('Linux Linux Linux Linux') ``` Output: ``` 4 ``` Maybe it helps. UPDATE: Based on the comment below... ``` def word_count(word, text): count = text.count(word) return count word_count('Linux', "Linux, Word, Linux") ``` Output: ``` 2 ```
16,014
66,702,514
I am trying to create a function that takes a user-inputted number and determines whether the number is an integer or a floating point, depending on what the mode is set to. I am very new to Python and learning the language, and I am getting an invalid syntax error and I don't know what to do. So far I am making the integer tester first. Here is the code: ``` def getNumber(IntF, FloatA, Msg, rsp): print("What would you like to do?") print("Option A = Interger") print("Option B = Floating Point") Rsp = int(input("What number would like to test as an interger?")) A = rsp if rsp == "A": while True: try: userInput = int(input("What number would like to test as an interger")) except ValueError as ve: print("Not an integer! Try again.") continue else: return userInput break ```
2021/03/19
[ "https://Stackoverflow.com/questions/66702514", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15429618/" ]
Creating `.testcontainers.properties` in my `$HOME` directory fixed the issue for me. This file is used to override properties, but I am still not sure how that fixes the issue. I saw in our `.gitlab.yml` what we do on CI and just imitated that locally, and that solved the issue.
For some, it might help to update the version of Testcontainers.
16,015
62,421,333
I have a dataframe like image1 and I want to convert it to image2. I have tried R, Python, and Excel but failed. The Excel formula =INDEX(AV2:AW2,MODE(MATCH(AV2:AW2,AV2:AW2,0))) gives me an N/A output. The "k2" column should hold the most common element from the "knumbers" column. Any help? Best, Zillur [![image1](https://i.stack.imgur.com/O7SkM.png)](https://i.stack.imgur.com/O7SkM.png) [![image2](https://i.stack.imgur.com/obB56.png)](https://i.stack.imgur.com/obB56.png)
2020/06/17
[ "https://Stackoverflow.com/questions/62421333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4168405/" ]
In R, you can split the strings on comma, count the frequency using `table` and get the most frequently occurring string. ``` df$k2 <- sapply(strsplit(df$knumbers, ','), function(x) names(sort(table(x), decreasing = TRUE)[1])) ```
Python solution: ``` # Initialise pandas, and mode in session: import pandas as pd from statistics import mode # Scalar denoting the full path to file (including file name): filepath => string scalar filepath = '' # Read in the Excel sheet: df => Data Frame df = pd.read_excel(filepath) # Find modal element per row: k2 => string vector df['k2'] = [*map(lambda x: mode(str(x).split(',')), df['knumbers'])] ``` Base R Solution: ``` # Define a function to retrieve the modal element in a factor/character vector: mode_stat => function mode_stat <- function(chr_vec){names(sort(table(as.character(chr_vec)), decreasing = TRUE)[1])} # Apply the function to a list of split knumber strings: k2 => character vector df$k2 <- sapply(strsplit(df$knumbers, ","), mode_stat) ``` Data (reconstruct in R): ``` df <- structure(list(Total = c(446, 346, 332, 308), knumbers = c("K10401", "K10413,K10413,K10412", "K13844,K13844,K13845", "K19206,K19207,K19207" )), row.names = c(NA, -4L), class = c("tbl_df", "tbl", "data.frame")) ``` In Excel: ``` (goodluck) ```
16,016
9,905,874
I'm running into a problem that I haven't seen anyone on StackOverflow encounter, or even Google for that matter. My main goal is to be able to replace occurrences of a string in the file with another string. Is there a way to access all of the lines in the file? The problem is that when I try to read in a large text file (1-2 GB) of text, Python only reads a subset of it. For example, I'll do a really simple command such as: ``` newfile = open("newfile.txt","w") f = open("filename.txt","r") for line in f: replaced = line.replace("string1", "string2") newfile.write(replaced) ``` And it only writes the first 382 MB of the original file. Has anyone encountered this problem previously? I tried a few different solutions such as using: ``` import sys import fileinput for i, line in enumerate(fileinput.input("filename.txt", inplace=1)): sys.stdout.write(line.replace("string1", "string2")) ``` But it has the same effect. Nor does reading the file in chunks, such as using ``` f.read(10000) ``` I've narrowed it down to most likely being a reading problem and not a writing problem, because it happens even for simply printing out lines. I know that there are more lines. When I open it in a full text editor such as Vim, I can see what the last line should be, and it is not the last line that Python prints. Can anyone offer any advice or things to try? I'm currently using a 32-bit version of Windows XP with 3.25 GB of RAM and running Python 2.7. \*Edit: Solution found (thanks Lattyware), using an iterator: ``` def read_in_chunks(file, chunk_size=1000): while True: data = file.read(chunk_size) if not data: break yield data ```
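A hedged usage sketch for that iterator (file names are the ones from the question; one caveat the edit doesn't mention is that a match straddling a chunk boundary will be missed unless you carry an overlap between chunks):

```python
# chunked replace using the read_in_chunks generator above
with open("filename.txt", "rb") as src, open("newfile.txt", "wb") as dst:
    for chunk in read_in_chunks(src, chunk_size=64 * 1024):
        dst.write(chunk.replace(b"string1", b"string2"))
```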
2012/03/28
[ "https://Stackoverflow.com/questions/9905874", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1297872/" ]
Try: ``` f = open("filename.txt", "rb") ``` On Windows, `rb` means open file in binary mode. According to the docs, text mode vs. binary mode only has an impact on end-of-line characters. But (if I remember correctly) I believe opening files in text mode on Windows also does something with EOF (hex 1A). You can also specify the mode when using `fileinput`: ``` fileinput.input("filename.txt", inplace=1, mode="rb") ```
If you use the file like this: ``` with open("filename.txt") as f: for line in f: newfile.write(line.replace("string1", "string2")) ``` it should only read one line into memory at a time, unless you keep a reference to that line in memory. After each line is read it will be up to Python's garbage collector to get rid of it. Give this a try and see if it works for you :)
16,017
68,945,015
I need a simple python library to convert PDF to image (render the PDF as is), but after hours of searching, I keep hitting the same wall, I find libraries like `pdf2image` python library (and many similar ones), which depend on external applications or wrap command-line tools. Although there are workarounds to allow using these libraries in serverless settings, they all would complicate our deployment and require creating the likes of `Execution Environments` or extra lambda layers, which will eat up from the small allowed lambda size. Is there a self-contained, independent mechanism (not dependent on command-line tools) to allow achieving this (seemingly simple) task? Also, I am wondering, is there a reason (licensing or patents) for the scarcity of tools that deal with PDFs (they are mostly commercial or under strict AGPL licenses)?
2021/08/26
[ "https://Stackoverflow.com/questions/68945015", "https://Stackoverflow.com", "https://Stackoverflow.com/users/452748/" ]
You said "Ended up using pdf2image" [pdf2image (MIT)](https://pypi.org/project/pdf2image/). A python (3.6+) module that wraps pdftoppm (GPL?) and pdftocairo (GPL?) to convert PDF to a PIL Image object. Generally [Poppler (GPL)](https://en.wikipedia.org/wiki/Poppler_(software)) spinoffs from Open Source [Xpdf (GPL)](http://www.xpdfreader.com/about.html) which has * pdftopng: * pdftoppm: * pdfimages: and a 3rd party pdftotiff
You can convert PDF's to images without external dependencies using PyMuPDF. I use it for Azure functions. Install with `pip install PyMuPDF` In your python file: ``` import fitz pdfDoc = fitz.open(filepath) img = pdfDoc[0].get_pixmap(matrix=fitz.Matrix(2,2)) bytesimg = img.tobytes() ``` This takes the first page of the PDF and converts it to an image, the matrix is for the resolution. You can also open a stream instead of a file on disk: ``` pdfDoc = fitz.open(stream = pdfstream, filetype="pdf") ```
16,020
31,941,951
In my Python code I use a third-party shared object, a `.so` file, which I suspect contains a memory leak. During the run of my program I have a loop where I repeatedly call functions of the shared object. While the program is running I can see in `htop` that the memory usage is steadily increasing. When the RAM is full, the program crashes with the terminal output `killed`. My assumption is that the memory leak is produced by the shared object, because otherwise Python would raise a `MemoryError`. I tried using [`reload(module_name)`](https://stackoverflow.com/questions/437589/how-do-i-unload-reload-a-python-module) followed by a `gc.collect()`, but it did not free the memory according to `htop`. What shall I do?
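One common mitigation sketch for this situation (my assumption that it fits here, since a leak in native code is invisible to Python's GC): run the leaky calls in short-lived worker processes, so the operating system reclaims everything when each worker exits. The module and function names are hypothetical placeholders.

```python
from multiprocessing import Pool

def call_leaky_function(item):
    import leaky_module          # hypothetical wrapper around the .so
    return leaky_module.process(item)

if __name__ == "__main__":
    # maxtasksperchild=1 recycles each worker after a single task,
    # returning any leaked memory to the OS
    with Pool(processes=1, maxtasksperchild=1) as pool:
        results = pool.map(call_leaky_function, range(100))
```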
2015/08/11
[ "https://Stackoverflow.com/questions/31941951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380038/" ]
The exact cause of the exception is that the number `1439284609013` is too big to fit into an `Integer`. However, the actual issue lies elsewhere. I have looked at the source code, and your parameters seem to be wrong: ``` emp1 ~/KT/bkp 1439284609013 1439284641872 ``` You have given a `String`, another `String` and two `Long`s; these are the * `args[0]`: `tableName` * `args[1]`: `outputDir` * `args[2]`: `startTime` * `args[3]`: `endTime` The problem is that you are missing an argument: `args[2]` should be an `Integer`, `startTime` should become `args[3]` and `endTime` should become `args[4]`. In the source, that expected third, `Integer` argument is called `versions`; however, I don't know exactly what it means. --- ### Official documentation Going through the source is one thing, but the [official docs](http://hbase.apache.org/book.html#_export) also give the syntax of `Export` as follows: > `$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]` > By default, the Export tool only exports the newest version of a given cell, regardless of the number of versions stored. To export more than one version, replace `<versions>` with the desired number of versions. --- ### Wrapping it up To achieve what you wanted originally, simply add `1` as the third argument: ``` hbase org.apache.hadoop.hbase.mapreduce.Export emp1 ~/KT/bkp 1 1439284609013 1439284641872 ```
I entered only the start time and end time, but Export expects the number of versions before the start and end time. So finally I entered the version number and it worked. ``` ./hbase org.apache.hadoop.hbase.mapreduce.Export emp1 ~/KT/bkp 2147483647 1439284609013 1439284646830 ```
16,021
67,280,726
I want to extract some data from a text file into a dataframe. The text file looks like this: ``` URL: http://www.nytimes.com/2016/06/30/sports/baseball/washington-nationals-max-scherzer-baffles-mets-completing-a-sweep.html WASHINGTON — Stellar .... stretched thin. “We were going t......e do anything.” Wednesday’s ... starter. “We’re n... work.” The Mets did not scor....their 40-37 record. URL: http://www.nytimes.com/2016/06/30/nyregion/mayor-de-blasios-counsel-to-leave-next-month-to-lead-police-review-board.html Mayor Bill de .... Department. The move.... April. A civil ... conversations. More... administration. URL: http://www.nytimes.com/2016/06/30/nyregion/three-men-charged-in-killing-of-cuomo-administration-lawyer.html In the early..., the Folk Nation. As hundreds ... wounds. For some...residents. On Wednesd...killing. One ...murder. ``` It contains the URL and the text from New York Times articles. I want to create a dataframe with 2 columns, the first one being the URL and the second one being the text. The issue is the delimiters: there are two new lines between each URL and the corresponding text, but there are also single new lines within the text itself. I tried the code below, but instead of getting a 2-column dataframe I got a single column with a new row for each newline used, so it also separates the text into multiple paragraphs. I am using Dask, by the way: ``` df_csv = dd.read_csv(filename,sep="\n\n",header=None,engine='python') ```
2021/04/27
[ "https://Stackoverflow.com/questions/67280726", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10586681/" ]
``` # read file file = open('ny.txt', encoding="utf8").read() url = [] text = [] # split text at every 2-new-lines # elements at 'odd' positions are 'urls' # elements at 'even' positions are 'text/content' for ind, line in enumerate(file.split('\n\n')): if ind%2==0: url.append(line) else: text.append(line) # save to a dataframe df = pd.DataFrame({'url':url, 'text':text}) df url text 0 URL: http://www.nytimes.com/2016/06/30/sports/... WASHINGTON — Stellar .... stretched thin.\n“We... 1 URL: http://www.nytimes.com/2016/06/30/nyregio... Mayor Bill de .... Department.\nThe move.... A... 2 URL: http://www.nytimes.com/2016/06/30/nyregio... In the early..., the Folk Nation.\nAs hundreds... # ADDITIONAL : Remove the characters 'URL: ' with empty string df['url'] = df['url'].str.replace('URL: ', '') df url text 0 http://www.nytimes.com/2016/06/30/sports/baseb... WASHINGTON — Stellar .... stretched thin.\n“We... 1 http://www.nytimes.com/2016/06/30/nyregion/may... Mayor Bill de .... Department.\nThe move.... A... 2 http://www.nytimes.com/2016/06/30/nyregion/thr... In the early..., the Folk Nation.\nAs hundreds... ```
You can do it easily in the following way: ``` import pandas as pd text = '''URL: http://www.nytimes.com/2016/06/30/sports/baseball/washington-nationals-max-scherzer-baffles-mets-completing-a-sweep.html WASHINGTON — Stellar .... stretched thin. “We were going t......e do anything.” Wednesday’s ... starter. “We’re n... work.” The Mets did not scor....their 40-37 record. URL: http://www.nytimes.com/2016/06/30/nyregion/mayor-de-blasios-counsel-to-leave-next-month-to-lead-police-review-board.html Mayor Bill de .... Department. The move.... April. A civil ... conversations. More... administration. URL: http://www.nytimes.com/2016/06/30/nyregion/three-men-charged-in-killing-of-cuomo-administration-lawyer.html In the early..., the Folk Nation. As hundreds ... wounds. For some...residents. On Wednesd...killing. One ...murder. ''' # 1) Extract the text to lines list text = text.replace('\n', '') # delete all the single '\n' text = text.replace('\n\n', '') # delete all the '\n\n' lines = text.split('URL: ')[1:] # to drop the first match of '' # 2) Create pandas.DataFrame object and populate it with the extracted lines list from (1) df = pd.DataFrame(dict(lines=lines)) # 3) Extract the URLs into a new column df.loc[:, 'URL'] = df.loc[:, 'lines'].str.extract(r'(http:[^,]+.html)', expand=False) # 4) Extract the message into a new column df.loc[:, 'Text'] = df.loc[:, 'lines'].str.extract(r'(?<=\.html)([^$]+)', expand=False) # 4) Delete the original lines column df.drop('lines', axis='columns', inplace = True) ``` **Output:** ``` URL Text 0 http://www.nytimes.com/2016/06/30/sports/baseb... WASHINGTON — Stellar .... stretched thin.“We w... 1 http://www.nytimes.com/2016/06/30/nyregion/may... Mayor Bill de .... Department.The move.... Apr... 2 http://www.nytimes.com/2016/06/30/nyregion/thr... In the early..., the Folk Nation.As hundreds .... ``` Cheers!
16,023
63,506,041
I am new to Python and am trying to read a PDF file to pull out the `ID No.`. I have been successful so far in extracting the text from the PDF file using `pdfplumber`. Below is the code block: ``` import pdfplumber with pdfplumber.open('ABC.pdf') as pdf_file: firstpage = pdf_file.pages[0] raw_text = firstpage.extract_text() print (raw_text) ``` Here is the text output: ``` Welcome to ABC 01 January, 1991 ID No. : 10101010 Welcome to your ABC portal. Learn More text here.. Even more text here.. Mr Jane Doe Jack & Jill Street Learn more about your www.abc.com .... .... .... ``` However, I am unable to find the optimal way to parse this unstructured text further. The final output I am expecting is just the ID No., i.e. `10101010`. On a side note, the script would be run against a fairly huge set of PDFs, so performance is a concern.
2020/08/20
[ "https://Stackoverflow.com/questions/63506041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7855187/" ]
Try using a regular expression: ``` import pdfplumber import re with pdfplumber.open('ABC.pdf') as pdf_file: firstpage = pdf_file.pages[0] raw_text = firstpage.extract_text() m = re.search(r'ID No\. : (\d+)', raw_text) if m: print(m.group(1)) ``` Of course you'll have to iterate over *all* the PDF's contents - not just the first page! Also ask yourself if it's possible that there's more than one match per page. Anyway: you know the structure of the input better than I do (and we don't have access to the sample file), so I'll leave it as an exercise for you.
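To make the "iterate over all pages" point concrete, here is a sketch (file name from the question; `extract_text` can return `None` on empty pages, hence the fallback):

```python
import pdfplumber
import re

ids = []
with pdfplumber.open('ABC.pdf') as pdf_file:
    for page in pdf_file.pages:
        text = page.extract_text() or ""          # guard against empty pages
        ids.extend(re.findall(r'ID No\. : (\d+)', text))
print(ids)   # every ID found anywhere in the document
```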
If the length of the ID number is always the same, I would try to find its location with the find function. `position = raw_text.find('ID No. : ')` should return the position of the I in "ID No.", so position + 9 should be the first digit of the ID. If the number always has a length of 8, you could get it with `int(raw_text[position+9:position+17])`.
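Spelled out as code, under the same assumptions (fixed label text, always 8 digits):

```python
position = raw_text.find('ID No. : ')
if position != -1:                                   # -1 means the label was not found
    id_no = int(raw_text[position + 9 : position + 17])
    print(id_no)                                     # 10101010 for the sample text above
```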
16,024
14,657,498
I'd like to create a `text/plain` message using Markdown formatting and transform that into a `multipart/alternative` message where the `text/html` part has been generated from the Markdown. I've tried using the filter command to filter this through a Python program that creates the message, but it seems that the message doesn't get sent through properly. The code is below (this is just test code to see if I can make `multipart/alternative` messages at all). ``` import sys from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart html = """<html> <body> This is <i>HTML</i> </body> </html> """ msgbody = sys.stdin.read() newmsg = MIMEMultipart("alternative") plain = MIMEText(msgbody, "plain") plain["Content-Disposition"] = "inline" html = MIMEText(html, "html") html["Content-Disposition"] = "inline" newmsg.attach(plain) newmsg.attach(html) print newmsg.as_string() ``` Unfortunately, in mutt, you only get the message body sent to the filter command when you compose (the headers are not included). Once I get this working, I think the markdown part won't be too hard.
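For the Markdown step mentioned at the end, a sketch assuming the third-party `markdown` package is available:

```python
import markdown

# msgbody is the text/plain Markdown source read from stdin, as above
html_body = markdown.markdown(msgbody)   # convert Markdown to an HTML fragment
html = MIMEText(html_body, "html")       # use this as the text/html alternative
```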
2013/02/02
[ "https://Stackoverflow.com/questions/14657498", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1053149/" ]
Inside your `DialogFragment`, call [`Fragment.setRetainInstance(boolean)`](http://developer.android.com/reference/android/app/Fragment.html#setRetainInstance%28boolean%29) with the value `true`. You don't need to save the fragment manually, the framework already takes care of all of this. Calling this will prevent your fragment from being destroyed on rotation and your network requests will be unaffected. You may have to add this code to stop your dialog from being dismissed on rotation, due to a [bug](https://code.google.com/p/android/issues/detail?id=17423) with the compatibility library: ``` @Override public void onDestroyView() { Dialog dialog = getDialog(); // handles https://code.google.com/p/android/issues/detail?id=17423 if (dialog != null && getRetainInstance()) { dialog.setDismissMessage(null); } super.onDestroyView(); } ```
One of the advantages of using `DialogFragment` compared to just using `AlertDialog.Builder` is exactly that a dialog fragment can automatically recreate itself upon rotation without user intervention. However, when the dialog fragment does not recreate itself, it is possible that you overrode `onSaveInstanceState` but didn't call `super`: ``` @Override protected void onSaveInstanceState(Bundle outState) { super.onSaveInstanceState(outState); // <-- must call this if you want to retain the dialog fragment upon rotation ... } ```
16,026
40,390,874
So, I'm making a Bank class in Python. It has the basic functions of depositing, withdrawing, and checking your balance. I'm having trouble with a transfer method, though. This is my code for the class: ``` class Bank: def __init__(self, customerID): self.ID = customerID self.total = 0 def deposit(self, amount): self.total = self.total + amount return self.total def withdraw(self, amount): self.total = self.total - amount return self.total def balance(self): return self.total def transfer(self, amount, ID): self.total = self.total - amount ID.total = ID.total + amount return ID.balance() ``` Now, it works, but not the way I want it to. If I write a statement like this, it'll work: ``` bank1 = Bank(111) bank1.deposit(150) bank2 = Bank(222) bank1.transfer(50, bank2) ``` But I want to be able to use the bank's ID number, not the name I gave it, if that makes any sense? So instead of saying ``` bank1.transfer(50, bank2) ``` I want it to say ``` bank1.transfer(50, 222) ``` I just have no idea how to do this.
2016/11/02
[ "https://Stackoverflow.com/questions/40390874", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6023942/" ]
``` def __init__(self, customerID): self.ID = customerID type(self).idents = getattr(type(self), "idents", {}) type(self).idents[self.ID] = self self.total = 0 @classmethod def get_bank(cls, id): return getattr(cls, "idents", {}).get(id) ``` is one kind of gross way you could do it (a class `__dict__` is a read-only mappingproxy, so the registry has to live in a regular class attribute): ``` bank2_found = Bank.get_bank(222) ```
You could store all the ID numbers and their associated objects in a dict with the ID as the key and the object as the value.
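A minimal sketch of that idea (keeping the dict as a class attribute is an assumption; any shared dict would do):

```python
class Bank:
    accounts = {}                         # maps ID number -> Bank instance

    def __init__(self, customerID):
        self.ID = customerID
        self.total = 0
        Bank.accounts[customerID] = self  # register this account by its ID

    def deposit(self, amount):
        self.total += amount
        return self.total

    def transfer(self, amount, other_id):
        other = Bank.accounts[other_id]   # look the target up by its ID
        self.total -= amount
        other.total += amount
        return other.total

bank1 = Bank(111)
bank1.deposit(150)
bank2 = Bank(222)
bank1.transfer(50, 222)                   # now works with the ID number
```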
16,034
31,977,902
How can I calculate the elapsed time between the start time and the end time of an event using Python, where the times are in a format like 00:00:00 through 23:59:59?
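A minimal Python sketch, assuming the times arrive as `HH:MM:SS` strings (the sample values are made up) and that the event may cross midnight:

```python
from datetime import datetime, timedelta

fmt = "%H:%M:%S"
start = datetime.strptime("22:15:00", fmt)
end = datetime.strptime("01:05:30", fmt)

elapsed = end - start
if elapsed < timedelta(0):        # the event crossed midnight
    elapsed += timedelta(days=1)

print(elapsed)                    # 2:50:30
```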
2015/08/13
[ "https://Stackoverflow.com/questions/31977902", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5221453/" ]
Make it easy on yourself and try to make your code easy to read. I personally prefer to write my html cleanly and outside of echo statements like so: **Html** ``` if (strlen($in) > 0 and strlen($in) < 20) { $sql = "select name, entry, displayid from item_template where name like '%{$in}%' LIMIT 10"; // the query foreach ($dbo->query($sql) as $nt) { //$msg.=$nt[name]."->$nt[id]<br>"; ?> <table style="table-layout:fixed;"> <tr> <td>Name</td> <td>Entry ID</td> <td>Display ID</td> </tr> <tr> <td align="center"> <a href="http://wowhead.com/item=<?=$nt['entry'];?>"><?=$nt['name'];?></a> </td> <td><?=$nt['entry'];?></td> <td> <input type="button" class="button" value="<?=$nt['displayid'];?>"> </td> </tr> </table> <?php } } ``` **Javascript** ``` $( document ).ready(function() { var $theButtons = $(".button"); var $theinput = $("#theinput"); $theButtons.click(function() { // $theinput is out of scope here unless you make it a global (remove 'var') // Okay, not out of scope, but I feel it is confusing unless you're using this specific selector more than once or twice. $("#theinput").val(jQuery(this).val()); }); }); ```
Ok, here goes... 1. Use event delegation in your JavaScript to handle the button clicks. This will work for all present and future buttons ``` jQuery(function($) { var $theInput = $('#theinput'); $(document).on('click', '.button', function() { $theInput.val(this.value); }); }); ``` 2. Less important but I have no idea why you're producing a complete table for each record. I'd structure it like this... ``` // snip if (strlen($in)>0 and strlen($in) <20 ) : // you really should be using a prepared statement $sql="select name, entry, displayid from item_template where name like '%$in%' LIMIT 10"; ?> <table style="table-layout:fixed;"> <thead> <tr> <th>Name</th> <th>Entry ID</th> <th>Display ID</th> </tr> </thead> <tbody> <?php foreach ($dbo->query($sql) as $nt) : ?> <tr> <td align="center"> <a href="http://wowhead.com/?item=<?= htmlspecialchars($nt['entry']) ?>"><?= htmlspecialchars($nt['name']) ?></a> </td> <td><?= htmlspecialchars($nt['entry']) ?></td> <td> <button type="button" class="button" value="<?= htmlspecialchars($nt['displayid']) ?>"><?= htmlspecialchars($nt['displayid']) ?></button> </td> </tr> <?php endforeach ?> </tbody> </table> <?php endif; ```
16,035
47,717,179
If my Python script is pivoting and I cannot predict how many columns will be output, can this be done with the U-SQL REDUCE statement? e.g. ``` @pythonOutput = REDUCE @filteredBets ON [BetDetailID] PRODUCE [BetDetailID] string, EventID float USING new Extension.Python.Reducer(pyScript:@myScript); ``` There could be multiple columns, so I can't hard-code the names in the PRODUCE part. Any ideas?
2017/12/08
[ "https://Stackoverflow.com/questions/47717179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2725941/" ]
If you have a way to produce a `SqlMap<string,string>` value from within Python (I am not sure if that is supported right now; you can do it with a C# reducer :)), then you could use the map for the dynamic schema part. If it is not supported in Python, please file a feature request at <http://aka.ms/adlfeedback>.
The only way right now is to serialize all the columns into a single column, either as a byte[] or string in your python script. SqlMap/SqlArray are not supported yet as output columns.
16,036
50,113,683
I am trying to run train.py in object\_detection from the git URL below: <https://github.com/tensorflow/models/tree/master/research/object_detection> However, the following error occurs: > ModuleNotFoundError: No module named 'object\_detection' So I tried to solve the problem with the following code: ``` import sys sys.path.append('/home/user/Documents/imgmlreport/inception/models/research/object_detection') from object_detection.builders import dataset_builder ``` This has not solved the problem yet. The directory structure is shown below: ``` ~/object_detection/train.py ~/object_detection/builders/dataset_builder.py ``` And here is the full error message: > /home/user/anaconda3/lib/python3.6/site-packages/h5py/\_\_init\_\_.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from .\_conv import register\_converters as \_register\_converters > Traceback (most recent call last): > File "train.py", line 52, in import trainer > File "/home/user/Documents/imgmlreport/inception/models/research/object\_detection/trainer.py", line 26, in from object\_detection.builders import optimizer\_builder > ModuleNotFoundError: No module named 'object\_detection' How can I import these modules?
2018/05/01
[ "https://Stackoverflow.com/questions/50113683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9019755/" ]
Try this: ``` python setup.py build python setup.py install ```
I had to do: `sudo pip3 install -e .` ([ref](https://github.com/tensorflow/models/issues/2031#issuecomment-343782858)) and `sudo python3 setup.py install`. System: Ubuntu 16.04, Anaconda (I guess this is why I need to use `pip3` and `python3`, even though I made a virtual environment with Python 3.8).
16,037
49,191,477
The `hypot` function, introduced into C in the 1999 revision of the language, calculates the hypotenuse of a right triangle given the other sides as arguments, but with care taken to avoid the over/underflow which would result from the naive implementation as ``` double hypot(double a, double b) { return sqrt(a*a + b*b); } ``` I find myself with the need for companion functionality: given a side and the hypotenuse of a triangle, find the third side (avoiding under/overflow). I can think of a few ways to do this, but wondered if there was an existing "best practice"? My target is Python, but really I'm looking for algorithm pointers. --- Thanks for the replies. In case anyone is interested in the result, my C99 implementation can be found [here](https://gitlab.com/jjg/cathetus) and a Python version [here](https://github.com/HypothesisWorks/hypothesis/blob/master/hypothesis-python/src/hypothesis/internal/cathetus.py), part of the [Hypothesis](https://hypothesis.works/) project.
2018/03/09
[ "https://Stackoverflow.com/questions/49191477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/468334/" ]
Assuming IEEE 754 basic 64-bit binary floating-point, I would consider an algorithm such as: * Set *s* (for scale) to be 2^−512 if 2^100 ≤ *a*, 2^+512 if *a* < 2^−100, and 1 otherwise. * Let *a*' be *a*•*s* and *b*' be *b*•*s*. * Compute sqrt(*a*'•*a*' − *b*'•*b*') / *s*. Notes about the reasoning: * If *a* is large (or small), multiplying by *s* decreases (or increases) the values so that the square of *a*' remains in the floating-point range. * The scale factor is a power of two, so multiplying and dividing by it is exact in binary floating-point. * *b* is necessarily smaller than (or equal to) *a*, or else we return NaN, which is appropriate. In the case where we are increasing *a*, no error occurs; *b*' and *b*'•*b*' remain within range. In the case where we are decreasing *a*, *b*' may lose precision or become zero if *b* is small, but then *b* is so much smaller than *a* that the computed result cannot depend on the precise value of *b* in any case. * I partitioned the floating-point range into three intervals because two will not suffice. For example, if you set *s* to be 2^−512 if 1 ≤ *a* and 2^+512 otherwise, then 1 will scale to 2^−512 and then square to 2^−1024, at which point a *b* slightly under 1 will be losing precision relevant to the result. But if you use a smaller-magnitude power for *s*, such as 2^−511, then 2^1023 will scale to 2^512 and square to 2^1024, which is out of bounds. Therefore, we need different scale factors for *a* = 1 and *a* = 2^1023. Similarly, *a* = 2^−1049 needs a scale factor that would be too large for *a* = 1. So three are needed. * Division is notoriously slow, so one might want to multiply by a prepared *s*^−1 rather than dividing by *s*.
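A direct Python transcription sketch of the above (my own rendering, not the answerer's code; note that `math.sqrt` raises `ValueError` for b > h, where C's `sqrt` would return NaN):

```python
import math

def cathetus(h, b):
    """Third side of a right triangle given hypotenuse h and side b,
    scaled to avoid intermediate overflow/underflow."""
    if h >= 2.0 ** 100:
        s = 2.0 ** -512        # scale large inputs down
    elif h < 2.0 ** -100:
        s = 2.0 ** 512         # scale small inputs up
    else:
        s = 1.0
    hs = h * s                 # exact: s is a power of two
    bs = b * s
    return math.sqrt(hs * hs - bs * bs) / s

print(cathetus(5.0, 4.0))      # 3.0
print(cathetus(1e300, 8e299))  # 6e+299, without overflowing in the squares
```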
`hypot` has its idiosyncrasies in that it's one of a *very select few* C standard library functions that does **not** propagate `NaN`! (Another one is `pow` for the case where the first argument is 1.) Setting that aside, I'd be inclined to write merely ``` return sqrt(h * h - a * a); // h is the hypotenuse ``` as the body of the function, and burden the caller with checking the inputs. If you can't do that, then follow the specification of `hypot` faithfully.
16,047
393,637
I'm running a Django application. Had it under Apache + mod\_python before, and it was all OK. Switched to Lighttpd + FastCGI. Now I randomly get the following exception (neither the place nor the time where it appears seem to be predictable). Since it's random, and it appears only after switching to FastCGI, I assume it has something to do with some settings. Found a few results when googling, but they seem to be related to setting maxrequests=1. However, I use the default, which is 0. Any ideas where to look? PS. I'm using PostgreSQL. Might be related to that as well, since the exception appears when making a database query. ``` File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 86, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 140, in root if not self.has_permission(request): File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 99, in has_permission return request.user.is_authenticated() and request.user.is_staff File "/usr/lib/python2.6/site-packages/django/contrib/auth/middleware.py", line 5, in __get__ request._cached_user = get_user(request) File "/usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py", line 83, in get_user user_id = request.session[SESSION_KEY] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 46, in __getitem__ return self._session[key] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 172, in _get_session self._session_cache = self.load() File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/db.py", line 16, in load expire_date__gt=datetime.datetime.now() File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 93, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 304, in get num = len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 160, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 275, in iterator for row in self.query.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 206, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql cursor.execute(sql, params) OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. ```
2008/12/26
[ "https://Stackoverflow.com/questions/393637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5475/" ]
Possible solution: <http://groups.google.com/group/django-users/browse_thread/thread/2c7421cdb9b99e48> > Until recently I was curious to test this on Django 1.1.1. Will this exception be thrown again... surprise, there it was again. It took me some time to debug this; a helpful hint was that it only shows when (pre)forking. So for those randomly getting those exceptions, I can say... fix your code :) Ok.. seriously, there are always a few ways of doing this, so let me first explain where the problem is. If you access the database when any of your modules is imported, e.g. reading configuration from the database, then you will get this error. When your fastcgi-prefork application starts, it first imports all modules, and only after this forks children. If you have established a db connection during import, all children processes will have an exact copy of that object. This connection is being closed at the end of the request phase (request\_finished signal). So the first child which is called to process your request will close this connection. But what will happen to the rest of the child processes? They will believe that they have an open and presumably working connection to the db, so any db operation will cause an exception. Why is this not showing in the threaded execution model? I suppose because threads are using the same object and know when any other thread is closing the connection. How to fix this? The best way is to fix your code... but this can be difficult sometimes. Another option, in my opinion quite clean, is to write somewhere in your application a small piece of code: ``` from django.db import connection from django.core import signals def close_connection(**kwargs): connection.close() signals.request_started.connect(close_connection) ``` Not ideal though; connecting twice to the DB is a workaround at best. --- Possible solution: using connection pooling (pgpool, pgbouncer), so you have DB connections pooled and stable, and handed fast to your FCGI daemons. The problem is that this triggers another bug: psycopg2 raising an *InterfaceError* because it's trying to disconnect twice (pgbouncer already handled this). Now the culprit is the Django signal *request\_finished* triggering *connection.close()*, and failing loudly even if it was already disconnected. I don't think this behavior is desired: if the request already finished, we don't care about the DB connection anymore. A patch for correcting this should be simple.
The relevant traceback: ``` /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/core/handlers/wsgi.py in __call__(self=<django.core.handlers.wsgi.WSGIHandler object at 0x24fb210>, environ={'AUTH_TYPE': 'Basic', 'DOCUMENT_ROOT': '/storage/test', 'GATEWAY_INTERFACE': 'CGI/1.1', 'HTTPS': 'off', 'HTTP_ACCEPT': 'application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5', 'HTTP_ACCEPT_ENCODING': 'gzip, deflate', 'HTTP_AUTHORIZATION': 'Basic dGVzdGU6c3VjZXNzbw==', 'HTTP_CONNECTION': 'keep-alive', 'HTTP_COOKIE': '__utma=175602209.1371964931.1269354495.126938948...none); sessionid=a1990f0d8d32c78a285489586c510e8c', 'HTTP_HOST': 'www.rede-colibri.com', ...}, start_response=<function start_response at 0x24f87d0>) 246 response = self.apply_response_fixes(request, response) 247 finally: 248 signals.request_finished.send(sender=self.__class__) 249 250 try: global signals = <module 'django.core.signals' from '/usr/local/l.../Django-1.1.1-py2.6.egg/django/core/signals.pyc'>, signals.request_finished = <django.dispatch.dispatcher.Signal object at 0x1975710>, signals.request_finished.send = <bound method Signal.send of <django.dispatch.dispatcher.Signal object at 0x1975710>>, sender undefined, self = <django.core.handlers.wsgi.WSGIHandler object at 0x24fb210>, self.__class__ = <class 'django.core.handlers.wsgi.WSGIHandler'> /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/dispatch/dispatcher.py in send(self=<django.dispatch.dispatcher.Signal object at 0x1975710>, sender=<class 'django.core.handlers.wsgi.WSGIHandler'>, **named={}) 164 165 for receiver in self._live_receivers(_make_id(sender)): 166 response = receiver(signal=self, sender=sender, **named) 167 responses.append((receiver, response)) 168 return responses response undefined, receiver = <function close_connection at 0x197b050>, signal undefined, self = <django.dispatch.dispatcher.Signal object at 0x1975710>, sender = <class 'django.core.handlers.wsgi.WSGIHandler'>, named = {} /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/__init__.py in close_connection(**kwargs={'sender': <class 'django.core.handlers.wsgi.WSGIHandler'>, 'signal': <django.dispatch.dispatcher.Signal object at 0x1975710>}) 63 # when a Django request is finished. 64 def close_connection(**kwargs): 65 connection.close() 66 signals.request_finished.connect(close_connection) 67 global connection = <django.db.backends.postgresql_psycopg2.base.DatabaseWrapper object at 0x17b14c8>, connection.close = <bound method DatabaseWrapper.close of <django.d...ycopg2.base.DatabaseWrapper object at 0x17b14c8>> /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/backends/__init__.py in close(self=<django.db.backends.postgresql_psycopg2.base.DatabaseWrapper object at 0x17b14c8>) 74 def close(self): 75 if self.connection is not None: 76 self.connection.close() 77 self.connection = None 78 self = <django.db.backends.postgresql_psycopg2.base.DatabaseWrapper object at 0x17b14c8>, self.connection = <connection object at 0x1f80870; dsn: 'dbname=co...st=127.0.0.1 port=6432 user=postgres', closed: 2>, self.connection.close = <built-in method close of psycopg2._psycopg.connection object at 0x1f80870> ``` Exception handling here could add more leniency: **/usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/\_\_init\_\_.py** ``` 63 # when a Django request is finished. 
64 def close_connection(**kwargs): 65 connection.close() 66 signals.request_finished.connect(close_connection) ``` Or it could be handled better in psycopg2, so as not to throw fatal errors if all we're trying to do is disconnect and it already is: **/usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/backends/\_\_init\_\_.py** ``` 74 def close(self): 75 if self.connection is not None: 76 self.connection.close() 77 self.connection = None ``` Other than that, I'm short on ideas.
In the end I switched back to Apache + mod\_python (I was having other random errors with FastCGI besides this one) and everything is good and stable now. The question still remains open; in case anybody has this problem in the future and solves it, they can record the solution here for future reference. :)
16,053
43,893,431
I am new to Python (version 3.4) and I am wondering how I can make a program similar to this one: ``` #block letters B1 = ("BBBB ") B2 = ("B B ") B3 = ("B B ") B4 = ("BBBB ") B5 = ("B B ") B6 = ("B B ") B7 = ("BBBB ") B = [B1, B2, B3, B4, B5, B6, B7] E1 = ("EEEEE ") E2 = ("E ") E3 = ("E ") E4 = ("EEEEE ") E5 = ("E ") E6 = ("E ") E7 = ("EEEEE ") E = [E1, E2, E3, E4, E5, E6, E7] N1 = ("N N") N2 = ("NN N") N3 = ("N N N") N4 = ("N N N") N5 = ("N N N") N6 = ("N NN") N7 = ("N N") N = [N1, N2, N3, N4, N5, N6, N7] for i in range(7): print(B[i], E[i], N[i]) ``` The output of my current code looks like this: ``` BBBB EEEEE N N B B E NN N B B E N N N BBBB EEEEE N N N B B E N N N B B E N NN BBBB EEEEE N N ``` But I want to know how to make one that can take user input and print it in the style above. I have been trying for a few hours and can't come up with a solution; it would be great to see how other people could do/have done it. I think it becomes a lot harder when the letters do not fit on the screen, so I only want to be able to print 10 letters. Thanks
2017/05/10
[ "https://Stackoverflow.com/questions/43893431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7991835/" ]
> **Assumption**: you have **all** the letters constructed and **all letters have the same number of rows**. In that case you can **construct a dictionary**, like: ``` ascii_art = { 'B': B, 'E': E, 'N': N } ``` of course in real life, you construct a dictionary with all letters, and perhaps spaces, digits, etc. Now you can take a string as input with: ``` text = input('Enter text? ') ``` Next we map the string onto an iterable of letters: ``` chars = map(ascii_art.get,text) ``` and finally we put these into a zip and print that: ``` for d in zip(*chars): print(*d) ``` Or putting it all together: ``` ascii_art = { 'B': B, 'E': E, 'N': N } text = input('Enter text? ') chars = map(ascii_art.get,text) for d in zip(*chars): print(*d) ``` In case you want to **limit** the output to 10 chars per line, you can alter the code to: ``` ascii_art = { 'B': B, 'E': E, 'N': N } text = input('Enter text? ') for i in range(0,len(text),10): chars = map(ascii_art.get,text[i:i+10]) for d in zip(*chars): print(*d) ``` This results in: ``` Enter text? BEBEBEBBEBEENNNENNNN BBBB EEEEE BBBB EEEEE BBBB EEEEE BBBB BBBB EEEEE BBBB B B E B B E B B E B B B B E B B B B E B B E B B E B B B B E B B BBBB EEEEE BBBB EEEEE BBBB EEEEE BBBB BBBB EEEEE BBBB B B E B B E B B E B B B B E B B B B E B B E B B E B B B B E B B BBBB EEEEE BBBB EEEEE BBBB EEEEE BBBB BBBB EEEEE BBBB EEEEE EEEEE N N N N N N EEEEE N N N N N N N N E E NN N NN N NN N E NN N NN N NN N NN N E E N N N N N N N N N E N N N N N N N N N N N N EEEEE EEEEE N N N N N N N N N EEEEE N N N N N N N N N N N N E E N N N N N N N N N E N N N N N N N N N N N N E E N NN N NN N NN E N NN N NN N NN N NN EEEEE EEEEE N N N N N N EEEEE N N N N N N N N ``` We can add an empty line after each block of rows by adding a single extra statement: ``` ascii_art = { 'B': B, 'E': E, 'N': N } text = input('Enter text? ') for i in range(0,len(text),10): chars = map(ascii_art.get,text[i:i+10]) for d in zip(*chars): print(*d) print() ``` This generates: ``` Enter text? BBBEEEEEEENNNNN BBBB BBBB BBBB EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE B B B B B B E E E E E E E B B B B B B E E E E E E E BBBB BBBB BBBB EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE B B B B B B E E E E E E E B B B B B B E E E E E E E BBBB BBBB BBBB EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE EEEEE N N N N N N N N N N NN N NN N NN N NN N NN N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N NN N NN N NN N NN N NN N N N N N N N N N N ```
First you'd have to manually make the alphabet as you did before: ``` N1 = ("N N") N2 = ("NN N") N3 = ("N N N") N4 = ("N N N") N5 = ("N N N") N6 = ("N NN") N7 = ("N N") N = [N1, N2, N3, N4, N5, N6, N7] ``` Do that for each letter [a-z]. ``` # Now, to let user input print your alphabet, we will use a dictionary # The key is the letter and the value is the printable array d = {'a':A,'b':B, ... , 'z':Z } # Let's ask for user input: line = input('What do you want to print> ') # Now let's print what the user said in our alphabet # iterate through the input and print it sentence = map(d.get,line) for letter in zip(*sentence): print(*letter) ```
16,063
1,839,567
I have a vector consisting of a point, speed and direction. We will call this vector R. And another vector that only consists of a point and a speed, with no direction. We will call this one T. Now, what I am trying to do is to find the shortest intersection point of these two vectors. Since T has no direction, this is proving to be difficult. I was able to create a formula that works in CaRMetal but I cannot get it working in Python. Can someone suggest a more efficient way to solve this problem? Or solve my existing formula for X? Formula: [![Formula](https://i.stack.imgur.com/kGd2H.png)](https://i.stack.imgur.com/kGd2H.png) (source: [bja888.com](http://storage.bja888.com/formula.png)) Key: [![Definitions](https://i.stack.imgur.com/1svrA.png)](https://i.stack.imgur.com/1svrA.png) (source: [bja888.com](http://storage.bja888.com/keys.png)) Where o or k is the speed difference between the vectors: R.speed / T.speed
2009/12/03
[ "https://Stackoverflow.com/questions/1839567", "https://Stackoverflow.com", "https://Stackoverflow.com/users/223779/" ]
My math could be a bit rusty, but try this: *p* and *q* are the position vectors, *d* and *e* are the direction vectors. After time *t*, you want them to be at the same place: **(1)** *p + t·d = q + t·e* Since you want the direction vector *e*, write it like this: **(2)** *e = (p − q)/t + d* Now you don't need the time *t*, which you can calculate using your speed constraint *s* (otherwise you could just travel to the other point directly): the direction vector *e* has to be of length *s*, so **(3)** *e₁² + e₂² = s²* After some equation solving you end up with **(4)** **I)** *a = sum((p − q)²)/(s² − sum(d²))* **II)** *b = 2·sum(d·(p − q))/(s² − sum(d²))* **III)** *c = −1* **IV)** *a + b·t + c·t² = 0* The *sum* goes over your vector components (2 in 2D, 3 in 3D). The last one is a quadratic formula which you should be able to solve on your own ;-)
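A quick Python sketch of that recipe (my transcription, not the answerer's code; it picks the positive root, and assumes the chaser is faster than the target so such a root exists):

```python
import math

def intercept(p, q, d, s):
    """p: target position, d: target velocity, q: chaser position,
    s: chaser speed. Returns (e, t): chaser velocity and intercept time."""
    denom = s ** 2 - sum(di ** 2 for di in d)
    a = sum((pi - qi) ** 2 for pi, qi in zip(p, q)) / denom
    b = 2 * sum(di * (pi - qi) for di, pi, qi in zip(d, p, q)) / denom
    # a + b*t - t**2 = 0  ->  t = (b + sqrt(b**2 + 4*a)) / 2
    t = (b + math.sqrt(b * b + 4 * a)) / 2
    e = tuple((pi - qi) / t + di for pi, qi, di in zip(p, q, d))
    return e, t

e, t = intercept(p=(10.0, 0.0), q=(0.0, 0.0), d=(0.0, 1.0), s=2.0)
print(e, t)   # chaser heading (length 2.0) and time to intercept
```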
1. Let's assume that the first point, A, has zero speed. In this case, it should be very simple to find the direction which will give the fastest intersection. 2. Now, A **does** have a speed. We can force it to have zero speed by deducting its speed vector from the vector of B. Now we can solve as we did in 1. Just a rough idea that came to mind... **Some more thoughts:** If A is standing still, then the direction B needs to travel in is directly towards A. This gives us the direction in the coordinate system in which A is standing still. Let's call it d. Now we only need to convert the direction B needs to travel from the coordinate system in which A is still to the coordinate system in which A is moving at the given speed and direction, d2. This is simply vector addition: d3 = d - d2. We can now find the direction of d3. **And a bit more formal:** *A is stationary*: Sb = speed of B, known, scalar alpha = atan2( a\_y-b\_y, a\_x-b\_x ) Vb\_x = Sb \* cos(alpha) Vb\_y = Sb \* sin(alpha) *A moves at speed Sa, direction beta*: Vb\_x' = Sb \* cos(alpha) + Sa \* cos(beta) Vb\_y' = Sb \* sin(alpha) + Sa \* sin(beta) alpha' = atan2( Vb\_y', Vb\_x' ) Haven't tested the above, but it looks reasonable at first glance...
16,068
34,278,955
On the linux system I'm using, the scheduler is not very generous giving cpu time to subprocesses spawned from python's multiprocessing module. When using 4 subprocceses on a 4-core machine, I get around 22% CPU according to `ps`. However, if the subprocesses are child processes of the shell, and not the python program, it goes up to near 100% CPU. But multiprocessing is a much nicer interface than manually splitting my data, and running separate python programs for each split, and it would be nice to get the best of both worlds (code organization and high CPU utilization). I tried setting the processes' niceness to -20, but that didn't help. I'm wondering whether recompiling the linux kernel with some option would help the scheduler give more CPU time to python multiprocessing workers. Maybe there is a relevant configuration option? The exact version I'm using is: ``` $ uname -a Linux <hostname> 3.19.0-39-generic #44~14.04.1-Ubuntu SMP Wed Dec 2 10:00:35 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux ``` In case this might be related to the way I'm using multiprocessing, it is of the form: ``` with Pool(4) as p: p.map(function,data) ``` Update: This is not a reproducible problem. The results reported here were from a few days ago, and I ran the test again and the multiprocessing processes were as fast as I hoped for. Maybe this question should get deleted, it wouldn't be good to mislead people about the performance to expect of `multiprocessing`.
2015/12/15
[ "https://Stackoverflow.com/questions/34278955", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1483516/" ]
I don't believe your benchmarks are executing as independent tasks as you might think they do. You didn't show the code of `function` but I suspect it does some synchronization. I wrote the following benchmark. If I run the script with either the `--fork` or the `--mp` option, I always get 400 % CPU utilization (on my quad core machine) and comparable overall execution time of about 18 seconds. If called with the `--threads` option, however, the program effectively runs sequentially, achieving only about 100 % CPU utilization and taking a minute to complete for the reason [mentioned](https://stackoverflow.com/questions/34278955/operating-system-level-changes-to-speed-up-pythons-multiprocessing/34279460#34279460) by [dave](https://stackoverflow.com/users/450609/dave). ``` import multiprocessing import os import random import sys import threading def find_lucky_number(x): prng = random.Random() prng.seed(x) for i in range(100000000): prng.random() return prng.randint(0, 100) def with_threading(inputs): callback = lambda x : print(find_lucky_number(x)) threads = [threading.Thread(target=callback, args=(x,)) for x in inputs] for t in threads: t.start() for t in threads: t.join() def with_multiprocessing(inputs): with multiprocessing.Pool(len(inputs)) as pool: for y in pool.map(find_lucky_number, inputs): print(y) def with_forking(inputs): pids = list() for x in inputs: pid = os.fork() if pid == 0: print(find_lucky_number(x)) sys.exit(0) else: pids.append(pid) for pid in pids: os.waitpid(pid, 0) if __name__ == '__main__': inputs = [1, 2, 3, 4] if sys.argv[1] == '--threads': with_threading(inputs) elif sys.argv[1] == '--mp': with_multiprocessing(inputs) elif sys.argv[1] == '--fork': with_forking(inputs) else: print("What should I do?", file=sys.stderr) sys.exit(1) ```
Welcome to the CPython Global Interpreter Lock. Your threads show up as distinct processes to the linux kernel (that is how threads are implemented in Linux in general: each thread gets its own process so the kernel can schedule them). So why isn't Linux scheduling more than one of them to run at a time (that is why your 4 core machine is averaging around 25% minus a bit of overhead)? The python interpreter is holding a lock while interpreting each thread, thus blocking the other threads from running (so they can't be scheduled). To get around this you can either: 1. Use processes rather than threads (as you mention in your question) 2. Use a different python interpreter that doesn't have a Global Interpreter Lock.
16,073
49,037,104
So, I am making a login system in Python with tkinter and I want it to move to another page after the email and password have been validated. The only way I have found to do this is by using a button click command. I only want it to move on to the next page after the email and password have been validated. Thanks in advance.

```
from tkinter import *

class login:
    def __init__(self, master, *args, **kwargs):
        self.emailGranted = False
        self.passwordGranted = False
        self.attempts = 8

        self.label_email = Label(text="email:", font=('Serif', 13))
        self.label_email.grid(row=0, column=0, sticky=E)
        self.label_password = Label(text="password:", font=('Serif', 13))
        self.label_password.grid(row=1, column=0, sticky=E)

        self.entry_email = Entry(width=30)
        self.entry_email.grid(row=0, column=1, padx=(3, 10))
        self.entry_password = Entry(width=30, show="•")
        self.entry_password.grid(row=1, column=1, padx=(3, 10))

        self.login = Button(text="Login", command=self.validate)
        self.login.grid(row=2, column=1, sticky=E, padx=(0, 10), pady=(2, 2))

        self.label_granted = Label(text="")
        self.label_granted.grid(row=3, columnspan=3, sticky=N+E+S+W)

    def validate(self):
        self.email = self.entry_email.get()
        self.password = self.entry_password.get()

        if self.email == "email":
            self.emailGranted = True
        else:
            self.emailGranted = False
            self.label_granted.config(text="wrong email")
            self.attempts -= 1
            self.entry_email.delete(0, END)
            if self.attempts == 0:
                root.destroy()

        if self.password == "password":
            self.passwordGranted = True
        else:
            self.passwordGranted = False
            self.label_granted.config(text="wrong password")
            self.attempts -= 1
            self.entry_password.delete(0, END)
            if self.attempts == 0:
                root.destroy()

        if self.emailGranted is False and self.passwordGranted is False:
            self.label_granted.config(text="wrong email and password")

        if self.emailGranted is True and self.passwordGranted is True:
            self.label_granted.config(text="access granted")
            # I want it to move on to PageOne here but I'm not sure how

class PageOne:
    def __init__(self, master, *args, **kwargs):
        Button(text="it works").grid(row=0, column=0)

if __name__ == "__main__":
    root = Tk()
    root.resizable(False, False)
    root.title("login")
    login(root)
    root.mainloop()
```
2018/02/28
[ "https://Stackoverflow.com/questions/49037104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7698965/" ]
You can split the string and then `Array.includes` to check whether the value exists in the array or not. ```js function check(str, val){ return str.split(", ").includes(val+""); } var str = "1, 13, 112, 12, 1212, 555" console.log(check(str, 12)); console.log(check(str, 121)); console.log(check(str, 1212)); ```
Another possible answer  : ```js var twelve = /(^| )12(,|$)/; var s = "1, 13, 112, 12, 1212, 555"; console.log(twelve.test(s)); // true ``` About the regular expression ---------------------------- Following your comment, let me give you a little help to understand the first line. `/(^| )12(,|$)/` is a regular expression. A regular expression is a sequence of characters that defines a search pattern. This is one of the features embedded in JavaScript, but it's not inherently linked to JavaScript. In other words, you should not learn regular expressions in the scope of JavaScript, but JavaScript remains a good way to experiment on regular expressions. That being said, what does `/(^| )12(,|$)/` mean ? The two `/` are delimiters indicating the boundaries of the expression. What's in between the `/` is the expression itself, `(^| )12(,|$)`, it describes the pattern we are looking for. We can classify the various characters involved in this expression into two categories : * regular characters (`1`, `2`, `,` and ), * metacharacters (`(`, `)`, `|`, `^` and `$`). Regular characters are characters with no special meaning. Example : ```none /cat/.test("cat") // true /cat/.test("concat") // true ``` Metacharacters are characters with a special meaning : * `^` means "beginning of the text", * `$` means "end of the text", * `()` indicates a subexpression, * `|` indicates a logical OR. Example 1, empty text : ```none /^$/.test("") // true /^$/.test("azerty") // false ``` Example 2, exact match : ```none /^zert$/.test("zert") // true /^zert$/.test("azerty") // false ``` Example 3, alternatives : ```none /(az|qw)erty/.test("azerty") // true /(az|qw)erty/.test("qwerty") // true ``` To wrap it up, let's come back to `/(^| )12(,|$)/` : ```none (^| ) start of the text or " " 12 then "12" (,|$) then "," or end of the text ``` Thus, our pattern matches with strings like `12`, `* 12`, `12,*` or `* 12,*`, where `*` means "zero or more characters". Last word, in JavaScript you can declare a regular expression using the `new` keyword : ``` var twelve = new RegExp("(^| )12(,|$)"); ``` This is useful when you need to change some part of the expression dynamically : ``` function newNumberPattern (n) { return new RegExp("(^| )" + n + "(,|$)"); } var eleven = newNumberPattern(11); var twelve = newNumberPattern(12); ``` That's it, I hope this is enlightening :-)
16,074
7,598,159
I am trying to access the Amazon Advertising API through Python, and I created a Python script to automate the authentication process. This file, called amazon.py, is located at ~/PROJECT/APP/amazon.py. I want to be able to play around with the API, so I launched `python manage.py shell` from the ~/PROJECT directory to enter the Python shell. My goal is to be able to execute the Python script amazon.py within this shell. What command should I be using to execute amazon.py?
2011/09/29
[ "https://Stackoverflow.com/questions/7598159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/971235/" ]
``` int* ptr = (int*)&a; ``` This is dangerous (this itself doesn't invoke UB, though). But this, ``` *ptr = 3; ``` This invokes undefined behavior (UB), because you're attempting to modify the `const` object pointing to by `ptr`. UB means anything could happen. Note that `a` is truly a const object. §7.1.5.1/4 (C++03) says, > > Except that any class member declared mutable (7.1.1) can be modified, **any attempt to modify a const object during its lifetime (3.8) results in undefined behavior.** > > > ``` [Example: [...] const int* ciq = new const int (3); // initialized as required int* iq = const_cast<int*>(ciq); // cast required *iq = 4; // undefined: modifies a const object ```
Don't do things like this. It's undefined behavior. If you lie to the compiler, it will get its revenge (c)
16,077
12,961,475
I am looking for a way to parse the following commandline syntax using the argparse module in python3: ``` myapp.py [folder] [[from] to] ``` Meaning: The user may optionally define a folder, which defaults to cwd. Additionally the user may pass up to two integers. If only one number is given, it should be stored in the `to` variable. This is similar to the syntax of the python builtin `range()`. e.g.: ``` myapp.py folder myapp.py 10 myapp.py 5 10 myapp.py folder 5 10 myapp.py folder 10 ``` Is that possible? If so, how?
2012/10/18
[ "https://Stackoverflow.com/questions/12961475", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1110748/" ]
Use options; that's what they're there for (and what `argparse` is good at parsing). Thus, a syntax like ``` myapp.py [-F folder] [[from] to] ``` would make a lot more sense, and be easier to parse.
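A minimal sketch of that interface (the `-F` flag comes from the suggestion above; treating a single number as `to`, mirroring the question, is my assumption):

```
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('-F', '--folder', default=os.getcwd())
# zero, one or two numbers; a single number is treated as "to"
parser.add_argument('numbers', type=int, nargs='*')
args = parser.parse_args()

frm, to = None, None
if len(args.numbers) == 1:
    to = args.numbers[0]
elif len(args.numbers) == 2:
    frm, to = args.numbers

print(args.folder, frm, to)
```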
I couldn't see a way to do it without using a named argument for folder:

```
# usage: argparsetest2.py [-h] [--folder [FOLDER]] [to] [fr]
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('--folder', dest='folder', nargs='?', default=os.getcwd())
parser.add_argument('to', type=int, nargs='?')
parser.add_argument('fr', type=int, nargs='?')

args = parser.parse_args()
print(args)
```
16,082
46,341,816
I'm working on a Python project using PyCharm and now I need to generate the corresponding API documentation. I'm documenting the code methods and classes using `docstrings`. I read about Sphinx and Doxygen, with Sphinx being the most recommended right now. I tried to configure Sphinx within PyCharm but I had no luck in getting it working.

This is the **project structure**:

[![Project Structure](https://i.stack.imgur.com/Zf0pI.png)](https://i.stack.imgur.com/Zf0pI.png)

and this was the I/O interaction with the command **Sphinx Quickstart**

```
C:\Python\Python36\Scripts\sphinx-quickstart.exe
Welcome to the Sphinx 1.6.3 quickstart utility.

Please enter values for the following settings (just press Enter to accept a default value, if one is given in brackets).

Enter the root path for documentation.
> Root path for the documentation [.]:

You have two options for placing the build directory for Sphinx output. Either, you use a directory "_build" within the root path, or you separate "source" and "build" directories within the root path.
> Separate source and build directories (y/n) [n]:

Inside the root directory, two more directories will be created; "_templates" for custom HTML templates and "_static" for custom stylesheets and other static files. You can enter another prefix (such as ".") to replace the underscore.
> Name prefix for templates and static dir [_]: .

The project name will occur in several places in the built documentation.
> Project name: Attributed Graph Profiler
> Author name(s): M.C & D.A.T.

Sphinx has the notion of a "version" and a "release" for the software. Each version can have multiple releases. For example, for Python the version is something like 2.5 or 3.0, while the release is something like 2.5.1 or 3.0a1. If you don't need this dual structure, just set both to the same value.
> Project version []: 0.0.1
> Project release [0.0.1]:

If the documents are to be written in a language other than English, you can select a language here by its language code. Sphinx will then translate text that it generates into that language. For a list of supported codes, see http://sphinx-doc.org/config.html#confval-language.
> Project language [en]:

The file name suffix for source files. Commonly, this is either ".txt" or ".rst". Only files with this suffix are considered documents.
> Source file suffix [.rst]:

One document is special in that it is considered the top node of the "contents tree", that is, it is the root of the hierarchical structure of the documents. Normally, this is "index", but if your "index" document is a custom template, you can also set this to another filename.
> Name of your master document (without suffix) [index]: Sphinx can also add configuration for epub output: > Do you want to use the epub builder (y/n) [n]: Please indicate if you want to use one of the following Sphinx extensions: > autodoc: automatically insert docstrings from modules (y/n) [n]: y > doctest: automatically test code snippets in doctest blocks (y/n) [n]: > intersphinx: link between Sphinx documentation of different projects (y/n) [n]: > todo: write "todo" entries that can be shown or hidden on build (y/n) [n]: > coverage: checks for documentation coverage (y/n) [n]: > imgmath: include math, rendered as PNG or SVG images (y/n) [n]: > mathjax: include math, rendered in the browser by MathJax (y/n) [n]: > ifconfig: conditional inclusion of content based on config values (y/n) [n]: > viewcode: include links to the source code of documented Python objects (y/n) [n]: y > githubpages: create .nojekyll file to publish the document on GitHub pages (y/n) [n]: y A Makefile and a Windows command file can be generated for you so that you only have to run e.g. `make html' instead of invoking sphinx-build directly. > Create Makefile? (y/n) [y]: > Create Windows command file? (y/n) [y]: Creating file .\conf.py. Creating file .\index.rst. Creating file .\Makefile. Creating file .\make.bat. Finished: An initial directory structure has been created. You should now populate your master file .\index.rst and create other documentation source files. Use the Makefile to build the docs, like so: make builder where "builder" is one of the supported builders, e.g. html, latex or linkcheck. Process finished with exit code 0 ``` Then I moved to the `/docs` folder [![enter image description here](https://i.stack.imgur.com/8hZMB.png)](https://i.stack.imgur.com/8hZMB.png) , edited the **conf.py** file: ``` #!/usr/bin/env python3 # -*- coding: utf-8 -*- # # "Query Rewriter" documentation build configuration file, created by # sphinx-quickstart on Thu Sep 21 14:56:19 2017. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # import os import sys sys.path.append(os.path.abspath("../../query_rewriter")) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'sphinx.ext.ifconfig', 'sphinx.ext.viewcode', 'sphinx.ext.githubpages'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix(es) of source filenames. # You can specify multiple suffix as a list of string: # # source_suffix = ['.rst', '.md'] source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. 
project = '"Query Rewriter"' copyright = '2017, M.C & D.A.T' author = 'M.C & D.A.T' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '0.0.1' # The full version, including alpha/beta/rc tags. release = '0.0.1' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # # This is also used if you do content translation via gettext catalogs. # Usually you set "language" from the command line for these cases. language = "en" # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. # This patterns also effect to html_static_path and html_extra_path exclude_patterns = [] # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # If true, `todo` and `todoList` produce output, else they produce nothing. todo_include_todos = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. # html_theme = 'alabaster' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # # html_theme_options = {} # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Custom sidebar templates, must be a dictionary that maps document names # to template names. # # This is required for the alabaster theme # refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars html_sidebars = { '**': [ 'about.html', 'navigation.html', 'relations.html', # needs 'show_related': True theme option to display 'searchbox.html', 'donate.html', ] } # -- Options for HTMLHelp output ------------------------------------------ # Output file base name for HTML help builder. htmlhelp_basename = 'QueryRewriterdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # # 'preamble': '', # Latex figure (float) alignment # # 'figure_align': 'htbp', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ (master_doc, 'QueryRewriter.tex', '"Query Rewriter" Documentation', 'M.C \\& D.A.T', 'manual'), ] # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ (master_doc, 'queryrewriter', '"Query Rewriter" Documentation', [author], 1) ] # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. 
List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ (master_doc, 'QueryRewriter', '"Query Rewriter" Documentation', author, 'QueryRewriter', 'One line description of project.', 'Miscellaneous'), ] # Example configuration for intersphinx: refer to the Python standard library. intersphinx_mapping = {'https://docs.python.org/': None} ``` and ran the following command: ``` B:\_Python_Workspace\AttributedGraphProfiler\docs>make html Running Sphinx v1.6.3 making output directory... loading pickled environment... not yet created building [mo]: targets for 0 po files that are out of date building [html]: targets for 1 source files that are out of date updating environment: 1 added, 0 changed, 0 removed reading sources... [100%] index looking for now-outdated files... none found pickling environment... done checking consistency... done preparing documents... done writing output... [100%] index generating indices... genindex writing additional pages... search copying static files... done copying extra files... done dumping search index in English (code: en) ... done dumping object inventory... done build succeeded. Build finished. The HTML pages are in .build\html. B:\_Python_Workspace\AttributedGraphProfiler\docs> ``` I thought I was done, but this is the poor result I got, without any documentation for classes and modules. **index.html** [![enter image description here](https://i.stack.imgur.com/6pAuT.png)](https://i.stack.imgur.com/6pAuT.png) **genindex.html** [![enter image description here](https://i.stack.imgur.com/9f0Xk.png)](https://i.stack.imgur.com/9f0Xk.png) Am I doing something wrong? Thanks in advance for your time.
2017/09/21
[ "https://Stackoverflow.com/questions/46341816", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8078050/" ]
Just solved exactly the same problem, Juan. **Sphinx unfortunately is not a fully automated doc generator from code comments** like doxygen, jautodoc etc. As in the link mentioned in mzjn's [comment](https://stackoverflow.com/a/25555982/1980180), some steps are necessary for it to work properly. As I see you are working in PyCharm, I will touch on PyCharm-Sphinx integration. I hope you will not have to change anything manually like conf.py.

1. In PyCharm "File/setting/tools/python integrated tools" define the sphinx-working-directory as codebase/Docs. (Just for clarity; choose wherever you want.) Your sphinx scripts will run at this path.

---

2. Run "Tools/Sphinx Quickstart" in PyCharm. As you wrote above, select the proper options. But "autodoc" is a must (y), and "Separate source and build directories" is recommended (y) to understand what is going on. *This script will generate the skeleton of the sphinx project.*

---

3. Create a python task in Run/Edit-Configurations... in PyCharm like below. Be careful with the python interpreter and your script (if you use a python environment like me). *This script will generate the rst files for your modules.* `source/` shows the Docs/Source directory created in step 1; it has the .rst files for our modules. `../` shows our modules' py files.

UPDATE 1:
---------

> A-) Run this task to generate the rst files.
>
> B-) Add the "modules" term to the index.rst file, like;
>
> ```
> bla bla
> .. toctree::
>    :maxdepth: 2
>    :caption: Contents:
>
>    modules
> bla bla
> ```

There is no need to run this and add the "modules" term again on every doc creation. Step A is necessary only when new modules are introduced in the project.

[![Sphinx create Rst Files for modules](https://i.stack.imgur.com/iOZfg.png)](https://i.stack.imgur.com/iOZfg.png)

4. Create a Python Docs task in Run/Edit-Configurations... in PyCharm like below. `command` shows the documentation type. `Input` shows our rst files path. `output` shows our document output directory. (FINALLY :))

[![enter image description here](https://i.stack.imgur.com/MJhqo.png)](https://i.stack.imgur.com/MJhqo.png)

5. Run and enjoy...

UPDATE 2:
---------

If you integrate with Read the Docs (after sign up and GitHub authorization):

1. You have to add the modules path to sys.path. Read the Docs calls sphinx-build, so it should know the path of the py files. Basically, uncomment the line in conf.py: `sys.path.insert(0, os.path.abspath('..//..//'))` (if the working directory is pyfiles/Docs/source)

2. Always check the build results on Read the Docs; simply click on lines like these, and you will see if there is an error or a warning.

[![enter image description here](https://i.stack.imgur.com/um6ox.png)](https://i.stack.imgur.com/um6ox.png)

3. If you didn't do 1 & 2, you will probably not have a working "Indices and tables" section. **genindex.html** and **py-modindex.html** will be, respectively, an *empty template* and a *non-existing page*.
Prior to `make html` you need to read the apidoc comments from your code and create the .rst files. You should run something like this from your project root folder:

```
sphinx-apidoc . -o ./docs -f tests
```

This will rewrite the .rst files (hence `-f`) in the `docs` folder and skip the apidoc reading for the `tests` package. More on `sphinx-apidoc` can be found [here](http://www.sphinx-doc.org/en/stable/man/sphinx-apidoc.html).

After this comes the `make html` command.
16,085
13,788,349
Reading Guido's infamous answer to the question [Sorting a million 32-bit integers in 2MB of RAM using Python](http://neopythonic.blogspot.fr/2008/10/sorting-million-32-bit-integers-in-2mb.html), I discovered the module [heapq](http://docs.python.org/2.7/library/heapq.html). I also discovered I didn't understand jack about it, nor did I know what I could do with it.

Can you explain to me (with the proverbial six-year-old as the target audience) what the heap queue algorithm is for and what you can do with it?

Can you provide a **simple** Python snippet where using it (with the `heapq` module) solves a problem that is better solved with it than with something else?
2012/12/09
[ "https://Stackoverflow.com/questions/13788349", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9951/" ]
`heapq` implements [binary heaps](https://en.wikipedia.org/wiki/Binary_heap), which are a partially sorted data structure. In particular, they have three interesting operations:

* `heapify` turns a list into a heap, in-place, in O(*n*) time;
* `heappush` adds an element to the heap in O(lg *n*) time;
* `heappop` retrieves the *smallest* element off the heap in O(lg *n*) time.

Many interesting algorithms rely on heaps for performance. The simplest one is probably partial sorting: getting the *k* smallest (or largest) elements of a list without sorting the entire list. `heapq.nsmallest` (`nlargest`) does that. The [implementation of `nlargest`](http://hg.python.org/cpython/file/70ee0b76239d/Lib/heapq.py#l183) can be paraphrased as:

```
from heapq import heapify, heappush, heappop

def nlargest(n, l):
    # make a heap of the first n elements
    heap = l[:n]
    heapify(heap)
    # loop over the other len(l)-n elements of l
    for i in xrange(n, len(l)):
        # push the current element onto the heap, so its size becomes n+1
        heappush(heap, l[i])
        # pop the smallest element off, so that the heap will contain
        # the largest n elements of l seen so far
        heappop(heap)
    return sorted(heap, reverse=True)
```

Analysis: let N be the number of elements in `l`. `heapify` is run once, for a cost of O(n); that's negligible. Then, in a loop running N-n = O(N) times, we perform a `heappop` and a `heappush` at O(lg n) cost each, giving a total running time of O(N lg n). When N >> n, this is a big win compared to the other obvious algorithm, `sorted(l)[:n]`, which takes O(N lg N) time.
For example: you have a set of 1000 floating-point numbers. You want to repeatedly remove the smallest item from the set and replace it with a random number between 0 and 1.

The fastest way to do it is with the heapq module:

```
import random
from heapq import heappop, heappush

heap = [0.0] * 1000
# heapify(heap)  # usually you need this, but not if the list is initially sorted
while True:
    x = heappop(heap)
    heappush(heap, random.random())
```

This takes a time per iteration that is logarithmic in the length of the heap (i.e. around 7 units, for a list of length 1000). Other solutions take a linear time (i.e. around 1000 units, which is 140 times slower, and gets slower and slower when the length increases):

```
lst = [0.0] * 1000
while True:
    x = min(lst)    # linear
    lst.remove(x)   # linear
    lst.append(random.random())
```

or:

```
lst = [0.0] * 1000
while True:
    x = lst.pop()   # get the largest one in this example
    lst.append(random.random())
    lst.sort()      # linear (in this case)
```

or even:

```
import bisect

lst = [0.0] * 1000
while True:
    x = lst.pop()   # get the largest one in this example
    bisect.insort(lst, random.random())   # linear
```
16,086
51,347,732
I am trying to replace a block of text which spans multiple lines of a text file using Python. Here is what my input file looks like.

input.txt:

```
ABCD abcd (
. X (x),
.Y (y)
);

ABCD1 abcd1 (
. X1 (x1),
.Y1 (y1)
);
```

I am reading the above file with the code below and trying to replace the text, but failed to do so. Below is my code.

```
import re

fo = open('input.txt', 'r')
input_str = fo.read()

find_str = '''ABCD abcd (
.X (x),
.Y (y)
);'''

replace_str = '''ABCDE abcde (
. XX (xx),
.YY (yy)
);'''

input_str = re.sub(find_str, replace_str, input_str)
```

But `input_str` seems to be unchanged. Not sure what I am missing. Any clues?
2018/07/15
[ "https://Stackoverflow.com/questions/51347732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3843912/" ]
Running `oc whoami --show-console` returns the link to the console app.
Thanks, `oc login` helped me to get the web console url
16,087
34,394,650
I have some Python pexpect code that sends commands listed in a file. Say I store some commands in a file named `commandbase`:

```
ls -l /dev/
ls -l /home/ramana
ls -l /home/ramana/xyz
ls -l /home/ramana/xxx
ls -l /home/ramana/xyz/abc
ls -l /home/ramana/xxx/def
ls -l /home/dir/
```

and so on. Observe here that after `/` I have `dev` and `home` as variables. If I'm in `home`, `ramana` and `dir` are again variables. If I enter `ramana`, there are again `xyz` and `xxx`. So basically it is of the form

```
ls -l /variable1/variable2/variable3/
```

and so on. Here I need to build a tree for every variable and its specific secondary variables. Now I should have a list/array/file where I store the first variable, with its secondary variables in another list, and so on.

So I need a function like this. In the main script:

```
for line in database:
    child.sendline(line+"\r")
    child.expect("\$",timeout)
```

The database file should be something like:

```
def commands():
    return "ls -l <some variable>/<second variable and so on>"
```

This function should return all commands with all the combinations. How do I return variable commands here instead of defining all the commands? Is it possible with arrays or lists?

**[EDIT] Edited as it was less clear. Hope I'm clear this time**
2015/12/21
[ "https://Stackoverflow.com/questions/34394650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4894197/" ]
This can be done with a list comprehension... ``` paths = ['/dev/', '/dev/ramana/', ...] command = 'ls -l' commandsandpaths = [command + ' ' + x for x in paths] ``` `commandsandpaths` will be a list with... ``` ls -l /dev/ ls -l /dev/ramana/ ``` Personally, I prefer to use string formatting rather than string concatenation... ``` commandsandpaths = ['{0} {1}'.format(command, x) for x in paths] ``` But it may be less readable if you're not familiar with the syntax
Your requirements are a little more complicated than it appears at first glance. Below I have adopted a convention to use lists `[...]` to indicate things to concatenate, and tuples `(...)` for things to choose from, i.e. optionals. Your list of path names can now be expressed as this:- ``` database = ( 'dev', ['home', ( 'dir', ['ramana', ( '', ['xyz', ( '', 'abc' ) ], ['xxx', ( '', 'def' ) ] ) ] ) ] ) ``` The above syntax avoids redundancy as much as possible. The whitespace is not necessary but helps here to illustrate which parts are on the same nested level. Next we need a way to transform this into a list of commands:- ``` def permute(prefix, tree): def flatten(branch): #print 'flatten', branch results = [ ] if type(branch) is list: parts = [ ] for part in branch: if type(part) is basestring: if part: parts.append([part]) else: parts.append(flatten(part)) index = map(lambda x: 0, parts) count = map(len, parts) #print 'combining', parts, index, count while True: line = map(lambda i: parts[i][index[i]], range(len(parts))) line = '/'.join(line) #print '1:', line results.append( line ) curIndex = len(parts)-1 while curIndex >= 0: index[curIndex] += 1 if index[curIndex] < count[curIndex]: break index[curIndex] = 0 curIndex -= 1 if curIndex < 0: break elif type(branch) is tuple: for option in branch: if type(option) is basestring: if len(option): #print '2:', option results.append( option ) else: results.extend(flatten(option)) else: #print '3:', branch results.append( branch ) return results return map(lambda x: prefix + x, flatten(tree)) ``` So now if we call `permute('ls -l /', database)` it returns the following list:- ``` [ 'ls -l /dev', 'ls -l /home/dir', 'ls -l /home/ramana/', 'ls -l /home/ramana/xyz/', 'ls -l /home/ramana/xyz/abc', 'ls -l /home/ramana/xxx/', 'ls -l /home/ramana/xxx/def' ] ``` From here it is now trivial to write these strings to a file named `commandbase` or execute it line by line.
16,090
35,205,173
I am trying to learn numpy array slicing. But this is a syntax I cannot seem to understand. What does `a[:1]` do? I ran it in Python.

```
a = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16])
a = a.reshape(2,2,2,2)
a[:1]
```

**Output:**

```
array([[[ 5,  6],
        [ 7,  8]],

       [[13, 14],
        [15, 16]]])
```

Can someone explain the slicing and how it works? The documentation doesn't seem to answer this question.

A second question: is there a way to generate the `a` array using something like `np.array(1:16)`, similar to how in Python you can write

```
x = [x for x in range(16)]
```
2016/02/04
[ "https://Stackoverflow.com/questions/35205173", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1939166/" ]
The commas in slicing are to separate the various dimensions you may have. In your first example you are reshaping the data to have 4 dimensions, each of length 2. This may be a little difficult to visualize, so if you start with a 2D structure it might make more sense:

```
>>> a = np.arange(16).reshape((4, 4))
>>> a
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]])
>>> a[0]  # access the first "row" of data
array([0, 1, 2, 3])
>>> a[0, 2]  # access the 3rd column (index 2) in the first row of the data
2
```

If you want to access multiple values using slicing you can use the colon to express a range:

```
>>> a[:, 1]  # get the entire 2nd (index 1) column
array([ 1,  5,  9, 13])
>>> a[1:3, -1]  # get the second and third elements from the last column
array([ 7, 11])
>>> a[1:3, 1:3]  # get the data in the second and third rows and columns
array([[ 5,  6],
       [ 9, 10]])
```

You can do steps too:

```
>>> a[::2, ::2]  # get every other element (column-wise and row-wise)
array([[ 0,  2],
       [ 8, 10]])
```

Hope that helps. Once that makes more sense you can look into stuff like adding dimensions by using `None` or `np.newaxis` or using the `...` ellipsis:

```
>>> a[:, None].shape
(4, 1, 4)
```

You can find more here: <http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html>
It might pay to explore the `shape` and individual entries as we go along. Let's start with

```
>>> a = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16])
>>> a.shape
(16,)
```

This is a one-dimensional array of length 16.

Now let's try

```
>>> a = a.reshape(2,2,2,2)
>>> a.shape
(2, 2, 2, 2)
```

It's a multi-dimensional array with 4 dimensions.

Let's see the 0, 1 element:

```
>>> a[0, 1]
array([[5, 6],
       [7, 8]])
```

Since there are two dimensions left, it's a matrix of two dimensions.

---

Now `a[:, 1]` says: take `a[i, 1]` for all possible values of `i`:

```
>>> a[:, 1]
array([[[ 5,  6],
        [ 7,  8]],

       [[13, 14],
        [15, 16]]])
```

It gives you an array where the first item is `a[0, 1]`, and the second item is `a[1, 1]`.
16,092
62,719,356
Hi, I'm coding a bot in Python for the Zoom download API, but now I'm running into this: I need to know the name of the file I am downloading through a URL, but the URL does not contain the name of the file; the file is just downloaded automatically through it.

Example of a download URL: <https://zztop.us/rec/download/6cUsf-r5pjo3GNfGtgSDAv9xIXbzy9vms0iRKq6YNn0m8UHILNlKiMrMWMecDkmKyv5o675Hp1ZrKPF16>

How can I find out, in Python, the name of the file being downloaded?
2020/07/03
[ "https://Stackoverflow.com/questions/62719356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13860212/" ]
With the help of Mostafa Labib I managed to get where I wanted. Here is the working code, for those who want to know the filename of a file downloaded via Zoom's `download_url`:

```
from urllib.request import urlopen
from os.path import basename

url = "https://zztop.us/rec/download/6cUsfr5pjo3GNfGtgSDAv9xIXbzy9vms0iRKq6YNn0m8UHILNlKiMrMWMecDkmKyv5o675Hp1ZrKPF16"
token = "XXXXXXXXXXXXXXXXXXXXXXX"

url = (url + token)  # the token string is appended directly, so it carries its own separator (e.g. "?access_token=...")
response = urlopen(url)

arq_name = basename(response.url)
arq, tsh = arq_name.split("?", 1)  # drop the query string from the final URL's basename
print(arq)
```
You can use urllib to parse the link then get the filename from the headers. ``` from urllib.request import urlopen url = "https://zztop.us/rec/download/6cUsf-r5pjo3GNfGtgSDAv9xIXbzy9vms0iRKq6YNn0m8UHILNlKiMrMWMecDkmKyv5o675Hp1ZrKPF16" response = urlopen(url) filename = response.headers.get_filename() print(filename) ```
16,094
11,459,861
I am a molecular biologist using Biopython to analyze mutations in genes and my problem is this: I have a file containing many different sequences (millions), most of which are duplicates. I need to find the duplicates and discard them, keeping one copy of each unique sequence. I was planning on using the module editdist to calculate the edit distance between them all to determine which ones the duplicates are, but editdist can only work with 2 strings, not files. Anyone know how I can use that module with files instead of strings?
2012/07/12
[ "https://Stackoverflow.com/questions/11459861", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1513202/" ]
If you want to filter out exact duplicates, you can use the `set` Python built-in type. As an example:

```
a = ["tccggatcc", "actcctgct", "tccggatcc"]  # You have a list of sequences
s = set(a)  # Put that into a set
```

`s` then contains `{'tccggatcc', 'actcctgct'}`, with the duplicate removed. Note that a set is unordered; wrap it in `list(s)` if you need a list back.
Don't be afraid of files! ;-)

I'm posting an example by assuming the following:

1. it's a text file
2. one sequence per line

-

```
filename = 'sequence.txt'
with open(filename, 'r') as sqfile:
    sequences = sqfile.readlines() # now we have a list of strings

# discarding the duplicates:
uniques = list(set(sequences))
```

That's it; by using Python's set type we eliminate all duplicates automagically.

If you have the id and the sequence in the same line, like:

```
423401 ttacguactg
```

you may want to eliminate the ids, like:

```
sequences = [s.strip().split()[-1] for s in sequences]
```

With strip we strip the string of leading and trailing whitespace, and with split we split the line/string into 2 components: the id and the sequence. With the `[-1]` we select the last component (= the sequence string) and repack it into our sequence list.
16,095
2,396,382
This is the script:

```
import ClientForm
import urllib2

request = urllib2.Request("http://ritaj.birzeit.edu")
response = urllib2.urlopen(request)
forms = ClientForm.ParseResponse(response, backwards_compat=False)
response.close()
form = forms[0]
print form
sooform = str(raw_input("Form Name: "))
username = str(raw_input("Username: "))
password = str(raw_input("Password: "))
form[sooform] = [username, password]
request2 = form.click()
try:
    response2 = urllib2.urlopen(request2)
except urllib2.HTTPError, response2:
    pass
print response2.geturl()
print response2.info()  # headers
print response2.read()  # body
response2.close()
```

When I start the script, I get this:

```
Traceback (most recent call last):
  File "C:/Python26/ritaj2.py", line 9, in <module>
    form = forms[0]
IndexError: list index out of range
```

What is the problem? I am running on Windows, Python 2.6.4.

**Update:** I want a script that logs in to this site and prints the response :)
2010/03/07
[ "https://Stackoverflow.com/questions/2396382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/288208/" ]
The only `<form>` tag in the HTML served at that URL (save it to a file and look for yourself!) is:

```
<form method="GET" action="http://www.google.com/u/ritaj">
```

which does a customized Google search and has nothing to do with logging in (plus, for some reason, ClientForm has some problem identifying that specific form -- but that form is no use to you anyway, so I didn't explore that issue further).

You can still get at the **controls** in the page by using

```
forms = ClientForm.ParseResponseEx(response)
```

which makes `forms[0]` an artificial one containing all controls that aren't within a form. Specifically, this approach identifies controls with the following names, in order (again there's a bit of parsing confusion here, but hopefully not a killer for you...):

```
>>> f = forms[0]
>>> [c.name for c in f.controls]
['q', 'sitesearch', 'sa', 'domains', 'form:mode', 'form:id', '__confirmed_p', '__refreshing_p', 'return_url', 'time', 'token_id', 'hash', 'username', 'password', 'persistent_p', 'formbutton:ok']
```

so you should be able to set the `username` and `password` controls of the "non-form form" `f`, and proceed from there, as in the sketch below.

(A side bit: `raw_input` already returns a string, lose those redundant `str()` calls around it).
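For completeness, a rough, untested sketch of what "proceeding from there" might look like (the credential values are placeholders, and it assumes clicking the artificial form submits via its `formbutton:ok` control):

```
f = forms[0]
f['username'] = 'your-username'   # placeholder credentials
f['password'] = 'your-password'
request2 = f.click()              # assumed to submit via 'formbutton:ok'
response2 = urllib2.urlopen(request2)
print response2.read()
```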
The actual address seems to be using `https` instead of `http`. Check the [urllib2](http://docs.python.org/library/urllib2.html) doc to see if it handles HTTPS (I believe you need SSL support).
16,103
25,240,268
Say for example, I have two text files containing the following:

**File 1**

> "key\_one" = "String value for key one"
>
> "key\_two" = "String value for key two"
>
> // COMMENT //
>
> "key\_three" = "String value for key two"

**File 2**

> // COMMENT
>
> "key\_one" = "key\_one"
>
> // COMMENT
>
> "key\_two" = "key\_two"

Now, I want to loop through **File 1** and get out each key and string value (if it's not a comment line). I then want to search **File 2** for the key and, if it's found, replace its string value with the string value from **File 1**.

I'd guess using some regex would be good here, but that's where my plan fails. I don't really have a great understanding of regex, although I am getting better.

Here's the regex I came up with to match the keys: `"^\"\w*\""`

And here's the regex I was trying to use to match the string: `"= [\"a-zA-Z0-9 ]*"`

These may not be right or the best, so feel free to correct me. I am looking to complete this task using either a bash script or a Python script. I did try to use the regex search and match functions in Python, but with little success.
2014/08/11
[ "https://Stackoverflow.com/questions/25240268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1813167/" ]
There is a quote that I heard from somewhere: "If you have a problem and you try to solve it with regular expressions, you now have two problems". What you want to achieve can be easily done with just a few inbuilt Python string methods such as `startswith()` and `split()`, without using any regex. In short you can do the following: ``` For each line of File 1 Check if it's a comment line by checking that it starts with '//' If not a comment line, split it to `key` and `value` Store the key/value in a dictionary For each line of File 2 Check if it's a comment line by checking that it starts with '//' If not a comment line, split it to `key` and `value` Check the dictionary to see if the key exists Output to the file as necessary ```
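A minimal sketch of those steps in plain Python (the file names and the `key = value` layout are taken from the question; treat it as a starting point, not a drop-in solution):

```
def parse_line(line):
    """Return (key, value) for a '"key" = "value"' line, or None for comments/blanks."""
    stripped = line.strip()
    if not stripped or stripped.startswith('//') or '=' not in stripped:
        return None
    key, _, value = stripped.partition('=')
    return key.strip(), value.strip()

# Build a key -> value mapping from File 1
values = {}
with open('file1.txt') as f1:
    for line in f1:
        parsed = parse_line(line)
        if parsed:
            values[parsed[0]] = parsed[1]

# Rewrite File 2, replacing values for keys that File 1 also defines
out_lines = []
with open('file2.txt') as f2:
    for line in f2:
        parsed = parse_line(line)
        if parsed and parsed[0] in values:
            out_lines.append('%s = %s\n' % (parsed[0], values[parsed[0]]))
        else:
            out_lines.append(line)

with open('file2_updated.txt', 'w') as out:
    out.writelines(out_lines)
```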
```
import pprint

def get_values(f):
    file1 = open(f, "r").readlines()
    values = {}
    for line in file1:
        if line[:2] != "//" and "=" in line:
            key, value = line.split("=", 1)  # split on the first '=' only
            values[key] = value
    return values

def replace_values(v1, v2):
    for key in v1:
        v = v1[key]
        if key in v2:
            v2[key] = v

file1_values = get_values("file1.txt")
file2_values = get_values("file2.txt")

print "BEFORE"
pprint.pprint(file1_values)
pprint.pprint(file2_values)

replace_values(file1_values, file2_values)

print "AFTER"
pprint.pprint(file1_values)
pprint.pprint(file2_values)
```

If the text files are that predictable then you could use something like that. The above code will do what you want and replace the values, with the following output:

```
BEFORE
{'"key_one" ': ' "String value for key one"\n',
 '"key_three" ': ' "String value for key two"',
 '"key_two" ': ' "String value for key two"\n'}
{'"key_one" ': ' "key_one"\n', '"key_two" ': ' "key_two"'}
AFTER
{'"key_one" ': ' "String value for key one"\n',
 '"key_three" ': ' "String value for key two"',
 '"key_two" ': ' "String value for key two"\n'}
{'"key_one" ': ' "String value for key one"\n',
 '"key_two" ': ' "String value for key two"\n'}
```
16,104
50,201,607
TL;DR When updating from CMake 3.10 to CMake 3.11.1 on archlinux, the following configuration line: find\_package(Boost COMPONENTS python3 COMPONENTS numpy3 REQUIRED) leads to CMake linking against 3 different libraries ``` -- Boost version: 1.66.0 -- Found the following Boost libraries: -- python3 -- numpy3 -- python ``` instead of the previous behaviour: ``` -- Boost version: 1.66.0 -- Found the following Boost libraries: -- python3 -- numpy3 ``` resulting in a linker error. --- I use CMake to build a piece of software that relies on Boost python, and, since a couple of days ago, it seems that the line ``` find_package(Boost COMPONENTS numpy3 REQUIRED) ``` is no longer sufficient for CMake to understand that it should link the program against the Boost `python3` library, and it uses the Boost library `python` instead. Here is a minimal working example to reproduce what I am talking about. **test.cpp** ``` #include <iostream> using namespace std; int main() { cout << "Hello, world!" << endl; } ``` **CMakeList.txt** ``` set(CMAKE_VERBOSE_MAKEFILE ON) find_package(PythonLibs 3 REQUIRED) find_package(Boost COMPONENTS numpy3 REQUIRED) add_executable (test test.cpp) target_link_libraries(test ${Boost_LIBRARIES} ${PYTHON_LIBRARIES}) ``` With this configuration of CMake, a linker error will occur, and the error persists when I change the line adding numpy to ``` find_package(Boost COMPONENTS python3 COMPONENTS numpy3 REQUIRED) ``` Here is the result of `cmake . && make`: ``` /home/rastapopoulos/test $ cmake . -- Boost version: 1.66.0 -- Found the following Boost libraries: -- numpy3 -- python CMake Warning (dev) in CMakeLists.txt: No cmake_minimum_required command is present. A line of code such as cmake_minimum_required(VERSION 3.11) should be added at the top of the file. The version specified may be lower if you wish to support older CMake versions for this project. For more information run "cmake --help-policy CMP0000". This warning is for project developers. Use -Wno-dev to suppress it. 
-- Configuring done -- Generating done -- Build files have been written to: /home/rastapopoulos/test /home/rastapopoulos/test $ make /usr/bin/cmake -H/home/rastapopoulos/test -B/home/rastapopoulos/test --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/cmake -E cmake_progress_start /home/rastapopoulos/test/CMakeFiles /home/rastapopoulos/test/CMakeFiles/progress.marks make -f CMakeFiles/Makefile2 all make[1]: Entering directory '/home/rastapopoulos/test' make -f CMakeFiles/test.dir/build.make CMakeFiles/test.dir/depend make[2]: Entering directory '/home/rastapopoulos/test' cd /home/rastapopoulos/test && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /home/rastapopoulos/test /home/rastapopoulos/test /home/rastapopoulos/test /home/rastapopoulos/test /home/rastapopoulos/test/CMakeFi les/test.dir/DependInfo.cmake --color= make[2]: Leaving directory '/home/rastapopoulos/test' make -f CMakeFiles/test.dir/build.make CMakeFiles/test.dir/build make[2]: Entering directory '/home/rastapopoulos/test' [ 50%] Linking CXX executable test /usr/bin/cmake -E cmake_link_script CMakeFiles/test.dir/link.txt --verbose=1 /usr/bin/c++ -rdynamic CMakeFiles/test.dir/test.o -o test -lboost_numpy3 -lboost_python -lpython3.6m /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyString_Size' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyUnicodeUCS4_FromEncodedObject' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyFile_FromString' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyString_Type' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyInt_Type' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyString_FromString' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyUnicodeUCS4_AsWideChar' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyString_FromStringAndSize' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `Py_InitModule4_64' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyString_FromFormat' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyNumber_Divide' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyNumber_InPlaceDivide' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyInt_AsLong' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyString_InternFromString' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyClass_Type' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyString_AsString' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyInt_FromLong' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyFile_AsFile' collect2: error: ld returned 1 exit status make[2]: *** [CMakeFiles/test.dir/build.make:90: test] Error 1 make[2]: Leaving directory '/home/rastapopoulos/test' make[1]: *** 
[CMakeFiles/Makefile2:71: CMakeFiles/test.dir/all] Error 2 make[1]: Leaving directory '/home/rastapopoulos/test' make: *** [Makefile:87: all] Error 2 ``` Has anyone experienced a similar problem and managed to solve it? I use `cmake 3.11.1`, `boost 1.66.0-2`, and run an updated version of Archlinux.
2018/05/06
[ "https://Stackoverflow.com/questions/50201607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7141288/" ]
This bug is due to an invalid dependency description in `FindBoost.cmake`:

```
set(_Boost_NUMPY_DEPENDENCIES python)
```

This has been fixed in <https://github.com/Kitware/CMake/commit/c747d4ccb349f87963a8d1da69394bc4db6b74ed>

Please use the latest version, or rewrite it manually:

```
set(_Boost_NUMPY_DEPENDENCIES python${component_python_version})
```
[CMake 3.10 does not properly support Boost 1.66](https://stackoverflow.com/a/42124857/2799037). The Boost dependencies are hard-coded, and if they change, CMake has to adapt.

Delete the build directory and reconfigure. The configure step uses cached variables, which prevents re-detection with the newer routines.
16,109
7,092,407
I'm working with a MongoDB database using the pymongo Python module. I have a function in my code which, when called, updates the records in the collection as follows.

```
for record in coll.find(<some query here>):
    #Code here
    #...
    #...
    coll.update({ '_id' : record['_id'] },record)
```

Now, if I modify the code as follows:

```
for record in coll.find(<some query here>):
    try:
        #Code here
        #...
        #...
        coll.update({ '_id' : record['_id'] },record,safe=True)
    except:
        #Handle exception here
```

Does this mean an exception will be thrown when the update fails, or will no exception be thrown and the update just skip the record, causing a problem?

Please help. Thank you.
2011/08/17
[ "https://Stackoverflow.com/questions/7092407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/898562/" ]
Increase your memory buffer size: `php_value memory_limit 64M` in your .htaccess, or `ini_set('memory_limit','64M');` in your PHP file.
It depends on your implementation. The last time I was working on a CSV file with more than 500,000 records, I got the same message. Later I introduced classes and tried to close the open objects, which reduced the memory consumption.

If you are opening an image and editing it, it is loaded into memory, and in that case the size really matters. If you are operating on multiple images, I would process one image at a time and then close that image. In my experience, I had the same error when I was working on PDF artwork files to check the crop marks.

```
// you can set the memory limit value
// in .htaccess
php_value memory_limit 64M

// or use the following in PHP
ini_set('memory_limit', '128M');

// or update it in your php.ini file
```

But if you optimize your code and use an object-oriented approach, your memory consumption will be much lower, because every object has its own scope and is destroyed once it goes out of scope.
16,110
41,504,340
This question [explains](https://stackoverflow.com/questions/7300321/how-to-use-pythons-pip-to-download-and-keep-the-zipped-files-for-a-package) how to make pip download and save packages. If I follow this formula, Pip will download wheel (.whl) files if available. ``` (venv) [user@host glances]$ pip download -d wheelhouse -r build_requirements.txt Collecting wheel (from -r build_requirements.txt (line 1)) File was already downloaded /usr_data/tmp/glances/wheelhouse/wheel-0.29.0-py2.py3-none-any.whl Collecting pex (from -r build_requirements.txt (line 2)) File was already downloaded /usr_data/tmp/glances/wheelhouse/pex-1.1.18-py2.py3-none-any.whl Collecting requests (from -r build_requirements.txt (line 3)) File was already downloaded /usr_data/tmp/glances/wheelhouse/requests-2.12.4-py2.py3-none-any.whl Collecting pip (from -r build_requirements.txt (line 4)) File was already downloaded /usr_data/tmp/glances/wheelhouse/pip-9.0.1-py2.py3-none-any.whl Collecting setuptools (from -r build_requirements.txt (line 5)) File was already downloaded /usr_data/tmp/glances/wheelhouse/setuptools-32.3.1-py2.py3-none-any.whl Successfully downloaded wheel pex requests pip setuptools ``` Every single file that it downloaded was a Wheel - but what if I want to get a different kind of file? I actually want to download the sdist (.tar.gz) files in preference to .whl files? Is there a way to tell Pip what kinds of files I actually want it to get? So instead of getting a directory full of wheels I might want a bunch of tar.gz files.
2017/01/06
[ "https://Stackoverflow.com/questions/41504340", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1179137/" ]
According to `pip install -h`:

> --no-use-wheel  Do not Find and prefer wheel archives when searching indexes and find-links locations. DEPRECATED in favour of --no-binary.

And

> --no-binary  Do not use binary packages. Can be supplied multiple times, and each time adds to the existing value. Accepts either :all: to disable all binary packages, :none: to empty the set, or one or more package names with commas between them.

You may need to upgrade pip with `pip install -U pip` if your version is too old.
use `pip download --no-binary=:all: -r requirements.txt` According to the pip documentation: **--no-binary:** > > Do not use binary packages. Can be supplied multiple times, and each > time adds to the existing value. Accepts either :all: to disable all > binary packages, :none: to empty the set, or one or more package names > with commas between them. Note that some packages are tricky to > compile and may fail to install when this option is used on them. > > > It worked for me!
16,115
6,022,450
I'm using Scrapy to scrape a website. The item page that I want to scrape looks like: <http://www.somepage.com/itempage/&page=x>, where `x` is any number from `1` to `100`. Thus, I have an `SgmlLinkExtractor` Rule with a callback function specified for any page resembling this.

The website does not have a list page with all the items, so I want to somehow tell Scrapy to scrape those urls (from `1` to `100`). This guy [here](https://stackoverflow.com/questions/4640804/python-scrapy-how-to-fetch-an-url-not-via-following-links-inside-a-spider) seemed to have the same issue, but couldn't figure it out.

Does anyone have a solution?
2011/05/16
[ "https://Stackoverflow.com/questions/6022450", "https://Stackoverflow.com", "https://Stackoverflow.com/users/648121/" ]
You could list all the known URLs in your [`Spider`](http://doc.scrapy.org/topics/spiders.html#spiders) class' [start\_urls](http://doc.scrapy.org/topics/spiders.html#scrapy.spider.BaseSpider.start_urls) attribute: ``` class SomepageSpider(BaseSpider): name = 'somepage.com' allowed_domains = ['somepage.com'] start_urls = ['http://www.somepage.com/itempage/&page=%s' % page for page in xrange(1, 101)] def parse(self, response): # ... ```
If it's just a one-time thing, you can create a local HTML file `file:///c:/somefile.html` with all the links. Start scraping that file and add `somepage.com` to the allowed domains.

Alternatively, in the parse function you can return a new Request, which is the next url to be scraped; see the sketch below.
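A rough sketch of that second suggestion (the URL scheme comes from the question; the item-extraction logic is left as a comment):

```
from scrapy.spider import BaseSpider
from scrapy.http import Request

class SomepageSpider(BaseSpider):
    name = 'somepage.com'
    allowed_domains = ['somepage.com']
    start_urls = ['http://www.somepage.com/itempage/&page=1']

    def parse(self, response):
        # ... extract and yield the items on the current page here ...
        page = int(response.url.rsplit('=', 1)[1])
        if page < 100:
            yield Request('http://www.somepage.com/itempage/&page=%d' % (page + 1),
                          callback=self.parse)
```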
16,116
58,635,279
I have created a brand new [Python repository](https://github.com/neuropsychology/NeuroKit) based on a cookie-cutter template. Everything looks okay, so I am now trying to set up testing and test coverage using Travis and codecov. I am new to pytest but I am trying to do things right.

After looking on the internet, I ended up with this setup.

In [`.travis.yml`](https://github.com/neuropsychology/NeuroKit/blob/master/.travis.yml), I have added the following:

```
install:
  - pip install -U tox-travis
  - pip install coverage
  - pip install codecov

script:
  - python setup.py install
  - tox
  - coverage run tests/test_foo.py
```

In my [`tox.ini`](https://github.com/neuropsychology/NeuroKit/blob/master/tox.ini) file:

```
[testenv]
passenv = CI TRAVIS TRAVIS_*
setenv =
    PYTHONPATH = {toxinidir}
    PIPENV_IGNORE_VIRTUALENVS=1
deps =
    pipenv
    codecov
    pytest
    {py27}: pathlib2
commands_pre =
    pipenv install --dev --skip-lock
    codecov
```

I have created a minimal [`tests/test_foo.py`](https://github.com/neuropsychology/NeuroKit/blob/master/tests/test_foo.py) file with the following (`foo()` is the only function currently present in the package).

```py
import pytest
import doctest
import neurokit2 as nk

if __name__ == '__main__':
    doctest.testmod()
    pytest.main()

def test_foo():
    assert nk.foo() == 4
```

It seems that codecov, triggered by Travis, does not run through the tests. Moreover, on Travis, it says [`Error: No coverage report found`](https://travis-ci.org/neuropsychology/NeuroKit/jobs/604805529#L332).

What am I doing wrong?
2019/10/31
[ "https://Stackoverflow.com/questions/58635279", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4198688/" ]
1) Create a pytest.ini file in your project directory and add the following lines

```
[pytest]
testpaths = tests
python_files = *.py
python_functions = test_*
```

2) Create a .coveragerc file in the project directory and add the following lines

```
[report]
fail_under = 90
show_missing = True
```

3) Run pytest for code coverage

```
pytest --verbose --color=yes --cov=<directory> --assert=plain
```

Note: `<directory>` is the name of the directory for which you need code coverage, and it must be inside the project directory
Looks like you're missing `coverage` on your installs. You have it on scripts but it might not be running. Try adding `pip install coverage` in your travis.yml file. Have a go at this too: [codecov](https://github.com/codecov/example-python)
16,117
44,492,238
I am learning python & trying to scrape a website, having 10 listing of properties on each page. I want to extract information from each listing on each page. My code for first 5 pages is as follows :- ``` import requests from bs4 import BeautifulSoup urls = [] for i in range(1,5): pages = "http://www.realcommercial.com.au/sold/property-offices-retail-showrooms+bulky+goods-land+development-hotel+leisure-medical+consulting-other-in-vic/list-{0}?includePropertiesWithin=includesurrounding&activeSort=list-date&autoSuggest=true".format(i) urls.append(pages) for info in urls: page = requests.get(info) soup = BeautifulSoup(page.content, 'html.parser') links = soup.find_all('a', attrs ={'class' :'details-panel'}) hrefs = [link['href'] for link in links] Data = [] for urls in hrefs: pages = requests.get(urls) soup_2 =BeautifulSoup(pages.content, 'html.parser') Address_1 = soup_2.find_all('p', attrs={'class' :'full-address'}) Address = [Address.text.strip() for Address in Address_1] Date = soup_2.find_all('li', attrs ={'class' :'sold-date'}) Sold_Date = [Sold_Date.text.strip() for Sold_Date in Date] Area_1 =soup_2.find_all('ul', attrs={'class' :'summaryList'}) Area = [Area.text.strip() for Area in Area_1] Agency_1=soup_2.find_all('div', attrs={'class' :'agencyName ellipsis'}) Agency_Name=[Agency_Name.text.strip() for Agency_Name in Agency_1] Agent_1=soup_2.find_all('div', attrs={'class' :'agentName ellipsis'}) Agent_Name=[Agent_Name.text.strip() for Agent_Name in Agent_1] Data.append(Sold_Date+Address+Area+Agency_Name+Agent_Name) ``` The above code is not working for me. Please let me know the correct coding to achieve the purpose.
2017/06/12
[ "https://Stackoverflow.com/questions/44492238", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7961265/" ]
One problem in your code is that you declared the variable "urls" twice. You need to update the code as below:

```
import requests
from bs4 import BeautifulSoup

urls = []
for i in range(1,6):
    pages = "http://www.realcommercial.com.au/sold/property-offices-retail-showrooms+bulky+goods-land+development-hotel+leisure-medical+consulting-other-in-vic/list-{0}?includePropertiesWithin=includesurrounding&activeSort=list-date&autoSuggest=true".format(i)
    urls.append(pages)

Data = []
for info in urls:
    page = requests.get(info)
    soup = BeautifulSoup(page.content, 'html.parser')
    links = soup.find_all('a', attrs ={'class' :'details-panel'})
    hrefs = [link['href'] for link in links]

    for href in hrefs:
        pages = requests.get(href)
        soup_2 =BeautifulSoup(pages.content, 'html.parser')
        Address_1 = soup_2.find_all('p', attrs={'class' :'full-address'})
        Address = [Address.text.strip() for Address in Address_1]
        Date = soup_2.find_all('li', attrs ={'class' :'sold-date'})
        Sold_Date = [Sold_Date.text.strip() for Sold_Date in Date]
        Area_1 =soup_2.find_all('ul', attrs={'class' :'summaryList'})
        Area = [Area.text.strip() for Area in Area_1]
        Agency_1=soup_2.find_all('div', attrs={'class' :'agencyName ellipsis'})
        Agency_Name=[Agency_Name.text.strip() for Agency_Name in Agency_1]
        Agent_1=soup_2.find_all('div', attrs={'class' :'agentName ellipsis'})
        Agent_Name=[Agent_Name.text.strip() for Agent_Name in Agent_1]
        Data.append(Sold_Date+Address+Area+Agency_Name+Agent_Name)

print Data
```
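If you'd rather keep the scraped rows than just print them, a minimal follow-up sketch using the csv module (Python 2 style, to match the answer above; the output filename is made up):

```
import csv

with open('listings.csv', 'wb') as out:   # 'wb' for the csv module on Python 2
    csv.writer(out).writerows(Data)       # one row per listing, fields as collected above
```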
Use headers in the code, and if you use string concatenation instead of .format(i), note that `i` must be cast with `str(i)`. The code looks like this

```
import requests
from bs4 import BeautifulSoup

urls = []
for i in range(1,6):
    pages = 'http://www.realcommercial.com.au/sold/property-offices-retail-showrooms+bulky+goods-land+development-hotel+leisure-medical+consulting-other-in-vic/list-' + str(i) + '?includePropertiesWithin=includesurrounding&activeSort=list-date&autoSuggest=true'
    urls.append(pages)

Data = []
for info in urls:
    headers = {'User-agent':'Mozilla/5.0'}
    page = requests.get(info,headers=headers)
    soup = BeautifulSoup(page.content, 'html.parser')
    links = soup.find_all('a', attrs ={'class' :'details-panel'})
    hrefs = [link['href'] for link in links]

    for href in hrefs:
        pages = requests.get(href)
        soup_2 =BeautifulSoup(pages.content, 'html.parser')
        Address_1 = soup_2.find_all('p', attrs={'class' :'full-address'})
        Address = [Address.text.strip() for Address in Address_1]
        Date = soup_2.find_all('li', attrs ={'class' :'sold-date'})
        Sold_Date = [Sold_Date.text.strip() for Sold_Date in Date]
        Area_1 =soup_2.find_all('ul', attrs={'class' :'summaryList'})
        Area = [Area.text.strip() for Area in Area_1]
        Agency_1=soup_2.find_all('div', attrs={'class' :'agencyName ellipsis'})
        Agency_Name=[Agency_Name.text.strip() for Agency_Name in Agency_1]
        Agent_1=soup_2.find_all('div', attrs={'class' :'agentName ellipsis'})
        Agent_Name=[Agent_Name.text.strip() for Agent_Name in Agent_1]
        Data.append(Sold_Date+Address+Area+Agency_Name+Agent_Name)

print Data
```
16,118
21,778,187
I would like to find text in a file with a regular expression and then replace it with another name. I have to read the file line by line at first because otherwise re.match(...) can't find the text. My test file where I would like to make modifications is (not all of it; I removed some code):

```
//...
#include <boost/test/included/unit_test.hpp>

#ifndef FUNCTIONS_TESTSUITE_H
#define FUNCTIONS_TESTSUITE_H

//...

BOOST_AUTO_TEST_SUITE(FunctionsTS)

BOOST_AUTO_TEST_CASE(test)
{
    std::string l_dbConfigDataFileName = "../../Config/configDB.cfg";
    DB::FUNCTIONS::DBConfigData l_dbConfigData;
//...
}

BOOST_AUTO_TEST_SUITE_END()
//...
```

Now the python code which replaces the configDB name with another. I have to find the configDB.cfg name by regular expression because the name changes all the time. Only the name; the extension is not needed. Code:

```
import fileinput
import re

myfile = "Tset.cpp"

#first search expression - ok. working good find and print configDB
with open(myfile) as f:
    for line in f:
        matchObj = re.match( r'(.*)../Config/(.*).cfg(.*)', line, re.M|re.I)
        if matchObj:
            print "Search : ",  matchObj.group(2)

#now replace searched expression to another name - so one more time find and replace - another way - not working - file after run this code is empty?!!!
for line in fileinput.FileInput(myfile, inplace=1):
    matchObj = re.match( r'(.*)../Config/(.*).cfg(.*)', line, re.M|re.I)
    if matchObj:
        line = line.replace("Config","AnotherConfig")
```
2014/02/14
[ "https://Stackoverflow.com/questions/21778187", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1693143/" ]
Looks like this isn't possible to do. To cut down on duplicate code, simply declare the error handling function separately and reuse it inside the response and responseError functions. ``` $httpProvider.interceptors.push(function($q) { var handleError = function (rejection) { ... } return { response: function (response) { if (response.data.error) { return handleError(response); } return response; }, responseError: handleError } }); ```
To add to this answer: rejecting the promise in the response interceptor DOES do something. Although at first glance one would expect it to call the responseError, that would not make a lot of sense: the request was fulfilled with success. But rejecting it in the response interceptor will make the caller of the promise go into error handling. So when doing this

```
$http.get('some_url')
   .then(succes)
   .catch(err)
```

rejecting the promise will call the catch function. So you don't have your proper generic error handling, but your promise IS rejected, and that's useful :-)
16,120
49,773,418
after writing import tensorflow_hub, the following error emerges:

```
class LatestModuleExporter(tf.estimator.Exporter):
```

AttributeError: module 'tensorflow.python.estimator.estimator_lib' has no attribute 'Exporter'

I'm using python 3.6 with tensorflow 1.7 on Windows 10

thanks!
2018/04/11
[ "https://Stackoverflow.com/questions/49773418", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2393805/" ]
You can reinstall tensorflow_hub:

```
pip install ipykernel
pip install tensorflow_hub
```
I believe your python3 runtime is not really running with tensorflow 1.7. That attribute has existed since tensorflow 1.4. I suspect a mismatch between python2/3 environments, a mismatch from installing with pip/pip3, or an issue with installing both the tensorflow and tf-nightly pip packages. You can double check with:

```
$ python3 -c "import tensorflow as tf; print(tf.__version__)"
```
16,123
15,448,584
I have 2 lists `a = [2, 6, 12, 13, 1, 4, 5]` and `b = [12, 1]`. Elements in list `b` are a subset of list `a`. From the above pair of lists, I need to create a list of tuples as follows:

```
[(12,6),(12,2),(1,13),(1,12),(1,6),(1,2)]
```

Basically, for each point where list `b` intersects list `a` — e.g. the first intersection above is at index `2` of `a`, the value `12` — I create tuples whose first element is the value from list `b` and whose second element is each earlier element of list `a`, taken in reverse order. I am trying this in python; any suggestions for efficiently creating these tuples? Please note, each list can have 100 elements in it.
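In other words, a naive version that produces exactly this output (and which I would like to beat on efficiency) would be:

```
a = [2, 6, 12, 13, 1, 4, 5]
b = [12, 1]

# pair each element of b with every element of a that precedes it, walking backwards
result = [(x, y) for x in b for y in reversed(a[:a.index(x)])]
print(result)  # [(12, 6), (12, 2), (1, 13), (1, 12), (1, 6), (1, 2)]
```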
2013/03/16
[ "https://Stackoverflow.com/questions/15448584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1923226/" ]
You typically need to use `glibtool` and `glibtoolize`, since `libtool` already exists on OS X as a binary tool for creating Mach-O dynamic libraries. So, that's how MacPorts installs it, using a program name transform, though the port itself is still named 'libtool'. Some `autogen.sh` scripts (or their equivalent) will honor the `LIBTOOL` / `LIBTOOLIZE` environment variables. I have a line in my own `autogen.sh` scripts: ``` case `uname` in Darwin*) glibtoolize --copy ;; *) libtoolize --copy ;; esac ``` You may or may not want the `--copy` flag. --- Note: If you've installed the autotools using MacPorts, a correctly written `configure.ac` with `Makefile.am` files should only require `autoreconf -fvi`. It should call `glibtoolize`, etc., as expected. Otherwise, some packages will distribute an `autogen.sh` or similar script.
I hope my answer is not too naive. I am a noob to OS X. `brew install libtool` (via [brew](http://brew.sh/)) solved a similar issue for me.
16,129
68,438,620
I am trying to build and run the sample `python` application from AWS SAM. I just installed python, below is what command lines gives.. ``` D:\Udemy Work>python Python 3.9.6 (tags/v3.9.6:db3ff76, Jun 28 2021, 15:26:21) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> D:\Udemy Work>pip -V pip 21.1.3 from c:\users\user\appdata\local\programs\python\python39\lib\site-packages\pip (python 3.9) ``` When I run `sam build`, I get the following error ``` Build Failed Error: PythonPipBuilder:Validation - Binary validation failed for python, searched for python in following locations : ['C:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python39\\python.EXE', 'C:\\Users\\User\\AppData\\Local\\Microsoft\\WindowsApps\\python.EXE'] which did not satisfy constraints for runtime: python3.8. Do you have python for runtime: python3.8 on your PATH? ``` Below is my code **template.yaml** ``` AWSTemplateFormatVersion: '2010-09-09' Transform: AWS::Serverless-2016-10-31 Description: > python-test Sample SAM Template for python-test # More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst Globals: Function: Timeout: 3 Resources: HelloWorldFunction: Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction Properties: CodeUri: hello_world/ Handler: app.lambda_handler Runtime: python3.8 Events: HelloWorld: Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api Properties: Path: /hello Method: get ``` **app.py** ``` AWSTemplateFormatVersion: '2010-09-09' Transform: AWS::Serverless-2016-10-31 Description: > python-test Sample SAM Template for python-test # More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst Globals: Function: Timeout: 3 Resources: HelloWorldFunction: Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction Properties: CodeUri: hello_world/ Handler: app.lambda_handler Runtime: python3.9 Events: HelloWorld: Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api Properties: Path: /hello Method: get ``` If I change the run time in yaml, then I get the following error ``` PS D:\Udemy Work\awslambda\python-test> sam build Building codeuri: D:\Udemy Work\awslambda\python-test\hello_world runtime: python3.9 metadata: {} functions: ['HelloWorldFunction'] Build Failed Error: 'python3.9' runtime is not supported ``` What is the solution here?
2021/07/19
[ "https://Stackoverflow.com/questions/68438620", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1379286/" ]
You basically need unpivot or melt: <https://pandas.pydata.org/docs/reference/api/pandas.melt.html> ``` pd.melt(df, id_vars=['Number','From','To'], value_vars = ['D1_value','D2_value'])\ .rename({'variable':'Type'},axis=1)\ .dropna(subset=['value'],axis=0) ```
You can also use `pd.wide_to_long`, after reordering the column positions: ``` temp = df.rename(columns = lambda col: "_".join(col.split("_")[::-1]) if col.endswith("value") else col) pd.wide_to_long(temp, stubnames = 'value', i=['Number', 'From', 'To'], j='Type', sep='_', suffix=".+").dropna().reset_index() Out[19]: Number From To Type value 0 111 A B D1 10.0 1 111 A B D2 12.0 2 222 B A D2 4.0 3 222 B A D3 6.0 ``` You could also use `pivot_longer` from `pyjanitor` : ``` # pip install pyjanitor import janitor import pandas as pd df.pivot_longer(index = slice('Number', 'To'), #.value keeps column labels associated with it # as column headers names_to=('Type', '.value'), names_sep='_').dropna() Out[22]: Number From To Type value 0 111 A B D1 10.0 2 111 A B D2 12.0 3 222 B A D2 4.0 5 222 B A D3 6.0 ``` You can also use `stack`: ``` df = df.set_index(['Number', 'From', 'To']) # this creates a MultiIndex column df.columns = df.columns.str.split("_", expand = True) df.columns.names = ['Type', None] # stack has dropna=True as default df.stack(level = 0).reset_index() Number From To Type value 0 111 A B D1 10.0 1 111 A B D2 12.0 2 222 B A D2 4.0 3 222 B A D3 6.0 ```
16,135
28,967,976
I'm reading a pcap file in python using scapy which contains Ethernet packets that have a trailer. How can I remove these trailers? P.S: Ethernet packets cannot be shorter than 64 bytes (including the FCS). Network adapters add padding zero bytes to the end of the packet to overcome this problem. These padding bytes are called the "trailer". See [here](https://wiki.wireshark.org/Ethernet#Allowed_Packet_Lengths) for more information.
2015/03/10
[ "https://Stackoverflow.com/questions/28967976", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2133144/" ]
It seems there is no official way to remove it. This works on frames that have IPv4 as the network layer protocol:

```
packet_without_trailer=IP(str(packet[IP])[0:packet[IP].len])
```
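For a whole capture, a quick sketch applying the same trick to every IPv4 frame (the filename is a placeholder; `rdpcap` and `IP` come from `scapy.all`):

```
from scapy.all import rdpcap, IP

trimmed = []
for packet in rdpcap('capture.pcap'):
    if IP in packet:
        # re-parse only the bytes the IP header says belong to the packet
        trimmed.append(IP(str(packet[IP])[0:packet[IP].len]))
```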
Just use the upper layers and ignore the Ethernet layer: `packet = eval(originalPacket[IP])`
16,136
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 648, in format return fmt.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 436, in format record.message = record.getMessage() File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting ``` I'm only starting to use python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack-trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
The logging module is designed to stop bad log messages from killing the rest of the code, so the `emit` method catches errors and passes them to a method `handleError`. The easiest thing for you to do would be to temporarily edit `/usr/lib/python2.6/logging/__init__.py`, and find `handleError`. It looks something like this: ``` def handleError(self, record): """ Handle errors which occur during an emit() call. This method should be called from handlers when an exception is encountered during an emit() call. If raiseExceptions is false, exceptions get silently ignored. This is what is mostly wanted for a logging system - most users will not care about errors in the logging system, they are more interested in application errors. You could, however, replace this with a custom handler if you wish. The record which was being processed is passed in to this method. """ if raiseExceptions: ei = sys.exc_info() try: traceback.print_exception(ei[0], ei[1], ei[2], None, sys.stderr) sys.stderr.write('Logged from file %s, line %s\n' % ( record.filename, record.lineno)) except IOError: pass # see issue 5971 finally: del ei ``` Now temporarily edit it. Inserting a simple `raise` at the start should ensure the error gets propogated up your code instead of being swallowed. Once you've fixed the problem just restore the logging code to what it was.
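The temporary edit can be as small as this sketch (remember to revert it afterwards):

```
def handleError(self, record):
    raise  # re-raise the pending exception so the bad logging call surfaces
```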
Rather than editing installed python code, you can also find the errors like this: ``` def handleError(record): raise RuntimeError(record) handler.handleError = handleError ``` where handler is one of the handlers that is giving the problem. Now when the format error occurs you'll see the location.
16,137
52,629,106
Hello everyone, I have a file which consists of some random information, but I only want the part that is important to me.

```
name: Zack
age: 17
As Mixed: Zack:17
Subjects opted : 3
Subject #1: Arts

name: Mike
age: 15
As Mixed: Mike:15
Subjects opted : 3
Subject #1: Arts
```

Above is an example of my text file. I want the **Zack:17** and **Mike:15** parts to be written to a text file and everything else to be ignored. I watched some YouTube videos and came across the split statement in python, but it didn't work. My code example:

```
with open("/home/ninja/Desktop/raw.txt","r") as raw:
    for rec in raw:
        print rec.split('As Mixed: ')[0]
```

This didn't work. Any help will really help me to finish this project. Thanks.
2018/10/03
[ "https://Stackoverflow.com/questions/52629106", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9606164/" ]
You can split the data at the `:` and grab only the `As Mixed` parameter (skipping rows, such as blank lines, that don't match):

```
content = [i.strip('\n').split(': ') for i in open('filename.txt')]

results = [parts[1] for parts in content if parts[0].startswith('As Mixed')]
```

Output:

```
['Zack:17', 'Mike:15']
```

To write the results to a file:

```
with open('filename.txt', 'w') as f:
    for i in results:
        f.write(f'{i}\n')
```
Try this

```
import re

found = []
match = re.compile(r'(Mike|Zack):(\w*)')
with open('/home/ninja/Desktop/raw.txt', "r") as raw:
    for rec in raw:
        found.extend(match.findall(rec))
print(found)
#output: [('Zack', '17'), ('Mike', '15')]
```

This uses regular expressions to find the values needed: `(Mike|Zack):(\w*)` finds Mike or Zack, then a `:` character, and then as many word characters as it can. To learn more about regular expressions you can read this page: <https://docs.python.org/3.4/library/re.html>
16,147
31,112,523
I am using this python script to download OSM data and convert it to an undirected networkx graph: <https://gist.github.com/rajanski/ccf65d4f5106c2cdc70e> However, in the ideal case, I would like to generate a directed graph from it in order to reflect the directionality of the osm street network. First of all, can you confirm that, as stated [here](https://help.openstreetmap.org/answer_link/15463/) and [here](https://wiki.openstreetmap.org/wiki/Way), in OSM raw xml data the order of the nd-entries in the way is what matters for the direction? And secondly, how would you suggest implementing the generation of a directed graph from the osm raw data, given the above gist code snippet as a template? many thanks!
2015/06/29
[ "https://Stackoverflow.com/questions/31112523", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2772305/" ]
The order of the nodes only matters if the way is tagged with *[oneway](https://wiki.openstreetmap.org/wiki/Key:oneway)=yes* or *oneway=-1*. Otherwise the way is bidirectional. This applies only for vehicles of course. The only exception is *[highway=motorway](https://wiki.openstreetmap.org/wiki/Tag:highway%3Dmotorway)* which implies *oneway=yes*. You might also be interested in the [routing](https://wiki.openstreetmap.org/wiki/Routing) wiki page. It lists two routers implemented in python, and many others.
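As a sketch of how those rules could map onto a directed networkx graph (the way's node list `nds` and `tags` dict are assumed to come from whatever OSM parser you use; this is not part of the gist):

```
import networkx as nx

def add_way(graph, nds, tags):
    # Add one OSM way's node ids (in file order) as directed edges.
    oneway = tags.get('oneway')
    if oneway is None and tags.get('highway') == 'motorway':
        oneway = 'yes'                    # motorway implies oneway=yes
    if oneway == '-1':
        nds = list(reversed(nds))         # -1 means: against the node order
        oneway = 'yes'
    for u, v in zip(nds, nds[1:]):
        graph.add_edge(u, v)
        if oneway != 'yes':
            graph.add_edge(v, u)          # bidirectional for everything else

g = nx.DiGraph()
```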
OK, I updated my script in order to enable directionality: <https://gist.github.com/rajanski/ccf65d4f5106c2cdc70e>
16,148
45,382,324
I will try to be very specific and informative. I want to create a Dockerfile with all the packages that are used in geosciences for the good of the geospatial/geoscientific community. The Dockerfile is built on top of the [scipy-notebook](https://github.com/jupyter/docker-stacks/tree/master/scipy-notebook) docker-stack. **The problem:** I am trying to build HPGL (a Python package for Geostatistics). For the dependencies: I built some packages using `apt-get`, and for those packages that I couldn't install via `apt` I downloaded the .deb packages. The Dockerfile below shows the steps for building all the HPGL dependencies:

```
FROM jupyter/scipy-notebook

###
### HPGL - High Performance Geostatistics Library
###

USER root

RUN apt-get update && \
    apt-get install -y \
    gcc \
    g++ \
    libboost-all-dev

RUN apt-get update && \
    apt-get install -y \
    liblapack-dev \
    libblas-dev \
    liblapacke-dev

RUN apt-get update && \
    apt-get install -y \
    scons

RUN wget http://ftp.us.debian.org/debian/pool/main/libf/libf2c2/libf2c2_20090411-2_amd64.deb && \
    dpkg -i libf2c2_20090411-2_amd64.deb

RUN wget http://ftp.us.debian.org/debian/pool/main/libf/libf2c2/libf2c2-dev_20090411-2_amd64.deb && \
    dpkg -i libf2c2-dev_20090411-2_amd64.deb

RUN wget http://mirrors.kernel.org/ubuntu/pool/universe/c/clapack/libcblas3_3.2.1+dfsg-1_amd64.deb && \
    dpkg -i libcblas3_3.2.1+dfsg-1_amd64.deb

RUN wget http://mirrors.kernel.org/ubuntu/pool/universe/c/clapack/libcblas-dev_3.2.1+dfsg-1_amd64.deb && \
    dpkg -i libcblas-dev_3.2.1+dfsg-1_amd64.deb

RUN wget http://ftp.us.debian.org/debian/pool/main/c/clapack/libclapack3_3.2.1+dfsg-1_amd64.deb && \
    dpkg -i libclapack3_3.2.1+dfsg-1_amd64.deb

RUN wget http://ftp.us.debian.org/debian/pool/main/c/clapack/libclapack-dev_3.2.1+dfsg-1_amd64.deb && \
    dpkg -i libclapack-dev_3.2.1+dfsg-1_amd64.deb

RUN wget https://mirror.kku.ac.th/ubuntu/ubuntu/pool/main/l/lapack/libtmglib3_3.7.1-1_amd64.deb && \
    dpkg -i libtmglib3_3.7.1-1_amd64.deb

RUN wget http://ftp.us.debian.org/debian/pool/main/l/lapack/libtmglib-dev_3.7.1-1_amd64.deb && \
    dpkg -i libtmglib-dev_3.7.1-1_amd64.deb

RUN git clone https://github.com/hpgl/hpgl.git

RUN cd hpgl/src/ && \
    bash -c "source activate python2 && scons -j 2"

RUN cd hpgl/src/ && \
    bash -c "source activate python2 && python2 setup.py install"

RUN rm -rf hpgl \
    scons-2.5.0* \
    libf2c2_20090411-2_amd64.deb \
    libf2c2-dev_20090411-2_amd64.deb \
    libtmglib3_3.7.1-1_amd64.deb \
    libtmglib-dev_3.7.1-1_amd64.deb \
    libcblas3_3.2.1+dfsg-1_amd64.deb \
    libcblas-dev_3.2.1+dfsg-1_amd64.deb \
    libclapack3_3.2.1+dfsg-1_amd64.deb \
    libclapack-dev_3.2.1+dfsg-1_amd64.deb

USER $NB_USER
```

This runs smoothly and I can run the Docker container and start notebooks, but when I import HPGL in Python I get the error below, and I have no idea what is happening or how to solve it:

```
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-1-604a7d0744ab> in <module>()
----> 1 import geo_bsd

/opt/conda/envs/python2/lib/python2.7/site-packages/HPGL_BSD-0.9.9-py2.7.egg/geo_bsd/__init__.py in <module>()
      2 
      3 
----> 4 from geo import *
      5 from sgs import sgs_simulation
      6 from sis import sis_simulation

/opt/conda/envs/python2/lib/python2.7/site-packages/HPGL_BSD-0.9.9-py2.7.egg/geo_bsd/geo.py in <module>()
      3 import ctypes as C
      4 
----> 5 from hpgl_wrap import _HPGL_SHAPE, _HPGL_CONT_MASKED_ARRAY, _HPGL_IND_MASKED_ARRAY, _HPGL_UBYTE_ARRAY, _HPGL_FLOAT_ARRAY, _HPGL_OK_PARAMS, _HPGL_SK_PARAMS, _HPGL_IK_PARAMS, _HPGL_MEDIAN_IK_PARAMS, __hpgl_cov_params_t, __hpgl_cockriging_m1_params_t, __hpgl_cockriging_m2_params_t, _hpgl_so
      6 from hpgl_wrap import hpgl_output_handler, hpgl_progress_handler
      7 

/opt/conda/envs/python2/lib/python2.7/site-packages/HPGL_BSD-0.9.9-py2.7.egg/geo_bsd/hpgl_wrap.py in <module>()
    144     _hpgl_so = NC.load_library('hpgl_d', __file__)
    145 else:
--> 146     _hpgl_so = NC.load_library('hpgl', __file__)
    147 
    148 _hpgl_so.hpgl_set_output_handler.restype = None

/opt/conda/envs/python2/lib/python2.7/site-packages/numpy/ctypeslib.pyc in load_library(libname, loader_path)
    148             if os.path.exists(libpath):
    149                 try:
--> 150                     return ctypes.cdll[libpath]
    151                 except OSError:
    152                     ## defective lib file

/opt/conda/envs/python2/lib/python2.7/ctypes/__init__.pyc in __getitem__(self, name)
    435 
    436     def __getitem__(self, name):
--> 437         return getattr(self, name)
    438 
    439     def LoadLibrary(self, name):

/opt/conda/envs/python2/lib/python2.7/ctypes/__init__.pyc in __getattr__(self, name)
    430         if name[0] == '_':
    431             raise AttributeError(name)
--> 432         dll = self._dlltype(name)
    433         setattr(self, name, dll)
    434         return dll

/opt/conda/envs/python2/lib/python2.7/ctypes/__init__.pyc in __init__(self, name, mode, handle, use_errno, use_last_error)
    360 
    361         if handle is None:
--> 362             self._handle = _dlopen(self._name, mode)
    363         else:
    364             self._handle = handle

OSError: /usr/lib/libf2c.so.2: undefined symbol: MAIN__
```

EDIT1: So apparently there is a very similar problem pointed out by @Jean-François Fabre [Here!](https://stackoverflow.com/questions/8345725/linker-errors-with-fortran-to-c-library-usr-lib-libf2c-so-undefined-referenc). There, the problem was related to the file `libf2c.so` and was solved like this:

```
rm /usr/lib/libf2c.so && ln -s /usr/lib/libf2c.a /usr/lib/libf2c.so
```

This solution was explained by @p929 in the same thread:

> 
> What it does in fact is to delete the dynamic library and create an
> alias to the static library.
> 
> 

Now, I understand that I have the same problem, but with a different file (`/usr/lib/libf2c.so.2`). The solution would be to "delete the dynamic library and create an alias to the static library". I tried that with the same static library `/usr/lib/libf2c.a` and had no success.
2017/07/28
[ "https://Stackoverflow.com/questions/45382324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5361345/" ]
> 
> looks like it would work in the older groovy Jenkinsfiles
> 
> 

You can use the `script` step to enclose a block of code, and, inside this block, declarative pipelines basically act like scripted, so you can still use the technique described in the answer you referenced.

Welcome to Stack Overflow. I hope you enjoy yourself here.
I was facing the same issue and found that using the following instead avoids the 'Requires approval of the script in my Jenkins server at Jenkins > Manage jenkins > In-process Script Approval' step.

Instead of: `env['setup_build_number'] = setupResult.getNumber()` (from the code mentioned in the solution above)

Use this: `env.setup_build_number = setupResult.getNumber()`
16,149
26,154,104
I'm trying to run the following Cypher query in neomodel: ``` MATCH (b1:Bal { text:'flame' }), (b2:Bal { text:'candle' }), p = shortestPath((b1)-[*..15]-(b2)) RETURN p ``` which works great on neo4j via the server console. It returns 3 nodes with two relationships connecting. However, when I attempt the following in python: ``` # Py2Neo version of cypher query in python from py2neo import neo4j graph_db = neo4j.GraphDatabaseService() shortest_path_text = "MATCH (b1:Bal { text:'flame' }), (b2:Bal { text:'candle' }), p = shortestPath((b1)-[*..15]-(b2)) RETURN p" results = neo4j.CypherQuery(graph_db, shortest_path_text).execute() ``` or ``` # neomodel version of cypher query in python from neomodel import db shortest_path_text = "MATCH (b1:Bal { text:'flame' }), (b2:Bal { text:'candle' }), p = shortestPath((b1)-[*..15]-(b2)) RETURN p" results, meta = db.cypher_query(shortest_path_text) ``` both give the following error: ``` /Library/Python/2.7/site-packages/neomodel-1.0.1-py2.7.egg/neomodel/util.py in _hydrated(data) 73 elif obj_type == 'relationship': 74 return Rel(data) ---> 75 raise NotImplemented("Don't know how to inflate: " + repr(data)) 76 elif neo4j.is_collection(data): 77 return type(data)([_hydrated(datum) for datum in data]) TypeError: 'NotImplementedType' object is not callable ``` which makes sense considering neomodel is based on py2neo. The main question is how to get a shortestPath query to work via either of these? Is there a better method within python? or is cypher the best way to do it? edit: I also tried the following from [here](https://stackoverflow.com/questions/19989994/cypher-query-in-py2neo) which gave the same error. ``` graph_db = neo4j.GraphDatabaseService() query_string = "START beginning=node(1), end=node(4) \ MATCH p = shortestPath(beginning-[*..500]-end) \ RETURN p" result = neo4j.CypherQuery(graph_db, query_string).execute() for r in result: print type(r) # r is a py2neo.util.Record object print type(r.p) # p is a py2neo.neo4j.Path object ```
2014/10/02
[ "https://Stackoverflow.com/questions/26154104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4101066/" ]
Ok, I figured it out. I used the tutorial based on @nigel-small's answer.

```
from py2neo import cypher

session = cypher.Session("http://localhost:7474")
tx = session.create_transaction()
tx.append("START beginning=node(3), end=node(16) MATCH p = shortestPath(beginning-[*..500]-end) RETURN p")
tx.execute()
```

which returned:

```
[[Record(columns=(u'p',), values=(Path(Node('http://localhost:7474/db/data/node/3'), ('threads', {}), Node('http://localhost:7474/db/data/node/1'), ('threads', {}), Node('http://localhost:7474/db/data/node/2'), ('threads', {}), Node('http://localhost:7474/db/data/node/16')),))]]
```

From here, I expect I'll inflate each of the values back to my neomodel objects and into django for easier manipulation. Will post that code as I get there.
The error message you provide is specific to neomodel and looks to have been raised as there is not yet any support for inflating py2neo Path objects in neomodel. This should however work fine in raw py2neo as paths are fully supported, so it may be worth trying that again. Py2neo certainly wouldn't raise an error from within the neomodel code. I've just tried a `shortestPath` query myself and it returns a value as expected.
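If you do retry plain py2neo, something like this sketch (assuming the 1.x `Path` API with its `nodes` and `relationships` properties) should get the entities back out of each record:

```
for r in result:
    path = r.p                   # a py2neo.neo4j.Path
    print(path.nodes)            # the Node objects along the path
    print(path.relationships)    # the Relationship objects between them
```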
16,150
70,929,680
I have a dataframe

```
import pandas as pd
import numpy as np

df1 = pd.DataFrame.from_dict(
    {"col1": [0, 0, 0, 0, 0],
     "col2": ["15", [10,15,20], "30", [20, 25], np.nan]})
```

which looks like this

| col1 | col2 |
| --- | --- |
| 0 | "15" |
| 0 | [10,15,20] |
| 0 | "30" |
| 0 | [20,25] |
| 0 | NaN |

For col2, I need the highest value of each row, e.g. 15 for the first row and 20 for the second row, so that I end up with the following dataframe:

```
df2 = pd.DataFrame.from_dict(
    {"col1": [0, 0, 0, 0, 0],
     "col2": [15, 20, 30, 25, np.nan]})
```

which should look like this

| col1 | col2 |
| --- | --- |
| 0 | 15 |
| 0 | 20 |
| 0 | 30 |
| 0 | 25 |
| 0 | NaN |

I tried using a for-loop that checks the type of col2 in each row, converts str to int, applies max() to lists, and leaves nan's as they are, but I did not succeed. This is how I tried (although I suggest just ignoring my attempt):

```
col = df1["col2"]
coltypes = []
for i in col:
    #get type of each row
    coltype = type(i)
    coltypes.append(coltype)

df1["coltypes"] = coltypes

#assign value to col3 based on type
df1["col3"] = np.where(df1["coltypes"] == str, df1["col1"].astype(int),
                       np.where(df1["coltypes"] == list, max(df1["coltypes"]), np.nan))
```

Giving the following error

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-10-b8eb266d5519> in <module>
      9 
     10 df1["col3"] = np.where(df1["coltypes"] == str, df1["col1"].astype(int),
---> 11                        np.where(df1["coltypes"] == list, max(df1["coltypes"]), np.nan))

TypeError: '>' not supported between instances of 'type' and 'type'
```
2022/01/31
[ "https://Stackoverflow.com/questions/70929680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15815734/" ]
Let us try `explode` then `groupby` with `max` ``` out = df1.col2.explode().groupby(level=0).max() Out[208]: 0 15 1 20 2 30 3 25 4 NaN Name: col2, dtype: object ```
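To put that back on the frame (the grouped result keeps the original row index, so assignment aligns):

```
df1['col2'] = df1.col2.explode().groupby(level=0).max()
```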
```
import pandas as pd
import numpy as np

df1 = pd.DataFrame.from_dict(
    {"col1": [0, 0, 0, 0, 0],
     "col2": ["15", [10,15,20], "30", [20, 25], np.nan]})

res=df1['col2']
lis=[]
for i in res:
    if type(i)==str:
        i=int(i)
    if type(i)==list:
        i=max(i)
        lis.append(i)
    else:
        lis.append(i)
df1['col2']=lis
df1
```

I think this is what you want:

[![enter image description here](https://i.stack.imgur.com/lk0Pb.png)](https://i.stack.imgur.com/lk0Pb.png)
16,151
28,708,752
I apologize for my ignorance of how python handles strings in advance. I have a .txt file that is at least 1000 lines long. It looks something like below

```
:dodge
1 6 some description string of unknown length
E7 8 another description string
3445 0 oil temp something description voltage over limit etc
:ford
AF 4 description of stuff
0 8 string descritiopn
```

What I want to do is basically put a ";" before each string so what I will end up with is as follows

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc
:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```

My idea is to open the file, search for the ":" character, go to the next line, go to the " " character, go to the next " " character, and write a ";". Another thought was to go to the "\n" character in the text file and, if the next character != ":", then look for the second space.

```
import sys
import fileinput

with open("testDTC.txt", "r+") as f:
    for line in f:
        if ' ' in line: #read first space
            if ' ' in line: #read second space
                line.append(';')
                f.write(line)
    f.close()
```

I know it's not close to getting what I need, but it's been a really long time since I did string manipulation in python.
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You simply need to split twice on whitespace and join the string, you don't need a regex for a simple repeating pattern: ``` with open("testDTC.txt") as f: for line in f: if line.strip() and not line.startswith(":"): spl = line.split(None,2) print("{} ;{}".format(" ".join(spl[:2]),spl[2])) ``` To write the changes to the original file you can use `fileinput.input` with `inplace=True`: ``` from fileinput import input for line in input("testDTC.txt",inplace=True): if line.strip() and not line.startswith(":"): spl = line.split(None,2) print("{} ;{}".format(" ".join(spl[:2]),spl[2]),end="") else: print(line,end="") ``` Instead of indexing we can unpack: ``` a, b, c = line.split(None,2) print("{} {} ;{}".format(a, b, c),end="") ``` Output: ``` :dodge 1 6 ;some description string of unknown length E7 8 ;another description string 3445 0 ;oil temp something description voltage over limit etc :ford AF 4 ;description of stuff 0 8 ;string descritiopn ``` For python 2 you can remove the `end=""` and use a commas after the print statement instead i.e `print(line),` We avoid the starting paragraph lines with `line.startswith(":")` and the empty lines with `if line.strip()`.
Based on your example, it seems that in your second column you have a number or numbers separated by spaces, e.g. `8`, `6`, followed by some description in the third column which seems not to have any numbers. If this is the case in general, not only for this example, you can use this fact to search for the number separated by spaces and add `;` after it as follows:

```
import re

rep = re.compile(r'(\s\d+\s)')
out_lines = []
with open("file.txt", "r+") as f:
    for line in f:
        re_match = rep.search(line)
        if re_match:
            # append ; after the found expression.
            line = line.replace(re_match.group(1), re_match.group(1)+';')
        out_lines.append(line)

with open('file2.txt', 'w') as f:
    f.writelines(out_lines)
```

The file2.txt obtained is as follows:

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc
:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```
16,154
42,409,365
I am trying to check a website for specific .js files and image files as part of a regular configuration management check. I am using python and selenium. My code is: ``` #!/usr/bin/env python #import modules required for the test to run import time from pyvirtualdisplay import Display from selenium import webdriver from selenium.webdriver.common.by import By #Start headless browser web = Display(visible=0, size=(1024, 768)) web.start() browser = webdriver.PhantomJS() browser.set_window_size(1024,768) #Navigate to the current URL browser.get("https://XXXXXXXX") time.sleep(2) page = browser.find_elements(By.TAG_NAME, 'script') for i in page: print(i) for j in page: print(j.text) browser.quit() web.stop ``` The array returned contains entries like ``` selenium.webdriver.remote.webelement.WebElement (session="238c4f20-f995-11e6-9445-570b2cf065ee", element=":wdc:1487832970059")> ``` which I get when I try print the array entries. I assume these are the files referenced with the script tag that I have found. I cannot access them in any way to check if the file name or path is correct. Any advice on how to do this? Thanks Rudi
2017/02/23
[ "https://Stackoverflow.com/questions/42409365", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7609361/" ]
You need to use

```
for i in page:
    print(i.get_attribute('src'))
```

This should print a `JavaScript` file name like `https://www.google-analytics.com/analytics.js`

Also you should note that some `<script>` tags could contain just `JavaScript` code, not a reference to a remote file. If you want to get this code you need `i.get_attribute('textContent')`

**Update**

If you want to get scripts from an `iframe` as well, try:

```
for frame in browser.find_elements_by_tag_name('iframe'):
    browser.switch_to.frame(frame)
    for i in browser.find_elements(By.TAG_NAME, 'script'):
        print(i.get_attribute('src'))
    browser.switch_to.default_content()
```
As you are using PhantomJS, why not use its scripts to capture this data? You can use `netlog.js` to capture all network data loaded for a given page in HAR format. Later, use a python HAR parser to list all your .js or img files. Command line:

```
phantomjs --cookies-file=/tmp/foo netlog.js https://google.com
```

[netlog.js](https://github.com/ariya/phantomjs/blob/master/examples/netlog.js)

[Har Parser for Python](https://pypi.python.org/pypi/haralyzer/1.4.10)
16,161
62,772,454
If given a year-week range e.g. start\_year, start\_week = (2019,45) and end\_year, end\_week = (2020,15), how can I check in python whether a Year-Week of interest is within the above range or not? For example, for Year = 2020 and Week = 5, I should get a 'True'.
2020/07/07
[ "https://Stackoverflow.com/questions/62772454", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6870708/" ]
Assuming all Year-Week pairs are well-formed (so there's no such thing as `(2019, 74)`), you can just check with:

```
start_year_week = (2019, 45)
end_year_week = (2020, 15)

under_test_year_week = (2020, 5)

in_range = start_year_week <= under_test_year_week < end_year_week  # True
```

Python does tuple comparison by first comparing the first elements, and if they're equal then comparing the second, and so on. And that is exactly what you want, even without treating them as actual dates/weeks :D

(using `<` or `<=` based on whether you want `(2019, 45)` or `(2020, 15)` to be included or not.)
You can parse year and week to a `datetime` object. If you do the same with your test-year /-week, you can use comparison operators to see if it falls within the range. ``` from datetime import datetime start_year, start_week = (2019, 45) end_year, end_week = (2020, 15) # start date, beginning of week date0 = datetime.strptime(f"{start_year} {start_week} 0", "%Y %W %w") # end date, end of week date1 = datetime.strptime(f"{end_year} {end_week} 6", "%Y %W %w") testyear, testweek = (2020, 5) testdate = datetime.strptime(f"{testyear} {testweek} 0", "%Y %W %w") date0 <= testdate < date1 # True ```
16,162
45,403,597
trying to deploy my app to a uwsgi server. my settings file:

```
STATIC_ROOT = "/home/root/djangoApp/staticRoot/"
STATIC_URL = '/static/'
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, "static"),
    '/home/root/djangoApp/static/',
]
```

and url file:

```
urlpatterns = [
    #urls
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
```

and if I try to execute the command:

> 
> python manage.py collectstatic
> 
> 

Then some files are okay (admin files), but I see an error next to files from the static folder. The error is like:

> 
> Found another file with the destination path 'js/bootstrap.min.js'. It
> will be ignored since only the first encountered file is collected. If
> this is not what you want, make sure every static file has a unique
> path.
> 
> 

and I have no idea what I can do to solve it. Thanks in advance,
2017/07/30
[ "https://Stackoverflow.com/questions/45403597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8375888/" ]
The two paths you have in STATICFILES\_DIRS are the same. So Django copies the files from one of them, then goes on to the second and tries to copy them again, only to see the files already exist. Remove one of those entries, preferably the second.
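i.e. something like this (a sketch, assuming `BASE_DIR` already points at `/home/root/djangoApp`):

```
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, "static"),
]
```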
Do you have more than one application? If so, you should put each file in a subdirectory with a unique name (like the app name, for example). collectstatic collects files from all the /static/ subdirectories, and if there is a duplication, it throws this error.
16,165
72,664,087
I'm using python3 tkinter to build a small GUI on Linux CentOS. I have my environment set up with all the dependencies installed (cython, numpy, panda, etc). When I go to install tkinter

```
pip3 install tk

$ python3
Python 3.6.8 (default, Nov 16 2020, 16:55:22) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tkinter as tk
>>> No module found: tkinter
```

I get the above error: despite 'pip list' displaying the 'tk' dependency, python still throws the error. The dependency correctly shows up in "site-packages" as well. But when I use yum to install tkinter

```
sudo yum install python3-tkinter
```

and do the same thing

```
python3
Python 3.6.8 (default, Nov 16 2020, 16:55:22) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tkinter as tk
>>> tkinter._test()
```

it works perfectly fine. The issue is that if I want to package all the dependencies together and share it, the working version of tkinter won't be in the package, and other users will be confused when they build the project.

Why is 'pip install tk' not being recognized as a valid installation of tkinter, but 'sudo yum install python3-tkinter' works? All the other dependencies work with pip; it's just tkinter that is broken. How can I make python recognize the pip installation?
2022/06/17
[ "https://Stackoverflow.com/questions/72664087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11760778/" ]
> > Why is 'pip install tk' not being recognized as a valid installation of tkinter but 'sudo yum install python3-tkinter' works? > > > Because `pip install tk` installs an old package called tensorkit, not tkinter. You can't install tkinter with pip.
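A quick way to see what Python actually imported (sketch): after the yum install this should point at the system python3-tkinter files, while with only `pip install tk` the import fails outright.

```
import tkinter
print(tkinter.__file__)
```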
So I don't know if CentOS uses apt, but you can try first uninstalling tkinter with pip and then using apt to install it

```
sudo apt-get install python3-tk
```
16,166
73,584,455
I am trying to create a diverging dot plot with python and I am using seaborn relplot to do the small multiples with one of the columns. The data source is MakeoverMonday 2018w18: [MOM2018w48](https://data.world/makeovermonday/2018w48) I got this far with this code:

```
sns.set_style("whitegrid")
g=sns.relplot(x=cost ,y=city, col=item, s=120, size = cost, hue = cost, col_wrap= 2)
sns.despine(left=True, bottom=True)
```

which generates this:

[![relplot dot plot](https://i.stack.imgur.com/vmojM.png)](https://i.stack.imgur.com/vmojM.png)

So far, so good. Now, I want only horizontal gridlines, to sort it, and to get rid of the column name ('item =') in the small-multiple charts. Any ideas? This is what I am trying to recreate:

[![enter image description here](https://i.stack.imgur.com/CP5iE.png)](https://i.stack.imgur.com/CP5iE.png)
2022/09/02
[ "https://Stackoverflow.com/questions/73584455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3433875/" ]
Like most query interfaces, the `Query()` function can only execute one SQL statement at a time. MySQL's prepared statements don't work with multi-query. You could solve this by executing the `SET` statement in one call, then the `SELECT` in a second call. But you'd have to take care to ensure they are executed on the same database connection, or else the connection pool is likely to run them on different connections. So you'd need to do something like: ``` conn, err := d.Conn(context.TODO()) conn.QueryContext(context.TODO(), "SET ...") conn.QueryContext(context.TODO(), "SELECT ...") ``` Alternatively, change the way you prepare the ORDER BY so you don't need user-defined variables. The way I'd do it is to build the ORDER BY statement in Go code instead of in SQL, using a string map to ensure a valid column and direction is used. If the input is not in the map, then set a default order to the primary key. ``` validOrders := map[string]string{ "type,asc": "type ASC", "type,desc": "type DESC", "visible,asc": "visible ASC", "visible,desc": "visible DESC", "create_date,asc": "create_date ASC", "create_date,desc": "create_date DESC", "update_date,asc": "update_date ASC", "update_date,desc": "update_date DESC", } orderBy, ok := validOrders[srt] if !ok { orderBy = "id ASC" } query := fmt.Sprintf(` SELECT ... WHERE user_id = ? ORDER BY %s LIMIT ?, ? `, orderBy) ``` This is safe with respect to SQL injection, because the function input is not interpolated into the query. It's the value from my map that is interpolated into the query, and the value is under my control. If someone tries to input some malicious value, it won't match any key in my map, so it'll just use the default sort order.
Unless drivers implement a special interface, the query is prepared on the server before execution. Bindvars are therefore database specific:

* MySQL: uses the ? variant shown above
* PostgreSQL: uses an enumerated $1, $2, etc. bindvar syntax
* SQLite: accepts both ? and $1 syntax
* Oracle: uses a :name syntax
* MsSQL: uses @ (as you do)

I guess that's why you can't do what you want with Query().
16,167
40,322,718
I'm new to getting data using API and Python. I want to pull data from my trading platform. They've provided the following instructions: <http://www.questrade.com/api/documentation/getting-started> I'm ok up to step 4 and have an access token. I need help with step 5. How do I translate this request: ``` GET /v1/accounts HTTP/1.1 Host: https://api01.iq.questrade.com Authorization: Bearer C3lTUKuNQrAAmSD/TPjuV/HI7aNrAwDp ``` into Python code? I've tried ``` import requests r = requests.get('https://api01.iq.questrade.com/v1/accounts', headers={'Authorization': 'access_token myToken'}) ``` I tried that after reading this: [python request with authentication (access\_token)](https://stackoverflow.com/questions/13825278/python-request-with-authentication-access-token) Any help would be appreciated. Thanks.
2016/10/29
[ "https://Stackoverflow.com/questions/40322718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4838024/" ]
As you point out, after step 4 you should have received an access token as follows: ``` { “access_token”: ”C3lTUKuNQrAAmSD/TPjuV/HI7aNrAwDp”, “token_type”: ”Bearer”, “expires_in”: 300, “refresh_token”: ”aSBe7wAAdx88QTbwut0tiu3SYic3ox8F”, “api_server”: ”https://api01.iq.questrade.com” } ``` To make subsequent API calls, you will need to construct your URI as follows: ``` uri = [api_server]/v1/[rest_operation] e.g. uri = "https://api01.iq.questrade.com/v1/time" Note: Make sure you use the same [api_server] that you received in your json object from step 4, otherwise your calls will not work with the given access_token ``` Next, construct your headers as follows: ``` headers = {'Authorization': [token_type] + ' ' + [access_token]} e.g. headers = {'Authorization': 'Bearer C3lTUKuNQrAAmSD/TPjuV/HI7aNrAwDp'} ``` Finally, make your requests call as follows ``` r = requests.get(uri, headers=headers) response = r.json() ``` Hope this helps! Note: You can find a Questrade API Python wrapper on GitHub which handles all of the above for you. <https://github.com/pcinat/QuestradeAPI_PythonWrapper>
Improving a bit on Peter's reply (thank you Peter!): start by using the token you got from the QT website to obtain an access\_token and get an api\_server assigned to handle your requests.

```
# replace XXXXXXXX with the token given to you in your questrade account
import requests
r = requests.get('https://login.questrade.com/oauth2/token?grant_type=refresh_token&refresh_token=XXXXXXXX')

access_token = str(r.json()['access_token'])
refresh_token= str(r.json()['refresh_token'])
# you will need this refresh_token to obtain another access_token when it expires

api_server= str(r.json()['api_server'])
token_type= str(r.json()['token_type'])
expires_in = str(r.json()['expires_in'])

# uri = api_server+'v1/'+[action] - let's try checking the server's time:
uri = api_server+'v1/'+'time'
headers = {'Authorization': token_type +' '+access_token}

# headers will look sth like {'Authorization': 'Bearer ix7rAhcXx83judEVUa8egpK2JqhPD2_z0'}
# uri will look sth like 'https://api05.iq.questrade.com/v1/time'
# you can test now with

r = requests.get(uri, headers=headers)
response = r.json()
print(response)
```
16,170
51,750,967
[![enter image description here](https://i.stack.imgur.com/qpDFX.jpg)](https://i.stack.imgur.com/qpDFX.jpg)I'm trying to control a relay board (USB RLY08) using a section of python code I found online (<https://github.com/jkesanen/usbrly08/blob/master/usbrly08.py>). It is currently returning an error which I'm not sure about.[![enter image description here](https://i.stack.imgur.com/6N1Ir.jpg)](https://i.stack.imgur.com/6N1Ir.jpg) Does anyone have any ideas? I'm not actually needing all the code and instead just wanting to turn a single relay on and off. Thanks.
2018/08/08
[ "https://Stackoverflow.com/questions/51750967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4153219/" ]
You are probably getting this error because the **pyserial** module is not installed on your system. Try installing the pyserial package from the PyPI index using the command below:

```
python -m pip install pyserial
```
You need to install pyserial, e.g. with

```
pip install pyserial
```
16,171
30,316,639
I am looking for a way to calculate a square root with arbitrary precision (something like 50 digits after the dot). In python, it is easily accessible with [Decimal](https://docs.python.org/2/library/decimal.html):

```
from decimal import *
getcontext().prec = 50
Decimal(2).sqrt()  # and here you go my 50 digits
```

After seeing the power of `math/big` I skimmed through the [documentation](https://golang.org/pkg/math/big/#Rat) but have not found anything similar. So is my only option to write some sort of [numerical computing method](http://en.wikipedia.org/wiki/Methods_of_computing_square_roots) which will iteratively try to compute the answer?
2015/05/19
[ "https://Stackoverflow.com/questions/30316639", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1090562/" ]
This is my own implementation of square root calculation. While waiting for answers, I decided to give [methods of computing square roots](http://en.wikipedia.org/wiki/Methods_of_computing_square_roots) a try. It has a whole bunch of methods, but at the very end I found a link to a [Square roots by subtraction](http://www.afjarvis.staff.shef.ac.uk/maths/jarvisspec02.pdf) pdf, which I really liked because the description of the algorithm is only a couple of lines (and I had not seen it before, in comparison to Newton's method). So here is my implementation (bigint is not really nice to work with in go):

```
func square(n int64, precision int64) string{
    ans_int := strconv.Itoa(int(math.Sqrt(float64(n))))
    limit := new(big.Int).Exp(big.NewInt(10), big.NewInt(precision + 1), nil)
    a := big.NewInt(5 * n)
    b := big.NewInt(5)
    five := big.NewInt(5)
    ten := big.NewInt(10)
    hundred := big.NewInt(100)

    for b.Cmp(limit) < 0{
        if a.Cmp(b) < 0{
            a.Mul(a, hundred)
            tmp := new(big.Int).Div(b, ten)
            tmp.Mul(tmp, hundred)
            b.Add(tmp, five)
        } else {
            a.Sub(a, b)
            b.Add(b, ten)
        }
    }

    b.Div(b, hundred)
    ans_dec := b.String()
    return ans_dec[:len(ans_int)] + "." + ans_dec[len(ans_int):]
}
```

**P.S.** Thank you Nick Craig-Wood for making the code better with your amazing comment. And using it, one can find that `square(8537341, 50)` is:

> 
> 2921.8728582879851242173838229735693053765773170487
> 
> 

which differs only in the last digit from python's

```
getcontext().prec = 50
print str(Decimal(8537341).sqrt())
```

> 
> 2921.8728582879851242173838229735693053765773170488
> 
> 

This one digit is off because the last digit is not really precise. As always, [Go Playground](http://play.golang.org/p/u1CoB4cwXy).

**P.S.** If someone finds a native way to do this, I will gladly give my accept and upvote.
Adding precision
----------------

There is probably a solution in go, but as I don't code in go, here is a general solution. For instance, if your selected language doesn't provide a way to handle the precision of floats (it has happened to me): If your language gives you N digits after the dot, you can, in the case of the square root, multiply the input, here `2`, by `10^(2*number_of_extra_digits)`. For instance, if go gave you only `1.41` as an answer but you wanted `1.4142`, then you ask it for the square root of `2*10^(2*2) = 2*10000` instead and you get `141.42` as an answer. Now I leave it up to you to rectify the placement of the dot.

**Explanation:** There is some math magic behind it. If you wanted to add some precision to a simple division, you would just need to multiply the input by `10^number_of_extra_digits`. The trick is to multiply the input to get more precision, as we can't multiply the output (the loss of precision has already happened). It works because most languages cut more decimals after the dot than before it. So we just need to change the *output equation* to the *input equation* (when possible):

For simple division: `(a/b) * 10 = (a*10)/b`

For square root: `sqrt(a) * 10 = sqrt(a) * sqrt(100) = sqrt(a*100)`

Reducing precision
------------------

Some similar tinkering can also help to reduce the precision if needed. For instance, say you were trying to calculate the progression of a download in percent, with two digits after the dot. Let's say we downloaded 1 file out of 3; then `1/3 * 100` would give us `33.33333333`. If there is no way to control the precision of this float, then you could do `cast_to_an_int(1/3 * 100 * 100) / 100` to return `33.33`.
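A quick numeric check of the scaling identity from the first section (plain Python, just for illustration):

```
import math

extra_digits = 4
scaled_input = 2 * 10 ** (2 * extra_digits)   # sqrt(a) * 10^d == sqrt(a * 10^(2d))
print(int(math.sqrt(scaled_input)))           # 14142 -> place the dot: 1.4142
```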
16,173
51,395,535
I'm trying to get my head around \*\*kwargs in python 3 and am running into a strange error. Based on [this post](https://stackoverflow.com/questions/1769403/understanding-kwargs-in-python) on the matter, I tried to create my own version to confirm it worked for me. ``` table = {'Person A':'Age A','Person B':'Age B','Person C':'Age C'} def kw(**kwargs): for i,j in kwargs.items(): print(i,'is ',j) kw(table) ``` The strange thing is that I keep getting back `TypeError: kw() takes 0 positional arguments but 1 was given`. I have no idea why and can see no appreciable difference between my code and the code in the example at the provided link. Can someone help me determine what is causing this error?
2018/07/18
[ "https://Stackoverflow.com/questions/51395535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8032508/" ]
Call the `kw` function with `kw(**table)`; the `**` unpacks the dict into keyword arguments.

Python 3 Doc: [link](https://docs.python.org/3.2/glossary.html)
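A minimal sketch of the difference (the dict contents are illustrative):

```
def kw(**kwargs):
    for key, value in kwargs.items():
        print(key, 'is', value)

table = {'a': 1, 'b': 2}

# kw(table)  # raises: kw() takes 0 positional arguments but 1 was given
kw(**table)  # works: the dict is unpacked into keyword arguments
```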
There's no need to make `kwargs` a variable keyword argument here. By specifying `kwargs` with `**` you are defining the function with a variable number of keyword arguments but no positional argument, hence the error you're seeing. Instead, simply define your `kw` function with: ``` def kw(kwargs): ```
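For completeness, a small sketch of this alternative (the dict contents are illustrative):

```
def kw(kwargs):  # a plain positional parameter, despite the name
    for key, value in kwargs.items():
        print(key, 'is', value)

table = {'a': 1, 'b': 2}
kw(table)  # works: the dict is passed as a single positional argument
```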
16,174
66,204,201
I'm trying to install pymatgen in Google colab via the following command: ``` !pip install pymatgen ``` This throws the following error: ``` Collecting pymatgen Using cached https://files.pythonhosted.org/packages/06/4f/9dc98ea1309012eafe518e32e91d2a55686341f3f4c1cdc19f1f64cb33d0/pymatgen-2021.2.14.tar.gz Installing build dependencies ... error ERROR: Command errored out with exit status 1: /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-9j4h3p2n/overlay --no-warn-script-location -v --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'numpy>=1.20.1' 'setuptools>=43.0.0' Check the logs for full command output. ``` Trying to install with following: ``` !pip install -vvv pymatgen ``` This throws the following error: ``` pip._internal.exceptions.InstallationError: Command errored out with exit status 1: /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-g1m0e202/overlay --no-warn-script-location -v --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'numpy>=1.20.1' 'setuptools>=43.0.0' Check the logs for full command output. ``` Please help solve this issue.
2021/02/15
[ "https://Stackoverflow.com/questions/66204201", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15211978/" ]
You will need to validate within the addNewUser method, and then `throw` an exception when your validation fails.

Example

```java
if(username.length() > 10) {
    throw new Exception("Username is too long");
}
```

It will then be caught by your try-catch statement.
There are a few things to consider here.

With a try-catch block you can manage exceptions that occur in your program flow. When writing a program it's a good idea to make it as clear as possible so that other people reading it later can understand it better. To that end, consider refactoring the methods. For example

```
public void addNewUser(String username, String password) throws exceptions.InvalidUsernameException, exceptions.InvalidPasswordException, exceptions.DuplicateUserException {

    // Check if the username has a correct format
    if (EligibilityCheck.getInstance().checkingUserName(username)) {
        throw new InvalidUsernameException();
    }

    // Check if the password has a correct format
    if (EligibilityCheck.getInstance().checkingPassword(password)) {
        throw new InvalidPasswordException();
    }

    // Check if username is already being used
    if (loginVault.containsKey(username)) {
        throw new DuplicateUserException();
    }

    //If not, success
    loginVault.put(username, CaesarCipher.getInstance().encrypt(password));
    userLogin.put(username, null);
}
```

This goes through all the checks in sequence just like the if/else branches did, but it's clearer to read.

The `addOneUser` method doesn't do anything (apparently) meaningful when an exception is raised. Consider using the exception handling to send meaningful messages to the user as appropriate.

You mention the program is not printing the output you expect it to. Look into using a testing framework for your test cases, such as JUnit, so that you can make assertions, look at code coverage, etc.

For a simple example in plain English, the first check is on the user name for the correct format. From the code it's not evident what that might be. If for instance a "valid username" is one with only lowercase a-z characters, then you could make the following test cases:

> 
> When a username with anything other than a-z (such as A-Z, 0-9, special characters) is provided, an invalid user exception will be thrown
> 
> 

> 
> When a username with only a-z characters is provided, no user exception will be thrown
> 
> 

You can then write these tests and use assertions as appropriate. Consider also using a static analysis tool like Sonar to help with code quality.
16,179
16,536,071
I was working on these functions (see [this](https://stackoverflow.com/questions/16525224/how-to-breakup-a-list-of-list-in-a-given-way-in-python)):

```
def removeFromList(elementsToRemove):
    def closure(list):
        for element in elementsToRemove:
            if list[0] != element:
                return
            else:
                list.pop(0)
    return closure

def func(listOfLists):
    result = []
    for i, thisList in enumerate(listOfLists):
        result.append(thisList)
        map(removeFromList(thisList), listOfLists[i+1:])
    return result
```

I have a list which I want to pass as an argument, but I want this list to remain intact. What I tried is:

```
my_list = [[1], [1, 2], [1, 2, 3]]
print my_list
#[[1], [1, 2], [1, 2, 3]]

copy_my_list = list (my_list)
#This also fails
#copy_my_list = my_list [:]

print id (my_list) == id (copy_my_list)
#False

print func (copy_my_list)
#[[1], [2], [3]]

print my_list
#[[1], [2], [3]]
```

But it does change my original list. Any ideas?
2013/05/14
[ "https://Stackoverflow.com/questions/16536071", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2338725/" ]
Use `copy.deepcopy`: ``` from copy import deepcopy new_list = deepcopy([[1], [1, 2], [1, 2, 3]]) ``` Demo: ``` >>> lis = [[1], [1, 2], [1, 2, 3]] >>> new_lis = lis[:] # creates a shallow copy >>> [id(x)==id(y) for x,y in zip(lis,new_lis)] [True, True, True] #inner lists are still the same object >>> new_lis1 = deepcopy(lis) # create a deep copy >>> [id(x)==id(y) for x,y in zip(lis,new_lis1)] [False, False, False] #inner lists are now different object ```
both with `list(my_list)` and `my_list[:]` you get a shallow copy of the list. ``` id(copy_my_list[0]) == id(my_list[0]) # True ``` so use `copy.deepcopy` to avoid your problem: ``` copy_my_list = copy.deepcopy(my_list) id(copy_my_list[0]) == id(my_list[0]) # False ```
16,181
29,813,423
In the Python GUI code below, I am trying to select values from the drop-down menu buttons (graph and density) and pass them as command-line arguments to the os.system command in the readfile() function. The problem is that the values I select from the drop-down menus are not reaching the os.system command.

```
import os
import Tkinter as tk

def buttonClicked(btn):
    density= btn

def graphselected(graphbtn):
    graph=graphbtn

def readfile():
    os.system( 'python C:Desktop/python/ABC.py graph density')

root = tk.Tk()
root.title("Dense Module Enumeration")

btnList=[0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0]

btnMenu = tk.Menubutton(root, text='Density')
contentMenu = tk.Menu(btnMenu)
btnMenu.config(menu=contentMenu)

for btn in btnList:
    contentMenu.add_command(label=btn, command = lambda btn=btn: buttonClicked(btn))

btnMenu.pack()

graph_list=['graph1.txt','graph2.txt','graph3.txt','graph.txt']
btnMenu = tk.Menubutton(root, text='graph')
contentMenu = tk.Menu(btnMenu)
btnMenu.config(menu=contentMenu)

for btn in graph_list:
    contentMenu.add_command(label=btn, command =lambda btn= btn: graphselected(btn))

btnMenu.pack()

button = tk.Button(root, text="DME", command=readfile)
button.pack()

root.mainloop()
```
2015/04/23
[ "https://Stackoverflow.com/questions/29813423", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2014111/" ]
It is easy to implement with [functools.partial](https://docs.python.org/2/library/functools.html#functools.partial) - apply the needed value to your function for each button. Here is a sample:

```
from functools import partial

import Tkinter as tk


BTNLIST = [0.0, 0.1, 0.2]


def btn_clicked(payload=None):
    """Just prints out given payload."""
    print('Me was clicked. Payload: {}'.format(payload))


def init_controls():
    """Prepares GUI controls and starts mainloop"""
    root = tk.Tk()
    menu = tk.Menu(root)
    root.config(menu=menu)

    sample_menu = tk.Menu(menu)
    menu.add_cascade(label="Destiny", menu=sample_menu)
    for btn_value in BTNLIST:
        sample_menu.add_command(
            label=btn_value,
            # Here is the trick with partial
            command=partial(btn_clicked, btn_value)
        )

    root.mainloop()

init_controls()
```
The way you have it, `graph` and `density` are local variables to `graphselected()` and `buttonClicked()`. Therefore, `readfile()` can never access these variables unless you declare them as global in all three functions. Then you want to format a string to incorporate the values in `graph` and `density`. You can do that using the strings [`.format` method](https://docs.python.org/2/library/stdtypes.html#str.format). Combining that your three functions become ``` def buttonClicked(btn): global density density = btn def graphselected(graphbtn): global graph graph = graphbtn def readfile(): global density, graph os.system('python C:Desktop/python/ABC.py {} {}'.format(graph, density)) ```
16,184
6,958,833
I'm trying to insert a string that was received as an argument into a sqlite db using python:

```
def addUser(self, name):        
    cursor=self.conn.cursor()
    t = (name)
    cursor.execute("INSERT INTO users ( unique_key, name, is_online, translate) VALUES (NULL, ?, 1, 0);", t)
    self.conn.commit()
```

I don't want to use string concatenation because <http://docs.python.org/library/sqlite3.html> advises against it. However, when I run the code, I get the exception

```
cursor.execute("INSERT INTO users ( unique_key, name, is_online, translate) VALUES (NULL, ?, 1, 0);", t)
pysqlite2.dbapi2.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 7 supplied
```

Why is Python splitting the string by characters, and is there a way to prevent it from doing so?

EDIT: changing to `t = (name,)` gives the following exception

```
print "INSERT INTO users ( unique_key, name, is_online, translate) VALUES (NULL, ?, 1, 0)" + t
exceptions.TypeError: cannot concatenate 'str' and 'tuple' objects
```
2011/08/05
[ "https://Stackoverflow.com/questions/6958833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/752462/" ]
You need this: ``` t = (name,) ``` to make a single-element tuple. Remember, it's **commas** that make a tuple, not brackets!
Your `t` variable isn't a tuple; I think it is a string of length 7. To make a tuple, don't forget the trailing comma:

```
t = (name,)
```
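A quick illustration of the difference (the value is arbitrary):

```
name = "abcdefg"

t1 = (name)   # parentheses alone do nothing: still a str
t2 = (name,)  # the trailing comma makes a one-element tuple

print(type(t1))  # <type 'str'>
print(type(t2))  # <type 'tuple'>
```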
16,185
41,196,390
I have my `index.py` in `/var/www/cgi-bin`

My `index.py` looks like this :

```
#!/usr/bin/python

print "Content-type:text/html\r\n\r\n"
print '<html>'
print '<head>'
print '<title>Hello Word - First CGI Program</title>'
print '</head>'
print '<body>'
print '<h2>Hello Word! This is my first CGI program</h2>'
print '</body>'
print '</html>'
```

My apache2 `/etc/apache2/sites-enabled/000-default.conf` looks like this :

```
<VirtualHost *:80>
    <Directory /var/www/cgi-bin>
        Options +ExecCGI
        AddHandler cgi-script .cgi .py
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```

Let me know if anything else also requires modification; I have already enabled CGI.

The problem is that no matter what URL I visit, I keep getting a **Not Found** error: [localhost](http://localhost) , or [localhost/index.py](http://localhost/index.py)
2016/12/17
[ "https://Stackoverflow.com/questions/41196390", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3405554/" ]
Try this :

Enable `CGI`:

`a2enmod cgid`

`chmod a+x /var/www/cgi-bin/index.py`

Also check: is the `cgi-bin` directory owned by `www-data`?

You need a `Directory` definition in every `VirtualHost`!

Sometimes a `restart` is required to kill all `apache` threads!

```
DocumentRoot /var/www/htdocs
# A includes B if the owners are the same!
<Directory /var/www/htdocs/cgi-bin/ >
    FallbackResource /index.py
    Options +ExecCGI -MultiViews -SymLinksIfOwnerMatch -Indexes
    Order allow,deny
    Allow from all
    AddHandler cgi-script .py
</Directory>
```
Add these lines at the top of your CGI script to enable traceback reporting in the browser:

```
import cgi
import cgitb; cgitb.enable()
```
16,186
19,090,032
I need to scrape the career pages of multiple companies (with their permission).

Important factors in deciding what I use:

1. I will be scraping around 2000 pages daily, so I need a decently fast solution
2. Some of these pages populate data via ajax after the page is loaded.
3. My web stack is Ruby/Rails with MySQL etc.
4. I have written scrapers earlier using scrapy (Python) (+ Selenium for ajax-enabled pages).

My doubts:

1. I am confused whether I should go with Python (i.e. scrapy + Selenium; I think this is the best alternative in Python), or instead prefer something in Ruby (as my entire codebase is in Ruby).
2. Scrapy + Selenium is often slow; are there faster alternatives in Ruby? (this would make the decision easier) The most popular Ruby alternative with support for ajax-loaded pages seems to be **Watir**. Can anybody comment on its speed? Also, are there any other Ruby alternatives (e.g. **Mechanize/Nokogiri** + *something else for ajax-loaded pages*)?

**EDIT**

Ended up using Watir-webdriver + Nokogiri, so that I can take advantage of ActiveRecord while storing data. Nokogiri is much faster than Watir-webdriver at extracting data. Scrapy would have been faster, but the speed tradeoff wasn't as significant as the complexity tradeoff in handling different kinds of websites in scrapy (e.g. ajax-driven search on some target sites, which I necessarily have to go through). Hopefully this helps someone.
2013/09/30
[ "https://Stackoverflow.com/questions/19090032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1549934/" ]
The real benefit of closures and higher-order functions is that they can represent what the programmer sometimes has in mind. If you as the programmer find that what you have in mind is a piece of code, a function, an instruction on how to compute something (or do something), then you should use a closure for this. If, on the other hand, what you have in mind is more like an object, a thing (which happens to have some properties, methods, instructions, capabilities, etc.), then you should program it as an object, a class. In your case I think the best way to implement this is neither ;-) I'd do this with a generator: ``` def incrX(i): while True: i += 1 i %= 10 yield i incr = incrX(10) print incr.next() print incr.next() ```
With a closure, you can do without the `self` variable. In particular, when there are many variables to be passed around, a closure can be more readable.

```
class Incr:
    """a class that increments internal variable"""
    def __init__(self, i):
        self._i = i

    def __call__(self):
        self._i = (self._i + 1) % 10
        return self._i
```

```
def incr(i):
    """closure that increments internal variable"""
    def incr():
        nonlocal i
        i = (i + 1) % 10
        return i
    return incr
```

```
print('class...')
a = Incr(10)
print(a())  # 1
print(a())  # 2

print('closure...')
b = incr(10)
print(b())  # 1
print(b())  # 2
```
16,187
43,190,221
I have a training file in the following format:

> 
> 0.086, 0.4343, 0.4212, ...., class1
> 
> 
> 0.086, 0.4343, 0.4212, ...., class2
> 
> 
> 0.086, 0.4343, 0.4212, ...., class5
> 
> 

where each row is a one-dimensional vector and the last column is the class that the vector represents. Note that a vector can repeat several times, since it can belong to several classes. I read this data with the Python pandas library.

That said, I need to train a convolutional network on this data. I have already researched several sites without much success, and I also do not know whether the network needs to be prepared for the "multi-class" form.

I would like to know if someone knows a multi-class 1D classification approach with TensorFlow, or could guide me with an example: after training the network, I need to pass it a sample (which would be a vector) and have the network output the percentage for each class.

Thank you!
2017/04/03
[ "https://Stackoverflow.com/questions/43190221", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6363322/" ]
This is a pretty straightforward setup.

First thing to know: your labels need to be in "one-hot encoding" format. That means, if you have 5 classes, class 1 is represented by the vector [1,0,0,0,0], class 2 by the vector [0,1,0,0,0], and so on. This is standard.

Second, you mention that you want multi-class classification. But the example you gave is single-class classification. So this is probably a terminology clarification here. When you say multi-class classification it means that you want a single sample to belong to more than one class, let's say your first sample is part of both class 2 and class 3 (strictly speaking, this is usually called multi-label classification). But it doesn't look like that in your case.

So for single-class classification with 5 classes you want to use cross entropy as your loss function. You can follow the cifar 10 tutorial. This is the same setup, where each image is 1 of 10 classes.

<https://www.tensorflow.org/tutorials/deep_cnn>

You mentioned that your data is 1-dimensional. This is trivial to accomplish: just treat it like the cifar 10 2-dimensional data with one of those dimensions set to 1. You don't need to change any other code. In the cifar 10 example your images are 32x32; in your data the "images" will be maybe 32x1 or 10x1, and the same applies to whatever kernel you decide on (try different kernel sizes!). The same change will apply to stride. Just treat your problem as a 2D problem with a flat 2nd dimension, easy as pie.
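A minimal sketch of the one-hot encoding step in plain NumPy (the label values are illustrative):

```
import numpy as np

labels = np.array([0, 2, 4, 1])        # one class index per sample, 5 classes
num_classes = 5

one_hot = np.eye(num_classes)[labels]  # row i is the one-hot vector of sample i
print(one_hot[0])                      # [1. 0. 0. 0. 0.] -> class 1
```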
From what I understand, you have a multi-label problem, meaning that a sample can belong to more than one class.

Take a look at [sigmoid\_cross\_entropy\_with\_logits](https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits) and use that as your loss function. You do not need to use one-hot encoding or repeat your samples for each label they belong to with this loss function. Just use a label vector and set the entries for the classes that the sample belongs to to one.
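A minimal sketch of calling that loss (assuming TensorFlow 2-style eager execution; the tensor values are illustrative):

```
import tensorflow as tf

# Each labels row marks every class the sample belongs to (multi-label).
labels = tf.constant([[1., 0., 1., 0., 0.],
                      [0., 1., 0., 0., 0.]])
logits = tf.constant([[ 2.0, -1.0,  0.5, -2.0, -1.5],
                      [-1.0,  3.0, -0.5, -2.0, -1.0]])

per_class = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
loss = tf.reduce_mean(per_class)
print(loss)
```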
16,189
69,262,618
So I just watched a tutorial in which the author didn't need to `import sklearn` when using the `predict` function of a pickled model in an anaconda environment (sklearn installed). I have tried to reproduce a minimal version of it in Google Colab.

If you have a pickled sklearn model, the code below works in Colab (sklearn installed):

```
import pickle
model = pickle.load(open("model.pkl", "rb"), encoding="bytes")
out = model.predict([[20, 0, 1, 1, 0]])
print(out)
```

I realized that I still need the sklearn package installed. If I uninstall sklearn, the code no longer works:

```
!pip uninstall scikit-learn
import pickle
model = pickle.load(open("model.pkl", "rb"), encoding="bytes")
out = model.predict([[20, 0, 1, 1, 0]])
print(out)
```

the error:

```
WARNING: Skipping scikit-learn as it is not installed.
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-1-dec96951ae29> in <module>()
      1 get_ipython().system('pip uninstall scikit-learn')
      2 import pickle
----> 3 model = pickle.load(open("model.pkl", "rb"), encoding="bytes")
      4 out = model.predict([[20, 0, 1, 1, 0]])
      5 print(out)

ModuleNotFoundError: No module named 'sklearn'
```

So, how does it work? As far as I understand, pickle doesn't depend on scikit-learn. Does the serialized model do `import sklearn`?

**Why can I use the `predict` function without importing scikit-learn in the first code?**
2021/09/21
[ "https://Stackoverflow.com/questions/69262618", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147347/" ]
There are a few questions being asked here, so let's go through them one by one:

> 
> So, how does it work? as far as I understand pickle doesn't depend on scikit-learn.
> 
> 

There is nothing particular to scikit-learn going on here. Pickle will exhibit this behaviour for any module. Here's an example with Numpy:

```
will@will-desktop ~ $ python
Python 3.9.6 (default, Aug 24 2021, 18:12:51) 
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pickle
>>> import sys
>>> 'numpy' in sys.modules
False
>>> import numpy
>>> 'numpy' in sys.modules
True
>>> pickle.dumps(numpy.array([1, 2, 3]))
b'\x80\x04\x95\xa0\x00\x00\x00\x00\x00\x00\x00\x8c\x15numpy.core.multiarray\x94\x8c\x0c_reconstruct\x94\x93\x94\x8c\x05numpy\x94\x8c\x07ndarray\x94\x93\x94K\x00\x85\x94C\x01b\x94\x87\x94R\x94(K\x01K\x03\x85\x94h\x03\x8c\x05dtype\x94\x93\x94\x8c\x02i8\x94\x89\x88\x87\x94R\x94(K\x03\x8c\x01<\x94NNNJ\xff\xff\xff\xffJ\xff\xff\xff\xffK\x00t\x94b\x89C\x18\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x94t\x94b.'
>>> exit()
```

So far what I've done is show that in a fresh Python process `'numpy'` is not in `sys.modules` (the dict of imported modules). Then we import Numpy, and pickle a Numpy array. Then in a new Python process shown below, we see that before we unpickle the array Numpy has not been imported, but after we have, Numpy has been imported.

```
will@will-desktop ~ $ python
Python 3.9.6 (default, Aug 24 2021, 18:12:51) 
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pickle
>>> import sys
>>> 'numpy' in sys.modules
False
>>> pickle.loads(b'\x80\x04\x95\xa0\x00\x00\x00\x00\x00\x00\x00\x8c\x15numpy.core.multiarray\x94\x8c\x0c_reconstruct\x94\x93\x94\x8c\x05numpy\x94\x8c\x07ndarray\x94\x93\x94K\x00\x85\x94C\x01b\x94\x87\x94R\x94(K\x01K\x03\x85\x94h\x03\x8c\x05dtype\x94\x93\x94\x8c\x02i8\x94\x89\x88\x87\x94R\x94(K\x03\x8c\x01<\x94NNNJ\xff\xff\xff\xffJ\xff\xff\xff\xffK\x00t\x94b\x89C\x18\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x94t\x94b.')
array([1, 2, 3])
>>> 'numpy' in sys.modules
True
>>> numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'numpy' is not defined
```

Despite being imported, however, `numpy` is still not a defined variable name. Imports in Python are global, but an import will only update the namespace of the module that actually did the import. If we want to access `numpy` we still need to write `import numpy`, but since Numpy was already imported elsewhere in the process this will not re-run Numpy's module initialization code. Instead it will create a `numpy` variable in our module's globals dictionary, and make it a reference to the Numpy module object that existed beforehand, and could be accessed through `sys.modules['numpy']`.

So what is Pickle doing here? It embeds the information about what module was used to define whatever it is pickling within the pickle. Then when it unpickles something, it uses that information to import the module so that it can use that module's class to reconstruct the object.
Looking at the source code for the Pickle module, we can see that's what's happening:

In the [`_Pickler`](https://github.com/python/cpython/blob/7b88f63e1dd4006b1a08b9c9f087dd13449ecc76/Lib/pickle.py#L407) we see the [`save`](https://github.com/python/cpython/blob/7b88f63e1dd4006b1a08b9c9f087dd13449ecc76/Lib/pickle.py#L535) method uses the [`save_global`](https://github.com/python/cpython/blob/7b88f63e1dd4006b1a08b9c9f087dd13449ecc76/Lib/pickle.py#L1056) method. This in turn uses the [`whichmodule`](https://github.com/python/cpython/blob/7b88f63e1dd4006b1a08b9c9f087dd13449ecc76/Lib/pickle.py#L335) function to obtain the module name (`'scikit-learn'`, in your case), which is then saved in the pickle.

In the [`_Unpickler`](https://github.com/python/cpython/blob/7b88f63e1dd4006b1a08b9c9f087dd13449ecc76/Lib/pickle.py#L1137) we see the [`find_class`](https://github.com/python/cpython/blob/7b88f63e1dd4006b1a08b9c9f087dd13449ecc76/Lib/pickle.py#L1572) method uses [`__import__`](https://docs.python.org/3/library/functions.html#__import__) to import the module using the stored module name. The `find_class` method is used in a few of the `load_*` methods, such as [`load_inst`](https://github.com/python/cpython/blob/7b88f63e1dd4006b1a08b9c9f087dd13449ecc76/Lib/pickle.py#L1497), which is what would be used to load an instance of a class, such as your model instance:

```py
def load_inst(self):
    module = self.readline()[:-1].decode("ascii")
    name = self.readline()[:-1].decode("ascii")
    klass = self.find_class(module, name)
    self._instantiate(klass, self.pop_mark())
```

[The documentation for `Unpickler.find_class` explains](https://docs.python.org/3/library/pickle.html#pickle.Unpickler.find_class):

> 
> Import module if necessary and return the object called name from it, where the module and name arguments are str objects.
> 
> 

[The docs also explain how you can restrict this behaviour](https://docs.python.org/3/library/pickle.html#restricting-globals):

> 
> [You] may want to control what gets unpickled by customizing Unpickler.find\_class(). Unlike its name suggests, Unpickler.find\_class() is called whenever a global (i.e., a class or a function) is requested. Thus it is possible to either completely forbid globals or restrict them to a safe subset.
> 
> 

Though this is generally only relevant when unpickling untrusted data, which doesn't appear to be the case here.

---

> 
> Does the serialized model do import sklearn?
> 
> 

The serialized model itself doesn't *do* anything, strictly speaking. It's all handled by the Pickle module as described above.

---

> 
> Why can I use predict function without import scikit learn in the first code?
> 
> 

Because sklearn is imported by the Pickle module when it unpickles the data, thereby providing you with a fully realized model object. It's just as if some other module imported sklearn, created the model object, and then passed it into your code as a parameter to a function.

---

As a consequence of all this, in order to unpickle your model you'll need to have sklearn installed - ideally the same version that was used to create the pickle. In general the Pickle module stores the fully qualified path of any required module, so the Python process that pickles the object and the one that unpickles the object must both have all [1] required modules available with the same fully qualified names.
--- [1] A caveat to that is that the Pickle module can automatically adjust/fix certain imports for particular modules/classes that have different fully qualified names between Python 2 and 3. From [the docs](https://docs.python.org/3/library/pickle.html#pickle.Unpickler): > > If fix\_imports is true, pickle will try to map the old Python 2 names to the new names used in Python 3. > > >
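Picking up the restricting-globals quote above, here is a minimal sketch of that pattern, close to the one in the Python docs (the whitelist contents are illustrative):

```
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Allow only a small whitelist of globals; refuse everything else.
        if module == "builtins" and name in {"list", "dict", "set"}:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            "global '%s.%s' is forbidden" % (module, name))

def restricted_loads(data):
    return RestrictedUnpickler(io.BytesIO(data)).load()

print(restricted_loads(pickle.dumps([1, 2, 3])))  # [1, 2, 3]
```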
*When the model was first pickled*, you had sklearn installed. The pickle file depends on sklearn for its structure, as the class of the object it represents is a sklearn class, and `pickle` needs to know the details of that class’s structure in order to unpickle the object. When you try to unpickle the file without sklearn installed, `pickle` determines from the file that the class the object is an instance of is `sklearn.x.y.z` or what have you, and then the unpickling fails because the module `sklearn` cannot be found when `pickle` tries to resolve that name. Notice that the exception occurs on the unpickling line, not on the line where `predict` is called. You don’t need to import sklearn in your code when it does work because once the object is unpickled, it knows what its class is and what all its method names are, so you can just call them from the object.
16,190
42,913,788
I'm trying to ask a question in Python so that if the person gets it right, they can move onto the next question. If they get it wrong, they have 3 or so attempts at getting it right before the quiz moves onto the next question.

I thought I had solved it with the program below; however, this just asks the user to make another choice even if they get it correct. How do I move onto the next question if the user gets it correct, while still giving another chance to those that get it wrong?

```
score = 0
counter = 0
while counter<3:
    answer = input("Make your choice >>>> ")
    if answer == "c":
        print("Correct!")
        score += 1
    else:
        print("That is incorrect. Try again.")
        counter = counter +1
print("The correct answer is C!")
print("Your current score is {0}".format(score))
```
2017/03/20
[ "https://Stackoverflow.com/questions/42913788", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7735015/" ]
You're stuck in the loop. So put

```
counter = 3
```

after

```
score += 1
```

to get out of the loop.

```
score = 0
counter = 0
while counter<3:
    answer = input("Make your choice >>>> ")
    if answer == "c":
        print("Correct!")
        score += 1
        counter = 3
    else:
        print("That is incorrect. Try again.")
        counter = counter +1
print("The correct answer is C!")
print("Your current score is {0}".format(score))
```
You're stuck in the loop; a cleaner way of solving this is to use the break statement, as in:

```
score = 0
counter = 0
while counter < 3:
    answer = input("Make your choice >>>> ")
    if answer == "c":
        print ("Correct!")
        score += 1
        break
    else:
        print("That is incorrect. Try Again")
        counter += 1
print("The correct answer is C!")
print("Your current score is {" + str(score) + "}")
```

I would like to highlight a few things about your original code.

1- Python is case sensitive; the code that you gave us will work as long as you type 'c' in lowercase.

2- I edited the last line so it would correctly print the score.

For further reading about control flow and the break statement, try the Python docs here: <https://docs.python.org/2/tutorial/controlflow.html>
16,191
66,157,729
I have some info stored in a MySQL database, something like: `AHmmgZq\n/+AH+G4`

We get that using an API, so when I read it in my Python code I get: `AHmmgZq\\n/+AH+G4`

The backslash is doubled! Now I need to put that into a JSON file; how can I remove the extra backslash?

**EDIT:** let me show my full code:

```
json_dict = {
    "private_key": "AHmmgZq\\n/+AH+G4"
}

print(json_dict)
print(json_dict['private_key'])

with open(file_name, "w", encoding="utf-8") as f:
    json.dump(json_dict, f, ensure_ascii=False, indent=2)
```

In the first print I have the doubled backslash, but in the second one there's only one. When I dump it to the JSON file, the backslash comes out doubled again.
2021/02/11
[ "https://Stackoverflow.com/questions/66157729", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4663446/" ]
Turns out that the badge appears once you open a TeX file. I thought you'd first create a TeX project, then the file.
As you already figured out, the badge appears once you open a TeX file.

Keep in mind also that you have to install LaTeX, or update LaTeX. I say so because I was personally trying to use `\tableofcontents`, but the table wouldn't be generated until the moment I installed texlive using Homebrew (`brew install texlive`)
16,192
39,305,286
According to [documentation](https://docs.python.org/3.4/c-api/capsule.html?highlight=capsule), the third argument to `PyCapsule_New()` can specify a destructor, which I assume should be called when the capsule is destroyed. ``` void mapDestroy(PyObject *capsule) { lash_map_simple_t *map; fprintf(stderr, "Entered destructor\n"); map = (lash_map_simple_t*)PyCapsule_GetPointer(capsule, "MAP_C_API"); if (map == NULL) return; fprintf(stderr, "Destroying map %p\n", map); lashMapSimpleFree(map); free(map); } static PyObject * mapSimpleInit_func(PyObject *self, PyObject *args) { unsigned int w; unsigned int h; PyObject *pymap; lash_map_simple_t *map = (lash_map_simple_t*)malloc(sizeof(lash_map_simple_t)); pymap = PyCapsule_New((void *)map, "MAP_C_API", mapDestroy); if (!PyArg_ParseTuple(args, "II", &w, &h)) return NULL; lashMapSimpleInit(map, &w, &h); return Py_BuildValue("O", pymap); } ``` However, when I instantiate the object and delete it or exit from Python console, the destructor doesn't seem to be called: ``` >>> a = mapSimpleInit(10,20) >>> a <capsule object "MAP_C_API" at 0x7fcf4959f930> >>> del(a) >>> a = mapSimpleInit(10,20) >>> a <capsule object "MAP_C_API" at 0x7fcf495186f0> >>> quit() lash@CANTANDO ~/programming/src/liblashgame $ ``` My guess is that it has something to do with the `Py_BuildValue()` returning a new reference to the "capsule", which upon deletion doesn't affect the original. Anyway, how would I go about ensuring that the object is properly destroyed? Using Python 3.4.3 [GCC 4.8.4] (on linux)
2016/09/03
[ "https://Stackoverflow.com/questions/39305286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3333488/" ]
The code above has a reference leak: `pymap = PyCapsule_New()` returns a new object (its refcount is 1), but `Py_BuildValue("O", pymap)` creates a new reference to the same object, and its refcount is now 2. Just `return pymap;`.
`Py_BuildValue("O", thingy)` will just increment the refcount for `thingy` and return it – the docs say that it returns a “new reference” but that is not quite true when you pass it an existing `PyObject*`. If these functions of yours – the ones in your question, that is – are all defined in the same translation unit, the destructor function will likely have to be declared `static` (so its full signature would be `static void mapDestroy(PyObject* capsule);`) to ensure that the Python API can look up the functions’ address properly when it comes time to call the destructor. … You don’t have to use a `static` function, as long as the destructor’s address in memory is valid. For example, I’ve successfully used [a C++ non-capturing lambda as a destructor](https://github.com/fish2000/libimread/blob/093a4e6d6556b543dbf750a2d61f746b6cf12e77/python/im/include/pycapsule.hpp#L18-L30), as non-capturing C++ lambdas can be converted to function pointers; if you want to use another way of obtaining and handing off a function pointer for your capsule destructor that works better for you, by all means go for it.
16,193
48,775,587
I am trying to learn python through some basic exercises with my own online store. I have a list of parts that are in-transit to us that we have already ordered, and I have a list of parts that we are currently out of stock of. I want to be able to send a list to the supplier of what we need - but I do not want to create duplicate orders as a result of the fact that the parts on order, are listed as out of stock. I put together this basic program that looks through the list of items that are out of stock and only prints the item if it is present in the outofstock list but *not* present in the onorder list, so that if it is on order we do not order it again. However, it outputs nothing. ``` onorder = ["A1417", "A1322", "ISL6259", "LP8545B1SQ", "PM6640", "SLG3NB148V", "PD4HDMIREG", "338S1201", "SN2400B0", "AD7149", "J3801", "J4502", "IPRO97B"] outofstock = ["ISL6259", "LY-UVH900", "triwing", "banana-to-alligator", "LP8548B1SQ", "EDP-J9000-30-PIN-IPEX", "J3801", "LT3470", "PM6640", "SN2400B0", "IPRO97B", "SLG3NB148V", "SN2400AB0", "usbammeter", "821-00814-A", "J5713", "343S0645", "PMCM4401VPE", "J4502", "PMD9645", "J9600", "J2401", "AD7149", "593-1604", "821-1722", "LM3534TMX", "U4001"] for part in onorder: if (part in onorder) == False and (part in outofstock) == True: print (part) ``` It doesn't print anything, even though there are entries in outofstock that are not in onorder. If I try this outside of a loop, it works and prints every part in the onorder list. ``` for part in onorder: print (part) ``` If I try this outside a loop, it also works and prints triwing, since it is true that triwing is in the outofstock list. ``` if ('triwing' in outofstock) == True: print ("triwing") ``` However, the program in the for loop returns nothing. What am I missing?
2018/02/13
[ "https://Stackoverflow.com/questions/48775587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6713690/" ]
```
for part in onorder:
    if (part in onorder) == False
    ...
```

This does not make sense. Since you are iterating over exactly the elements of `onorder`, you will never get a `part` that is not in `onorder`. Therefore, it is no surprise that the print statement is never executed.
Doh! The appropriate code was

```
for part in outofstock:
    if (part not in onorder):
        print (part)
```

This way it prints my out-of-stock items which I need to order, unless they were already on order.

I can't believe I overly complicated this for no good reason. Thank you so much for pointing out where I had gone wrong. This was such a dumb question in hindsight.
16,194
2,286,633
I have a basic grasp of XML and Python and have been using minidom with some success. I have run into a situation where I am unable to get the values I want from an XML file. Here is the basic structure of the pre-existing file.

```
<localization>
  <b n="Stats">
    <l k="SomeStat1">
      <v>10</v>
    </l>
    <l k="SomeStat2">
      <v>6</v>
    </l>
  </b>
  <b n="Levels">
    <l k="Level1">
      <v>Beginner Level</v>
    </l>
    <l k="Level2">
      <v>Intermediate Level</v>
    </l>
  </b>
</localization>
```

There are about 15 different `<b>` tags with dozens of children. What I'd like to do, given a level number (e.g. 1), is find the `<v>` node for the corresponding level. I just have no idea how to go about this.
2010/02/18
[ "https://Stackoverflow.com/questions/2286633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/224476/" ]
``` #!/usr/bin/python from xml.dom.minidom import parseString xml = parseString("""<localization> <b n="Stats"> <l k="SomeStat1"> <v>10</v> </l> <l k="SomeStat2"> <v>6</v> </l> </b> <b n="Levels"> <l k="Level1"> <v>Beginner Level</v> </l> <l k="Level2"> <v>Intermediate Level</v> </l> </b> </localization>""") level = 1 blist = xml.getElementsByTagName('b') for b in blist: if b.getAttribute('n') == 'Levels': llist = b.getElementsByTagName('l') l = llist.item(level) v = l.getElementsByTagName('v') print v.item(0).firstChild.nodeValue; #prints Intermediate Level ```
```
# Note: this parses the file by naive string splitting rather than using an XML parser.
level = "Level"+raw_input("Enter level number: ")
content= open("xmlfile").read()
data= content.split("</localization>")
for item in data:
    if "localization" in item:
       s = item.split("</b>")
       for i in s:
           if """<b n="Levels">""" in i:
               for c in i.split("</l>"):
                   if "<l" in c and level in c:
                       for v in c.split("</v>"):
                           if "<v>" in v:
                               print v[v.index("<v>")+3:]
```
16,196
48,275,466
I was trying to install [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-install-macos.html) on Mac but was facing some challenges, as the aws command was unable to parse the credential file. So I decided to re-install the whole thing, but I am facing some issues here again.

I am trying `pip uninstall awscli`, which says

```
Cannot uninstall requirement awscli, not installed
```

So, I try `pip3 install awscli --upgrade --user`, which gives me this:

```
You are using pip version 6.0.8, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Requirement already up-to-date: awscli in ./Library/Python/3.5/lib/python/site-packages
Requirement already up-to-date: rsa<=3.5.0,>=3.1.2 in ./Library/Python/3.5/lib/python/site-packages (from awscli)
Requirement already up-to-date: docutils>=0.10 in ./Library/Python/3.5/lib/python/site-packages (from awscli)
Requirement already up-to-date: PyYAML<=3.12,>=3.10 in ./Library/Python/3.5/lib/python/site-packages (from awscli)
Requirement already up-to-date: colorama<=0.3.7,>=0.2.5 in ./Library/Python/3.5/lib/python/site-packages (from awscli)
Requirement already up-to-date: botocore==1.8.29 in ./Library/Python/3.5/lib/python/site-packages (from awscli)
Requirement already up-to-date: s3transfer<0.2.0,>=0.1.12 in ./Library/Python/3.5/lib/python/site-packages (from awscli)
Requirement already up-to-date: pyasn1>=0.1.3 in ./Library/Python/3.5/lib/python/site-packages (from rsa<=3.5.0,>=3.1.2->awscli)
Requirement already up-to-date: python-dateutil<3.0.0,>=2.1 in ./Library/Python/3.5/lib/python/site-packages (from botocore==1.8.29->awscli)
Requirement already up-to-date: jmespath<1.0.0,>=0.7.1 in ./Library/Python/3.5/lib/python/site-packages (from botocore==1.8.29->awscli)
Requirement already up-to-date: six>=1.5 in ./Library/Python/3.5/lib/python/site-packages (from python-dateutil<3.0.0,>=2.1->botocore==1.8.29->awscli)
```

Not sure what to do.
2018/01/16
[ "https://Stackoverflow.com/questions/48275466", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1471314/" ]
You run **pip3** `install awscli` but **pip** `uninstall awscli`. Shouldn't it be **pip3** `uninstall awscli`?
I had a similar issue. And I used the following command to fix it. ``` pip3 install --no-cache-dir awscli==1.14.39 ```
16,200
52,977,914
I'm trying to segment the numbers and/or characters in the following image and then convert each individual number/character to text using OCR:

[![enter image description here](https://i.stack.imgur.com/rWMEa.png)](https://i.stack.imgur.com/rWMEa.png)

This is the code (in Python) used:

```
new, contours, hierarchy = cv2.findContours(gray, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) 
digitCnts = []
final = gray.copy()
# loop over the digit area candidates
for c in contours:
    (x, y, w, h) = cv2.boundingRect(c)
    # if the contour is sufficiently large, it must be a digit
    if (w >= 20 and w <= 290) and h >= (gray.shape[0]>>1)-15:
        x1 = x+w
        y1 = y+h
        digitCnts.append([x,x1,y,y1])
        #print(x,x1,y,y1)
        # Drawing the selected contour on the original image
        cv2.rectangle(final,(x,y),(x1,y1),(0, 255, 0), 2)

plt.imshow(final, cmap=cm.gray, vmin=0, vmax=255)
```

I get the following output:

[![enter image description here](https://i.stack.imgur.com/jJOHY.png)](https://i.stack.imgur.com/jJOHY.png)

You can see that all digits are detected correctly except the middle 2, where only the top part has a bounding box on it rather than the whole digit. I cannot figure out why only this one is not detected correctly, especially since it is similar to the others. Any idea how to resolve this?
2018/10/24
[ "https://Stackoverflow.com/questions/52977914", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1261829/" ]
As far as I know, most OpenCV methods for binary images operate on `white objects on a black background`.

Src:

[![enter image description here](https://i.stack.imgur.com/fc1Ld.png)](https://i.stack.imgur.com/fc1Ld.png)

Threshold-INV and morph-open:

[![enter image description here](https://i.stack.imgur.com/oIktF.png)](https://i.stack.imgur.com/oIktF.png)

Filter by height and draw on the src:

[![enter image description here](https://i.stack.imgur.com/OXMVp.png)](https://i.stack.imgur.com/OXMVp.png)

---

```
#!/usr/bin/python3
# 2018/10/25 08:30 
import cv2
import numpy as np

# (1) src
img = cv2.imread( "car.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# (2) threshold-inv and morph-open 
th, threshed = cv2.threshold(gray, 100, 255, cv2.THRESH_OTSU|cv2.THRESH_BINARY_INV)
morphed = cv2.morphologyEx(threshed, cv2.MORPH_OPEN, np.ones((2,2)))

# (3) find and filter contours, then draw on src 
cnts = cv2.findContours(morphed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

nh, nw = img.shape[:2]
for cnt in cnts:
    x,y,w,h = bbox = cv2.boundingRect(cnt)
    if h < 0.3 * nh:
        continue
    cv2.rectangle(img, (x,y), (x+w, y+h), (255, 0, 255), 1, cv2.LINE_AA)

cv2.imwrite("dst.png", img)
cv2.imwrite("morphed.png", morphed)
```
Your image is a bit noisy, therefore binarizing it would do the trick. ``` cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY, gray) new, contours, hierarchy = cv2.findContours(gray, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE) # cv2.drawContours(gray, contours, -1, 127, 5) digitCnts = [] final = gray.copy() # loop over the digit area candidates for c in contours: (x, y, w, h) = cv2.boundingRect(c) # if the contour is sufficiently large, it must be a digit if (w >= 20 and w <= 290) and h >= (gray.shape[0]>>1)-15: x1 = x+w y1 = y+h digitCnts.append([x,x1,y,y1]) #print(x,x1,y,y1) # Drawing the selected contour on the original image cv2.rectangle(final,(x,y),(x1,y1),(0, 255, 0), 2) ``` [![enter image description here](https://i.stack.imgur.com/BzfDJ.png)](https://i.stack.imgur.com/BzfDJ.png)
16,201
9,101,800
So I've been experimenting with numpy and matplotlib and have stumbled across a bug when running Python from the emacs inferior shell.

When I send the py file to the shell interpreter, I can run commands after the code has executed. The command prompt ">>>" appears fine. However, after I invoke a matplotlib show command on a plot, the shell just hangs with the command prompt not showing.

```
>>> plt.plot(x,u_k[1,:]);
[<matplotlib.lines.Line2D object at 0x0000000004A9A358>]
>>> plt.show();
```

I am running the traditional CPython implementation, under emacs 23.3 with Fabian Gallina's Python python.el v. 0.23.1 on Win7.

A similar question has been raised here for the IPython platform:
[running matplotlib or enthought.mayavi.mlab from a py-shell inside emacs on windows](https://stackoverflow.com/questions/4701607/running-matplotlib-or-enthought-mayavi-mlab-from-a-py-shell-inside-emacs-on-wind)

**UPDATE: I have duplicated the problem on a fresh installation of Win 7 x64 with the typical Python 2.7.2 binaries available from the python website and with numpy 1.6.1 and matplotlib 1.1.0 on emacs 23.3 and 23.4 for Windows.** There must be a bug somewhere in the emacs shell.
2012/02/01
[ "https://Stackoverflow.com/questions/9101800", "https://Stackoverflow.com", "https://Stackoverflow.com/users/752726/" ]
I think there are two ways to do it. 1. Use ipython. Then you can use `-pylab` option. I don't use Fabian Gallina's python.el, but I guess you will need something like this: ``` (setq python-shell-interpreter-args "-pylab") ``` Please read the documentation of python.el. 2. You can manually activate interactive mode by [ion](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.ion) ``` >>> from matplotlib import pyplot as plt >>> plt.ion() >>> plt.plot([1,2,3]) [<matplotlib.lines.Line2D object at 0x20711d0>] >>> ```
I think this might have something to do with the behavior of the show function:

> 
> [matplotlib.pyplot.show(\*args, \*\*kw)](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.show)
> 
> 
> When running in ipython with its pylab mode, display all figures and
> return to the ipython prompt.
> 
> 
> In non-interactive mode, display all figures and block until the
> figures have been closed; in interactive mode it has no effect unless
> figures were created prior to a change from non-interactive to
> interactive mode (not recommended). In that case it displays the
> figures but does not block.
> 
> 
> A single experimental keyword argument, block, may be set to True or
> False to override the blocking behavior described above.
> 
> 

I think you're running into the blocking behavior mentioned above, which would result in the shell hanging. Perhaps try running the function as: `plt.show(block = False)` and see if it produces the output you expect. If this is still giving you trouble, let me know and I will try to reproduce your setup locally.
16,202
58,498,100
I have a complicated nested numpy array which contains lists. I am trying to convert the elements to float32. However, it gives me the following error:

```
ValueError                                Traceback (most recent call last)
<ipython-input-225-22d2824961c2> in <module>
----> 1 x_train_single.astype(np.float32)

ValueError: setting an array element with a sequence.
```

Here is the code and sample input:

```
x_train_single.astype(np.float32)

array([[ list([[[0, 0, 0, 0, 0, 0]], [-1.0], [0]]),
         list([[[0, 0, 0, 0, 0, 0], [173, 8, 172, 0, 0, 0]], [-1.0], [0]]) ]])
```
2019/10/22
[ "https://Stackoverflow.com/questions/58498100", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1584253/" ]
As your array contains lists of different sizes and nesting depths, I doubt that there is a simple or fast solution. Here is a "get-the-job-done-no-matter-what" approach. It comes in two flavors. One creates arrays for leaves, the other one lists. ``` >>> a array([[list([[[0, 0, 0, 0, 0, 0]], [-1.0], [0]]), list([[[0, 0, 0, 0, 0, 0], [173, 8, 172, 0, 0, 0]], [-1.0], [0]])]], dtype=object) >>> def mkarr(a): ... try: ... return np.array(a,np.float32) ... except: ... return [*map(mkarr,a)] ... >>> def mklst(a): ... try: ... return [*map(mklst,a)] ... except: ... return np.float32(a) ... >>> np.frompyfunc(mkarr,1,1)(a) array([[list([array([[0., 0., 0., 0., 0., 0.]], dtype=float32), array([-1.], dtype=float32), array([0.], dtype=float32)]), list([array([[ 0., 0., 0., 0., 0., 0.], [173., 8., 172., 0., 0., 0.]], dtype=float32), array([-1.], dtype=float32), array([0.], dtype=float32)])]], dtype=object) >>> np.frompyfunc(mklst,1,1)(a) array([[list([[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]], [-1.0], [0.0]]), list([[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [173.0, 8.0, 172.0, 0.0, 0.0, 0.0]], [-1.0], [0.0]])]], dtype=object) ```
If the number of columns is fixed, then:

```
np.array([l.astype(np.float) for l in x_train_single.squeeze()])
```

Note that this removes the redundant dimensions and converts everything into a numpy array.

Before: (1, 1, 1, 11, 6)

After: (11,6)
16,209
18,662,264
From the documentation, urllib.unquote\_plus should replace plus signs with spaces. But when I tried the code below in IDLE for Python 2.7, it did not.

```
>>s = 'http://stackoverflow.com/questions/?q1=xx%2Bxx%2Bxx'
>>urllib.unquote_plus(s)
>>'http://stackoverflow.com/questions/?q1=xx+xx+xx'
```

I also tried something like `urllib.unquote_plus(s).decode('utf-8')`. Is there a proper way to decode the URL component?
2013/09/06
[ "https://Stackoverflow.com/questions/18662264", "https://Stackoverflow.com", "https://Stackoverflow.com/users/251024/" ]
`%2B` is the escape code for a *literal* `+`; it is being unescaped entirely correctly. Don't confuse this with the *URL escaped* `+`, which is the escape character for spaces:

```
>>> s = 'http://stackoverflow.com/questions/?q1=xx+xx+xx'
>>> urllib.parse.unquote_plus(s)
'http://stackoverflow.com/questions/?q1=xx xx xx'
```

`unquote_plus()` only decodes encoded spaces to literal spaces (`'+'` -> `' '`), not encoded `+` symbols (`'%2B'` -> `'+'`).

If you have input to decode that uses `%2B` instead of `+` where you expected spaces, then those input values were perhaps *doubly* quoted, and you'd need to unquote them twice. You'd see `%` escapes encoded too:

```
>>> urllib.parse.quote_plus('Hello world!')
'Hello+world%21'
>>> urllib.parse.quote_plus(urllib.parse.quote_plus('Hello world!'))
'Hello%2Bworld%2521'
```

where `%25` is the quoted `%` character.
Those aren't spaces, those are actual pluses. A space is %20, which in that part of the URL is indeed equivalent to +, but %2B means a literal plus.
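A short sketch of the distinction, using the Python 2 `urllib` from the question:

```
import urllib  # Python 2, as in the question

print(urllib.unquote('%20'))       # ' '   -> %20 is an encoded space
print(urllib.unquote('%2B'))       # '+'   -> %2B is a literal plus
print(urllib.unquote_plus('a+b'))  # 'a b' -> only '+' decodes to a space
```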
16,211
34,495,839
I saw the following coding gif, which depicts a user typing commands (e.g. `import`) and a pop up message would describe the usage for that command. How can I set up something similar?[![gif depicting python shell with automatic code usage](https://i.stack.imgur.com/7OUwv.gif)](https://i.stack.imgur.com/7OUwv.gif)
2015/12/28
[ "https://Stackoverflow.com/questions/34495839", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2636317/" ]
According to the GitHub issues in the repo of that gif, the video was taken using [bpython](http://bpython-interpreter.org).

Source: <https://github.com/tqdm/tqdm/issues/67>
Code editors like [`vim`](http://www.vim.org/) (with [`jedi`](https://github.com/davidhalter/jedi-vim) or [`python-mode`](https://github.com/klen/python-mode.git)) or [`emacs`](https://www.gnu.org/software/emacs/) and integrated development environments like [`pycharm`](https://www.jetbrains.com/pycharm/) can offer the same functionality.
16,212
51,060,433
I coded a jQuery handler with Flask where, on click, it should perform an SQL search and export the dataframe as Excel. The script is:

```
<script type=text/javascript>
    $(function () {
        $('a#export_to_excel').bind('click', function () {
            $.getJSON($SCRIPT_ROOT + ' /api/sanctionsSearch/download', {
                nm: $('input[name="nm"]').val(),
                searchtype: $('select[name="searchtype"]').val()
            }, function (data) {
                $("#download_results").text(data.result);
            });
            return false;
        });
    });
```

However, there was no response in the browser. My Python code is below:

```
from io import BytesIO,StringIO
from flask import render_template, request, url_for, jsonify, redirect, request, Flask, send_file

def index():
    #get the dataframe ready and define as 'data', parameters obtained from form input in html
    name = request.args.get('nm','', type = str)
    type = request.args.get('searchtype','Entity',type = str)

    #function get_entity() to get the dataframe
    #I have checked and the dataframe is functioning properly
    data = get_entity(name,type)

    #check if the dataframe is empty
    if data.empty == True:
        print("its not working bruh...")
        word = "No results to export! Please try again!"
        return jsonify(result = word)

    #store the csv to BytesIO
    proxy = StringIO()
    data.to_csv(proxy)
    mem = BytesIO()
    mem.write(proxy.getvalue().encode('utf-8'))
    mem.seek(0)
    proxy.close()
    print("download starting....")

    #send file
    send_file(mem, as_attachment=True,attachment_filename='Exportresults.csv', mimetype='text/csv')
    word = "Download starting!"
    return jsonify(result = word)
```

Can someone tell me what's wrong with my code? The "download starting..." message was properly printed to the HTML, but the download did not start at all.
2018/06/27
[ "https://Stackoverflow.com/questions/51060433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9979747/" ]
The solution is not ideal, but what I did was add a `window.open(url)` call in the jQuery code, which requests another endpoint; that endpoint then returns the file to the user with `send_file`.
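For illustration, a minimal Flask-side sketch of that approach (the `/download` route and the DataFrame contents are hypothetical, and it assumes a pre-2.0 Flask where the parameter is still called `attachment_filename`, matching the question's code; the jQuery side would simply call `window.open('/download?nm=...')`):

```
from io import BytesIO

import pandas as pd
from flask import Flask, send_file

app = Flask(__name__)

@app.route("/download")
def download():
    # Build the CSV in memory, then return it as a real file download.
    df = pd.DataFrame({"a": [1, 2, 3]})
    mem = BytesIO()
    mem.write(df.to_csv(index=False).encode("utf-8"))
    mem.seek(0)
    return send_file(mem, as_attachment=True,
                     attachment_filename="Exportresults.csv",
                     mimetype="text/csv")
```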
You should return the result of `send_file()`:

```
return send_file()
```
16,213
59,959,629
I've been stuck on this for the last week and I'm fairly lost as to what to do for next steps. I have a Django application that uses a MySQL database. I've deployed it using AWS Elastic Beanstalk, following this tutorial: <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html>

It successfully deployed. However, I keep getting 500 errors when trying to access the site. I've also updated the host value. Here's the error\_log, but I'm not able to deduce much from it.

```
[Tue Jan 28 08:05:34.444677 2020] [suexec:notice] [pid 3125] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Jan 28 08:05:34.460731 2020] [http2:warn] [pid 3125] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive.
[Tue Jan 28 08:05:34.460743 2020] [http2:warn] [pid 3125] AH02951: mod_ssl does not seem to be enabled
[Tue Jan 28 08:05:34.461206 2020] [lbmethod_heartbeat:notice] [pid 3125] AH02282: No slotmem from mod_heartmonitor
[Tue Jan 28 08:05:34.461249 2020] [:warn] [pid 3125] mod_wsgi: Compiled for Python/3.6.2.
[Tue Jan 28 08:05:34.461253 2020] [:warn] [pid 3125] mod_wsgi: Runtime using Python/3.6.8.
[Tue Jan 28 08:05:34.463081 2020] [mpm_prefork:notice] [pid 3125] AH00163: Apache/2.4.41 (Amazon) mod_wsgi/3.5 Python/3.6.8 configured -- resuming normal operations
[Tue Jan 28 08:05:34.463096 2020] [core:notice] [pid 3125] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
[Tue Jan 28 08:06:21.350696 2020] [mpm_prefork:notice] [pid 3125] AH00169: caught SIGTERM, shutting down
[Tue Jan 28 08:06:22.419261 2020] [suexec:notice] [pid 4501] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Jan 28 08:06:22.435310 2020] [so:warn] [pid 4501] AH01574: module wsgi_module is already loaded, skipping
[Tue Jan 28 08:06:22.437572 2020] [http2:warn] [pid 4501] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive.
[Tue Jan 28 08:06:22.437582 2020] [http2:warn] [pid 4501] AH02951: mod_ssl does not seem to be enabled
[Tue Jan 28 08:06:22.438217 2020] [lbmethod_heartbeat:notice] [pid 4501] AH02282: No slotmem from mod_heartmonitor
[Tue Jan 28 08:06:22.438283 2020] [:warn] [pid 4501] mod_wsgi: Compiled for Python/3.6.2.
[Tue Jan 28 08:06:22.438292 2020] [:warn] [pid 4501] mod_wsgi: Runtime using Python/3.6.8.
[Tue Jan 28 08:06:22.440572 2020] [mpm_prefork:notice] [pid 4501] AH00163: Apache/2.4.41 (Amazon) mod_wsgi/3.5 Python/3.6.8 configured -- resuming normal operations [Tue Jan 28 08:06:22.440593 2020] [core:notice] [pid 4501] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND' [Tue Jan 28 08:08:03.028260 2020] [mpm_prefork:notice] [pid 4501] AH00169: caught SIGTERM, shutting down Exception ignored in: <bound method BaseEventLoop.__del__ of <_UnixSelectorEventLoop running=False closed=False debug=False>> Traceback (most recent call last): File "/usr/lib64/python3.6/asyncio/base_events.py", line 526, in __del__ NameError: name 'ResourceWarning' is not defined [Tue Jan 28 08:08:04.152017 2020] [suexec:notice] [pid 4833] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Tue Jan 28 08:08:04.168082 2020] [so:warn] [pid 4833] AH01574: module wsgi_module is already loaded, skipping [Tue Jan 28 08:08:04.170245 2020] [http2:warn] [pid 4833] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive. [Tue Jan 28 08:08:04.170256 2020] [http2:warn] [pid 4833] AH02951: mod_ssl does not seem to be enabled [Tue Jan 28 08:08:04.170793 2020] [lbmethod_heartbeat:notice] [pid 4833] AH02282: No slotmem from mod_heartmonitor [Tue Jan 28 08:08:04.170852 2020] [:warn] [pid 4833] mod_wsgi: Compiled for Python/3.6.2. [Tue Jan 28 08:08:04.170856 2020] [:warn] [pid 4833] mod_wsgi: Runtime using Python/3.6.8. [Tue Jan 28 08:08:04.173067 2020] [mpm_prefork:notice] [pid 4833] AH00163: Apache/2.4.41 (Amazon) mod_wsgi/3.5 Python/3.6.8 configured -- resuming normal operations [Tue Jan 28 08:08:04.173089 2020] [core:notice] [pid 4833] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND' [Tue Jan 28 08:25:28.783035 2020] [mpm_prefork:notice] [pid 4833] AH00169: caught SIGTERM, shutting down [Tue Jan 28 08:25:32.859422 2020] [suexec:notice] [pid 5573] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Tue Jan 28 08:25:32.875584 2020] [so:warn] [pid 5573] AH01574: module wsgi_module is already loaded, skipping [Tue Jan 28 08:25:32.877541 2020] [http2:warn] [pid 5573] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive. [Tue Jan 28 08:25:32.877552 2020] [http2:warn] [pid 5573] AH02951: mod_ssl does not seem to be enabled [Tue Jan 28 08:25:32.878103 2020] [lbmethod_heartbeat:notice] [pid 5573] AH02282: No slotmem from mod_heartmonitor [Tue Jan 28 08:25:32.878167 2020] [:warn] [pid 5573] mod_wsgi: Compiled for Python/3.6.2. [Tue Jan 28 08:25:32.878174 2020] [:warn] [pid 5573] mod_wsgi: Runtime using Python/3.6.8. 
[Tue Jan 28 08:25:32.880448 2020] [mpm_prefork:notice] [pid 5573] AH00163: Apache/2.4.41 (Amazon) mod_wsgi/3.5 Python/3.6.8 configured -- resuming normal operations [Tue Jan 28 08:25:32.880477 2020] [core:notice] [pid 5573] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND' [Wed Jan 29 01:11:07.166917 2020] [mpm_prefork:notice] [pid 5573] AH00169: caught SIGTERM, shutting down Exception ignored in: <bound method BaseEventLoop.__del__ of <_UnixSelectorEventLoop running=False closed=False debug=False>> Traceback (most recent call last): File "/usr/lib64/python3.6/asyncio/base_events.py", line 526, in __del__ NameError: name 'ResourceWarning' is not defined [Wed Jan 29 01:11:08.333254 2020] [suexec:notice] [pid 28706] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Wed Jan 29 01:11:08.349662 2020] [so:warn] [pid 28706] AH01574: module wsgi_module is already loaded, skipping [Wed Jan 29 01:11:08.351804 2020] [http2:warn] [pid 28706] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive. [Wed Jan 29 01:11:08.351813 2020] [http2:warn] [pid 28706] AH02951: mod_ssl does not seem to be enabled [Wed Jan 29 01:11:08.352386 2020] [lbmethod_heartbeat:notice] [pid 28706] AH02282: No slotmem from mod_heartmonitor [Wed Jan 29 01:11:08.352447 2020] [:warn] [pid 28706] mod_wsgi: Compiled for Python/3.6.2. [Wed Jan 29 01:11:08.352451 2020] [:warn] [pid 28706] mod_wsgi: Runtime using Python/3.6.8. [Wed Jan 29 01:11:08.354766 2020] [mpm_prefork:notice] [pid 28706] AH00163: Apache/2.4.41 (Amazon) mod_wsgi/3.5 Python/3.6.8 configured -- resuming normal operations [Wed Jan 29 01:11:08.354783 2020] [core:notice] [pid 28706] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND' ``` If anyone could provide some insight/help/further steps, it would be greatly appreciated. I can provide more logs, etc anything else that would help. Thank you.
2020/01/29
[ "https://Stackoverflow.com/questions/59959629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3310212/" ]
This should be sufficient to hide all but one sheet.

```
function hideAllSheetsExceptThisOne(sh) {
  var sheetName = sh || 'Student Report'; // default for testing
  var ss = SpreadsheetApp.getActive();
  var sheets = ss.getSheets();
  for (var i = 0; i < sheets.length; i++) {
    if (sheets[i].getName() != sheetName) {
      sheets[i].hideSheet();
    }
  }
  SpreadsheetApp.flush();
}
```
I had to do something similar earlier this year, and this code proved to be very helpful. <https://gist.github.com/ixhd/3660885>
16,214
67,111,664
I created a little app with Python as the backend and React as the frontend. I receive some data from the frontend, and I want to eliminate the first 20 words of the text I receive if a condition is satisfied.

```
@app.route("/translate", methods=["GET", "POST"])
def translate():
    prompt = request.json["prompt"]
    max_tokens=50
    prompt = re.sub(r"^(?:.+?\b\s+?\b){20}", "", prompt)
    response = translation_response(prompt)

    return {'text': response}
```

How can I translate **eliminate the first 20 words** of the `prompt` variable into Python code? Thanks a lot in advance....
2021/04/15
[ "https://Stackoverflow.com/questions/67111664", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14880010/" ]
```
import pandas as pd
```

Use the `to_datetime()` method to convert your date column from string to datetime:

```
df['Date']=pd.to_datetime(df['Date'])
```

Finally, use the `apply()` method:

```
df['comm0']=df['Date'].apply(lambda x:1 if x==pd.to_datetime('2021-01-07') else 0)
```

Or, as suggested by @anky, simply use:

```
df['comm0']=pd.to_datetime(df['Date']).eq('2021-01-07').astype(int)
```

Or, if you are familiar with `numpy`, you can also use the following after converting your Date column to datetime:

```
import numpy as np
df['comm0']=np.where(df['Date']=='2021-01-07',1,0)
```
It's a type problem: df['Date'] holds strings, not datetime objects, so when you compare each element with '2021-01-07' (another string) they differ because of the time information (00:00:00). As a solution, you can convert the elements to datetime, as follows:

```
def int_21(x):
    if x == pd.to_datetime('2021-01-07'):
        return '1'
    else:
        return '0'

df['Date'] = pd.to_datetime(df['Date'])
df['comm0'] = df['Date'].apply(int_21)
```

Or you can keep using string objects, but then the element you compare against must have the same format as the dates:

```
def int_21(x):
    if x == '2021-01-07 00:00:00':
        return '1'
    else:
        return '0'
```
16,215
39,815,551
I am trying to make a program in python that will accept a user's input and check if it is a Kaprekar number. I'm still a beginner, and have been having a lot of issues, but my main issue now that I can't seem to solve is how I would add up all possibilities in a list, with only two variables. I'm probably not explaining it very well so here is an example: I have a list that contains the numbers `['2', '0', '2', '5']`. How would I make python do `2 + 025`, `20 + 25` and `202 + 5`? It would be inside an if else statement, and as soon as it would equal the user inputted number, it would stop. ([Here](http://pastebin.com/Kg9bQq47) is what the entire code looks like if it helps- where it currently says `if 1 == 0:`, it should be adding them up.)
2016/10/02
[ "https://Stackoverflow.com/questions/39815551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Say you start with ``` a = ['2', '0', '2', '5'] ``` Then you can run ``` >>> [(a[: i], a[i: ]) for i in range(1, len(a))] [(['2'], ['0', '2', '5']), (['2', '0'], ['2', '5']), (['2', '0', '2'], ['5'])] ``` to obtain all the possible contiguous splits. If you want to process it further, you can change it to numbers via ``` >>> [(int(''.join(a[: i])), int(''.join(a[i: ]))) for i in range(1, len(a))] [(2, 25), (20, 25), (202, 5)] ``` or add them up ``` >>> [int(''.join(a[: i])) + int(''.join(a[i: ])) for i in range(1, len(a))] [27, 45, 207] ```
Not a direct answer to your question, but you can write an expression to determine whether a number, N, is a Kaprekar number more concisely.

```
>>> N=45
>>> digits=str(N**2)
>>> Kaprekar=any([N==int(digits[:_])+int(digits[_:]) for _ in range(1,len(digits))])
>>> Kaprekar
True
```
16,216
8,827,304
I'm using Plone v4.1.2, and I'd like to know if there is a way to include more than one author in the byline of a page. I have two authors listed in ownership, but only one author is listed in the byline. I'd like the byline to look something like this:

by First Author and Second Author — last modified Jan 11, 2012 01:53 PM — History

UPDATE - Thanks everyone for your replies. I managed to bungle my way through this (I've never used TAL before). I edited plone.belowcontenttitle.documentbyline as suggested by Giaccamo, and managed to learn a bit about TAL along the way. Here is the code that does what I needed (this replaces the existing tal:creator construct):

```
<span>
  by
  <span class="documentCreators"
        tal:condition="context/Creators"
        tal:repeat="creator context/Creators"
        i18n:translate="text_creators">
      <span tal:define="cond1 repeat/creator/start; cond2 repeat/creator/end"
            tal:condition="python: not cond1 and not cond2"
            >, </span>
      <span tal:define="cond1 repeat/creator/start; cond2 repeat/creator/end"
            tal:condition="python: not cond1 and cond2"
            > and </span>
      <tal:i18n i18n:translate="label_by_author">
          <a href="#"
             tal:attributes="href string:${context/@@plone_portal_state/navigation_root_url}/author/${creator}"
             tal:content="creator"
             tal:omit-tag="python:view.author() is None"
             i18n:name="author">Roland Barthes</a>
      </tal:i18n>
  </span>
</span>
```

This puts the userid on the byline instead of the full name. I tried to get the full name, but after some time without success I decided I could live with the userid.
2012/01/11
[ "https://Stackoverflow.com/questions/8827304", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1144225/" ]
In order to show more than one author you'll need a little bit of coding: that piece of the page is rendered by a `viewlet`, and the specific viewlet is called `plone.belowcontenttitle.documentbyline`. You can use [z3c.jbot](http://pypi.python.org/pypi/z3c.jbot) to override the viewlet template; take a look at [this howto](https://weblion.psu.edu/trac/weblion/wiki/z3c.jbot) for usage. Another option is to customize the template through-the-web following [this tutorial](http://plone.org/documentation/manual/theme-reference/elements/visibleelements/plone.belowcontenttitle.documentbyline).
You could use the Contributors field instead of the Owners field; contributors are listed by default in the docByLine. HTH, i
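If you go this route with the customized template shown in the question, the change is essentially swapping `context/Creators` for `context/Contributors` inside the `tal:repeat`. This is a sketch based on the question's own snippet, not tested against a specific Plone version.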
16,217
65,433,038
So I'm trying to run the Django development server in a container, but I can't access it through my browser. I have 2 containers on the same Docker network, one with Postgres and the other with Django. I managed to ping both containers and connect the two of them successfully, and `./manage.py runserver` runs OK, but I can't `curl` it or open it in a browser. Here is my Django Dockerfile:

```
FROM alpine:latest
COPY ./requirements.txt .
ADD ./parking/ /parking
RUN apk add --no-cache --virtual .build-deps python3-dev gcc py3-pip postgresql-dev py3-virtualenv musl-dev libc-dev linux-headers
RUN virtualenv /.env
RUN /.env/bin/pip install -r /requirements.txt
WORKDIR /parking
EXPOSE 8000 5432
```

The Postgres container I pulled from Docker Hub. I ran Django with `docker run --name=django --network=app -p 127.4.3.1:6969:8000 -it dev/django:1.0` and I ran Postgres with `docker run --name=some-postgres --network=app -p 127.2.2.2:6969:5432 -e POSTGRES_PASSWORD=123 -e POSTGRES_DB=parking postgres`. Any help would be great. Thank you
2020/12/24
[ "https://Stackoverflow.com/questions/65433038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11386561/" ]
Think of it this way: Your React application is the U-Haul truck that delivers **everything** from the Web Server (Back-End) to the Browser (Front-End) ![](https://i.imgur.com/tMRFrMkm.png) Now you say you want everything wrapped in a (native) Web Component: `<move-house></move-house>` It is do-able, but you as the Developer have to develop all **dependencies** It starts by **fully** understanding what React is and does, so you can wrap **all** its behaviour. **Unlike** other Frameworks (Angular, Vue, Svelte) React has no "*Make-it-a-Web-Component*" option, **because** React, with its virtual DOM, is a totally different (and rather outdated) beast that doesn't comply with modern Web Standards. (today [Dec2020] React only scores **71%** on the Custom Elements Everywhere test) See: <https://custom-elements-everywhere.com/libraries/react/results/results.html> for what you as the developer have to fix, because React does not do that for you Some say React compared to modern JavaScript technologies, Frameworks, Web Component libraries like Lit and Stencil or Svelte, is more like: ![](https://i.imgur.com/HsKLccom.png)
It is possible in React using Direflow. <https://direflow.io/>
16,218
19,037,928
I am using python + beautifulsoup to parse html. My problem is that I have a variable amount of text items. In this case, for example, I want to extract 'Text 1', 'Text 2', ... 'Text 4'. In other webpages, there may be only 'Text 1' or possibly two, etc. So it changes. If the 'Text x's were contained in a tag, it would make my life easier. But they are not. I can access them using next and previous (or maybe nextSibling and previousSibling), but off the top of my head I don't know how to get all of them. The idea would be to (assuming the max. number I would ever encounter would be four) write the 'Text 1' to a file, then proceed all the way to 'Text 4'. That is in this case. In the case where there were only 'Text 1', I would write 'Text 1' to the file, and then just have blanks for 2-4. Any suggestions on what I should do? ``` <div id="DIVID" style="display: block; margin-left: 1em;"> <b>Header 1</b> <br/> Text 1 <br/> Text 2 <br/> Text 3 <br/> Text 4 <br/> <b>Header 2</b> </div> ``` While I'm at it, I have a not-so-related question. Say I have a website that has a variable number of links that all link to html exactly like what I have above. This is not what this application is, but think craigslist - there are a number of links on a central page. I need to be able to access each of these pages in order to do my parsing. What would be a good approach to do this? Thanks! extra: The next webpage might look like this: ``` <div id="DIVID2" style="display: block; margin-left: 1em;"> <b>Header 1</b> <br/> Different Text 1 <br/> Different Text 2 <br/> <b>Header 2</b> </div> ``` Note the differences: 1. DIVID is now DIVID2. I can figure out the ending on DIVID based on other parsing on pages. This is not a problem. 2. I only have two items of text instead of four. 3. The text now is different. Note the key similarity: 1. Header 1 and Header 2 are the same. These don't change.
2013/09/26
[ "https://Stackoverflow.com/questions/19037928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2049545/" ]
You might try something like this:

```
>>> test ="""<b>Header 1</b>
<br/>
 Text 1
<br/>
 Text 2
<br/>
 Text 3
<br/>
 Text 4
<br/>
<b>Header 2</b>"""

>>> soup = BeautifulSoup(test)
>>> test = soup.find('b')
>>> desired_text = [x.strip() for x in str(test.parent).split('<br />')]
>>> desired_text
['<b>Header 1</b>', 'Text 1', 'Text 2', 'Text 3', 'Text 4', '<b>Header 2</b>']
```

Now you just need to separate by your 'Header' blocks, which I think is doable, and I believe this may get you started in the right direction.

As for your other question, you need to assemble a list of links and then iterate through them, opening each one individually and processing it how you will. This is a much broader question, though, so you should attempt some stuff and come back to refine what you have and ask a new question once you need some help on a specific issue.

---

Explanation on the `desired_text` line of code:

```
[x.strip() for x in str(test.parent).split('<br />')]
```

This takes my "test" node that I assigned above and grabs the parent. By turning it into a string, I can "split" on the `<br>` tags, which makes those tags disappear and separates all the text we want separated. This creates a list where each list-item has the text we want and some '\n's. Finally, what is probably most confusing is the list comprehension syntax, which looks like this:

```
some_list = [item for item in some_iterable]
```

This simply produces a list of "item"s all taken from "some_iterable". In my list comprehension, I'm running through the list, taking each item in the list, and simply stripping off a newline (the `x.strip()` part). There are many ways to do this, by the way.
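For the separation step, here is a hedged sketch that assumes headers always keep their `<b>` tag after the split, as in the sample above:

```
# Split the flat list into (header, texts) groups: a new group starts
# whenever an item still carries its <b> tag.
groups = []
for item in desired_text:
    if item.startswith('<b>'):
        groups.append((item, []))      # a header opens a new group
    elif item and groups:
        groups[-1][1].append(item)     # plain text joins the last group
# groups -> [('<b>Header 1</b>', ['Text 1', 'Text 2', 'Text 3', 'Text 4']),
#            ('<b>Header 2</b>', [])]
```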
Here is a different solution. nextSibling can get parts of the structured document that follow a named tag. ``` from BeautifulSoup import BeautifulSoup text=""" <b>Header 1</b> <br/> Text 1 <br/> Text 2 <br/> Text 3 <br/> Text 4 <br/> <b>Header 2</b> """ soup = BeautifulSoup(text) for br in soup.findAll('br'): following = br.nextSibling print following.strip() ```
16,219
10,899,197
```
#include <ext/hash_map>

using namespace std;

class hash_t : public __gnu_cxx::hash_map<const char*, list<time_t> > { };

hash_t hash;
...
```

I'm having some problems using this hash_map. The const char* I'm using as a key is always a 12-digit number with this format 58412xxxxxxx. I know there are 483809 different numbers, so that should be the hash_map size after inserting everything, but I'm only getting 193 entries.

```
hash_t::iterator it = hash.find(origen.c_str());

if (it != hash.end()) {
    //Found
    x++;
    (*it).second.push_front(fecha);
}
else {
    //Not found
    y++;
    list<time_t> lista(1, fecha);
    hash.insert(make_pair(origen.c_str(), lista));
}
```

The same procedure works perfectly using Python dictionaries (I'm getting the correct number of entries) but not even close in C++. Is it possible that since every key begins with 58412 (actually almost every key, but not all of them, and that's the reason I don't want to chop those 5 chars), I'm getting a lot of collisions?
2012/06/05
[ "https://Stackoverflow.com/questions/10899197", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1430913/" ]
`const char*` is not good for a key, since you now have pointer comparison instead of string comparison (also, you probably have dangling pointers, the return value of `c_str()` is not usable long-term). Use `hash_map<std::string, list<time_t> >` instead.
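One caveat worth adding: if you stay on the old `__gnu_cxx::hash_map`, it may not ship a `hash` specialization for `std::string`, so you might need to supply a small hash functor yourself (for instance one that forwards `s.c_str()` to the built-in `const char*` hasher), or sidestep the issue entirely with `std::map<std::string, std::list<time_t> >`, which needs no hash at all.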
If your key is `char*`, you are comparing not the strings but the pointers, which makes your hash map behave differently from what you expect. Consider using `const std::string` for the keys, so they are compared using lexicographical ordering.
16,221
39,599,596
I'm writing a simple calculator program that will let a user add a list of integers together, as a kind of introduction to the syntax of Python. I want the program to allow the user to add as many numbers together as they want. My error is:

```
Traceback (most recent call last):
  File "Calculator.py", line 17, in <module>
    addition = sum(inputs)
TypeError: unsupported operand type(s) for +: 'int' and 'str'
```

My code is:

```
#declare variables
inputs = []
done = False

#while loop for inputting numbers
while done == False:
    value = raw_input()

    #escape loop if user enters done
    if value == "Done":
        print inputs
        done = True
    else:
        inputs.append(value)
        addition = sum(inputs)
        print addition
```
2016/09/20
[ "https://Stackoverflow.com/questions/39599596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6854420/" ]
[`raw_input`](https://docs.python.org/2/library/functions.html#raw_input) returns strings, not numbers. [`sum`](https://docs.python.org/2/library/functions.html#sum) operates only on numbers. You can convert each item to an int as you add it to the list: `inputs.append(int(value))`. If you use `float` rather than `int` then non-integer numbers will work too. In either case, this will produce an error if the user enters something that is neither `Done` nor an integer. You can use `try`/`except` to deal with that, but that's probably out of the scope of this question.
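For completeness, a short sketch of that guard, reusing the variable names from the question:

```
value = raw_input()
if value == "Done":
    print inputs
    done = True
else:
    try:
        inputs.append(int(value))   # convert before storing
    except ValueError:
        print "Please enter a whole number or Done"
```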
When using `raw_input()` you're storing a string in `value`. Convert it to an int before appending it to your list, e.g. ``` inputs.append( int( value ) ) ```
16,222
63,640,435
SSO is not enabled for the bot on the Teams channel. I am developing a bot with the Bot Framework and Azure Bot Service, using Python 3.7. I need user authentication against Microsoft to use the Graph API, etc. I previously used the [samples](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/python) 18.bot-authentication and 24.bot-authentication-msgraph successfully, together with this [guide](https://learn.microsoft.com/en-us/azure/bot-service/bot-builder-authentication?view=azure-bot-service-4.0&tabs=aadv2%2Cpython), but now I get the error “SSO is not enabled for bot”. I created new certificates and a new server with the bot, using the source code of sample 18.bot-authentication. I created a new channel in Azure and tried to log in from Teams, but I have the same problem. In the Bot Framework Emulator and in the web-chat test, both authentication samples work; only Teams wants SSO. Any tips? Thank you
2020/08/28
[ "https://Stackoverflow.com/questions/63640435", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13382091/" ]
Please check the following articles:

<https://learn.microsoft.com/en-us/power-virtual-agents/advanced-end-user-authentication>

<https://learn.microsoft.com/en-us/power-virtual-agents/configuration-end-user-authentication>

<https://learn.microsoft.com/en-us/power-virtual-agents/publication-add-bot-to-microsoft-teams>

The second article explains step by step how you can set up a PVA bot for use in Microsoft Teams. Please be aware of this part:

"Currently, if your bot supports end-user authentication, the user will not be able to explicitly sign out. This will fail the Microsoft Teams AppSource certification if you are publishing your bot in the Seller Dashboard. This does not apply to personal or tenant usage of the bot. Learn more at Publish your Microsoft Teams app and AppSource Validation Policy."
Please refer to the Teams-Auth [sample](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/python/46.teams-auth) and the [documentation](https://learn.microsoft.com/en-us/microsoftteams/platform/bots/how-to/authentication/add-authentication?tabs=dotnet%2Cdotnet-sample) which helps you get started with authenticating a bot in Microsoft Teams as Teams behaves slightly differently than other channels. Presently, you can enable [Single Sign-On(SSO)](https://learn.microsoft.com/en-us/microsoftteams/platform/tabs/how-to/authentication/auth-aad-sso) in a custom tab. Microsoft Teams is currently working on the feature to enable SSO for bots.
16,223
24,136,733
```
process_name = "CCC.exe"

for proc in psutil.process_iter():
    if proc.name == process_name:
        print ("have")
    else:
        print ("Dont have")
```

I know for a fact that CCC.exe is running. I tried this code with both Python 2.7 and 3.4, and I have imported psutil as well. However, even though the process is there, it keeps printing "Dont have".
2014/06/10
[ "https://Stackoverflow.com/questions/24136733", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2016977/" ]
Here is the modified version that worked for me on Windows 7 with Python 2.7.

You were doing it the wrong way here: `if proc.name == process_name:`. Try `print proc.name` and you'll see why your code didn't work as you were expecting.

Code:

```
import psutil

process_name = "System"

for proc in psutil.process_iter():
    process = psutil.Process(proc.pid)# Get the process info using PID
    pname = process.name()# Here is the process name
    #print pname
    if pname == process_name:
        print ("have")
    else:
        print ("Dont have")
```

[Here](https://pypi.python.org/pypi?%3aaction=display&name=psutil#downloads) are some examples of how to use psutil. I just read them and figured out this solution; maybe there is a better one. I hope it was helpful.
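For what it's worth, on psutil versions where `name` is a method, the whole check collapses to a one-liner (a sketch, not tested on every psutil release):

```
import psutil

# any() stops at the first matching process
running = any(p.name() == "CCC.exe" for p in psutil.process_iter())
print("have" if running else "Dont have")
```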
I solved it by using WMI instead of psutil. <https://pypi.python.org/pypi/WMI/> Install it on Windows.

```
import wmi

c = wmi.WMI()
for process in c.Win32_Process():
    if "a" in process.Name:
        print(process.ProcessId, process.Name)
```
16,224
57,640,451
I'm trying to iterate over each row in a pandas dataframe named 'cd'. If a specific cell, e.g. [row, empl_accept], in a row contains a substring, then I update another cell, e.g. [row, empl_accept_a], in the same dataframe.

```py
for row in range(0,len(cd.index),1):
    if 'Master' in cd.at[row,empl_accept]:
        cd.at[row,empl_accept_a] = '1'
    else:
        cd.at[row,empl_accept_a] = '0'
```

The code above is not working and Jupyter Notebook displays the error:

```py
TypeError                                 Traceback (most recent call last)
<ipython-input-70-21b1f73e320c> in <module>
      1 for row in range(0,len(cd.index),1):
----> 2     if 'Master' in cd.at[row,empl_accept]:
      3         cd.at[row,empl_accept_a] = '1'
      4     else:
      5         cd.at[row,empl_accept_a] = '0'

TypeError: argument of type 'float' is not iterable
```

I'm not really sure what the problem is, as the for loop contains no float variable.
2019/08/24
[ "https://Stackoverflow.com/questions/57640451", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10609069/" ]
Please do *not* use loops for this. You can do this in bulk with: ``` cd['empl_accept_a'] = cd['empl_accept'].str.contains('Master').astype(int).astype(str) ``` This will store `'0`' and `'1'` in the column. That being said, I am not convinced if storing this as strings is a good idea. You can just store these as `bool`s with: ``` cd['empl_accept_a'] = cd['empl_accept'].str.contains('Master') ``` For example: ``` >>> cd empl_accept empl_accept_a 0 Master True 1 Slave False 2 Slave False 3 Master Windu True ```
You need to check what value your dataframe holds at [row, empl_accept]. Most likely some row contains a numeric value at this location, typically a NaN (which is a float) marking missing data, and `in` cannot iterate over a float, hence the error. Just print the value and you'll see the problem, if any.

```
print (cd.at[row,empl_accept])
```
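A quick way to confirm that (a sketch assuming `empl_accept` is the literal column name): missing values in pandas are NaN, which is a float, and `'Master' in nan` raises exactly this TypeError.

```
# count the missing entries that would break the `in` check
print(cd['empl_accept'].isna().sum())

# one way to make the loop safe: replace NaN with an empty string first
cd['empl_accept'] = cd['empl_accept'].fillna('')
```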
16,227
52,338,706
I already split the data into test and training set into the different folder. Now I need to load the patient data. Each patient has 8 images. ```py def load_dataset(root_dir, split): """ load the data set numpy arrays saved by the preprocessing script :param root_dir: path to input data :param split: defines whether to load the training or test set :return: data: dictionary containing one dictionary ({'data', 'seg', 'pid'}) per patient """ in_dir = os.path.join(root_dir, split) data_paths = [os.path.join(in_dir, f) for f in os.listdir(in_dir)] data_and_seg_arr = [np.load(ii, mmap_mode='r') for ii in data_paths] pids = [ii.split('/')[-1].split('.')[0] for ii in data_paths] data = OrderedDict() for ix, pid in enumerate(pids): data[pid] = {'data': data_and_seg_arr[ix][..., 0], 'seg': data_and_seg_arr[ix][..., 1], 'pid': pid} return data ``` But, the error said: ``` File "/home/zhe/Research/Seg/heart_seg/data_loader.py", line 61, in load_dataset data_and_seg_arr = [np.load(ii, mmap_mode='r') for ii in data_paths] File "/home/zhe/Research/Seg/heart_seg/data_loader.py", line 61, in <listcomp> data_and_seg_arr = [np.load(ii, mmap_mode='r') for ii in data_paths] File "/home/zhe/anaconda3/envs/tf_env/lib/python3.6/site-packages/numpy/lib/npyio.py", line 372, in load fid = open(file, "rb") IsADirectoryError: [Errno 21] Is a directory: './data/preprocessed_data/train/Patient009969' ``` It is already a file name, not a directory. Thanks!
2018/09/14
[ "https://Stackoverflow.com/questions/52338706", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9403249/" ]
It seems that `./data/preprocessed_data/train/Patient009969` is a directory, not a file.

`os.listdir()` returns both files and directories.

Maybe try using `os.walk()` instead. It treats files and directories separately, and can recurse inside the subdirectories to find more files in an iterative way:

```
data_paths = [os.path.join(pth, f)
              for pth, dirs, files in os.walk(in_dir)
              for f in files]
```
Do you have both files and directories inside your path? `os.listdir` will list both files and directories, so when you try to open a directory with `np.load` it will give that error. You can filter only files to avoid the error: ``` data_paths = [os.path.join(in_dir, f) for f in os.listdir(in_dir)] data_paths = [i for i in data_paths if os.path.isfile(i)] ``` Or all together in a single line: ``` data_paths = [i for i in (os.path.join(in_dir, f) for f in os.listdir(in_dir)) if os.path.isfile(i)] ```
16,228
57,690,881
Interested in the Scala Spark implementation of this [split-column-of-list-into-multiple-columns-in-the-same-pyspark-dataframe](https://stackoverflow.com/questions/49650907/split-column-of-list-into-multiple-columns-in-the-same-pyspark-dataframe)

Given this DataFrame:

```
|                   X|            Y|
+--------------------+-------------+
|                rent|[1,2,3......]|
|     is_rent_changed|[4,5,6......]|
|               phone|[7,8,9......]|
```

I want a new DataFrame with the exploded values mapped to my provided column names:

```
colNames = ['cat','dog','mouse'....]

|              Column|cat |dog |mouse   |.......|
+--------------------+---|---|--------|-------|
|                rent|1   |2  |3       |.......|
|     is_rent_changed|4   |5  |6       |.......|
|               phone|7   |8  |9       |.......|
```

Tried:

```
val out = df.select(col("X"),explode($"Y"))
```

But it's the wrong format and I don't know how to map it to my colNames list:

```
X              | Y |
---------------|---|
rent           |1  |
rent           |2  |
rent           |3  |
.              |.  |
.              |.  |
is_rent_changed|4  |
is_rent_changed|5  |
```

In the link above, the Python solution was to use a list comprehension:

```
univar_df10.select([univar_df10.Column] + [univar_df10.Quantile[i] for i in range(length)])
```

But it doesn't show how to use a provided column name list, given that the column names there are just the index of the columns.
2019/08/28
[ "https://Stackoverflow.com/questions/57690881", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2800939/" ]
I modified the loss functions and used them as *metrics* in the compile function.

```
def recon_loss(inputs,outputs):
    reconstruction_loss = original_dim*binary_crossentropy(inputs,outputs)
    return K.mean(reconstruction_loss)

def latent_loss(inputs,outputs):
    kl_loss = -0.5*K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return K.mean(kl_loss)

def total_loss(inputs,outputs):
    reconstruction_loss = original_dim*binary_crossentropy(inputs,outputs)
    kl_loss = -0.5*K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return K.mean(reconstruction_loss + kl_loss)

vae.compile(optimizer='adam',loss=total_loss,metrics=[recon_loss, latent_loss])
```

Now, the model returns the reconstruction, latent and total losses for both the training and validation data sets.
Please check the type of each loss in your losses dictionary. ``` print (type(losses['recon_loss'])) ```
16,231
56,034,031
I am a new user of Python and Neo4j. I just want to run the Python file in PyCharm and connect to Neo4j. But the import of py2neo never works; I tried to use virtualenv but it still does not work. I have tried putting my .py file inside the venv folder and outside it, and neither works. I really did install py2neo and the version is the latest. How can I solve this problem?

My code:

```
from py2neo import Graph, Node, Relationship

graph = Graph("http://localhost:7474")
jack = Node("Person", name="Jack")
nicole = Node("Person",name="Nicole")
tina = Node("Person", name="Tina")
graph.create(Relationship(nicole, "KNOWS",jack))
graph.create(Relationship(nicole, "KNOWS",tina))
graph.create(Relationship(tina, "KNOWS",jack))
graph.create(Relationship(jack, "KNOWS",tina))

Error:
Traceback (most recent call last):
  File "/Users/huangjingzhan/PycharmProjects/untitled2/venv/neo4j.py", line 1, in <module>
    from py2neo import Graph, Node, Relationship
ModuleNotFoundError: No module named 'py2neo'
```
2019/05/08
[ "https://Stackoverflow.com/questions/56034031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11467790/" ]
Check which Python interpreter is configured to run the project and make sure that the module is installed for that interpreter. Here is how: [Pycharm](https://www.jetbrains.com/help/idea/configuring-local-python-interpreters.html)
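A quick sanity check you can run inside the interpreter PyCharm uses (a sketch):

```
import sys
print(sys.executable)
# install the package for exactly this interpreter, e.g.:
#   <path printed above> -m pip install py2neo
```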
You need to install py2neo in the virtual environment if you haven't installed it there yet, and check that the Python version used by your project matches the one you installed it for.

```
pip install py2neo
```
16,232
13,584,524
In the old world I had a pretty ideal development setup going for working together with a web designer. Keep in mind we mostly do small/fast projects, so this is how it worked:

* I have a staging site on a server (Webfaction or other)
* The designer accesses that site and edits templates and assets to his satisfaction
* I SSH in regularly to check everything into source control, update files from upstream, resolve conflicts

It works brilliantly because the designer does not need to learn git, python, package tools, syncdb, migrations etc. And there's only one designer, so we don't have any conflicts on staging either.

Now the problem is that in the new world under Heroku, this is not possible. Or is it? Either way, I would like your advice on a development setup that caters to those who are not technical.
2012/11/27
[ "https://Stackoverflow.com/questions/13584524", "https://Stackoverflow.com", "https://Stackoverflow.com/users/102315/" ]
Reformat a string to display it as a MAC address: ``` var macadres = "0018103AB839"; var regex = "(.{2})(.{2})(.{2})(.{2})(.{2})(.{2})"; var replace = "$1:$2:$3:$4:$5:$6"; var newformat = Regex.Replace(macadres, regex, replace); // newformat = "00:18:10:3A:B8:39" ``` If you want to validate the input string use this regex (thanks to J0HN): ``` var regex = String.Concat(Enumerable.Repeat("([a-fA-F0-9]{2})", 6)); ```
Suppose that we have the Mac Address stored in a long. This is how to have it in a formatted string: ``` ulong lMacAddr = 0x0018103AB839L; string strMacAddr = String.Format("{0:X2}:{1:X2}:{2:X2}:{3:X2}:{4:X2}:{5:X2}", (lMacAddr >> (8 * 5)) & 0xff, (lMacAddr >> (8 * 4)) & 0xff, (lMacAddr >> (8 * 3)) & 0xff, (lMacAddr >> (8 * 2)) & 0xff, (lMacAddr >> (8 * 1)) & 0xff, (lMacAddr >> (8 * 0)) & 0xff); ```
16,234
9,955,715
I'm trying to do some "post"/"lazy" evaluation of arguments on my strings. Suppose I have this:

```
s = "SELECT * FROM {table_name} WHERE {condition}"
```

I'd like to return the string with the `{table_name}` replaced, but not the `{condition}`, so, something like this:

```
s1 = s.format(table_name = "users")
```

So I can build the whole string later, like:

```
final = s1.format(condition= "user.id = {id}".format(id=2))
```

The result should be, of course:

```
"SELECT * FROM users WHERE user.id = 2"
```

I've found this previous answer, which is exactly what I need, but I'd like to use the `format` string function.

[python, format string](https://stackoverflow.com/questions/4928526/python-format-string)

Thank you for your help!
2012/03/31
[ "https://Stackoverflow.com/questions/9955715", "https://Stackoverflow.com", "https://Stackoverflow.com/users/198212/" ]
You can replace the condition with itself: ``` s.format(table_name='users', condition='{condition}') ``` which gives us: ``` SELECT * FROM users WHERE {condition} ``` You can use this string later to fill in the condition.
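End to end, the two-stage formatting then looks like this (a short sketch built from the question's own strings):

```
s = "SELECT * FROM {table_name} WHERE {condition}"
s1 = s.format(table_name="users", condition="{condition}")  # keep the placeholder
final = s1.format(condition="user.id = {id}".format(id=2))
print(final)  # SELECT * FROM users WHERE user.id = 2
```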
I have been using this function for some time now, which casts the `Dict` of inputted keyword arguments as a `SafeDict` object that subclasses `Dict`.

```
def safeformat(str, **kwargs):
    class SafeDict(dict):
        def __missing__(self, key):
            return '{' + key + '}'
    replacements = SafeDict(**kwargs)
    return str.format_map(replacements)
```

I didn't make this up, but I think it's a good solution. The one downside is that you can't call `mystring.safeformat(**kwargs)` - of course, you have to call `safeformat(mystring,**kwargs)`.

---

If you're really interested in being able to call `mystr.safeformat(**kwargs)` (which I am interested in doing!), consider using this:

```
class safestr(str):
    def safeformat(self, **kwargs):
        class SafeDict(dict):
            def __missing__(self, key):
                return '{' + key + '}'
        replacements = SafeDict(**kwargs)
        return safestr(self.format_map(replacements))
```

You can then create a `safestr` object as `a = safestr(mystr)` (for some `str` called `mystr`), and you can in fact call `a.safeformat(**kwargs)`. e.g.

```
mysafestr = safestr('Hey, {friendname}. I am {myname}.')
print(mysafestr.safeformat(friendname='Bill'))
```

prints `Hey, Bill. I am {myname}.`

This is cool in some ways - you can pass around a partially-formatted `safestr`, and could call `safeformat` in different contexts. I especially like to call `mystr.format(**locals())` to format with the appropriate namespace variables; the `safeformat` method is especially useful in this case, because I don't always carefully look through my namespace.

The main issue with this is that inherited methods from `str` return a `str` object, not a `safestr`. So `mysafestr.lower().safeformat(**kwargs)` fails. Of course you could cast as a `safestr` when using `safeformat`: `safestr(mysafestr.lower()).safeformat(**kwargs)`, but that's less than ideal looking. I wish Python just gave the `str` class a `safeformat` method of some kind.
16,236
39,689,012
I have written code (Python 2.7) that goes to a website [Cricket score](http://www.cricbuzz.com/live-cricket-scorecard/16822/ind-vs-nz-1st-test-new-zealand-tour-of-india-2016) and then extracts some data from it to display just the score. It also repeats periodically and keeps running, because the scores keep changing. I have also written code for taking a message input from the user and sending that message as an SMS to my number. I want to combine these two so that the scores printed on my screen serve as the message input, sending live scores to me. The codes are

**sms.py**

```
import urllib2
import cookielib
from getpass import getpass
import sys
import os
from stat import *
import sched, time
import requests
from bs4 import BeautifulSoup
s = sched.scheduler(time.time, time.sleep)
from urllib2 import Request
#from livematch import function

#this sends the desired input message to my number
number = raw_input('enter number you want to message: ')
message = raw_input('enter text: ' )

#this declares my credentials
if __name__ == "__main__":
    username = "9876543210"
    passwd = "abcdefghij"
    message = "+".join(message.split(' '))

    #logging into the sms site
    url = 'http://site24.way2sms.com/Login1.action?'
    data = 'username='+username+'&password='+passwd+'&Submit=Sign+in'

    #For cookies
    cj = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

    #Adding header details
    opener.addheaders = [('User-Agent','Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120')]
    try:
        usock = opener.open(url, data)
    except IOError:
        print "error"
        #return()
    jession_id = str(cj).split('~')[1].split(' ')[0]
    send_sms_url = 'http://site24.way2sms.com/smstoss.action?'
    send_sms_data = 'ssaction=ss&Token='+jession_id+'&mobile='+number+'&message='+message+'&msgLen=136'
    opener.addheaders = [('Referer', 'http://site25.way2sms.com/sendSMS?Token='+jession_id)]
    try:
        sms_sent_page = opener.open(send_sms_url,send_sms_data)
    except IOError:
        print "error"
        #return()
    print "success"
    #return ()
```

**livematch.py**

```
import sched, time
import requests
from bs4 import BeautifulSoup
s = sched.scheduler(time.time, time.sleep)
from urllib2 import Request

url = raw_input('enter the desired score card url here : ')
req = Request(url)

def do_something(sc):
    #global x
    r = requests.get(url)
    soup = BeautifulSoup(r.content)

    for i in soup.find_all("div",{"id":"innings_1"}):
        x = i.text.find('Batsman')
        in_1 = i.text
        print(in_1[0:x])
    for i in soup.find_all("div",{"id":"innings_2"}):
        x = i.text.find('Batsman')
        in_1 = i.text
        print(in_1[0:x])
    for i in soup.find_all("div",{"id":"innings_3"}):
        x = i.text.find('Batsman')
        in_1 = i.text
        print(in_1[0:x])
    for i in soup.find_all("div",{"id":"innings_4"}):
        x = i.text.find('Batsman')
        in_1 = i.text
        print(in_1[0:x])
    # do your stuff
    #do what ever
    s.enter(5, 1, do_something, (sc,))

s.enter(5, 1, do_something, (s,))
s.run()
```

Note that instead of using 9876543210 as the username and abcdefghij as the password, you should use the credentials of an actual account. Sign up at way2sms.com for those credentials.
2016/09/25
[ "https://Stackoverflow.com/questions/39689012", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6878406/" ]
I am sorry, I added a few too many double quotes in the code above. Instead it should be this way:

```
asm (".section .drectve\n\t.ascii \" -export:DllInitialize=api.DllInitialize @2\"");
```

If you need to use it many times, consider putting it in a macro, e.g.

```
#ifdef _MSC_VER
#define FORWARDED_EXPORT_WITH_ORDINAL(exp_name, ordinal, target_name) __pragma (comment (linker, "/export:" #exp_name "=" #target_name ",@" #ordinal))
#endif

#ifdef __GNUC__
#define FORWARDED_EXPORT_WITH_ORDINAL(exp_name, ordinal, target_name) asm (".section .drectve\n\t.ascii \" -export:" #exp_name "= " #target_name " @" #ordinal "\"");
#endif

FORWARDED_EXPORT_WITH_ORDINAL(DllInitialize, 2, api.DllInitialize)
FORWARDED_EXPORT_WITH_ORDINAL(my_create_file_a, 100, kernel32.CreateFileA)
```

You get the idea.
Here is how you can do it:

```
#ifdef _MSC_VER
#pragma comment (linker, "/export:DllInitialize=api.DllInitialize,@2")
#endif

#ifdef __GNUC__
asm (".section .drectve\n\t.ascii \" -export:\\\"DllInitialize=api.DllInitialize\\\" @2\"");
#endif
```

Note that "drectve" is not a typo; that's how it must be written, however odd it may seem. By the way, this strange abbreviation is Microsoft's idea, not GCC's.
16,246
71,940,988
I have trained a model based on YOLOv5 on a custom dataset which has two classes (for example, human and car). I am using `detect.py` with the following command:

```
> python detect.py --weights best.pt --source video.mp4
```

I want only the car class to be detected, without detecting humans. How can this be done?
2022/04/20
[ "https://Stackoverflow.com/questions/71940988", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16637958/" ]
You can specify the classes you want to detect with the **--classes** argument.

**Example**

```
python detect.py --weights "your weights.pt" --source "video/image/stream" --classes 0 1 2
```

In the above command, 0, 1 and 2 are class IDs (the argument takes a space-separated list), so when you run it, only the mentioned classes will be detected.
I think you can use the argument --classes of detect.py. Just use the index of the classes.
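For the question's case that would be something like `python detect.py --weights best.pt --source video.mp4 --classes 1` (assuming `car` is index 1 in your dataset's class list; check your data.yaml for the actual order).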
16,247
23,784,951
I have a string that looks like this: `POLYGON ((148210.445767647 172418.761192525, 148183.930888667 172366.054787545, 148183.866770629 172365.316772032, 148184.328078148 172364.737139913, 148220.543522168 172344.042601933, 148221.383518338 172343.971823159), (148221.97916844 172344.568316375, 148244.61381946 172406.651932395, 148244.578100039 172407.422441673, 148244.004662562 172407.938319453, 148211.669446582 172419.255646473, 148210.631989339 172419.018894911, 148210.445767647 172418.761192525))` I can easily strip `POLYGON` out of the string to focus on the numbers but I'm kinda wondering what would be the easiest/best way to parse this string into a list of dict. The first parenthesis (right after POLYGON) indicates that multiple elements can be provided (separated by a comma `,`). So each pair of numbers is to supposed to be `x` and `y`. I'd like to parse this string to end up with the following data structure (using `python 2.7`): ``` list [ //list of polygons list [ //polygon n°1 dict { //polygon n°1's first point 'x': 148210.445767647, //first number 'y': 172418.761192525 //second number }, dict { //polygon n°1's second point 'x': 148183.930888667, 'y': 148183.930888667 }, ... // rest of polygon n°1's points ], //end of polygon n°1 list [ // polygon n°2 dict { // polygon n°2's first point 'x': 148221.9791684, 'y': 172344.568316375 }, ... // rest of polygon n°2's points ] // end of polygon n°2 ] // end of list of polygons ``` Polygons' number of points is virtually infinite. Each point's numbers are separated by a blank. Do you guys know a way to do this in a loop or any recursive way ? PS: I'm kind of a python beginner (only a few months under my belt) so don't hesitate to explain in details. Thank you!
2014/05/21
[ "https://Stackoverflow.com/questions/23784951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1300454/" ]
The data structure you have defining your Polygon object looks very similar to a Python tuple declaration. One option, albeit a bit hacky, would be to use Python's [AST parser](https://docs.python.org/2/library/ast.html#ast.literal_eval). You would have to strip off the POLYGON part and, because WKT separates the x and y of each point with a space (which is not valid Python syntax), rewrite each `x y` pair as a tuple first. This solution may not work for other declarations that are more complex.

```
import ast
import re

your_str = "POLYGON (...)"
# may be better to use a regex to split off the class part
# if you have different types
body = your_str.replace("POLYGON ", "")
# turn each space-separated "x y" pair into a "(x, y)" tuple
body = re.sub(r"([-\d.]+)\s+([-\d.]+)", r"(\1, \2)", body)
rings = ast.literal_eval(body)  # nested tuples: one tuple of points per ring
# now you can turn each point into a dictionary
polygons = [[{'x': x, 'y': y} for x, y in ring] for ring in rings]
```
Let's say you have a string that looks like this:

my_str = 'POLYGON ((148210.445767647 172418.761192525, 148183.930888667 172366.054787545, 148183.866770629 172365.316772032, 148184.328078148 172364.737139913, 148220.543522168 172344.042601933, 148221.383518338 172343.971823159), (148221.97916844 172344.568316375, 148244.61381946 172406.651932395, 148244.578100039 172407.422441673, 148244.004662562 172407.938319453, 148211.669446582 172419.255646473, 148210.631989339 172419.018894911, 148210.445767647 172418.761192525))'

```
my_str = my_str.replace('POLYGON ', '')
coords_groups = my_str.split('), (')
for coords in coords_groups:
    coords = coords.replace('(', '').replace(')', '')
    coords_list = coords.split(', ')
    coords_list2 = []
    for item in coords_list:
        item_split = item.split(' ')
        coords_list2.append({'x': item_split[0], 'y': item_split[1]})
```

I think this should help a little. All you need now is a way to get the info between the parentheses; this should help: [Regular expression to return text between parenthesis](https://stackoverflow.com/questions/4894069/python-regex-help-return-text-between-parenthesis)

**UPDATE**

Updated the code above thanks to another answer by <https://stackoverflow.com/users/2635860/mccakici>, but this works only if the string has the structure you gave in your question.
16,248
54,485,654
Simplified example of my code, please ignore syntax errors:

```
import numpy as np
import pandas as pd
import pymysql.cursors
from datetime import date, datetime

connection = pymysql.connect(host=, user=, password=, db=, cursorclass=pymysql.cursors.DictCursor)

df1 = pd.read_sql()
df2 = pd.read_sql(
df3 = pd.read_sql()

np.where(a=1, b, c)

df1.append([df2, d3])

path = r'C:\Users\\'
df.to_csv(path+'a.csv')
```

In Jupyter Notebook it outputs the CSV file like it is supposed to. However, if I download the .py and run it with Python, it will only output a CSV the first time I run it after restarting my computer. Other times it just runs and nothing happens. Why this is happening is blowing my mind.
2019/02/01
[ "https://Stackoverflow.com/questions/54485654", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9637684/" ]
Have you seen the join command? This, in combination with sort, may be what you are looking for. <https://shapeshed.com/unix-join/>

For example:

```
$ cat a
aaaa bbbb
cccc dddd
$ cat b
aaaa eeee
ffff gggg
$ join a b
aaaa bbbb eeee
```

If the values in the first column are not sorted, then you have to sort them first, otherwise join will not work.

`join <(sort a) <(sort b)`

Kind regards Oliver
There are different kinds and different tools to compare: * diff * cmp * comm * ... All commands have options to vary the comparison. For each command, you can specify filters. E.g. ``` # remove comments before comparison diff <( grep -v ^# file1) <( grep -v ^# file2) ``` Without concrete examples, it is impossible to be more exact.
16,250
26,506,830
I have the expectation that `reduce(add,lst)` and `sum(lst)` should give me the same result, but ``` In [18]: class p(): def __init__(self, x, y): self.x=x ; self.y=y def __repr__(self): return "(%r,%r)"%(self.x,self.y) def __add__(self, P): return p(self.x+P.x, self.y+P.y) ....: In [19]: pts=[p(1,0), p(2,1), p(-3,4)] In [20]: from operator import add In [21]: print reduce(add,pts) (0,5) In [22]: print sum(pts) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-22-98a81789d257> in <module>() ----> 1 print sum(pts) TypeError: unsupported operand type(s) for +: 'int' and 'instance' In [23]: ``` of course I'm not understanding something, possibly obvious; could someone enlighten me?
2014/10/22
[ "https://Stackoverflow.com/questions/26506830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2749397/" ]
`reduce()` starts with `pts[0]` as the initial value (unless you give it an explicit starting value), but `sum()` defaults with `0`. From the [`sum()` function documentation](https://docs.python.org/2/library/functions.html#sum): > > `sum(iterable[, start])` > > Sums *start* and the items of an iterable from left to right and returns the total. *start* defaults to `0`. > > > Give `sum()` a better *start* value; use the first object just like `reduce()` does: ``` sum(pts[1:], pts[0]) ``` or a suitable empty value: ``` sum(pts, p(0, 0)) ```
If you define `__radd__` to special-case 0 being added to an instance of your object, then you can just use `sum()`.

```
... rest of Point class defn ...

    def __radd__(self, other):
        if other == 0:
            return self
        raise TypeError("invalid type to add to point")

print sum(pts)
```

will give `(0,5)`.
16,255