56,305,416
Hi I am following this tutorial <https://stackabuse.com/association-rule-mining-via-apriori-algorithm-in-python/> and am getting the following error when I run the below code. I am honestly not sure what to try as I am following the tutorial verbatim. I don't see what the issue is. ``` #import numpy as np #import matplotlib as plt import pandas as pd from apyori import apriori store_data = pd.read_csv('C:\\Users\\eyaze\\Downloads\\store_data.csv', header=None) print(store_data.head()) records = [] for i in range(0, 7501): records.append([str(store_data.values[i,j]) for j in range(0, 20)]) association_rules = apriori(records, min_support=0.0045, min_confidence=0.2, min_lift=3, min_length=2) association_results = list(association_rules) print(len(association_rules)) ``` I am expecting to get 48 as per the tutorial but I instead get the error: ``` TypeError: object of type 'generator' has no len() ``` What is going on?
2019/05/25
[ "https://Stackoverflow.com/questions/56305416", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11554788/" ]
Your code is very similar to this one I found on medium: <https://medium.com/@deepak.r.poojari/apriori-algorithm-in-python-recommendation-engine-5ba89bd1a6da> I guess you wanted to do `print(len(association_results))` instead of association\_rules, as is done in the linked article?
It's a generator: it yields results lazily rather than holding the whole list in memory, so it has no length. If you want to find the length, iterate over it first and then take the length, i.e. `print(len(list(association_rules)))`. Note that doing so consumes the generator.
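The distinction matters because `apriori` returns a lazy generator. A minimal standalone sketch (a plain generator expression stands in for the `apyori` call here) reproduces the same error and the fix:

```python
# A generator expression behaves like the apriori(...) generator: no len().
gen = (x * x for x in range(5))

try:
    len(gen)
except TypeError as err:
    print(err)  # object of type 'generator' has no len()

results = list(gen)  # materializing consumes the generator
print(len(results))  # 5
```

After `list(gen)`, the generator is exhausted; iterate over `results` from then on.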
971
14,669,990
How can I write this complete code in Python in just one line, or maybe I should say, in a way that uses the least space or the fewest characters? ``` t=int(input()) while t>0: n=int(input()) s=sum(1/(2.0*i+1) for i in range(n)) print "%.15f"%s t-=1 ```
2013/02/03
[ "https://Stackoverflow.com/questions/14669990", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1440140/" ]
You're welcome: ``` for t in range(int(input()), 0, -1): print '%.15f' % sum(1/(2.0*i+1) for i in range(int(input()))) ``` **EDIT** (explanation): Firstly, instead of a while loop you can use a [for loop](http://docs.python.org/2/reference/compound_stmts.html#for) over a [range](http://docs.python.org/2/library/functions.html#range). The last argument to `range` is -1, so the counter decreases by 1 each iteration instead of increasing by the default of 1. If a loop (or an `if` statement) contains only one statement, you can keep that statement on the same line without going to the next line. Instead of creating the variable `n`, you can simply plug the expression in, since it's only used once. The same goes for `s`.
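For reference, the same computation in Python 3 (where `print` is a function) can be factored into a helper; the function name here is my own, not from the question:

```python
def partial_sum(n):
    """Sum of 1/(2*i+1) for i in range(n) -- the question's loop body."""
    return sum(1 / (2.0 * i + 1) for i in range(n))

# Python 3 one-line driver, equivalent to the answer's Python 2 version:
# for _ in range(int(input())): print("%.15f" % partial_sum(int(input())))
print("%.15f" % partial_sum(2))
```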
`exec"print sum((-1.)**i/(i-~i)for i in range(input()));"*input()` I know I am too late to answer this question, but the code above gives the same result. It can get even shorter; I am still looking for ways to shorten it. #CodeGolf #Python2.4
972
58,599,829
I have a python script called "server.py" and inside it I have a function `def calcFunction(arg1): ... return output` How can I call the function calcFunction with arguments and use the return value in autohotkey? This is what I want to do in autohotkey: ``` ToSend = someString ; a string output = Run server.py, calcFunction(ToSend) ; get the returned value from the function with ToSend as argument Send, output ; use the returned value in autohotkey ``` I have looked online but nothing seems to fully answer my question. Can it even be done?
2019/10/28
[ "https://Stackoverflow.com/questions/58599829", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11340003/" ]
You can define a custom GHCi command using `:def` in this way: ``` > :def foo (\_ -> return "print 100\nprint 200\n:t length") > :foo 100 200 length :: Foldable t => t a -> Int ``` In the returned string, `:`-commands can be included as well, like `:t` above.
One way I've found is creating a separate file: ``` myfunction 0 100 myfunction 0 200 myfunction 0 300 :r ``` and then using: `:script path/to/file`
974
58,736,009
I am trying to run the puckel airflow docker container using the LocalExecutor.yml file found here: <https://github.com/puckel/docker-airflow> I am not able to get airflow to send me emails on failure or retry. I've tried the following: 1. Editing the config file with the smtp host name ``` [smtp] # If you want airflow to send emails on retries, failure, and you want to use # the airflow.utils.email.send_email_smtp function, you have to configure an # smtp server here smtp_host = smtp@mycompany.com smtp_starttls = True smtp_ssl = False # Uncomment and set the user/pass settings if you want to use SMTP AUTH # smtp_user = airflow # smtp_password = airflow smtp_port = 25 smtp_mail_from = myname@mycompany.com ``` 2. Editing environment variables in the entrypoint.sh script included in the repo: ``` : "${AIRFLOW__SMTP__SMTP_HOST:="smtp-host"}" : "${AIRFLOW__SMTP__SMTP_PORT:="25"}" # Defaults and back-compat : "${AIRFLOW_HOME:="/usr/local/airflow"}" : "${AIRFLOW__CORE__FERNET_KEY:=${FERNET_KEY:=$(python -c "from cryptography.fernet import Fernet; FERNET_KEY = Fernet.generate_key().decode(); print(FERNET_KEY)")}}" : "${AIRFLOW__CORE__EXECUTOR:=${EXECUTOR:-Sequential}Executor}" export \ AIRFLOW_HOME \ AIRFLOW__CELERY__BROKER_URL \ AIRFLOW__CELERY__RESULT_BACKEND \ AIRFLOW__CORE__EXECUTOR \ AIRFLOW__CORE__FERNET_KEY \ AIRFLOW__CORE__LOAD_EXAMPLES \ AIRFLOW__CORE__SQL_ALCHEMY_CONN \ AIRFLOW__SMTP__SMTP_HOST \ AIRFLOW__SMTP__SMTP_PORT \ if [ "$AIRFLOW__SMTP__SMTP_HOST" != "smtp-host" ]; then AIRFLOW__SMTP__SMTP_HOST="smtp-host" AIRFLOW__SMTP__SMTP_PORT=25 fi ``` I currently have a dag running that intentionally fails, but I am never alerted for retries or failures.
2019/11/06
[ "https://Stackoverflow.com/questions/58736009", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7470072/" ]
You can use groupby and transform with idxmax; select the value column so a single column comes back, eg: ``` dfx['MAXIDX'] = dfx.groupby('NAME')['VALUE'].transform('idxmax') ```
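As a runnable sketch (sample data borrowed from the other answer on this question; I use a lambda rather than the `'idxmax'` string so it works across more pandas versions):

```python
import pandas as pd

# Sample data mirroring the other answer's example.
data = [["AAA", "2019-01-01", 10], ["AAA", "2019-01-02", 21],
        ["BBB", "2019-01-01", 50], ["BBB", "2019-01-02", 60]]
dfx = pd.DataFrame(data, columns=["NAME", "TIMESTAMP", "VALUE"])

# transform broadcasts each group's idxmax back onto every row of that group.
dfx["MAXIDX"] = dfx.groupby("NAME")["VALUE"].transform(lambda s: s.idxmax())
print(dfx["MAXIDX"].tolist())  # [1, 1, 3, 3]
```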
Given your example, you can use reset\_index() together with groupby and then merge it into the original dataframe: ``` import pandas as pd data = [['AAA','2019-01-01', 10], ['AAA','2019-01-02', 21], ['AAA','2019-02-01', 30], ['AAA','2019-02-02', 45], ['BBB','2019-01-01', 50], ['BBB','2019-01-02', 60], ['BBB','2019-02-01', 70],['BBB','2019-02-02', 80]] dfx = pd.DataFrame(data, columns = ['NAME', 'TIMESTAMP','VALUE']) dfx = dfx.reset_index(drop=False) dfx = dfx.merge(dfx.groupby('NAME',as_index=False).agg({'index':'max'}),how='left',on='NAME').rename(columns={'index_y':'max_index'}).drop(columns='index_x') ``` Output: ``` NAME TIMESTAMP VALUE max_index AAA 2019-01-01 10 3 AAA 2019-01-02 21 3 AAA 2019-02-01 30 3 AAA 2019-02-02 45 3 BBB 2019-01-01 50 7 BBB 2019-01-02 60 7 BBB 2019-02-01 70 7 BBB 2019-02-02 80 7 ```
976
44,424,308
C++ part: I have a class `a` with a **public** 2D int array member `b` that I want to print out in Python (the way I want to access it is `a.b`). I have been able to wrap most of the code and I can call most of the functions of class `a` in Python now. So how can I read `b` in Python? How can I read it into a numpy array with numpy.i (I found some solutions for working with a function, not a variable)? Is there a way I can read any array in the C++ library, or do I have to deal with each of the variables in the interface file? For now `b` is `<Swig Object of type 'int (*)[24]' at 0x02F65158>` when I try to use it in Python. PS: 1. If possible I don't want to modify the cpp part. 2. I'm trying to access a variable, not a function, so don't refer me to links that don't really answer my question, thanks.
2017/06/07
[ "https://Stackoverflow.com/questions/44424308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7967265/" ]
There's some relevant information in the Fetch specification. As per <https://fetch.spec.whatwg.org/#forbidden-response-header-name>: > > A forbidden response-header name is a header name that is a > byte-case-insensitive match for one of: > > > * `Set-Cookie` > * `Set-Cookie2` > > > And then as per item 6 in <https://fetch.spec.whatwg.org/#concept-headers-append>: > > Otherwise, if guard is "response" and name is a forbidden response-header name, return. > > > This restriction on adding in the `Set-Cookie` header applies to either constructing new `Response` objects with an initial set of headers, or adding in headers after the fact to an existing `Response` object. There is a plan to add in support for reading and writing cookies inside of a service worker, but that will use a mechanism other than the `Set-Cookie` header in a `Response` object. There's more information about the plans in [this GitHub issue](https://github.com/w3c/ServiceWorker/issues/707).
You may try the following: ``` async function handleRequest(request) { let response = await fetch(request.url, request); // Copy the response so that we can modify headers. response = new Response(response.body, response) response.headers.set("Set-Cookie", "test=1234"); return response; } ```
981
38,722,340
Let's consider the code below: ``` #!/usr/bin/env python class Foo(): def __init__(self, b): self.a = 0.0 self.b = b def count_a(self): self.a += 0.1 foo = Foo(1) for i in range(0, 15): foo.count_a() print "a =", foo.a, "b =", foo.b, '"a == b" ->', foo.a == foo.b ``` Output: ``` a = 0.2 b = 1 "a == b" -> False a = 0.4 b = 1 "a == b" -> False a = 0.6 b = 1 "a == b" -> False a = 0.8 b = 1 "a == b" -> False a = 1.0 b = 1 "a == b" -> True a = 1.2 b = 1 "a == b" -> False a = 1.4 b = 1 "a == b" -> False a = 1.6 b = 1 "a == b" -> False a = 1.8 b = 1 "a == b" -> False a = 2.0 b = 1 "a == b" -> False a = 2.2 b = 1 "a == b" -> False a = 2.4 b = 1 "a == b" -> False a = 2.6 b = 1 "a == b" -> False a = 2.8 b = 1 "a == b" -> False a = 3.0 b = 1 "a == b" -> False ``` But if I change the code on line `11` to `foo = Foo(2)`, the output turns out to be: ``` a = 0.2 b = 2 "a == b" -> False a = 0.4 b = 2 "a == b" -> False a = 0.6 b = 2 "a == b" -> False a = 0.8 b = 2 "a == b" -> False a = 1.0 b = 2 "a == b" -> False a = 1.2 b = 2 "a == b" -> False a = 1.4 b = 2 "a == b" -> False a = 1.6 b = 2 "a == b" -> False a = 1.8 b = 2 "a == b" -> False a = 2.0 b = 2 "a == b" -> False * a = 2.2 b = 2 "a == b" -> False a = 2.4 b = 2 "a == b" -> False a = 2.6 b = 2 "a == b" -> False a = 2.8 b = 2 "a == b" -> False a = 3.0 b = 2 "a == b" -> False ``` You will see that the output `a = 2.0 b = 2 "a == b" -> False` is totally weird. I think I might misunderstand some concept of OOP in Python. Please explain to me why this unexpected output happens and how to solve this problem.
2016/08/02
[ "https://Stackoverflow.com/questions/38722340", "https://Stackoverflow.com", "https://Stackoverflow.com/users/829496/" ]
This has nothing to do with Object Orientation - it has to do with the way computers represent floating point numbers internally, and rounding errors. <http://floating-point-gui.de/basic/> The Python specificity here is the default string representation of floating point numbers, which rounds them to fewer decimal places than the internal representation for pretty printing. For people needing correct comparisons that respect the scale of floating point numbers, Python introduced a nice mechanism in [PEP 485](https://www.python.org/dev/peps/pep-0485/), which added the `math.isclose` function to the standard library.
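A minimal sketch of both the pitfall from the question and the `math.isclose` fix:

```python
import math

a = 0.0
for _ in range(20):
    a += 0.1  # each addition carries a tiny binary rounding error

print(a == 2)              # False, as in the question's output
print(math.isclose(a, 2))  # True: compares within a relative tolerance
```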
Aside from the (correct) explanation by jsbueno, remember that Python often allows casting of "basic types" to themselves, i.e. `str("a") == "a"`. So, if you need a workaround in addition to the explanation, just convert your int/float mix to all floats and compare those. ``` a = 2.0 b = 2 print "a == b", float(a) == float(b) ``` output: ``` a == b True ```
982
58,721,480
I have a python/flask/html project I'm currently working on. I'm using bootstrap 4 for a grid system with multiple rows and columns. When I try to run my project, it cuts off a few pixels on the left side of the screen. I've looked all over my code and I'm still not sure as to why this is. [Here](https://jsfiddle.net/mk78ebo2/) is a jsfiddle with my code, as well as a hard copy of it. Thank you for any help! ``` <!DOCTYPE html> ``` ``` <!-- Link to css file --> <link rel="stylesheet" type="text/css" href="/static/main.css" mesdia="screen"/> <!-- Bootstrap CDN--> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css"> <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js"></script> <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js"></script> <meta charset="UTF-8"> <title>Layout</title> <!--<h1>Welcome to the layout</h1> <p>Where would you like to navigate to?</p>--> ``` ``` <div class="row" id="navcols"> <div class="col"> <div class="row h-50">Row</div> <div class="row h-50">Row</div> </div> <div class="col-sm-6"> <div class="row h-100">Row</div> <div class="row h-100">Row</div> <div class="row h-100">Row</div> <div class="row h-100">Row</div> <div class="row h-100">Row</div> </div> <div class="col"> <div class="row">Row</div> </div> </div> <!-- <div class="container-fluid"> <h1 id="test">Welcome to my page! Where do you want to navigate to?</h1> </div>--> ``` ``` html,body { height: 100%; width: 100%; } .col { background-color:red; } ```
2019/11/06
[ "https://Stackoverflow.com/questions/58721480", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11757353/" ]
As far as Bootstrap layout is concerned, you always need to put your row and col elements within a div with the class of **container** or **container-fluid**. If you want a full-width column then use **container-fluid**. **container** has a max-width pixel value, whereas **.container-fluid** is max-width 100% and continuously resizes as you change the width of your window/browser by any amount: ```css html, body { height: 100%; width: 100%; } .col { background-color: red; } ``` ```html <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css"> <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js"></script> <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js"></script> <div class="container-fluid"> <div class="row" id="navcols"> <div class="col"> <div class="row h-50">Row</div> <div class="row h-50">Row</div> </div> <div class="col-sm-6"> <div class="row h-100">Row</div> <div class="row h-100">Row</div> <div class="row h-100">Row</div> <div class="row h-100">Row</div> <div class="row h-100">Row</div> </div> <div class="col"> <div class="row">Row</div> </div> </div> </div> ```
Wrap your `row` div inside a `container` or `container-fluid` div: ```html <div class='container'> <div class='row'> <!--your grid--> </div> </div> ```
983
5,159,351
urllib fetches data from URLs, right? Is there a Python library that can do the reverse of that and send data to URLs instead (for example, to a site you are managing)? And if so, is that library compatible with Apache? Thanks.
2011/03/01
[ "https://Stackoverflow.com/questions/5159351", "https://Stackoverflow.com", "https://Stackoverflow.com/users/582485/" ]
What does sending data to a URL mean? The usual way to do that is just via an HTTP POST, and `urllib` (and `urllib2`) handle that just fine.
urllib can send data associated with a request using GET or POST: `urllib2.urlopen(url, data)`, where data is a URL-encoded string of the key-value pairs you're sending (build it from a dict with `urllib.urlencode`). See [this link on usage](http://www.java2s.com/Code/Python/Network/SubmitPOSTData.htm). Then process that data on the server side. If you want to send data any other way, you should use some other protocol such as FTP (see [ftplib](http://docs.python.org/library/ftplib.html)).
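In Python 3 the same POST goes through `urllib.request`; the URL below is just a placeholder:

```python
from urllib import parse, request

# urlopen() needs the form data as bytes, not a dict:
payload = parse.urlencode({"name": "value", "count": 3}).encode("ascii")

# Supplying a data argument makes the request a POST.
req = request.Request("http://example.com/endpoint", data=payload)
# response = request.urlopen(req)  # not executed here: needs a live endpoint

print(payload)           # b'name=value&count=3'
print(req.get_method())  # POST
```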
986
63,894,354
I'm very much a newbie to Python. I've read a csv file correctly, and if I do a print(row[0]) in a for loop it prints the first column. But now within the loop, I'd like to do a conditional. Obviously using row[0] in it doesn't work. What's the correct syntax? Here's my code: ``` video_choice = input("What movie would you like to look up? ") # read the rows for row in netflix_reader: If video_choice == row[0]: print(row[0]) ```
2020/09/15
[ "https://Stackoverflow.com/questions/63894354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8628471/" ]
you could do ``` if video_choice in row[0]: ... ``` (note `in` also matches substrings; use `==` if you want an exact match)
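A self-contained sketch of the corrected loop, keeping the question's `==` comparison (the in-memory CSV and movie names are made up to stand in for the real file):

```python
import csv
import io

# An in-memory CSV stands in for the Netflix file from the question.
csv_text = "Movie A,2019\nMovie B,2020\n"
netflix_reader = csv.reader(io.StringIO(csv_text))

video_choice = "Movie B"  # would come from input() in the real script
matches = []
for row in netflix_reader:
    if video_choice == row[0]:  # 'if' must be lowercase in Python
        matches.append(row[0])

print(matches)  # ['Movie B']
```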
nvm, I have the if as "If". I'm def a newb
987
28,176,866
I have a list in python like this: ``` myList = [1,14,2,5,3,7,8,12] ``` How can I easily find the first unused value? (in this case '4')
2015/01/27
[ "https://Stackoverflow.com/questions/28176866", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2506998/" ]
Don't know how efficient, but why not use an xrange as a mask and use set minus? ``` >>> myList = [1,14,2,5,3,7,8,12] >>> min(set(xrange(1, len(myList) + 1)) - set(myList)) 4 ``` You're only creating a set as big as `myList`, so it can't be that bad :) This won't work for "full" lists: ``` >>> myList = range(1, 5) >>> min(set(xrange(1, len(myList) + 1)) - set(myList)) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: min() arg is an empty sequence ``` But the fix to return the next value is simple (add one more to the masked set): ``` >>> min(set(xrange(1, len(myList) + 2)) - set(myList)) 5 ```
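In Python 3, `xrange` is gone; the same set-difference idea (with the +2 fix for "full" lists already applied) looks like this, with a helper name of my own choosing:

```python
def first_unused(nums):
    """Smallest positive integer missing from nums, via set difference."""
    return min(set(range(1, len(nums) + 2)) - set(nums))

print(first_unused([1, 14, 2, 5, 3, 7, 8, 12]))  # 4
print(first_unused([1, 2, 3, 4]))                # 5 (handles a "full" list)
```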
I just solved this in a probably non-Pythonic way ``` def solution(A): # Const-ish to improve readability MIN = 1 if not A: return MIN # Save re-computing MAX MAX = max(A) # Loop over all entries with minimum of 1 starting at 1 for num in range(1, MAX): # going for greatest missing number return optimistically (minimum) # If order needs to switch, then use max as start and count backwards if num not in A: return num # In case the max is < 0 double wrap max with minimum return value return max(MIN, MAX+1) ``` I think it reads quite well
988
8,616,617
I installed a local [SMTP server](http://www.hmailserver.com/) and used [`logging.handlers.SMTPHandler`](http://docs.python.org/library/logging.handlers.html#smtphandler) to log an exception using this code: ``` import logging import logging.handlers import time gm = logging.handlers.SMTPHandler(("localhost", 25), 'info@somewhere.com', ['my_email@gmail.com'], 'Hello Exception!',) gm.setLevel(logging.ERROR) logger.addHandler(gm) t0 = time.clock() try: 1/0 except: logger.exception('testest') print time.clock()-t0 ``` It took more than 1sec to complete, blocking the python script for this whole time. How come? How can I make it not block the script?
2011/12/23
[ "https://Stackoverflow.com/questions/8616617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/348545/" ]
The simplest form of an asynchronous SMTP handler, for me, is just to override the `emit` method and run the original method in a new thread. The GIL is not a problem in this case because the I/O call to the SMTP server releases it. The code is as follows: ``` class ThreadedSMTPHandler(SMTPHandler): def emit(self, record): thread = Thread(target=SMTPHandler.emit, args=(self, record)) thread.start() ```
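Since Python 3.2 the standard library also offers `QueueHandler`/`QueueListener`, which achieve the same non-blocking behaviour without subclassing; the SMTP arguments below just mirror the question's placeholders:

```python
import logging
import logging.handlers
import queue

log_queue = queue.Queue(-1)
queue_handler = logging.handlers.QueueHandler(log_queue)  # fast, non-blocking

# The listener drains the queue in its own thread and feeds the slow handler.
smtp_handler = logging.handlers.SMTPHandler(
    ("localhost", 25), "info@somewhere.com", ["my_email@gmail.com"], "Hello Exception!"
)
listener = logging.handlers.QueueListener(log_queue, smtp_handler)

logger = logging.getLogger("async_mail_demo")
logger.addHandler(queue_handler)
listener.start()
# logger.exception(...) now returns immediately; mail is sent in the background
listener.stop()
```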
Here's the implementation I'm using, which I based on Jonathan Livni code. ``` import logging.handlers import smtplib from threading import Thread # File with my configuration import credentials as cr host = cr.set_logSMTP["host"] port = cr.set_logSMTP["port"] user = cr.set_logSMTP["user"] pwd = cr.set_logSMTP["pwd"] to = cr.set_logSMTP["to"] def smtp_at_your_own_leasure( mailhost, port, username, password, fromaddr, toaddrs, msg ): smtp = smtplib.SMTP(mailhost, port) if username: smtp.ehlo() # for tls add this line smtp.starttls() # for tls add this line smtp.ehlo() # for tls add this line smtp.login(username, password) smtp.sendmail(fromaddr, toaddrs, msg) smtp.quit() class ThreadedTlsSMTPHandler(logging.handlers.SMTPHandler): def emit(self, record): try: # import string # <<<CHANGE THIS>>> try: from email.utils import formatdate except ImportError: formatdate = self.date_time port = self.mailport if not port: port = smtplib.SMTP_PORT msg = self.format(record) msg = "From: %s\r\nTo: %s\r\nSubject: %s\r\nDate: %s\r\n\r\n%s" % ( self.fromaddr, ",".join(self.toaddrs), # <<<CHANGE THIS>>> self.getSubject(record), formatdate(), msg, ) thread = Thread( target=smtp_at_your_own_leasure, args=( self.mailhost, port, self.username, self.password, self.fromaddr, self.toaddrs, msg, ), ) thread.start() except (KeyboardInterrupt, SystemExit): raise except: self.handleError(record) # Test if __name__ == "__main__": logger = logging.getLogger() gm = ThreadedTlsSMTPHandler((host, port), user, to, "Error!:", (user, pwd)) gm.setLevel(logging.ERROR) logger.addHandler(gm) try: 1 / 0 except: logger.exception("Test ZeroDivisionError: division by zero") ```
998
10,647,045
When I try to use `ftp.delete()` from ftplib, it raises `error_perm`, resp: ``` >>> from ftplib import FTP >>> ftp = FTP("192.168.0.22") >>> ftp.login("user", "password") '230 Login successful.' >>> ftp.cwd("/Public/test/hello/will_i_be_deleted/") '250 Directory successfully changed.' >>> ftp.delete("/Public/test/hello/will_i_be_deleted/") ... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ftplib.py", line 520, in delete resp = self.sendcmd('DELE ' + filename) File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ftplib.py", line 243, in sendcmd return self.getresp() File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ftplib.py", line 218, in getresp raise error_perm, resp ftplib.error_perm: 550 Delete operation failed. ``` The directory exists, and "user" has sufficient permissions to delete the folder. The site is actually a NAS (WD MyBookWorld) that supports ftp. Changing to parent directory and using command `ftp.delete("will_i_be_deleted")` does not work either. "will\_i\_be\_deleted" is an empty directory. ftp settings for WD MyBookWorld: ``` Service - Enable; Enable Anonymous - No; Port (Default 21) - Default ```
2012/05/18
[ "https://Stackoverflow.com/questions/10647045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1402511/" ]
You need to use the `rmd` command, i.e. `ftp.rmd("/Public/test/hello/will_i_be_deleted/")`. `rmd` is for removing directories; `delete` is for removing files.
The only method that works for me is renaming with the `ftp.rename()` command, e.g. ``` ftp.mkd("/Public/Trash/") ftp.rename("/Public/test/hello/will_i_be_deleted","/Public/Trash/will_i_be_deleted") ``` and then manually deleting the contents of Trash from time to time. I do not know if this is a problem exclusive to the WD MyBookWorld's FTP capabilities or not, but at least I got a workaround.
1,008
56,581,237
How efficient is python (cpython I guess) when allocating resources for a newly created instance of a class? I have a situation where I will need to instantiate a node class millions of times to make a tree structure. Each of the node objects *should* be lightweight, just containing a few numbers and references to parent and child nodes. For example, will python need to allocate memory for all the "double underscore" properties of each instantiated object (e.g. the docstrings, `__dict__`, `__repr__`, `__class__`, etc, etc), either to create these properties individually or store pointers to where they are defined by the class? Or is it efficient and does not need to store anything except the custom stuff I defined that needs to be stored in each object?
2019/06/13
[ "https://Stackoverflow.com/questions/56581237", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1394763/" ]
Superficially it's quite simple: Methods, class variables, and the class docstring are stored in the class (function docstrings are stored in the function). Instance variables are stored in the instance. The instance also references the class so you can look up the methods. Typically all of them are stored in dictionaries (the `__dict__`). So yes, the short answer is: Python doesn't store methods in the instances, but all instances need to have a reference to the class. For example if you have a simple class like this: ```py class MyClass: def __init__(self): self.a = 1 self.b = 2 def __repr__(self): return f"{self.__class__.__name__}({self.a}, {self.b})" instance_1 = MyClass() instance_2 = MyClass() ``` Then in-memory it looks (very simplified) like this: [![enter image description here](https://i.stack.imgur.com/JhLI3.png)](https://i.stack.imgur.com/JhLI3.png) Going deeper ------------ However there are a few things that are important when going deeper in CPython: * Having a dictionary as abstraction leads to quite a bit of overhead: you need a reference to the instance dictionary (8 bytes), and each entry in the dictionary stores the hash (8 bytes), a pointer to a key (8 bytes) and a pointer to the stored attribute (another 8 bytes). Also dictionaries generally over-allocate so that adding another attribute doesn't trigger a dictionary resize. * Python doesn't have "value-types"; even an integer is an instance. That means an integer doesn't take just 4 bytes - Python needs (on my computer) 24 bytes to store the integer 0 and at least 28 bytes to store integers different from zero. However references to other objects just require 8 bytes (a pointer). * CPython uses reference counting so each instance needs a reference count (8 bytes). Also most of CPython's classes participate in the cyclic garbage collector, which incurs an overhead of another 24 bytes per instance.
In addition, instances of classes that can be weak-referenced (most of them) also have a `__weakref__` field (another 8 bytes). At this point it's also necessary to point out that CPython optimizes for a few of these "problems": * Python uses [Key-Sharing Dictionaries](https://www.python.org/dev/peps/pep-0412/) to avoid some of the memory overheads (hash and key) of instance dictionaries. * You can use `__slots__` in classes to avoid `__dict__` and `__weakref__`. This can give a significantly smaller memory footprint per instance. * Python interns some values, for example if you create a small integer it will not create a new integer instance but return a reference to an already existing instance. Given all that, and that several of these points (especially the points about optimizing) are implementation details, it's hard to give a canonical answer about the effective memory requirements of Python classes. Reducing the memory footprint of instances ------------------------------------------ However in case you want to reduce the memory footprint of your instances definitely give `__slots__` a try. They do have drawbacks but in case they don't apply to you they are a very good way to reduce memory. ```py class Slotted: __slots__ = ('a', 'b') def __init__(self): self.a = 1 self.b = 1 ``` If that's not enough and you operate with lots of "value types" you could also go a step further and create extension classes. These are classes that are defined in C but are wrapped so that you can use them in Python. For convenience I'm using the IPython bindings for Cython here to simulate an extension class: ```py %load_ext cython ``` ```py %%cython cdef class Extensioned: cdef long long a cdef long long b def __init__(self): self.a = 1 self.b = 1 ``` Measuring the memory usage -------------------------- The remaining interesting question after all this theory is: How can we measure the memory?
I also use a normal class: ```py class Dicted: def __init__(self): self.a = 1 self.b = 1 ``` I'm generally using [`psutil`](https://psutil.readthedocs.io/en/latest/) (even though it's a proxy method) for measuring memory impact, and simply measure how much memory was used before and after. The measurements are a bit offset because I need to keep the instances in memory somehow, otherwise the memory would be reclaimed (immediately). Also this is only an approximation because Python actually does quite a bit of memory housekeeping, especially when there are lots of creates/deletes. ```py import os import psutil process = psutil.Process(os.getpid()) runs = 10 instances = 100_000 memory_dicted = [0] * runs memory_slotted = [0] * runs memory_extensioned = [0] * runs for run_index in range(runs): for store, cls in [(memory_dicted, Dicted), (memory_slotted, Slotted), (memory_extensioned, Extensioned)]: before = process.memory_info().rss l = [cls() for _ in range(instances)] store[run_index] = process.memory_info().rss - before l.clear() # reclaim memory for instances immediately ``` The memory will not be exactly identical for each run because Python re-uses some memory and sometimes also keeps memory around for other purposes, but it should at least give a reasonable hint: ``` >>> min(memory_dicted) / 1024**2, min(memory_slotted) / 1024**2, min(memory_extensioned) / 1024**2 (15.625, 5.3359375, 2.7265625) ``` I used `min` here mostly because I was interested in what the minimum was, and I divided by `1024**2` to convert the bytes to megabytes. Summary: As expected the normal class with dict will need more memory than classes with slots, but extension classes (if applicable and available) can have an even lower memory footprint. Another tool that could be very handy for measuring memory usage is [`memory_profiler`](https://pypi.org/project/memory-profiler/), although I haven't used it in a while.
*[edit] It is not easy to get an accurate measurement of memory usage by a python process; **I don't think my answer completely answers the question**, but it is one approach that may be useful in some cases.* *Most approaches use proxy methods (create n objects and estimate the impact on the system memory), and external libraries attempting to wrap those methods. For instance, threads can be found [here](https://stackoverflow.com/questions/9850995/tracking-maximum-memory-usage-by-a-python-function), [here](https://stackoverflow.com/questions/552744/how-do-i-profile-memory-usage-in-python), and [there](https://stackoverflow.com/questions/110259/which-python-memory-profiler-is-recommended) [/edit]* On `cPython 3.7`, The minimum size of a regular class instance is 56 bytes; with `__slots__` (no dictionary), 16 bytes. ``` import sys class A: pass class B: __slots__ = () pass a = A() b = B() sys.getsizeof(a), sys.getsizeof(b) ``` ### output: ``` 56, 16 ``` Docstrings, class variables, & type annotations are not found at the instance level: ``` import sys class A: """regular class""" a: int = 12 class B: """slotted class""" b: int = 12 __slots__ = () a = A() b = B() sys.getsizeof(a), sys.getsizeof(b) ``` ### output: ``` 56, 16 ``` *[edit ]In addition, see [@LiuXiMin answer](https://stackoverflow.com/questions/56581237/what-resources-does-an-instance-of-a-class-use/56598070#56598070) for **a measure of the size of the class definition**. [/edit]*
1,010
13,876,441
Hi, I'm using the latest version (1.2.0) of matplotlib distributed with macports. I run into an AssertionError (I guess stemming from an internal check) running this code ```
#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt

X, Y = np.meshgrid(np.arange(0, 2*np.pi, .2), np.arange(0, 2*np.pi, .2))
U = np.cos(X)
V = np.sin(Y)

Q = plt.quiver(U, V)
plt.quiverkey(Q, 0.5, .9, 1., 'Label')
plt.gca().add_patch(plt.Circle((10, 10), 1))
plt.savefig('test.pdf')
``` Three parts of this code are required for me to reproduce the error: 1. The quiver plot has to have a key created with quiverkey 2. I have to add an additional patch to the current axes 3. I have to save the figure as a PDF (I can display it just fine) The bug is not dependent on the backend. The traceback I get reads ```
Traceback (most recent call last):
  File "./test_quiver.py", line 15, in <module>
    plt.savefig('test.pdf')
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/pyplot.py", line 472, in savefig
    return fig.savefig(*args, **kwargs)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/figure.py", line 1363, in savefig
    self.canvas.print_figure(*args, **kwargs)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/backend_bases.py", line 2093, in print_figure
    **kwargs)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/backend_bases.py", line 1845, in print_pdf
    return pdf.print_pdf(*args, **kwargs)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/backends/backend_pdf.py", line 2301, in print_pdf
    self.figure.draw(renderer)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/artist.py", line 54, in draw_wrapper
    draw(artist, renderer, *args, **kwargs)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/figure.py", line 999, in draw
    func(*args)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/artist.py", line 54, in draw_wrapper
    draw(artist, renderer, *args, **kwargs)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/axes.py", line 2086, in draw
    a.draw(renderer)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/artist.py", line 54, in draw_wrapper
    draw(artist, renderer, *args, **kwargs)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/quiver.py", line 306, in draw
    self.vector.draw(renderer)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/artist.py", line 54, in draw_wrapper
    draw(artist, renderer, *args, **kwargs)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/collections.py", line 755, in draw
    return Collection.draw(self, renderer)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/artist.py", line 54, in draw_wrapper
    draw(artist, renderer, *args, **kwargs)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/collections.py", line 259, in draw
    self._offset_position)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/backends/backend_pdf.py", line 1548, in draw_path_collection
    output(*self.gc.pop())
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/backends/backend_pdf.py", line 2093, in pop
    assert self.parent is not None
AssertionError
``` In case it's important: I'm on Mac OS X 10.7.5, using python 2.7.3 and matplotlib 1.2.0. Do you also get this error? Is it a bug in matplotlib? Is it system dependent? Is there some workaround?
2012/12/14
[ "https://Stackoverflow.com/questions/13876441", "https://Stackoverflow.com", "https://Stackoverflow.com/users/932593/" ]
You can save as eps or svg and convert to pdf. I found that the best way to produce small pdf files is to save as eps in matplotlib and then use epstopdf. svg also works fine, you can use Inkscape to convert to pdf. A side-effect of svg is that the text is converted to paths (no embedded fonts), which might be desirable in some circumstances.
The matplotlib (v 1.2.1) distributed with Ubuntu 13.04 (raring) also has this bug. I don't know if it's still a problem in newer versions. Another workaround (seems to work for me) is to completely delete the `draw_path_collection` function in `.../matplotlib/backends/backend_pdf.py`.
1,013
892,196
How can I build an SMTP client in Python which can send mail, and also show that the mail has been received through a mail service, for example Gmail?
2009/05/21
[ "https://Stackoverflow.com/questions/892196", "https://Stackoverflow.com", "https://Stackoverflow.com/users/105167/" ]
If you want the Python standard library to do the work for you (recommended!), use [smtplib](http://docs.python.org/library/smtplib.html). To see whether sending the mail worked, just open your inbox ;) If you want to implement the protocol yourself (is this homework?), then read up on the [SMTP protocol](http://www.ietf.org/rfc/rfc0821.txt) and use e.g. the [socket](http://docs.python.org/library/socket.html) module.
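To make the smtplib recommendation above concrete, here is a minimal sketch in modern Python 3. All addresses, the host, and the port are placeholders (assumptions, not from the original answer), and message construction is kept separate from sending so the message can be inspected without a server:

```python
import smtplib
from email.message import EmailMessage


def build_message(sender, recipient, subject, body):
    """Build a simple RFC 5322 message object."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg


def send_message(msg, host="localhost", port=25):
    """Hand the message to an SMTP server; raises on refused recipients."""
    with smtplib.SMTP(host, port) as server:
        server.send_message(msg)


msg = build_message("me@example.com", "you@example.com", "Hello", "Test body")
print(msg["Subject"])  # Hello
```

In a real script you would call `send_message(msg, host="smtp.example.com", port=587)` after authenticating, but the exact server settings depend on your provider.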
Depends what you mean by "received". It's possible to verify "delivery" of a message to a server but there is no 100% reliable guarantee it actually ended up in a mailbox. smtplib will throw an exception on certain conditions (like the remote end reporting user not found) but just as often the remote end will accept the mail and then either filter it or send a bounce notice at a later time.
1,014
7,230,621
I'm trying to find a method to iterate over a packed variadic template argument list. Now as with all iterations, you need some sort of method of knowing how many arguments are in the packed list, and more importantly how to individually get data from a packed argument list. The general idea is to iterate over the list, store all data of type int into a vector, store all data of type char\* into a vector, and store all data of type float into a vector. During this process there also needs to be a separate vector that stores individual chars recording what order the arguments went in. As an example, when you push\_back(a\_float), you're also doing a push\_back('f') which is simply storing an individual char to know the order of the data. I could also use a std::string here and simply use +=. The vector was just used as an example. Now the way the thing is designed, the function itself is constructed using a macro; despite the evil intentions, it's required, as this is an experiment. So it's literally impossible to use a recursive call, since the actual implementation that will house all this will be expanded at compile time, and you cannot recurse into a macro. Despite all possible attempts, I'm still stuck at figuring out how to actually do this. So instead I'm using a more convoluted method that involves constructing a type, passing that type into the variadic template, expanding it inside a vector and then simply iterating that. However I do not want to have to call the function like: ``` foo(arg(1), arg(2.0f), arg("three")); ``` So the real question is how can I do without it? To give you guys a better understanding of what the code is actually doing, I've pasted the optimistic approach that I'm currently using.
```
struct any {
    void do_i(int e)   { INT = e; }
    void do_f(float e) { FLOAT = e; }
    void do_s(char* e) { STRING = e; }

    int INT;
    float FLOAT;
    char *STRING;
};

template<typename T> struct get { T operator()(const any& t) { return T(); } };
template<> struct get<int>   { int   operator()(const any& t) { return t.INT; } };
template<> struct get<float> { float operator()(const any& t) { return t.FLOAT; } };
template<> struct get<char*> { char* operator()(const any& t) { return t.STRING; } };

#define def(name) \
    template<typename... T> \
    auto name (T... argv) -> any { \
        std::initializer_list<any> argin = { argv... }; \
        std::vector<any> args = argin;

#define get(name,T) get<T>()(args[name])
#define end }

any arg(int a)   { any arg; arg.INT = a; return arg; }
any arg(float f) { any arg; arg.FLOAT = f; return arg; }
any arg(char* s) { any arg; arg.STRING = s; return arg; }
``` I know this is nasty, however it's a pure experiment, and will not be used in production code. It's purely an idea. It could probably be done a better way. But an example of how you would use this system: ```
def(foo)
    int data = get(0, int);
    std::cout << data << std::endl;
end
``` looks a lot like Python. It works too, but the only problem is how you call this function. Here's a quick example: ```
foo(arg(1000));
``` I'm required to construct a new any type, which is hardly aesthetic, but that's not to say those macros are either. That aside, I just want the option of doing: foo(1000); I know it can be done, I just need some sort of iteration method, or more importantly some std::get method for packed variadic template argument lists. Which I'm sure can be done. Also to note, I'm well aware that this is not exactly type friendly, as I'm only supporting int, float, char\* and that's okay with me. I'm not requiring anything else, and I'll add checks using type\_traits to validate that the arguments passed are indeed the correct ones, to produce a compile time error if data is incorrect.
This is purely not an issue. I also don't need support for anything other than these POD types. It would be highly appreciated if I could get some constructive help, as opposed to arguments about my purely illogical and stupid use of macros and POD-only types. I'm well aware of how fragile and broken the code is. This is merely an experiment, and I can later rectify issues with non-POD data, and make it more type-safe and usable. Thanks for your understanding, and I'm looking forward to help.
2011/08/29
[ "https://Stackoverflow.com/questions/7230621", "https://Stackoverflow.com", "https://Stackoverflow.com/users/658162/" ]
There is no specific feature for it right now but there are some workarounds you can use. Using initialization list ========================= One workaround uses the fact, that subexpressions of [initialization lists](http://en.cppreference.com/w/cpp/language/list_initialization) are evaluated in order. `int a[] = {get1(), get2()}` will execute `get1` before executing `get2`. Maybe [fold expressions](http://en.cppreference.com/w/cpp/language/fold) will come handy for similar techniques in the future. To call `do()` on every argument, you can do something like this: ```
template <class... Args>
void doSomething(Args... args) {
    int x[] = {args.do()...};
}
``` However, this will only work when `do()` is returning an `int`. You can use the [comma operator](http://en.cppreference.com/w/cpp/language/operator_other#Built-in_comma_operator) to support operations which do not return a proper value. ```
template <class... Args>
void doSomething(Args... args) {
    int x[] = {(args.do(), 0)...};
}
``` To do more complex things, you can put them in another function: ```
template <class Arg>
void process(Arg arg, int &someOtherData) {
    // You can do something with arg here.
}

template <class... Args>
void doSomething(Args... args) {
    int someOtherData;
    int x[] = {(process(args, someOtherData), 0)...};
}
``` Note that with generic [lambdas](http://en.cppreference.com/w/cpp/language/lambda) (C++14), you can define a function to do this boilerplate for you. ```
template <class F, class... Args>
void do_for(F f, Args... args) {
    int x[] = {(f(args), 0)...};
}

template <class... Args>
void doSomething(Args... args) {
    do_for([&](auto arg) {
        // You can do something with arg here.
    }, args...);
}
``` Using recursion =============== Another possibility is to use recursion. Here is a small example that defines a similar function `do_for` as above. ```
template <class F, class First, class... Rest>
void do_for(F f, First first, Rest... rest) {
    f(first);
    do_for(f, rest...);
}

template <class F>
void do_for(F f) {
    // Parameter pack is empty.
}

template <class... Args>
void doSomething(Args... args) {
    do_for([&](auto arg) {
        // You can do something with arg here.
    }, args...);
}
```
You can use multiple variadic templates. This is a bit messy, but it works and is easy to understand. You simply have a function with the variadic template like so: ```
template <typename ...ArgsType >
void function(ArgsType... Args){
    helperFunction(Args...);
}
``` And a helper function like so: ```
void helperFunction() {}

template <typename T, typename ...ArgsType >
void helperFunction(T t, ArgsType... Args) {
    //do what you want with t
    function(Args...);
}
``` Now when you call "function", "helperFunction" will be called and isolate the first passed parameter from the rest; this variable can be used to call another function (or something). Then "function" will be called again and again until there are no more variables left. Note you might have to declare helperFunction before "function". The final code will look like this: ```
void helperFunction();

template <typename T, typename ...ArgsType >
void helperFunction(T t, ArgsType... Args);

template <typename ...ArgsType >
void function(ArgsType... Args){
    helperFunction(Args...);
}

void helperFunction() {}

template <typename T, typename ...ArgsType >
void helperFunction(T t, ArgsType... Args) {
    //do what you want with t
    function(Args...);
}
``` The code is not tested.
1,017
13,736,191
I'm stuck on this [exercise](http://www.codecademy.com/courses/python-beginner-en-qzsCL/0?curriculum_id=4f89dab3d788890003000096#!/exercises/3). It's asking me to print out from the list that has dictionaries in it, but I don't even know how to. I can't find how to do something like this on Google...
2012/12/06
[ "https://Stackoverflow.com/questions/13736191", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1260452/" ]
This is a more efficient answer that I found: ```
for persons in students:
    for grades in persons:
        print persons[grades]
```
Next time you should specify more carefully what your question is. Below is the loop that the exercise is asking for. ```
for s in students:
    print s['name']
    print s['homework']
    print s['quizzes']
    print s['tests']
```
1,027
29,384,129
Basically I have a matrix in Python, 'example' (although much larger). I need to produce the array 'example\_what\_I\_want' with some Python code. I guess a for loop is in order — but how can I do this? ```
example= [1,2,3,4,5],
[6,7,8,9,10],
[11,12,13,14,15],
[16,17,18,19,20],
[21,22,23,24,25]

example_what_I_want = [25,24,23,22,21],
[16,17,18,19,20],
[15,14,13,12,11],
[6,7,8,9,10],
[5,4,3,2,1]
``` So it increments in a kind of snake fashion. And the first row must be reversed, and then follow that pattern. Thanks!
2015/04/01
[ "https://Stackoverflow.com/questions/29384129", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3285950/" ]
I'm assuming `example` is actually: ```
example = [[1,2,3,4,5],
           [6,7,8,9,10],
           [11,12,13,14,15],
           [16,17,18,19,20],
           [21,22,23,24,25]]
``` In which case you could do: ```
swapped_example = [sublst if idx%2 else sublst[::-1] for idx,sublst in enumerate(example)][::-1]
``` Which will give you: ```
In [5]: swapped_example
Out[5]:
[[25, 24, 23, 22, 21],
 [16, 17, 18, 19, 20],
 [15, 14, 13, 12, 11],
 [6, 7, 8, 9, 10],
 [5, 4, 3, 2, 1]]
```
Or, you can use iter. ```
a = [[1,2,3,4,5],
     [6,7,8,9,10],
     [11,12,13,14,15],
     [16,17,18,19,20],
     [21,22,23,24,25]]

b = []
rev_a = iter(a[::-1])
while rev_a:
    try:
        b.append(rev_a.next()[::-1])
        b.append(rev_a.next())
    except StopIteration:
        break
print b
``` Modified (Did not know that earlier. @Adam), ```
a = iter(a)
while a:
    try:
        b.insert(0, a.next()[::-1])
        b.insert(0, a.next())
    except StopIteration:
        break
print b[::-1]
```
1,028
61,882,136
Working on a coding problem that I think I have licked, and I'm getting the correct values for. However, the test conditions are looking for the values formatted in a different way, and I'm failing at figuring out how to format the return in my function correctly. I get the values correctly when looking for the answer: [4, 15, 7, 19, 1, 20, 13, 9, 4, 14, 9, 7, 8, 20] — but the test condition expects the result to equal '20 8 5 14 1 18 23 8 1 12 2 1 3 15 14 19 1 20 13 9 4 14 9 7 8 20', and for the life of me I haven't been able to figure this out yet. View the original problem here: <https://www.codewars.com/kata/546f922b54af40e1e90001da/train/python> Still very new to Python, but tackling these problems best I can. Code may be ugly, but it's mine =D EDIT: I am looking for a way to reformat my return statement as a string instead of a list of integers. Thanks for the help in advance! Any help is appreciated, even how to post better questions here. Koruptedkernel. ```
import string

def alphabet_position(positions):
    #Declaring final position list.
    position = []
    #Stripping punctuation from the passed string.
    out1 = positions.translate(str.maketrans("","", string.punctuation))
    #Stripping digits from the passed string.
    out = out1.translate(str.maketrans("","", string.digits))
    #Removing Spaces from the passed string.
    outter = out.replace(" ","")
    #reducing to lowercase.
    mod_text = str.lower(outter)
    #For loop to iterate through alphabet and report index location to position list.
    for letter in mod_text:
        #Declare list of letters (lower) in the alphabet (US).
        alphabet = list('abcdefghijklmnopqrstuvwxyz')
        position.append(alphabet.index(letter) + 1)
    return(position)

#Call the function with text.
alphabet_position("The sunset sets at twelve o'clock.")
```
2020/05/19
[ "https://Stackoverflow.com/questions/61882136", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13571383/" ]
The `try-except` indentation creates the problem: ```
def myAlarm():
    try:
        myTime = list(map(int, input("Enter time in hr min sec\n").split()))
        if len(mytime) == 3:
            total_secounds = myTime[0]*60*60+myTime[1]*60+myTime[2]
            time.sleep(total_secounds)
            frequency = 2500
            duration = 10
            winsound.Beep(frequency, duration)
        else:
            print("Please enter time in correct format as mentioned\n")
            myAlarm()
    except Exception as e:  # <--- spelling corrected from exept to except
        print("This is the exception\n ", e, "So!, please enter correct details")
        myAlarm()
```
There were three mistakes in your code: 1. Indentation of the try-except block 2. Spelling of except 3. At one place you have used myTime and at another mytime. ```
import time
import winsound

print("Made by Ethan")

def myAlarm():
    try:
        myTime = list(map(int, input("Enter time in hr min sec\n").split()))
        if len(myTime) == 3:
            total_secounds = myTime[0]*60*60+myTime[1]*60+myTime[2]
            time.sleep(total_secounds)
            frequency = 2500
            duration = 10
            winsound.Beep(frequency, duration)
        else:
            print("Please enter time in correct format as mentioned\n")
            myAlarm()
    except Exception as e:
        print("This is the exception\n ", e, "So!, please enter correct details")
        myAlarm()

myAlarm()
```
1,029
39,026,950
I'm trying to set up a django app that connects to a remote MySQL db. I currently have Django==1.10 and MySQL-python==1.2.5 installed in my venv. In settings.py I have added the following to the DATABASES variable: ```
'default': {
    'ENGINE': 'django.db.backends.mysql',
    'NAME': 'db_name',
    'USER': 'db_user',
    'PASSWORD': 'db_password',
    'HOST': 'db_host',
    'PORT': 'db_port',
}
``` I get the error ```
from django.contrib.sites.models import RequestSite
``` when I run python manage.py migrate. I am a complete beginner when it comes to django. Is there some step I am missing? edit: I have also installed mysql-connector-c via brew install edit2: realized I just need to connect to a db by importing MySQLdb into a file. sorry for the misunderstanding.
2016/08/18
[ "https://Stackoverflow.com/questions/39026950", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3861295/" ]
The error you're seeing has nothing to do with your database settings (assuming your real code has the actual database name, username, and password) or connection. You are not importing the RequestSite from the correct spot. Change (wherever you have this set) from: ``` from django.contrib.sites.models import RequestSite ``` to: ``` from django.contrib.sites.requests import RequestSite ```
This is the documentation you should look at for connecting to databases and how Django does it. From what you gave us, as long as your parameters are correct you should be connecting to the database, but a good confirmation of this would be the inspectdb tool: <https://docs.djangoproject.com/en/1.10/ref/databases/> This shows you how to import your models from the SQL database: <https://docs.djangoproject.com/en/1.10/howto/legacy-databases/> To answer your question in the comments, quoting the docs: > Auto-generate the models: Django comes with a utility called inspectdb that can create models by introspecting an existing database. You can view the output by running this command: ```
python manage.py inspectdb
``` Save this as a file by using standard Unix output redirection: ```
python manage.py inspectdb > models.py
```
1,034
50,322,384
I am new to machine learning and the sklearn package. When trying to import sklearn, I am getting an error saying it cannot find a DLL. I installed sklearn through pip, have un-installed everything including python and re-installed it all, and still am having the same issue. Only one version of python is installed on this machine. I am running python 3.6.1 and have visual studio 2017 community installed as well. All packages are up to date. The traceback is as follows (removed username from all the paths). Code being run: ```
import numpy as np
from sklearn import cross_validation, neighbors
import pandas as pd

Traceback (most recent call last):
  File "C:/Users/Public/Documents/Machine learning project/Classification/KNN.py", line 2, in <module>
    from sklearn import cross_validation, neighbors
  File "C:\Users\\AppData\Roaming\Python\Python36\site-packages\sklearn\__init__.py", line 134, in <module>
    from .base import clone
  File "C:\Users\\AppData\Roaming\Python\Python36\site-packages\sklearn\base.py", line 11, in <module>
    from scipy import sparse
  File "C:\Users\\AppData\Roaming\Python\Python36\site-packages\scipy\sparse\__init__.py", line 229, in <module>
    from .csr import *
  File "C:\Users\\AppData\Roaming\Python\Python36\site-packages\scipy\sparse\csr.py", line 15, in <module>
    from ._sparsetools import csr_tocsc, csr_tobsr, csr_count_blocks, get_csr_submatrix, csr_sample_values
ImportError: DLL load failed: %1 is not a valid Win32 application.
```
2018/05/14
[ "https://Stackoverflow.com/questions/50322384", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9785849/" ]
Check the version of python that you are using. Is it 64 bit or 32 bit? The only time I have seen that error is when there was a mismatch between the package type and the Python version. If there is nothing wrong there you can try the following: ```
import imp
imp.find_module("sklearn")
``` This will tell you exactly what is being loaded and the path it is being loaded from. If that is loading the correct package, I'd say try and install the package binary manually instead of going through pip. However I did just test it and saw it working on my system.
Although it is difficult to guess the issue you are experiencing based on what is provided, try the following: ```
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
```
1,035
54,717,221
I'm trying to write a Python operator in an airflow DAG and pass certain parameters to the Python callable. My code looks like below. ```
def my_sleeping_function(threshold):
    print(threshold)

fmfdependency = PythonOperator(
    task_id='poke_check',
    python_callable=my_sleeping_function,
    provide_context=True,
    op_kwargs={'threshold': 100},
    dag=dag)

end = BatchEndOperator(
    queue=QUEUE,
    dag=dag)

start.set_downstream(fmfdependency)
fmfdependency.set_downstream(end)
``` But I keep getting the below error. > > TypeError: my\_sleeping\_function() got an unexpected keyword argument 'dag\_run' > > > Not able to figure out why.
2019/02/15
[ "https://Stackoverflow.com/questions/54717221", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4017926/" ]
Add `**kwargs` to your Python callable's parameter list, after your `threshold` param.
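A sketch of what that one-line fix means for the callable itself: with `provide_context=True`, Airflow passes the task context keys (such as `dag_run`) as extra keyword arguments, so the signature needs a catch-all. Airflow itself is not imported here; the context dict below is a stand-in I made up to simulate the call:

```python
def my_sleeping_function(threshold, **kwargs):
    """The op_kwargs entry arrives by name; context keys land in **kwargs."""
    return threshold, sorted(kwargs)


# Airflow would merge op_kwargs with the task context and call roughly:
simulated_context = {"dag_run": "<DagRun object>", "ds": "2019-02-15"}
result = my_sleeping_function(threshold=100, **simulated_context)
print(result)  # (100, ['dag_run', 'ds'])
```

Without the `**kwargs`, the extra `dag_run=...` keyword has nowhere to go, which is exactly the `TypeError` in the question.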
This is how you can pass arguments for a Python operator in Airflow. ```
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.python_operator import PythonOperator
from time import sleep
from datetime import datetime

def my_func(*op_args):
    print(op_args)
    return op_args[0]

with DAG('python_dag', description='Python DAG',
         schedule_interval='*/5 * * * *',
         start_date=datetime(2018, 11, 1),
         catchup=False) as dag:
    dummy_task = DummyOperator(task_id='dummy_task', retries=3)
    python_task = PythonOperator(task_id='python_task',
                                 python_callable=my_func,
                                 op_args=['one', 'two', 'three'])

    dummy_task >> python_task
```
1,038
55,052,883
I may need help phrasing this question better. I'm writing an async api interface, via python3.7, & with a class (called `Worker()`). `Worker` has a few blocking methods I want to run using `loop.run_in_executor()`. I'd like to build a decorator I can just add above all of the non-`async` methods in `Worker`, but I keep running into problems. I am being told that I need to `await` `wraps()` in the decorator below: ```py
def run_method_in_executor(func, *, loop=None):
    async def wraps(*args):
        _loop = loop if loop is not None else asyncio.get_event_loop()
        return await _loop.run_in_executor(executor=None, func=func, *args)
    return wraps
``` which throws back: `RuntimeWarning: coroutine 'run_method_in_executor.<locals>.wraps' was never awaited` I'm not seeing how I could properly `await` `wraps()` since the containing function & decorated functions aren't asynchronous. Not sure if this is due to misunderstanding `asyncio`, or misunderstanding decorators. Any help (or help clarifying) would be greatly appreciated!
2019/03/07
[ "https://Stackoverflow.com/questions/55052883", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6293857/" ]
Not\_a\_Golfer answered my question in the comments. Changing the inner `wraps()` function from a coroutine into a generator solved the problem: ```py def run_method_in_executor(func, *, loop=None): def wraps(*args): _loop = loop if loop is not None else asyncio.get_event_loop() yield _loop.run_in_executor(executor=None, func=func, *args) return wraps ``` **Edit:** This has been really useful for IO, but I haven't figured out how to `await` the yielded executor function, which means it *will* create a race condition if I'm relying on a decorated function to update some value used by any of my other async functions.
Here is a complete example for Python 3.6+ which does not use interfaces deprecated by 3.8. Returning the value of `loop.run_in_executor` effectively converts the wrapped function to an *awaitable* which executes in a thread, so you can `await` its completion. ```py
#!/usr/bin/env python3
import asyncio
import functools
import time


def run_in_executor(_func):
    @functools.wraps(_func)
    def wrapped(*args, **kwargs):
        loop = asyncio.get_event_loop()
        func = functools.partial(_func, *args, **kwargs)
        return loop.run_in_executor(executor=None, func=func)

    return wrapped


@run_in_executor
def say(text=None):
    """Block, then print."""
    time.sleep(1.0)
    print(f'say {text} at {time.monotonic():.3f}')


async def main():
    print(f'beginning at {time.monotonic():.3f}')
    await asyncio.gather(say('asdf'), say('hjkl'))
    await say(text='foo')
    await say(text='bar')


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
``` ```none
beginning at 3461039.617
say asdf at 3461040.618
say hjkl at 3461040.618
say foo at 3461041.620
say bar at 3461042.621
```
1,039
40,535,066
I'm trying to override a python class (first time doing this), and I can't seem to override this method. When I run this, my recv method doesn't run. It runs the superclass's method instead. What am I doing wrong here? (This is python 2.7 by the way.) ```
import socket

class PersistentSocket(socket.socket):
    def recv(self, count):
        print("test")
        return super(self.__class__, self).recv(count)

if __name__ == '__main__':
    s = PersistentSocket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(('localhost', 2300))
    print(s.recv(1))
```
2016/11/10
[ "https://Stackoverflow.com/questions/40535066", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2100448/" ]
The socket type (officially `socket.SocketType`, though `socket.socket` happens to be the same object) makes the strange choice of implementing `recv` and a few other methods as instance attributes, rather than as normal methods in the class dict. In `socket.SocketType.__init__`, it sets a `self.recv` instance attribute that overrides the `recv` method you tried to define.
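The shadowing mechanism described above is plain Python: an attribute set on the instance in `__init__` wins over a same-named method in a subclass's class dict. A minimal stand-in, deliberately not using the socket module (whose internals differ across Python versions):

```python
class Base:
    def __init__(self):
        # Like Python 2's socket.SocketType: bind the "method" per instance.
        self.recv = lambda count: "base recv"


class Child(Base):
    def recv(self, count):
        # Never reached through normal lookup: the instance attribute
        # set in Base.__init__ shadows this class-level method.
        return "child recv"


c = Child()
print(c.recv(1))         # "base recv"  -- instance attribute wins
print(Child.recv(c, 1))  # "child recv" -- explicit class lookup bypasses it
```

This is why deleting the instance attribute first (as the next answer suggests) makes the subclass method visible again.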
Picking up on the explanation from @user2357112, one thing that seems to have helped is to do a `delattr(self, 'recv')` in the class constructor (inheriting from `socket.SocketType`) and then define your own `recv` method; for example: ```
class PersistentSocket(socket.SocketType):
    def __init__(self):
        """As usual."""
        delattr(self, 'recv')

    def recv(self, buffersize=1024, flags=0):
        """Your own implementation here."""
        return None
```
1,042
37,598,337
I want to apply a fee to an amount according to this scale: ```
AMOUNT       FEE
-------      ---
0            24.04 €
6010.12      0.00450
30050.61     0.00150
60101.21     0.00100
150253.03    0.00050
601012.11    0.00030
``` From 0 to 6010.13 € there is a fixed fee of 24.04 €. My code: ```
def fee(amount):
    scale = [[0, 24.04],
             [6010.12, 0.00450],
             [30050.61, 0.00150],
             [60101.21, 0.00100],
             [150253.03, 0.00050],
             [601012.11, 0.00030]]
    if amount <= scale[1][0]:
        fee = scale[0][1]
    else:
        for i in range(0, 5):
            if amount >= scale[i][0] and amount < scale[i+1][0]:
                fee = amount * scale[i][1]
                break
    return fee

print(fee(601012.12))
``` This code works fine from 0 € to 601012.11 €, but for 601012.12 € or greater it fails. > > > ``` > return fee > UnboundLocalError: local variable 'fee' referenced before assignment > > ``` > > I suppose that the problem is here: `amount < scale[i+1][0]` — when i=4 the `fee` variable isn't assigned. Is there a more Pythonic method to select the range limits of a scale?
2016/06/02
[ "https://Stackoverflow.com/questions/37598337", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3160820/" ]
I think a better way would be to use a while loop to check if `amount` is less than `scale[i+1][0]`, so that you may just use `scale[i][1]`. And also add an else to handle anything greater than the last bound in `scale`.
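A sketch of that suggestion using the asker's scale; the final bracket is handled after the loop so `fee` is always assigned. The bracket boundaries reuse the asker's original `<=`/`>=` choices, which is an assumption about the intended fee schedule:

```python
def fee(amount):
    scale = [[0, 24.04], [6010.12, 0.00450], [30050.61, 0.00150],
             [60101.21, 0.00100], [150253.03, 0.00050], [601012.11, 0.00030]]
    if amount <= scale[1][0]:
        return scale[0][1]  # fixed fee for the first bracket
    i = 1
    # Walk up while the amount still clears the next bracket's lower bound.
    while i < len(scale) - 1 and amount >= scale[i + 1][0]:
        i += 1
    # i now points at the matching bracket, including the open-ended top one,
    # so amounts of 601012.12 € and above no longer raise UnboundLocalError.
    return amount * scale[i][1]


print(fee(601012.12))
```

`fee(601012.12)` now returns `601012.12 * 0.00030` instead of raising.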
You aren't handling that one case. You should just add another `if` statement outside the loop (similar to the `if amount <= scale[1][0]` you used, because values in both of these ranges are not handled by the loop): ```
if amount >= scale[len(scale) - 1][0]:
    fee = amount * scale[len(scale) - 1][1]
``` Btw, there's a little inconsistency with the `<=` for the first `if`, but `>=` and `<` for the second, in the loop. I doubt I can make better than this: ```
for i in range(-1, len(scale) - 1):
    if((i == -1 or amount >= scale[i][0])
       and (i == len(scale) or amount < scale[i + 1][0])):
        fee = amount * scale[i][1]
return fee
``` `scale[-1][0]` and `scale[len(scale)][0]` cannot be executed, so they're short-circuited.
1,043
60,011,277
New to coding and running through the exercises in Python Crash Course version 2. One of the exercises involves creating a file called "remember\_me.py" and as far as I can tell, I'm entering the code as it exists in the book almost verbatim, but getting an error: ```
"""Exercise for Python Crash Course."""
import json

#Load the username, if it has been stored previously.
#Otherwise, prompt for the username and store it.
filename = 'username.json'
try:
    with open(filename) as f:
        username = json.load(f)
except FileNotFoundError:
    username = input("What is your name?\n")
    with open(filename, 'w') as f:
        json.dump(username, f)
        print(f"We'll remember you when you come back, {username}!")
else:
    print(f"Welcome back, {username}!")
``` Whenever I try to run it, I get the following traceback (I replaced the username with "me" here): ```
Traceback (most recent call last):
  File "C:\Users\me\Desktop\python_work\remember_me.py", line 9, in <module>
    username = json.load(f)
  File "C:\Users\me\AppData\Local\Programs\Python\Python38-32\lib\json\__init__.py", line 293, in load
    return loads(fp.read(),
  File "C:\Users\me\AppData\Local\Programs\Python\Python38-32\lib\json\__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "C:\Users\me\AppData\Local\Programs\Python\Python38-32\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\me\AppData\Local\Programs\Python\Python38-32\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
``` Any ideas what I'm doing wrong? I've tried just starting from scratch and re-typing, but apparently I'm making the same error!
2020/01/31
[ "https://Stackoverflow.com/questions/60011277", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12821839/" ]
```
import json

filename = 'username.json'

try:
    with open(filename) as f:
        username = json.load(f)
except FileNotFoundError:
    username = input("What is your name? ")
    with open(filename, 'w') as f:
        json.dump(username, f)
        print(f"We'll remember you when you come back, {username}!")
except json.decoder.JSONDecodeError:
    # This error is for when the json file is EMPTY.
    username = input("What is your name? ")
    with open(filename, 'w') as f:
        json.dump(username, f)
        print(f"We'll remember you when you come back, {username}!")
else:
    print(f"Welcome back, {username}!")
```
```
def get_stored_number():
    try:
        with open(filename) as f:
            username = json.load(f)
    except FileNotFoundError:
        return None
    except json.decoder.JSONDecodeError:
        return None
    else:
        return username

get_stored_number()
```

This solves the problem by adding an `except` clause for a blank `username.json` file.
1,046
5,238,785
In Ruby, if I have an object `obj` with a method called `funcname`, I can call the method using the following syntax: `obj.send(funcname)`. Is there something similar in Python? The reason I want to do this is that I have a switch statement where I set the funcname, and I want to call it at the end of the switch statement.
2011/03/08
[ "https://Stackoverflow.com/questions/5238785", "https://Stackoverflow.com", "https://Stackoverflow.com/users/650556/" ]
```
getattr(obj, "name")(args)
```
hmmm... `getattr(obj, funcname)(*args, **kwargs)`?

```
>>> s = "Abc"
>>> s.upper()
'ABC'
>>> getattr(s, "upper")()
'ABC'
>>> getattr(s, "lower")()
'abc'
```
1,047
47,608,612
Whilst I was learning pygame, I stumbled across a line of code that I did not understand:

```
if y == 0 or y == height-1:
    var1 *= -1
```

I understand what if statements are in python and the usage of logic gates; what I don't understand is the small statement after the if statement: "var1 *= -1". Can someone explain this syntax? I do not understand the code and thought it would return a syntax error if we typed anything beyond a colon.
2017/12/02
[ "https://Stackoverflow.com/questions/47608612", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8687842/" ]
Actually there is no rule that you cannot write something after a colon in Python. In fact, you could write multiple statements after an if condition as well, like: `if True: print("foo"); print("bar")`. However, for stylistic reasons it is generally recommended to write the body on a new line after the colon. Exceptions might be when the content of the block is very simple and fits on one line. `*=` means: assign the variable on the left the value of itself multiplied by the expression on the right.
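A minimal sketch illustrating both points — a statement on the same line as the colon, and the sign-flip behaviour of `*=` (the variable names are just for illustration):

```python
# A simple statement may legally follow the colon on the same line,
# though PEP 8 recommends an indented block instead:
x = 0
if x == 0: x = 5; print("x is now", x)

# `var1 *= -1` is shorthand for `var1 = var1 * -1`, i.e. it flips the sign:
var1 = 5
var1 *= -1
print(var1)   # -5
var1 *= -1
print(var1)   # 5
```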
```
var *= -1
```

is equivalent to

```
var = var * (-1)
```

So it means that the sign of `var` will change.

---

```
if condition: statement
```

is equivalent to

```
if condition:
    statement
```
1,052
59,338,922
I'm looking for a convention stating how different types of methods (i.e. `@staticmethod` or `@classmethod`) inside the Python class definition should be arranged. [PEP-8](https://www.python.org/dev/peps/pep-0008) does not provide any information about such topic. For example, Java programming language has some [code conventions](https://www.oracle.com/technetwork/java/codeconventions-141855.html#1852) referring to order in which static and instance variables are appearing in the class definition block. Is there any standard for Python 3 declaring such recommendations?
2019/12/14
[ "https://Stackoverflow.com/questions/59338922", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7217645/" ]
The lack of answers so far just triggered me to simply write down the way I do it:

```
class thing(object):
    info = 'someinfo'
    # this can be useful to provide information about the class,
    # either as text or otherwise, in case you have multiple similar
    # but different classes.
    # It's also useful if there are some fixed numbers defining
    # the class' behaviour but you don't want to bury them somewhere
    # in the code (and haven't gotten around to putting them into
    # a separate file or look-up table, as you probably should)

    @staticmethod
    def stat(arg1, arg2):
        '''similarly, static methods are for stuff that is specific to
        the class but still useful without an instance'''
        # e.g.: some preprocessor for input needed to instantiate the
        # class, or a computation specific to the type of thing
        # represented by the class
        return(arg1 + arg2)

    @classmethod
    def classy(cls, str1):
        '''class methods get access to other class stuff.
        So, basically, they can do less-trivial things'''
        return(cls.stat(str1, cls.info))

    def __init__(self, arg):
        # ...and here starts the regular class-internal stuff
        self.arg = arg

    def simplething(self, arg):
        '''I like to put the simple things first which don't call
        other instance methods...'''
        return(arg * self.arg)

    def complicated(self, arg1, arg2):
        '''...before the methods which call the other methods above them'''
        return(self.simplething(arg1) / self.simplething(arg2))
```

In short: anything that is useful before the class has been created goes above the `__init__()` definition — first the class-global definitions, then static methods, then class methods — and after `__init__()`, all methods in a hierarchical order. The benefit (to me) of doing it this way is that when I'm scrolling through the code and get to some method that calls other methods from the class, I've already seen those methods. I suppose there is no objectively "best" way of doing this, since you could arrange things the other way round just as well and still know where to look first.
To me, it makes more intuitive sense to me this way round. It also has the advantage that at some point during development, I'm more likely to add higher-level methods to the bottom than to squeeze low-level ones in at the top. Another way to do it might be to sort things by "relevance". I.e.: The things which a user of the class is expected to need most often could be at the top. That might be good if you have lots of people using your code but not modifying it (much). I do see some people organizing not their classes but modules of functions that way: The function you're most likely to call directly is at the top, and the function it calls are below. But for the internals of a class whose methods are rarely executed in order, that kind of order can be hard to determine and may make it harder to find the definite list of static or class methods. ...definitely not a standard, but maybe a little orientation for those looking for it, or at least a starting point to explain why your order is better than mine :)
You declare a `@staticmethod` or `@classmethod` in Python by adding the decorator (`@staticmethod` or `@classmethod`) directly above your function. A `@classmethod` is used like an instance method, but it receives the class as its first argument, conventionally named `cls` (you can rename the parameter if you want). A `@staticmethod` receives neither a `self` nor a class argument. Both kinds of method are bound to the class, not to an object of the class: a `@staticmethod` cannot access or modify class state, because it doesn't know anything about the state of the class, while a `@classmethod` can access and modify class state (look at the `cls` argument). But remember: if you change some state through a `@classmethod`, the change is visible to all instances of the class. You can see the difference in the example below:

```
@classmethod
def something(cls, arg1, arg2, ....):
    # this is a class method

@staticmethod
def something2(arg1, arg2, ...):
    # this is a static method
```

For further explanation, see these links:

Static and class method explanation: [link](https://www.geeksforgeeks.org/class-method-vs-static-method-python/)

Class or static variable explanation: [link](https://www.geeksforgeeks.org/g-fact-34-class-or-static-variables-in-python/)

\*I edited my answer because it seems I did not answer your question.

There aren't any strict rules or standards for arranging these, and luckily in Python you don't have to specify what type of member you are defining (unlike Java's `public void ...`, `private int ...`, etc.).

1. Name of class. First comes the class name; then you can have attribute references and instantiation, just as in Java. You can also specify an attribute's scope (private, etc.).
2. Instantiation. Then you can declare the initializer, `def __init__(self, ...)`. If you declare it, `__init__` is invoked automatically when you create a new class instance. If you don't declare it, Python still calls one automatically inherited from the base class, which does nothing.
3. Functions / methods. After that you can declare your functions. You can define instance methods, `@classmethod`s, or `@staticmethod`s. Each kind of method has its own advantages, so it depends on what you need. Python supports inheritance and multiple inheritance too.

For further explanation, see this link: [link](https://docs.python.org/3/tutorial/classes.html). Hope it helps.
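A runnable sketch of the difference described above (the `Counter` class and its names are purely illustrative, not from the question):

```python
class Counter:
    total = 0  # class-level state shared by all instances

    @classmethod
    def bump(cls):
        # A classmethod receives the class as `cls` and may modify class
        # state; the change is visible through every instance.
        cls.total += 1

    @staticmethod
    def add(a, b):
        # A staticmethod receives neither `self` nor `cls`; it is just a
        # plain function namespaced under the class.
        return a + b

a, b = Counter(), Counter()
Counter.bump()
print(a.total, b.total)     # 1 1  -> the classmethod changed shared state
print(Counter.add(2, 3))    # 5
```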
1,057
51,928,090
I am trying to separate the pixel values of an image in python which are in a numpy array of 'object' data-type in a single quote like this: ``` ['238 236 237 238 240 240 239 241 241 243 240 239 231 212 190 173 148 122 104 92 .... 143 136 132 127 124 119 110 104 112 119 78 20 17 19 20 23 26 31 30 30 32 33 29 30 34 39 49 62 70 75 90'] ``` The shape of the numpy array is coming as 1. There are a total of 784 numbers but I cannot access them individually. I wanted something like: `[238, 236, 237, ......, 70, 75, 90]` of dtype int or float. There are 1000 such numpy arrays like the one above. Thanks in advance.
2018/08/20
[ "https://Stackoverflow.com/questions/51928090", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9804344/" ]
You can use `str.split`

**Ex:**

```
l = ['238 236 237 238 240 240 239 241 241 243 240 239 231 212 190 173 148 122 104 92 143 136 132 127 124 119 110 104 112 119 78 20 17 19 20 23 26 31 30 30 32 33 29 30 34 39 49 62 70 75 90']
print( list(map(int, l[0].split())) )
```

**Output:**

```
[238, 236, 237, 238, 240, 240, 239, 241, 241, 243, 240, 239, 231, 212, 190, 173, 148, 122, 104, 92, 143, 136, 132, 127, 124, 119, 110, 104, 112, 119, 78, 20, 17, 19, 20, 23, 26, 31, 30, 30, 32, 33, 29, 30, 34, 39, 49, 62, 70, 75, 90]
```
I believe using [`np.ndarray.item()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.item.html) is idiomatic to retrieve a single item from a numpy array.

```
import numpy as np

your_numpy_array = np.asarray(['238 236 237 238 240 240 239 241 241 243 240 239 231 212 190 173 148 122 104 92 143 136 132 127 124 119 110 104 112 119 78 20 17 19 20 23 26 31 30 30 32 33 29 30 34 39 49 62 70 75 90'])
values = your_numpy_array.item().split(' ')
new_numpy_array = np.asarray(values, dtype='int')
```

Note that `values` here is a list of strings. `np.asarray` can construct an array of integers from a list of string values; we just need to specify the `dtype` (as suggested by [hpaulj](https://stackoverflow.com/users/901925)).
1,058
37,335,027
The following python code has a bug:

```
class Location(object):
    def is_nighttime():
        return ...

if location.is_nighttime:
    close_shades()
```

The bug is that the programmer forgot to call `is_nighttime` (or forgot to use a `@property` decorator on the method), so the method object is cast by `bool` and evaluates as `True` without being called.

Is there a way to prevent the programmer from doing this, both in the case above, and in the case where `is_nighttime` is a standalone function instead of a method? For example, something in the following spirit?

```
is_nighttime.__bool__ = TypeError
```
2016/05/19
[ "https://Stackoverflow.com/questions/37335027", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1205529/" ]
In theory, you could wrap the function in a function-like object with a `__call__` that delegates to the function and a `__bool__` that raises a TypeError. It'd be really unwieldy and would probably cause more bad interactions than it'd catch - for example, these objects won't work as methods unless you add more special handling for that - but you could do it:

```
class NonBooleanFunction(object):
    """A function wrapper that prevents a function from being interpreted as a boolean."""
    def __init__(self, func):
        self.func = func
    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)
    def __bool__(self):
        raise TypeError
    __nonzero__ = __bool__

@NonBooleanFunction
def is_nighttime():
    return True  # We're at the Sun-Earth L2 point or something.

if is_nighttime:  # TypeError!
    ...
```

There's still a lot of stuff you can't catch:

```
nighttime_list.append(is_nighttime)  # No TypeError ._.
```

And you have to remember to explicitly apply this to any functions you don't want being treated as booleans. You also can't do much about functions and methods you don't control; for example, you can't apply this to `str.islower` to catch things like `if some_string.islower:`.

If you want to catch things like this, I recommend using static analysis tools instead. I think IDEs like PyCharm might warn you, and there should be linting tools that can catch this.

---

If you want these things to work as methods, here's the extra handling for that:

```
import functools

class NonBooleanFunction(object):
    ...  # other methods omitted for brevity
    def __get__(self, instance, owner):
        if instance is None:
            return self
        return NonBooleanFunction(functools.partial(self.func, instance))
```
Short answer: `if is_nighttime():`, with parentheses to call it.

Longer answer: `is_nighttime` points to a function, which is a non-None object. `if` looks for a condition that is a boolean, and casts the name `is_nighttime` to boolean. As the function object is not zero and not None, it is `True`.
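A short sketch of the truthiness behaviour described in both answers (the names here are illustrative):

```python
def is_nighttime():
    return False

# The bare function object is always truthy, so this branch runs even
# though calling the function would return False:
if is_nighttime:
    branch_taken = True

print(bool(is_nighttime))   # True  -> the function object, not the call
print(is_nighttime())       # False -> the actual condition
```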
1,059
15,663,899
It turns out building the following string in python...

```
# global variables
cr = '\x0d'  # segment terminator
lf = '\x0a'  # data element separator
rs = '\x1e'  # record separator
sp = '\x20'  # white space

a = 'hello'
b = 'world'
output = a + rs + b
```

...is not the same as it may be in C#. How do I accomplish the same in C#?
2013/03/27
[ "https://Stackoverflow.com/questions/15663899", "https://Stackoverflow.com", "https://Stackoverflow.com/users/687137/" ]
It is known as a [type cast](http://en.wikipedia.org/wiki/Type_conversion) or type conversion. It is used when you want to convert data of one type to another type.
```
(some_type) *apointer
```

This means that you cast the content that `apointer` points to to the type `some_type`.
1,061
60,212,670
I have a form on an HTML page with an `input type="text"` that I'm replacing with a `textarea`. Now the form no longer works. When I try to submit it, I get an error "UnboundLocalError: local variable 'updated_details' referenced before assignment" referring to my python code (I didn't change the python at all).

**Old line in HTML**

```
<input type="text" name="comments" id="comments" placeholder="Write stuff here." style="height:150px"> </input>
```

**New Line in HTML**

```
<textarea name="comments" id="comments" placeholder="Write stuff here" </input> </textarea>
```

**Full HTML form**

```
<form action = "/insert_vote" method = "post" onsubmit="">
  <div id="vote-form" action = "/insert_vote" method = "post" onsubmit="">
    <div class="smalltext">
      {% for dict_item in vote_choices %}
        <input type="radio" name="options" padding="10px" margin="10px" id="{{ dict_item['id'] }}" value="{{ dict_item['id'] }}"> {{ dict_item['choice'] }} </input><br>
      {% endfor %}
    </div>
    <br>
    <div class="mediumlefttext">
      Why did you make that choice?
    </div>
    <!-- <input type="text" name="comments" id="comments" placeholder="Write stuff here." style="height:150px"> </input> <br>-->
    <textarea name="comments" id="comments" placeholder="Write stuff here" </input> </textarea>
    <!--<button onclick="javascript:login();" size="large" type="submit" value="Submit" scope="public_profile,email" returnscopes="true" onlogin="checkLoginState();">Submit</button>-->
    <input type="text" name="user_id" id="user_id" style="display:none;">
    <input type="submit" value="Submit">
  </div>
</form>
```

**Python**

```
@app.route('/insert_vote', methods=['GET', 'POST'])
def insert_vote():
    posted = 1
    global article, user_id
    print ("insert_vote", "this should be the facebook user id", user_id)
    if request.method == 'POST' or request.method == 'GET':
        if not request.form['options'] or request.form['comments']:
            flash('Please enter all the fields', 'error')
        else:
            rate = 0  # rate of votes protection against no votes
            vote_choice_id = int(request.form['options'])
            comments = request.form['comments']
            # user_id = request.form['user_id']
            #user_id = 1
            av_obj = ArticleVote(user_id, article.id, vote_choice_id, comments)
            db.session.add(av_obj)
            try:
                db.session.commit()
            except exc.SQLAlchemyError:
                flash('User has already voted on this article.')
                posted = 0
            if posted == 1:
                flash('Record was successfully added')
            else:
                db.session.rollback()
            a_obj = article  # this is the current global article
            avs_obj = retrieve_article_vote_summary(a_obj.id)
            # vote_summary is a list of [tuples('True', numOfTrue), etc]
            total_votes = avs_obj.getTotalVotes()
            vote_choice_list = VoteChoice.getVoteChoiceList()
            vote_choices = []
            for item in vote_choice_list:  # looping over VoteChoice objects
                num = avs_obj.getVoteCount(item.choice)
                if total_votes > 0:
                    rate = num / total_votes
                vote_choices.append([item.choice, item.color, num, rate*100, total_votes])
            details = avs_obj.getVoteDetails()  # 10/02 - retrieve array of tuples [(user, VoteChoice, Comments)]
            details_count = 0
            for detail in details:
                details_count += 1
    return redirect('/results/' + str(article.id))
```

...

```
@app.route('/results/<int:id>')
def results(id):
    rate = 0  # either 0 or num/total
    article_list_of_one = Article.query.filter_by(id=id)
    a_obj = article_list_of_one[0]
    avs_obj = retrieve_article_vote_summary(a_obj.id)
    # vote_summary is a list of [tuples('True', numOfTrue), etc]
    total_votes = avs_obj.getTotalVotes()
    vote_choices = []
    vote_choice_list = VoteChoice.getVoteChoiceList()
    for item in vote_choice_list:  # looping over VoteChoice objects
        num = avs_obj.getVoteCount(item.choice)
        if total_votes > 0:  # protecting against no votes
            rate = num/total_votes
        vote_choices.append([item.choice, item.color, num, rate*100, total_votes])
    details = avs_obj.getVoteDetails()  # 10/02 - retrieve array of tuples [(user, VoteChoice, Comments)]
    print("Inside results(" + str(id) + "):")
    details_count = 0
    for detail in details:
        updated_details = [(user, VoteChoice, Comments, User.query.filter_by(name=user).first().fb_pic) for (user, VoteChoice, Comments) in details]
        #print(" " + str(details_count) + ": " + details[0] + " " + details[1] + " " + details[2])
        # details_count += 1
    return render_template('results.html', title=a_obj.title, id=id, image_url=a_obj.image_url, url=a_obj.url, vote_choices=vote_choices, home_data=Article.query.all(), vote_details=updated_details)
```
2020/02/13
[ "https://Stackoverflow.com/questions/60212670", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12555158/" ]
Since you want to test `if(getStatisticsLinks) { ... }` and narrow the type to `LinkInterface` when the test passes, this is actually very easy. Your function never returns `true`, only `false`, so the return type can be `LinkInterface | false` instead of `LinkInterface | boolean`.

That said, I would suggest returning `undefined` rather than `false` here, since that's a more usual way to indicate the absence of a result. The `if` condition still works in the same way, since `undefined` is falsey. It also simplifies the implementation:

```js
function getLinkInterfaceForRel(links: LinkInterface[], targetRel: string): LinkInterface | undefined {
    targetRel = targetRel.toLowerCase();
    // could also use .find(...) here, which returns undefined if no match is found
    return links.filter(el => el.rel.toLowerCase() === targetRel)[0];
}
```

[Playground Link](http://www.typescriptlang.org/play/#code/JYOwLgpgTgZghgYwgAgDKgNYElzXk5AbwFgAoZC5KCAGwC5kBnMKUAczMuQAtqYHmrEB3KUAthDDcA9gBMBLdmQC+ZMjACuIBGGDSQyNpPQhsuWIggAxaVABKtABQ1MjBibOQLSANoBdABpkMDgoIzAHeiZFYQBKd0wcL3wUAB9kLVkIGFAIWSJOSmowDSgDF1NGADockFlHWmQAXgA+ZFoq6hoqsGlUaQB3aABhOEYIR1jmpqbg0PDInr7BkbGJ2NiAbhU1UhpJZFCoBNMkvEt-ZuQfQopCKloGAHIYaWknoN5s54AjUI-kBIpHJfnAAF5PZDKAJkPzbUhkBD6ZiGSQAZRCumYwAQjA8jCu4Q8Z281lskUcRyCTwA4pIAPoYuBY3S4p5bMjAGDIRzhJksnF41xTEiiCgAenFwQAngAHFDAAnE8wpW7IJEgRjSfZVGjSNi89GYxWsoWVDmkVSkIA)
You will need to write your own typeguard to "narrow down" the type, such that the TypeScript compiler knows that `getStatisticsLinks` is of type `LinkInterface` when it is being used in `this.service.get(getStatisticsLinks.href).subscribe(.....);`.

This is one way you can write the type guard:

```
function isLinkInterface(obj: LinkInterface | boolean): obj is LinkInterface {
    return (obj as LinkInterface).href !== undefined;
}
```

And then, this is how you can use it:

```
const getStatisticsLinks = getLinkInterfaceForRel(this.question.links, 'Get_Statistics');

if (getStatisticsLinks && isLinkInterface(getStatisticsLinks)) {
    this.service.get(getStatisticsLinks.href).subscribe(..);
}
```

The `isLinkInterface()` type guard will return `true` only if `getStatisticsLinks` is of type `LinkInterface` (because it has the `href` property), and then `this.service.get()` will be called conditionally.
1,064
69,188,407
I try to use a loop to do some operations on the Pandas numeric and category columns.

```
df = sns.load_dataset('diamonds')
print(df.dtypes,'\n')

carat       float64
cut        category
color      category
clarity    category
depth       float64
table       float64
price         int64
x           float64
y           float64
z           float64
dtype: object
```

In the following codes, I just simply cut and paste 'float64' and 'category' from the preceding step output.

```
for i in df.columns:
    if df[i].dtypes in ['float64']:
        print(i)

for i in df.columns:
    if df[i].dtypes in ['category']:
        print(i)
```

I found that it works for 'float64' but generates an error for 'category'. Why is this? Thanks very much!

```
carat
depth
table
x
y
z

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-74-8e6aa9d4726e> in <module>
      4
      5 for i in df.columns:
----> 6     if df[i].dtypes in ['category']:
      7         print(i)

TypeError: data type 'category' not understood
```
2021/09/15
[ "https://Stackoverflow.com/questions/69188407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15670527/" ]
Solution
========

Try using `pd.api.types.is_categorical_dtype`:

```
for i in df.columns:
    if pd.api.types.is_categorical_dtype(df[i]):
        print(i)
```

Or check the `dtype` name:

```
for i in df.columns:
    if df[i].dtype.name == 'category':
        print(i)
```

Output:

```
cut
color
clarity
```

Explanation:
============

This is a bug in Pandas, [here](https://github.com/pandas-dev/pandas/issues/16697) is the GitHub issue; one sentence from it is:

> `df.dtypes[colname] == 'category'` evaluates as `True` for categorical columns and raises `TypeError: data type "category" not understood` for `np.float64` columns.

So actually, it works — it does give `True` for categorical columns — but the problem here is that numpy's `float64` dtype checking doesn't cooperate with pandas dtypes such as `category`. If you order the columns differently, having the first 3 columns as categorical dtype columns, it will show those column names, but once a float column comes, it will raise an error due to the numpy/pandas type issue:

```
>>> df = df.iloc[:, 1:]
>>> df
             cut color clarity  depth  table  price     x     y     z
0          Ideal     E     SI2   61.5   55.0    326  3.95  3.98  2.43
1        Premium     E     SI1   59.8   61.0    326  3.89  3.84  2.31
2           Good     E     VS1   56.9   65.0    327  4.05  4.07  2.31
3        Premium     I     VS2   62.4   58.0    334  4.20  4.23  2.63
4           Good     J     SI2   63.3   58.0    335  4.34  4.35  2.75
...          ...   ...     ...    ...    ...    ...   ...   ...   ...
53935      Ideal     D     SI1   60.8   57.0   2757  5.75  5.76  3.50
53936       Good     D     SI1   63.1   55.0   2757  5.69  5.75  3.61
53937  Very Good     D     SI1   62.8   60.0   2757  5.66  5.68  3.56
53938    Premium     H     SI2   61.0   58.0   2757  6.15  6.12  3.74
53939      Ideal     D     SI2   62.2   55.0   2757  5.83  5.87  3.64

[53940 rows x 9 columns]
>>> for i in df.columns:
        if df[i].dtypes in ['category']:
            print(i)

cut
color
clarity
Traceback (most recent call last):
  File "<pyshell#138>", line 2, in <module>
    if df[i].dtypes in ['category']:
TypeError: data type 'category' not understood
>>>
```

As you can see, it did output the columns, but once `np.float64` dtyped columns appear, the numpy `__eq__` magic method throws an error from the numpy backend.
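The failure mode can be mimicked in plain Python: an `in` test evaluates `x == element` for each element, so an `__eq__` that raises (as numpy's dtype comparison does for `'category'`) blows up the membership test. A minimal stdlib sketch — the `FussyDtype` class is purely illustrative, not pandas or numpy code:

```python
class FussyDtype:
    # Stand-in for a dtype whose __eq__ raises for strings it does not
    # understand, like numpy's float64 dtype compared with 'category'.
    def __eq__(self, other):
        raise TypeError("data type %r not understood" % (other,))

d = FussyDtype()
try:
    d in ['category']       # `in` evaluates d == 'category', which raises
    raised = False
except TypeError as exc:
    raised = True
    message = str(exc)

print(raised, message)      # True data type 'category' not understood
```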
If you have only one possibility in your list, go with @U12-Forward's solution. Yet, if you want to match several types, you can just convert the type to check to its string representation:

```
for i in df.columns:
    if str(df[i].dtypes) in ['category', 'othertype']:
        print(i)
```

Output:

```
cut
color
clarity
```
1,065
48,477,200
I'm having a problem trying to extract elements from a queue until a given number. If the given number is not queued, the code should leave the queue empty and give a message saying that. Instead, I get this error message, but I'm not able to solve it:

```
Traceback (most recent call last):
  File "python", line 45, in <module>
IndexError: list index out of range
```

This is my current code:

```
class Queue():
    def __init__(self):
        self.items = []

    def empty(self):
        if self.items == []:
            return True
        else:
            return False

    def insert(self, value):
        self.items.append(value)

    def extract(self):
        try:
            return self.items.pop(0)
        except:
            raise ValueError("Empty queue")

    def last(self):
        if self.empty():
            return None
        else:
            return self.items[0]

import random

def randomlist(n2,a2,b2):
    list = [0] * n2
    for i in range(n2):
        list[i] = random.randint(a2,b2)
    return list

queue1=Queue()
for i in range (0,10):
    queue1.insert(randomlist(10,1,70)[i])

if queue1.empty()==False :
    print("These are the numbers of your queue:\n",queue1.items)

test1=True
while test1==True:
    s=(input("Input a number:\n"))
    if s.isdigit()==True :
        test1=False
        s2=int(s)
    else:
        print("Wrong, try again\n")

for i in range (0,10) :
    if queue1.items[i]!=s2 :
        queue1.extract()
    elif queue1.items[i]==s2 :
        queue1.extract()
        print ("Remaining numbers:\n",queue1.items)
        break

if queue1.empty()==True :
    print ("Queue is empty now", cola1.items)
```
2018/01/27
[ "https://Stackoverflow.com/questions/48477200", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9147054/" ]
Modifying a list while going through it is a bad idea.

```
for i in range (0,10) :
    if queue1.items[i]!=s2 :
        queue1.extract()
    elif queue1.items[i]==s2 :
        queue1.extract()
        print ("Remaining numbers:\n",queue1.items)
```

This code modifies your queue's items: it shortens the items list, but you still iterate over the full range if no item is found. So your internal list gets shorter and shorter while your range index `i` keeps advancing. At some point you access an `items[i]` that is no longer in your queue.

Solution (edited thanks to [Stefan Pochmann's](https://stackoverflow.com/users/1672429/stefan-pochmann) comment):

```
for _ in range(len(queue1.items)):  # no hardcoded length anymore
    item = queue1.extract()         # pop item
    if item == s2 :                 # check item for break criteria
        print ("Remaining numbers:\n",queue1.items)
        break
```
You can try replacing the last part of your code, i.e.

```
for i in range (0,10) :
    if queue1.items[i]!=s2 :
        queue1.extract()
    elif queue1.items[i]==s2 :
        queue1.extract()
        print ("Remaining numbers:\n",queue1.items)
        break

if queue1.empty()==True :
    print ("Queue is empty now", cola1.items)
```

with

```
poptill = -1                              # index till where we should pop
for i in range(0,len(queue1.items)):      # this loop finds the index to pop queue till
    if queue1.items[i]==s2:
        poptill = i
        break

if poptill != -1:                         # if item to pop was found in queue
    i = 0
    while i <= poptill:                   # this loop empties the queue till that index
        queue1.extract()
        i += 1
    if queue1.empty()==True :
        print ("Queue is empty now", queue1.items)
    else:
        print ("Remaining numbers:\n",queue1.items)
else:                                     # else item was not found in list
    for i in range(0,len(queue1.items)):  # this loop empties the queue
        queue1.extract()
    print ("no item found, so emptied the list, numbers:\n",queue1.items)
```

Here we find the index location till where we should pop in the first loop, and then pop the queue till that index in the second loop; finally, if the item to pop was not found in the list, we empty the list in the third loop.
1,066
50,151,490
I've been trying to send free sms using way2sms. I found this link where it seemed to work on python 3: <https://github.com/shubhamc183/way2sms>

I've saved this file as way2sms.py:

```
import requests
from bs4 import BeautifulSoup

class sms:

    def __init__(self,username,password):
        '''
        Takes username and password as parameters for constructors
        and try to log in
        '''
        self.url='http://site24.way2sms.com/Login1.action?'
        self.cred={'username': username, 'password': password}
        self.s=requests.Session()  # Session because we want to maintain the cookies
        '''
        changing s.headers['User-Agent'] to spoof that python is requesting
        '''
        self.s.headers['User-Agent']="Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:50.0) Gecko/20100101 Firefox/50.0"
        self.q=self.s.post(self.url,data=self.cred)
        self.loggedIn=False  # a variable of knowing whether logged in or not
        if "http://site24.way2sms.com/main.action" in self.q.url:  # http status 200 == OK
            print("Successfully logged in..!")
            self.loggedIn=True
        else:
            print("Can't login, once check credential..!")
            self.loggedIn=False
        self.jsid=self.s.cookies.get_dict()['JSESSIONID'][4:]  # JSID is the main KEY as JSID are produced every time a session satrts

    def msgSentToday(self):
        '''
        Returns number of SMS sent today as there is a limit of 100 messages everyday..!
        '''
        if self.loggedIn == False:
            print("Can't perform since NOT logged in..!")
            return -1
        self.msg_left_url='http://site24.way2sms.com/sentSMS?Token='+self.jsid
        self.q=self.s.get(self.msg_left_url)
        self.soup=BeautifulSoup(self.q.text,'html.parser')     # we want the number of messages sent which is present in the
        self.t=self.soup.find("div",{"class":"hed"}).h2.text   # div element with class "hed" -> h2
        self.sent=0
        for self.i in self.t:
            if self.i.isdecimal():
                self.sent=10*self.sent+int(self.i)
        return self.sent

    def send(self,mobile_no,msg):
        '''
        Sends the message to the given mobile number
        '''
        if self.loggedIn == False:
            print("Can't perform since NOT logged in..!")
            return False
        if len(msg)>139 or len(mobile_no)!=10 or not mobile_no.isdecimal():  # checks whether the given message is of length more than 139
            return False                                                     # or the mobile_no is valid
        self.payload={'ssaction':'ss',
                      'Token':self.jsid,   # inorder to visualize how I came to these payload,
                      'mobile':mobile_no,  # must see the NETWORK section in Inspect Element
                      'message':msg,       # while messagin someone from your browser
                      'msgLen':'129'
                      }
        self.msg_url='http://site24.way2sms.com/smstoss.action'
        self.q=self.s.post(self.msg_url,data=self.payload)
        if self.q.status_code==200:
            return True
        else:
            return False

    def sendLater(self, mobile_no, msg, date, time):
        # Function for future SMS feature.
        # date must be in dd/mm/yyyy format
        # time must be in 24hr format. For ex: 18:05
        if self.loggedIn == False:
            print("Can't perform since NOT logged in..!")
            return False
        if len(msg)>139 or len(mobile_no)!=10 or not mobile_no.isdecimal():
            return False
        dateparts = date.split('/')  # These steps to check for valid date and time and formatting
        timeparts = time.split(':')
        if int(dateparts[0])<1 or int(dateparts[0])>32 or int(dateparts[1])>12 or int(dateparts[1])<1 or int(dateparts[2])<2017 or int(timeparts[0])<0 or int(timeparts[0])>23 or int(timeparts[1])>59 or int(timeparts[1])<0:
            return False
        date = dateparts[0].zfill(2) + "/" + dateparts[1].zfill(2) + "/" + dateparts[2]
        time = timeparts[0].zfill(2) + ":" + timeparts[1].zfill(2)
        self.payload={'Token':self.jsid,
                      'mobile':mobile_no,
                      'sdate':date,
                      'stime':time,
                      'message':msg,
                      'msgLen':'129'
                      }
        self.msg_url='http://site24.way2sms.com/schedulesms.action'
        self.q=self.s.post(self.msg_url, data=self.payload)
        if self.q.status_code==200:
            return True
        else:
            return False

    def logout(self):
        self.s.get('http://site24.way2sms.com/entry?ec=0080&id=dwks')
        self.s.close()  # close the Session
        self.loggedIn=False
```

And saved another file as smsing.py:

```
import way2sms
q=way2sms.sms(1234567890,'password')  #username = 1234567890
q.send('0987654321','hello')          #receiver ph no.:0987654321, message=hello
n=q.msgSentToday()
q.logout()
```

I've tried to pass the username as a string and otherwise. The password, if not given as a string, shows an error. My username and password are both correct.

When I execute smsing.py, it displays:

```
>>>Can't login, once check credential..!
Can't perform since NOT logged in..!
Can't perform since NOT logged in..!
```

With such simple code I thought it would be easy. But I am not able to find where I am going wrong. Is it because I am using Windows 7? Can anybody please help me?
2018/05/03
[ "https://Stackoverflow.com/questions/50151490", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5250206/" ]
To send `sms` using a `way2sms` account, you may use the below code snippet. Before that you would be required to create an API `Key` from [here](https://smsapi.engineeringtgr.com/) ``` import requests url = "https://smsapi.engineeringtgr.com/send/" params = dict(Mobile='login username', Password='login password', Key='generated from above sms api', Message='Your message Here', To='recipient') resp = requests.get(url, params) print(resp, resp.text) ``` **N.B: There is a limit of approx 20 sms per day**
First of all, make sure you have all the dependencies like requests and bs4; if not, try downloading them using pip3, since this code works on Python 3, **not** Python 2. I have updated the [repository](https://github.com/shubhamc183/way2sms). > The mobile number should also be in String format. So, instead of ``` q=way2sms.sms(1234567890,'password') #username = 1234567890 ``` use ``` q=way2sms.sms('1234567890','password') #username = 1234567890 ```
1,070
38,885,431
I'm really stuck on this one, but I am a python (and Raspberry Pi) newbie. All I want is to output the `print` output from my python script. The problem is (I believe) that a function in my python script takes half a second to execute and PHP misses the output. This is my php script: ```
<?php
error_reporting(E_ALL);
ini_set('display_errors', 1);

$cmd = escapeshellcmd('/var/www/html/weathertest.py');
$output = shell_exec($cmd);
echo $output;

//$handle = popen('/var/www/html/weathertest.py', 'r');
//$output = fread($handle, 1024);
//var_dump($output);
//pclose($handle);

//$cmd = "python /var/www/html/weathertest.py";
//$var1 = system($cmd);
//echo $var1;

echo 'end';
?>
``` I've included the commented blocks to show what else I've tried. All three output "static text end" This is the python script: ```
#!/usr/bin/env python
import sys
import Adafruit_DHT
import time

print 'static text '
humidity, temperature = Adafruit_DHT.read(11, 4)
time.sleep(3)
print 'Temp: {0:0.1f}C  Humidity: {1:0.1f}%'.format(temperature, humidity)
``` The py executes fine on the command line. I've added the 3 second delay to make the script feel longer for my own testing. Given that I always get `static text` as an output, I figure my problem is with PHP not waiting for the Adafruit command. BUT the STRANGEST thing for me is that all three of my PHP attempts work correctly if I execute the PHP script on the command line i.e. `php /var/www/html/test.php` - I then get the desired output: ```
static text
Temp: 23.0C Humidity 34.0%
end
``` So I guess there's two questions: 1. How to make PHP wait for Python completion. 2. Why does the PHP command line differ from the browser?
2016/08/11
[ "https://Stackoverflow.com/questions/38885431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2066625/" ]
Following code example may help you understand what you need to do: 1. Create an array of index paths 2. Add the 20 data objects which you received from the server in your data source. 3. Now since your data source and array of index paths are on the same page, begin table view updates and insert the rows. That's it. :) Created a dummy project to answer. When you run the project and scroll, you'll see that the rows are added dynamically. <https://github.com/harsh62/stackoverflow_insert_dynamic_rows> ```
func insertRows() {
    let newData = ["Steve", "Bill", "Linus", "Bret"]
    let olderCount: Int = self.names.count
    names += newData

    var indexPaths = [NSIndexPath]()
    for i in 0..<newData.count { //hardcoded 20, this should be the number of new results received
        indexPaths.append(NSIndexPath(forRow: olderCount + i, inSection: 0))
    }

    // tell the table view to update (at all of the inserted index paths)
    self.tableView.beginUpdates()
    self.tableView.insertRowsAtIndexPaths(indexPaths, withRowAnimation: .Top)
    self.tableView.endUpdates()
}
```
When you call `insertRowsAtIndexPaths`, the number of rows present in your data source must be equal to the previous count plus the number of rows being inserted. But you appear to be inserting one `NSIndexPath` into the table, while your `names` array presumably has 20 more items. So, make sure that the number of `NSIndexPath` items in the array passed to `insertRowsAtIndexPaths` is equal to the number of rows that have been added to `names`.
1,072
67,039,337
Here's my [small CSV file](https://github.com/gusbemacbe/aparecida-covid-19-tracker/blob/main/data/aparecida-small-sample.csv), which is totally encoded in UTF-8, and the date is totally correct. I repaired the errors from: * [Is there a function to get the difference between two values on a pandas dataframe timeseries?](https://stackoverflow.com/questions/63488407/is-there-a-function-to-get-the-difference-between-two-values-on-a-pandas-datafra) * [I can't convert the Pandas Dataframe type to datetime](https://stackoverflow.com/questions/50415981/i-cant-convert-the-pandas-dataframe-type-to-datetime) * [Pandas Python: KeyError Date](https://stackoverflow.com/questions/59617059/pandas-python-keyerror-date) ```py
import datetime
import altair as alt
import operator
import pandas as pd

s = pd.read_csv('data/aparecida-small-sample.csv', parse_dates=['date'])

city = s[s['city'] == 'Aparecida']
base = alt.Chart(city).mark_bar().encode(x = 'date').properties(width = 500)

confirmed = alt.value("#106466")
death = alt.value("#D8B08C")
recovered = alt.value("#87C232")

# Convert to date
s['date'] = pd.to_datetime(s['date'])
s = s.set_index('date')

# Take `totalCases` value from CSV file, to differentiate new cases between each 2 days
cases = s['totalCases'].resample('2d', on='date').last().diff()

# Load the chart
base.encode(y = cases, color = confirmed).properties(title = "Daily new cases")
``` The error was `KeyError: Date`, which pointed to `s['date'] = pd.to_datetime(s['date'])`, which is totally correct. I do not know why it insists it is incorrect. 
The entire error message: ```
KeyError: 'date'
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/usr/lib/python3.9/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   3079             try:
-> 3080                 return self._engine.get_loc(casted_key)
   3081             except KeyError as err:

pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: 'date'

The above exception was the direct cause of the following exception:

KeyError                                  Traceback (most recent call last)
<ipython-input-37-df3f26429754> in <module>
----> 1 s['date'] = pd.to_datetime(s['date'])
      2 s = s.set_index('date')
      3
      4 # s.groupby(['date'])[['totalCases']].resample('2d').last().diff()
      5 cases = s['totalCases'].resample('2d', on='date').last().diff()

/usr/lib/python3.9/site-packages/pandas/core/frame.py in __getitem__(self, key)
   3022         if self.columns.nlevels > 1:
   3023             return self._getitem_multilevel(key)
-> 3024         indexer = self.columns.get_loc(key)
   3025         if is_integer(indexer):
   3026             indexer = [indexer]

/usr/lib/python3.9/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   3080                 return self._engine.get_loc(casted_key)
   3081             except KeyError as err:
-> 3082                 raise KeyError(key) from err
   3083
   3084         if tolerance is not None:

KeyError: 'date'
``` Another error message after correcting `full_grouped` to `s`; ```
KeyError: 'The grouper name date is not found'
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-3-df3f26429754> in <module>
      3
      4 # s.groupby(['date'])[['totalCases']].resample('2d').last().diff()
----> 5 cases = s['totalCases'].resample('2d', on='date').last().diff()
      6 # cases = s.groupby(['date'])[['totalCases']].resample('2d', on='date').last().diff()
      7

/usr/lib/python3.9/site-packages/pandas/core/generic.py in resample(self, rule, axis, closed, label, convention, kind, loffset, base, on, level, origin, offset)
   8367
   8368         axis = self._get_axis_number(axis)
-> 8369         return get_resampler(
   8370             self,
   8371             freq=rule,

/usr/lib/python3.9/site-packages/pandas/core/resample.py in get_resampler(obj, kind, **kwds)
   1309     """
   1310     tg = TimeGrouper(**kwds)
-> 1311     return tg._get_resampler(obj, kind=kind)
   1312
   1313

/usr/lib/python3.9/site-packages/pandas/core/resample.py in _get_resampler(self, obj, kind)
   1464
   1465     """
-> 1466     self._set_grouper(obj)
   1467
   1468     ax = self.ax

/usr/lib/python3.9/site-packages/pandas/core/groupby/grouper.py in _set_grouper(self, obj, sort)
    363         else:
    364             if key not in obj._info_axis:
--> 365                 raise KeyError(f"The grouper name {key} is not found")
    366             ax = Index(obj[key], name=key)
    367

KeyError: 'The grouper name date is not found'
```
2021/04/10
[ "https://Stackoverflow.com/questions/67039337", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8041366/" ]
Based on your posted code I made some changes: 1. I have put in a print statement for the DataFrame after read. This should show the datatypes for each column in the DataFrame. For the date field it should be "datetime64[ns]". 2. Afterwards you don't have to parse it again as a date. 3. Some code changes for the "cases" field and to visualize it. ```py
import datetime
import altair as alt
import operator
import pandas as pd

s = pd.read_csv('./data/aparecida-small-sample.csv', parse_dates=['date'])
print(s.dtypes)

confirmed = alt.value("#106466")
death = alt.value("#D8B08C")
recovered = alt.value("#87C232")

# Take `totalCases` value from CSV file, to differentiate new cases between each 2 days
city = s[s['city'] == 'Aparecida']

# Append dataframe with the new information
city['daily_cases'] = city['totalCases'].diff()

# Initiate chart with data
base = alt.Chart(city).mark_point().encode(
    alt.X('date:T'),
    alt.Y('daily_cases:Q')
)

# Load the chart
base.properties(title = "Daily new cases")
``` The results of the code changes: [![enter image description here](https://i.stack.imgur.com/Hy0Ka.png)](https://i.stack.imgur.com/Hy0Ka.png)
@Gustavo Reis according to your question in the answered segment: ```py
city['daily_cases'] = city['totalCases']
city['daily_deaths'] = city['totalDeaths']
city['daily_recovered'] = city['totalRecovered']

tempCityDailyCases = city[['date','daily_cases']]
tempCityDailyCases["title"] = "Daily Cases"
tempCityDailyDeaths = city[['date','daily_deaths']]
tempCityDailyDeaths["title"] = "Daily Deaths"
tempCityDailyRecovered = city[['date','daily_recovered']]
tempCityDailyRecovered["title"] = "Daily Recovered"

tempCity = tempCityDailyCases.append(tempCityDailyDeaths)
tempCity = tempCity.append(tempCityDailyRecovered)

## Initiate chart with data
totalCases = alt.Chart(tempCity).mark_bar().encode(alt.X('date:T', title=None), alt.Y('daily_cases:Q', title = None))          # color='#106466'
totalDeaths = alt.Chart(tempCity).mark_bar().encode(alt.X('date:T', title=None), alt.Y('daily_deaths:Q', title = None))        # color = '#DC143C'
totalRecovered = alt.Chart(tempCity).mark_bar().encode(alt.X('date:T', title=None), alt.Y('daily_recovered:Q', title = None))  # color = '#87C232'

(totalCases + totalRecovered + totalDeaths).encode(color=alt.Color('title',
    scale=alt.Scale(range=['#106466','#DC143C','#87C232']),
    legend=alt.Legend(title="Art of cases")
)).properties(title = "New total toll", width = 800)
``` [![enter image description here](https://i.stack.imgur.com/2u7sh.png)](https://i.stack.imgur.com/2u7sh.png)
1,073
42,015,768
How would I modify this program so that my list doesn't keep spitting out the extra text per line? The program should only output the single line that the user wants to display rather than the quotes that were added to the list before. The program will read a textfile indicated by the user, then it will display the selected line for the user and exit on a '0' input. This is in python ```
import string
count = 0

#reads file
def getFile():
    while True:
        inName = input("Please enter file name: ")
        try:
            inputFile = open(inName, 'r')
        except:
            print ("I cannot open that file. Please try again.")
        else:
            break
    return inputFile

inputFile = getFile()
inputList = inputFile.readlines()

for line in inputList:
    count = count + 1
print("There are", count, "lines.")

while True:
    lineChoice = int(input("Please select line[0 to exit]: "))
    if lineChoice == 0:
        break
    elif lineChoice > 0 and lineChoice <= count:
        print(inputList[:lineChoice - 1])
    else:
        print("Invalid selection[0 to exit]")
``` Output: ```
Please enter file name: quotes.txt
There are 16 lines.
Please select line[0 to exit]: 1
[]
Please select line[0 to exit]: 2
['Hakuna Matata!\n']
Please select line[0 to exit]: 3
['Hakuna Matata!\n', 'All our dreams can come true, if we have the courage to pursue them.\n']
Please select line[0 to exit]:
```
2017/02/03
[ "https://Stackoverflow.com/questions/42015768", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7151347/" ]
As far as disk storage and ram memory are concerned, `'\n'` is just another character. As far as tcl/tk and Python's `tkinter` wrapper are concerned, newlines are very important. Since tk's Text widget is intended to display text to humans, and since vertical scrolling is much more useful for this than horizontal scrolling, it is optimized for the former. A hundred 1000-char lines (100,000 chars total) bogs down vertical scrolling, whereas 300,000 50-char lines (15,000,000 chars total) is no problem. IDLE uses `tkinter` and the main windows are based on tk's Text widget. So if you want to view text in IDLE, keep line lengths sensible. I do not know about other IDEs that use other GUI frameworks. But even if they handle indefinitely long lines better, horizontal scrolling of a 100,000-char line is pretty obnoxious.
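To make the reshaping concrete, here is a small self-contained sketch (not part of the original answer; the 1000-character line and the 50-character width are arbitrary choices for illustration) that pre-wraps long lines with the standard library's `textwrap` before they would be handed to a Text widget:

```python
import textwrap

# one simulated 1000-character "line" of the kind that bogs down a Text widget
long_line = "word " * 200

# re-flow it into short lines of at most 50 characters each
wrapped = textwrap.wrap(long_line, width=50)

print(len(wrapped))                         # number of short lines produced
print(max(len(line) for line in wrapped))   # longest line stays within the width
```

The same words come back, just distributed over many short lines, which is the shape tk's Text widget scrolls comfortably.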
You shouldn't have to worry about a limit. As for a new line, you can use an `if` with `\n` or a `for` loop depending on what you're going for.
1,074
49,658,679
I have a nested json and I want to find a record whose value is equal to a given number. I'm using pymongo in python with the equal operator but getting some errors: ```
from pymongo import MongoClient
import datetime
import json
import pprint

def connectDatabase(service):
    try:
        if service=="mongodb":
            host = 'mongodb://connecion_string.mlab.com:31989'
            database = 'xxx'
            user = 'xxx'
            password = 'xxx'
            client = MongoClient(host)
            client.xxx.authenticate(user, password, mechanism='SCRAM-SHA-1')
            db = client.xxx

            new_posts = [{"author": "Mike",
                          "text": "Another post!",
                          "tags": [ { "CSPAccountNo": "414693" }, { "CSPAccountNo": "349903" }]
                          }]
            result=db.posts1.insert_many(new_posts)

            print (xxx.posts1.find( {"CSPAccountNo": { $eq: "414693" } } )
    except Exception as e:
        print(e)
        pass

print (connectDatabase("mongodb"))
``` Error: ```
  File "mongo.py", line 35
    print (cstore.posts1.find( {"CSPAccountNo": { $eq: "414693" } } )
                                                  ^
SyntaxError: invalid syntax
``` I'm very new to Mongodb. Any help would be appreciated
2018/04/04
[ "https://Stackoverflow.com/questions/49658679", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7422128/" ]
In Python you have to use Python syntax, not JS syntax like in the Mongo shell. That means that dictionary keys need quotes: ``` print(cstore.posts1.find( {"CSPAccountNo": { "$eq": "414693" } } )) ``` (Note, in your code you never actually insert the `new_posts` into the collection, so your find call might not actually find anything.)
It should be ``` print (xxx.posts1.find( {"tags.CSPAccountNo": { $eq: "414693" } } ) ```
1,076
60,928,238
I'm using selenium in python and I'm looking to select the option Male from the below: ```
<div class="formelementcontent">
<select aria-disabled="false" class="Width150" id="ctl00_Gender" name="ctl00$Gender" onchange="javascript: return doSearch();" style="display: none;">
<option selected="selected" title="" value="">
</option>
<option title="Male" value="MALE">
 Male
</option>
<option title="Female" value="FEM">
 Female
</option>
</select>
``` Before selecting from the dropdown, I need to switch to iframe ```
driver.switch_to.frame(iframe)
``` I've tried many options and searched extensively. This gets me most of the way. ```
driver.find_element_by_id("ctl00_Gender-button").click()
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "ctl00_Gender")))
select=Select(driver.find_element_by_id("ctl00_Gender"))
check=select.select_by_visible_text('Male')
``` If I use WebDriverWait it times out. I've tried selecting by visible text and index, both give: > > ElementNotInteractableException: Element could not be scrolled into view > > >
2020/03/30
[ "https://Stackoverflow.com/questions/60928238", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11483107/" ]
As per the HTML you have shared the `<select>` tag is having the value of *style* attribute set as **`display: none;`**. So using [Selenium](https://stackoverflow.com/questions/54459701/what-is-selenium-and-what-is-webdriver/54482491#54482491) it would be tough interacting with this [WebElement](https://stackoverflow.com/questions/52782684/what-is-the-difference-between-webdriver-and-webelement-in-selenium/52805139#52805139). If you access the [DOM Tree](https://javascript.info/dom-nodes) of the webpage through [google-chrome-devtools](/questions/tagged/google-chrome-devtools "show questions tagged 'google-chrome-devtools'"), presumably you will find a couple of `<li>` nodes equivalent to the `<option>` nodes within an `<ul>` node. You may be required to interact with those. You can find a couple of relevant detailed discussions in: * [select kendo dropdown using selenium python](https://stackoverflow.com/questions/58767150/select-kendo-dropdown-using-selenium-python/58767346#58767346) * [How to test non-standard drop down lists through a crawler using Selenium and Python](https://stackoverflow.com/questions/44224924/how-to-test-non-standard-drop-down-lists-through-a-crawler-using-selenium-and-py/44235532#44235532) * [How to select an option from a dropdown of non select tag?](https://stackoverflow.com/questions/57399713/how-to-select-an-option-from-a-dropdown-of-non-select-tag/57402060#57402060)
Try below solutions: **Solution 1:** ```
select = Select(driver.find_element_by_id('ctl00_Gender'))
select.select_by_value('MALE')
``` **Note** add below imports to your solution : ```
from selenium.webdriver.support.ui import Select
``` **Solution 2:** ```
driver.find_element_by_xpath("//select[@id='ctl00_Gender']/option[text()='Male']").click()
```
1,077
52,672,810
I have a Node.js API using Express.js with body parser which receives a BSON binary file from a python client. Python client code: ```
data = bson.BSON.encode({
    "some_meta_data": 12,
    "binary_data": binary_data
})
headers = {'content-type': 'application/octet-stream'}
response = requests.put(endpoint_url, headers=headers, data=data)
``` Now I have an endpoint in my Node.js API where I want to deserialize the BSON data as explained in the documentation: <https://www.npmjs.com/package/bson>. What I am struggling with is how to get the binary BSON file from the request. Here is the API endpoint: ```
exports.updateBinary = function(req, res){
    // How to get the binary data which bson deserializes from the req?
    let bson = new BSON();
    let data = bson.deserialize(???);
    ...
}
```
2018/10/05
[ "https://Stackoverflow.com/questions/52672810", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3531894/" ]
You'll want to use <https://www.npmjs.com/package/raw-body> to grab the raw contents of the body. And then pass the [`Buffer`](https://nodejs.org/api/buffer.html) object to `bson.deserialize(..)`. Quick dirty example below: ```
const getRawBody = require("raw-body");

app.use(async (req, res, next) => {
  if (req.headers["content-type"] === "application/octet-stream") {
    req.body = await getRawBody(req)
  }
  next()
})
``` Then just simply do: ```
exports.updateBinary = (req, res) => {
  const data = new BSON().deserialize(req.body)
}
```
You could as well use the [body-parser](https://github.com/expressjs/body-parser) package: ```
const bodyParser = require('body-parser')

app.use(bodyParser.raw({type: 'application/octet-stream', limit : '100kb'}))

app.use((req, res, next) => {
  if (Buffer.isBuffer(req.body)) {
    req.body = JSON.parse(req.body)
  }
})
```
1,078
73,428,442
I need to get the absolute path of a file in python. I already tried `os.path.abspath(filename)` in my code like this: ```py
def encrypt(filename):
    with open(filename, 'rb') as toencrypt:
        content = toencrypt.read()

    content = Fernet(key).encrypt(content)

    with open(filename, "wb") as toencrypt:
        toencrypt.write(content)

def checkfolder(folder):
    for file in os.listdir(folder):
        fullpath = os.path.abspath(file)
        if file != "decrypt.py" and file != "encrypt.py" and file != "key.txt" and not os.path.isdir(file):
            print(fullpath)
            encrypt(fullpath)
        elif os.path.isdir(file) and not file.startswith("."):
            checkfolder(fullpath)
``` The problem is that `os.path.abspath(filename)` doesn't really get the absolute path; it just attaches the current working directory to the filename, which in this case is completely useless. How can I do this?
2022/08/20
[ "https://Stackoverflow.com/questions/73428442", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19400931/" ]
If you want to control something over time in Pygame you have two options: 1. Use [`pygame.time.get_ticks()`](https://www.pygame.org/docs/ref/time.html#pygame.time.get_ticks) to measure time and implement logic that controls the object depending on the time. 2. Use the timer event. Use [`pygame.time.set_timer()`](https://www.pygame.org/docs/ref/time.html#pygame.time.set_timer) to repeatedly create a [`USEREVENT`](https://www.pygame.org/docs/ref/event.html) in the event queue. Change object states when the event occurs. For a timer event you need to define a unique user event id. The ids for the user events have to be between `pygame.USEREVENT` (24) and `pygame.NUMEVENTS` (32). In this case `pygame.USEREVENT+1` is the event id for the timer event. Receive the event in the event loop: ```py
def spawn_ship():
    # [...]

def main():
    spawn_ship_interval = 5000 # 5 seconds
    spawn_ship_event_id = pygame.USEREVENT + 1 # 1 is just as an example
    pygame.time.set_timer(spawn_ship_event_id, spawn_ship_interval)

    run = True
    while run:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                run = False
            elif event.type == spawn_ship_event_id:
                spawn_ship()
``` Also see [Spawning multiple instances of the same object concurrently in python](https://stackoverflow.com/questions/62112754/spawning-multiple-instances-of-the-same-object-concurrently-in-python/62112894#62112894) and [How to run multiple while loops at a time in Pygame](https://stackoverflow.com/questions/65263318/how-to-run-multiple-while-loops-at-a-time-in-pygame/65263396#65263396).
You could try making spawn\_ship async, then use a thread so it doesn't affect the main loop ```py
import threading

def main():
    threading.Timer(5.0, spawn_ship).start()

async def spawn_ship():
    # ...
```
1,079
1,633,342
After looking through the many useful and shiny Python frameworks, I find none of them get close to what I need or provide *way* more than my needs. I'm looking to put something together myself; could define it as a framework, but not full-stack. However, I can't find online what the Python community sees as the correct/standard way to manage WSGI middleware in an application.

I'm not looking for framework suggestions, unless its to provide an example of ways to manage WSGI middleware. Nor am I looking for information on how to get a webserver to talk to python -- that bit I understand. Rather, I'm looking for advice on how one tells python what components/middleware to put into the stack, and in which order.

For instance, if I wanted to use: `Spawning-->memento-->AuthKit-->(?)-->MyApp` how would I get those components into the right order, and how would I configure an additional item (say Routes) before `MyApp`?

So; Can you advise on the common/correct/standard way of managing what middleware is included in a WSGI stack for a Python application?

**Edit**

Thanks to Michael Dillon for recommending [A Do-It-Yourself Framework](http://pythonpaste.org/do-it-yourself-framework.html), which helps highlight my problem. The [middleware section](http://pythonpaste.org/do-it-yourself-framework.html#give-me-more-middleware) of that document states that one should wrap middleware A in middleware B, B in C, and so-on: ```
app = ObjectPublisher(Root())
wrapped_app = AuthMiddleware(app)

from paste.evalexception import EvalException
exc_wrapped_app = EvalException(wrapped_app)
``` Which shows how to do it in a very simple way. I understand how this works, however it seems too simple when working with a number of middleware packages. **Is there a better way to manage how these middleware components are added to the stack? Maybe a common design pattern which reads from a config file?**
2009/10/27
[ "https://Stackoverflow.com/questions/1633342", "https://Stackoverflow.com", "https://Stackoverflow.com/users/30478/" ]
That is what a framework does. Some frameworks like Django are fairly rigid and others like Pylons make it easier to mix and match. Since you will likely be using some of the WSGI components from the Paste project sooner or later, you might as well read this article from the Paste folks about a [Do-It-Yourself Framework](http://pythonpaste.org/do-it-yourself-framework.html). I'm not suggesting that you should go and build your own framework, but that the article gives a good explanation of how the WSGI stack works and how things go together.
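The "wrap B around A" idiom that the Do-It-Yourself article shows can be driven from an ordered list, which could just as easily be read from a config file. Here is a minimal, framework-free sketch (all middleware names are invented purely for illustration, and the middlewares only record the order they run in):

```python
def app(environ, start_response):
    # the innermost WSGI application
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

def make_logger(next_app):
    def middleware(environ, start_response):
        environ.setdefault('stack.trace', []).append('logger')
        return next_app(environ, start_response)
    return middleware

def make_auth(next_app):
    def middleware(environ, start_response):
        environ.setdefault('stack.trace', []).append('auth')
        return next_app(environ, start_response)
    return middleware

# Order as listed: outermost first, exactly how a config file would read.
stack = [make_logger, make_auth]

# Wrap from the inside out, so the first entry ends up outermost.
wrapped = app
for factory in reversed(stack):
    wrapped = factory(wrapped)

environ = {}
body = wrapped(environ, lambda status, headers: None)
print(environ['stack.trace'])   # request passes through logger, then auth
```

The loop replaces the hand-written `AuthMiddleware(EvalException(...))` nesting; swapping the order of the stack is then a one-line config change rather than a code edit.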
What middleware do you think you need? You may very well not need to include any WSGI ‘middleware’-like components at all. You can perfectly well put together a loose ‘pseudo-framework’ of standalone libraries without needing to ‘wrap’ the application in middleware at all. (Personally I use a separate form-reading library, data access layer and template engine, none of which know about each other or need to start fiddling with the WSGI `environ`.)
1,080
32,295,943
I am trying to sort a list in python with integers and a float using "a.sort([1])" (I am sorting it from the second element of the list) but it keeps on saying "TypeError: must use keyword argument for key function". What should I do? Also my list looks like this: ["bob","2","6","8","5.3333333333333"]
2015/08/30
[ "https://Stackoverflow.com/questions/32295943", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4464968/" ]
[Paul's answer](https://stackoverflow.com/a/32296030/21945) does a nice job of explaining how to use `sort` correctly. I want to point out, however, that sorting strings of numbers is not the same as sorting numeric values (ints and floats etc.). Sorting by strings will use character collating sequence to determine the order, e.g. ```
>>> l = ['100','1','2','99']
>>> sorted(l)
['1', '100', '2', '99']
``` but this is probably not what you want; 100 is greater than 2 and so should appear further back in the list than 2. You can sort by numeric value, but retain strings for the list items using the `key` parameter to `sort()`: ```
>>> sorted(l, key=lambda x: float(x))
['1', '2', '99', '100']
``` Here the `key` is a lambda function that converts its argument to a float. The `sort()` routine will use the converted value as the sort key, not the string value that is actually contained in the list. To sort from the second list item on, do this: ```
>>> a = ["bob", "2", "6", "8", "5.3333333333333"]
>>> a[1:] = sorted(a[1:], key=lambda x: float(x))
>>> a
['bob', '2', '5.3333333333333', '6', '8']
```
Try to do "a.sort()". It will sort your list. `sort` can't take 1 as an argument. Read more here: <http://www.tutorialspoint.com/python/list_sort.htm> If you're trying to sort every element except the first one, try to do: ```
a[1:] = sorted(a[1:])
```
1,090
66,831,049
I am trying to document the Reports, Visuals and measures used in a PBIX file. I have a PBIX file (containing some visuals and pointing to a Tabular Model in Live Mode); I then exported it as a PBIT and renamed it to zip. Now in this zip file we have a folder called Report, and within that we have a file called Layout. The layout file looks like a JSON file but when I try to read it via python, ```
import json

# Opening JSON file
f = open("C://Layout",)

# returns JSON object as
# a dictionary
#f1 = str.replace("\'", "\"")
data = json.load(f)
``` I get the below issue, ```
JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
``` Renaming it to Layout.json doesn't help either and gives the same issue. Is there an easy way or a parser to specifically parse this Layout file and get the below information out of it ```
Report Name | Visual Name | Column or Measure Name
```
2021/03/27
[ "https://Stackoverflow.com/questions/66831049", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6668031/" ]
Not sure if you have come across an answer for your question yet, but I have been looking into something similar. Here is what I have had to do in order to get the file to parse correctly. Big items here to note are the encoding and all the whitespace replacements. `data` will then contain the parsed object. ```
with open('path/to/Layout', 'r', encoding="cp1252") as json_file:
    data_str = json_file.read().replace(chr(0), "").replace(chr(28), "").replace(chr(29), "").replace(chr(25), "")
    data = json.loads(data_str)
```
This script may help: <https://github.com/grenzi/powerbi-model-utilization> a portion of the script is: ```
def get_layout_from_pbix(pbixpath):
    """
    get_layout_from_pbix loads a pbix file, grabs the layout from it, and returns json
    :parameter pbixpath: file to read
    :return: json goodness
    """
    archive = zipfile.ZipFile(pbixpath, 'r')
    bytes_read = archive.read('Report/Layout')
    s = bytes_read.decode('utf-16-le')
    json_obj = json.loads(s, object_hook=parse_pbix_embedded_json)
    return json_obj
```
1,091
48,097,949
Hi, I am running the following model with statsmodels and it works fine. ```
from statsmodels.formula.api import ols
from statsmodels.iolib.summary2 import summary_col  #for summary stats of large tables

time_FE_str = ' + C(hour_of_day) + C(day_of_week) + C(week_of_year)'
weather_2_str = ' + C(weather_index) + rain + extreme_temperature + wind_speed'

model = ols("activity_count ~ C(city_id)"+weather_2_str+time_FE_str, data=df)
results = model.fit()
print summary_col(results).tables

print 'F-TEST:'
hypotheses = '(C(weather_index) = 0), (rain=0), (extreme_temperature=0), (wind_speed=0)'
f_test = results.f_test(hypotheses)
``` However, I do not know how to formulate the hypothesis for the F-test if I want to include the categorical variable `C(weather_index)`. I tried all versions imaginable to me but I always get an error. Did someone face this issue before? Any ideas? ```
F-TEST:
Traceback (most recent call last):
  File "C:/VK/scripts_python/predict_activity.py", line 95, in <module>
    f_test = results.f_test(hypotheses)
  File "C:\Users\Niko\Anaconda2\envs\gl-env\lib\site-packages\statsmodels\base\model.py", line 1375, in f_test
    invcov=invcov, use_f=True)
  File "C:\Users\Niko\Anaconda2\envs\gl-env\lib\site-packages\statsmodels\base\model.py", line 1437, in wald_test
    LC = DesignInfo(names).linear_constraint(r_matrix)
  File "C:\Users\Niko\Anaconda2\envs\gl-env\lib\site-packages\patsy\design_info.py", line 536, in linear_constraint
    return linear_constraint(constraint_likes, self.column_names)
  File "C:\Users\Niko\Anaconda2\envs\gl-env\lib\site-packages\patsy\constraint.py", line 391, in linear_constraint
    tree = parse_constraint(code, variable_names)
  File "C:\Users\Niko\Anaconda2\envs\gl-env\lib\site-packages\patsy\constraint.py", line 225, in parse_constraint
    return infix_parse(_tokenize_constraint(string, variable_names),
  File "C:\Users\Niko\Anaconda2\envs\gl-env\lib\site-packages\patsy\constraint.py", line 184, in _tokenize_constraint
    Origin(string, offset, offset + 1))
patsy.PatsyError: unrecognized token in constraint
    (C(weather_index) = 0), (rain=0), (extreme_temperature=0), (wind_speed=0)
     ^
```
2018/01/04
[ "https://Stackoverflow.com/questions/48097949", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7969556/" ]
The methods `t_test`, `wald_test` and `f_test` are for hypothesis tests on the parameters directly, not for an entire categorical or composite effect. `results.summary()` shows the parameter names that patsy created for the categorical variables. Those can be used to create contrasts or restrictions for the categorical effects. As an alternative, `anova_lm` directly computes the hypothesis test that a term, e.g. a categorical variable, has no effect.
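As a sketch of that idea, the joint hypothesis string for `f_test` can be assembled programmatically from the patsy-generated parameter names. Note the dummy-variable names below are hypothetical examples of what `results.summary()` might show for this model (they depend on the actual levels in the data):

```python
# Hypothetical patsy-style parameter names, as results.summary() would list
# them; the real names depend on the levels present in the data.
param_names = [
    "Intercept",
    "C(weather_index)[T.1]",
    "C(weather_index)[T.2]",
    "C(weather_index)[T.3]",
    "rain",
    "extreme_temperature",
    "wind_speed",
]

# Restrict every dummy of the categorical term, plus the continuous weather
# variables, to zero in one joint hypothesis.
weather_dummies = [n for n in param_names if n.startswith("C(weather_index)")]
hypotheses = ", ".join(
    "{} = 0".format(n)
    for n in weather_dummies + ["rain", "extreme_temperature", "wind_speed"]
)
print(hypotheses)
```

The resulting string could then be passed to `results.f_test(hypotheses)`.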
Leave out the `C()`! I tried making an analysis of these data. ``` Area Clover_yield Yarrow_stems A 19.0 220 A 76.7 20 A 11.4 510 A 25.1 40 A 32.2 120 A 19.5 300 A 89.9 60 A 38.8 10 A 45.3 70 A 39.7 290 B 16.5 460 B 1.8 320 B 82.4 0 B 54.2 80 B 27.4 0 B 25.8 450 B 69.3 30 B 28.7 250 B 52.6 20 B 34.5 100 C 49.7 0 C 23.3 220 C 38.9 160 C 79.4 0 C 53.2 120 C 30.1 150 C 4.0 450 C 20.7 240 C 29.8 250 C 68.5 0 ``` When I used the linear model in the first call to `ols` shown in the code I eventually ran into the snag you experienced. However, when I omitted mention of the fact that *Area* assumes discrete levels I was able to calculate F-tests on contrasts. ``` import pandas as pd from statsmodels.formula.api import ols df = pd.read_csv('clover.csv', sep='\s+') model = ols('Clover_yield ~ C(Area) + Yarrow_stems', data=df) model = ols('Clover_yield ~ Area + Yarrow_stems', data=df) results = model.fit() print (results.summary()) print (results.f_test(['Area[T.B] = Area[T.C], Yarrow_stems=150'])) ``` Here is the output. Notice that the summary indicates the names that can be used in formulating contrasts for factors, in our case `Area[T.B]` and `Area[T.C]`. ``` OLS Regression Results ============================================================================== Dep. Variable: Clover_yield R-squared: 0.529 Model: OLS Adj. R-squared: 0.474 Method: Least Squares F-statistic: 9.726 Date: Thu, 04 Jan 2018 Prob (F-statistic): 0.000177 Time: 17:26:03 Log-Likelihood: -125.61 No. Observations: 30 AIC: 259.2 Df Residuals: 26 BIC: 264.8 Df Model: 3 Covariance Type: nonrobust ================================================================================ coef std err t P>|t| [0.025 0.975] -------------------------------------------------------------------------------- Intercept 57.5772 6.337 9.086 0.000 44.551 70.603 Area[T.B] 0.3205 7.653 0.042 0.967 -15.411 16.052 Area[T.C] -0.5432 7.653 -0.071 0.944 -16.274 15.187 Yarrow_stems -0.1086 0.020 -5.401 0.000 -0.150 -0.067 ============================================================================== Omnibus: 0.459 Durbin-Watson: 2.312 Prob(Omnibus): 0.795 Jarque-Bera (JB): 0.449 Skew: 0.260 Prob(JB): 0.799 Kurtosis: 2.702 Cond. No. 766. ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. <F test: F=array([[ 27873807.59795523]]), p=4.939796675253845e-83, df_denom=26, df_num=2> ``` As mentioned in my comment, I'm not clear what you intended to test. **Edit**: Prompted by user333700's comment, I've tried running this code again, with two slightly different statements, `model = ols('Clover_yield ~ C(Area) + Yarrow_stems', data=df)` and `print (results.f_test(['C(Area)[T.B] = C(Area)[T.C], Yarrow_stems=150']))`. 'C(Area)[T.B]' and 'C(Area)[T.C]' came from the summary with the changed model. Thus, it doesn't matter, for this type of analysis, whether you declare with C() or not. You must simply remember to use the appropriate form for the dummy variables, as mentioned in the summary.
1,092
8,919,080
This is what my problem is: I need to make a random string 50 characters long, made up of `1`s and `0`s. I know how to solve this problem, and even have a one-liner for it. I have also looked for various solutions to this problem on SO, only to get back what I already know([1](https://stackoverflow.com/questions/2257441/python-random-string-generation-with-upper-case-letters-and-digits), [2](https://stackoverflow.com/questions/2257441/python-random-string-generation-with-upper-case-letters-and-digits), etc). But what I really want is **the most** Pythonic way of doing this. Currently, I'm leaning towards `''.join(( random.choice([0,1]) for i in xrange(50) ))` Is there a more pythonic way of doing this? Is there a built-in that does something like this, perhaps in `itertools`?
2012/01/18
[ "https://Stackoverflow.com/questions/8919080", "https://Stackoverflow.com", "https://Stackoverflow.com/users/198633/" ]
For Python2.7 or better: ``` In [83]: import random In [84]: '{:050b}'.format(random.randrange(1<<50)) Out[84]: '10011110110110000011111000011100101111101001001011' ``` (In Python2.6, use `'{0:050b}'` instead of `'{:050b}'`.) --- **Explanation**: The `string.format` method can convert integers into their binary string representations. The basic format code to do this is `'{:b}'`: ``` In [91]: '{:b}'.format(10) Out[91]: '1010' ``` To make a string of width 50, use the format code `'{:50b}'`: ``` In [92]: '{:50b}'.format(10) Out[92]: ' 1010' ``` and to fill in the whitespace with zeros, use `{:050b}`: ``` In [93]: '{:050b}'.format(10) Out[93]: '00000000000000000000000000000000000000000000001010' ``` The [syntax for str.format](http://docs.python.org/library/string.html#format-string-syntax) is a bit daunting at first. Here is my cheat sheet: ``` http://docs.python.org/library/string.html#format-string-syntax replacement_field ::= "{" field_name ["!" conversion] [":" format_spec] "}" field_name ::= (identifier|integer)("."attribute_name|"["element_index"]")* attribute_name ::= identifier element_index ::= integer conversion ::= "r" | "s" format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type] fill ::= <a character other than '}'> align ::= "<" | ">" | "=" | "^" "=" forces the padding to be placed after the sign (if any) but before the digits. (for numeric types) "<" left justification ">" right justification "^" center justification sign ::= "+" | "-" | " " "+" places a plus/minus sign for all numbers "-" places a sign only for negative numbers " " places a leading space for positive numbers # for integers with type b,o,x, tells format to prefix output with 0b, 0o, or 0x. 0 enables zero-padding. equivalent to 0= fill align. 
width ::= integer , tells format to use a comma for a thousands separator precision ::= integer type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "x" | "X" | "%" c convert integer to corresponding unicode character n uses a locale-aware separator % multiplies number by 100, display in 'f' format, with percent sign ```
``` # Choose a number in [0, 1L << 50), and format it as binary. # The [2:] lops off the prefix "0b" bit_str = bin(random.randint(0, (1L << 50) - 1))[2:] # We then need to pad to 50 bits. fifty_random_bits = '%s%s' % ('0' * (50 - len(bit_str)), bit_str) ```
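For completeness, an alternative sketch (not from either answer above): `random.getrandbits` avoids building the upper bound by hand, and the same zero-padded binary format trick still applies:

```python
import random

def random_bit_string(n=50):
    # getrandbits(n) returns an integer with n random bits; the nested
    # width field pads it with leading zeros to exactly n '0'/'1' chars.
    return '{:0{width}b}'.format(random.getrandbits(n), width=n)

bits = random_bit_string()
```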
1,093
47,355,844
I'm trying to run a python script in a Docker container, and i don't know why, python can't find any of the python's module. I thaught it has something to do with the PYTHONPATH env variable, so i tried to add it in the Dockerfile like this : `ENV PYTHONPATH $PYTHONPATH` But it didn't work. this is what my Dockerfile looks like: ``` FROM ubuntu:16.04 MAINTAINER SaveMe SaveMe@Desperate.com ADD . /app WORKDIR /app RUN apt-get update RUN DEBIAN_FRONTEND=noninteractive apt-get install -y locales # Set the locale RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \ locale-gen ENV LANG en_US.UTF-8 ENV LANGUAGE en_US:en ENV LC_ALL en_US.UTF-8 ENV PYTHONPATH ./app #Install dependencies RUN echo "===> Installing sudo to emulate normal OS behavior..." RUN apt-get install -y software-properties-common RUN apt-add-repository universe RUN add-apt-repository ppa:jonathonf/python-3.6 RUN (apt-get update && apt-get upgrade -y -q && apt-get dist-upgrade - y -q && apt-get -y -q autoclean && apt-get -y -q autoremove) RUN apt-get install -y libxml2-dev libxslt-dev RUN apt-get install -y python3.6 python3.6-dev python3.6-venv openssl ca-certificates python3-pip RUN apt-get install -y python3-dev python-dev libffi-dev gfortran RUN apt-get install -y swig RUN apt-get install -y sshpass openssh-client rsync python-pip python- dev libffi-dev libssl-dev libxml2-dev libxslt1-dev libjpeg8-dev zlib1g-dev libpulse-dev RUN pip install --upgrade pip RUN pip install bugsnag #Install python package + requirements.txt RUN pip3 install -r requirements.txt CMD ["python3.6", "import_emails.py"] ``` when i'm trying to run: `sudo docker run <my_container>` i got this Traceback: ``` Traceback (most recent call last): File "import_emails.py", line 9, in <module> import bugsnag ModuleNotFoundError: No module named 'bugsnag' ``` As you can see i'm using python3.6 for this project. Any lead on how to solve this ?
2017/11/17
[ "https://Stackoverflow.com/questions/47355844", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8538327/" ]
Inside the container, when I `pip install bugsnag`, I get the following: ``` root@af08af24a458:/app# pip install bugsnag Requirement already satisfied: bugsnag in /usr/local/lib/python2.7/dist-packages Requirement already satisfied: webob in /usr/local/lib/python2.7/dist-packages (from bugsnag) Requirement already satisfied: six<2,>=1.9 in /usr/local/lib/python2.7/dist-packages (from bugsnag) ``` You probably see the problem here. You're installing the package for python2.7, which is the OS default, instead of python3.6, which is what you're trying to use. Check out this answer for help resolving this issue: ["ModuleNotFoundError: No module named <package>" in my Docker container](https://stackoverflow.com/questions/47355844/modulenotfounderror-no-module-named-package-in-my-docker-container) Alternatively, this is a problem `virtualenv` and similar tools are meant to solve, you could look into that as well.
Since you're using Python 3, try using `pip3` to install bugsnag instead of `pip`.
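One way to keep every install on a single interpreter is sketched below. This is untested against this exact image and the PPA's python3.6 may or may not ship its own pip, hence the `ensurepip` bootstrap:

```dockerfile
# Make sure pip belongs to the same interpreter that runs the app.
# ensurepip bootstraps pip for python3.6 if the PPA package lacks it.
RUN python3.6 -m ensurepip && python3.6 -m pip install --upgrade pip
RUN python3.6 -m pip install bugsnag
RUN python3.6 -m pip install -r requirements.txt

CMD ["python3.6", "import_emails.py"]
```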
1,098
33,617,221
I am trying to speed up some heavy simulations by using python's multiprocessing module on a machine with 24 cores that runs Suse Linux. From reading through the documentation, I understand that this only makes sense if the individual calculations take much longer than the overhead for creating the pool etc. What confuses me is that the execution time of some of the individual processes is much longer with multiprocessing than when I just run a single process. In my actual simulations the time increases from around 300s to up to 1500s. Interestingly this gets worse when I use more processes. The following example illustrates the problem with a slightly shorter dummy loop: ```py from time import clock,time import multiprocessing import os def simulate(params): t1 = clock() result = 0 for i in range(10000): for j in range(10000): result+=i*j pid = os.getpid() print 'pid: ',pid,' sim time: ',clock() - t1, 'seconds' return result if __name__ == '__main__': for n_procs in [1,5,10,20]: print n_procs,' processes:' t1 = time() result = multiprocessing.Pool(processes = n_procs).map(simulate,range(20)) print 'total: ',time()-t1 ``` This produces the following output: ``` 1 processes: pid: 1872 sim time: 8.1 seconds pid: 1872 sim time: 7.92 seconds pid: 1872 sim time: 7.93 seconds pid: 1872 sim time: 7.89 seconds pid: 1872 sim time: 7.87 seconds pid: 1872 sim time: 7.74 seconds pid: 1872 sim time: 7.83 seconds pid: 1872 sim time: 7.84 seconds pid: 1872 sim time: 7.88 seconds pid: 1872 sim time: 7.82 seconds pid: 1872 sim time: 8.83 seconds pid: 1872 sim time: 7.91 seconds pid: 1872 sim time: 7.97 seconds pid: 1872 sim time: 7.84 seconds pid: 1872 sim time: 7.87 seconds pid: 1872 sim time: 7.91 seconds pid: 1872 sim time: 7.86 seconds pid: 1872 sim time: 7.9 seconds pid: 1872 sim time: 7.96 seconds pid: 1872 sim time: 7.97 seconds total: 159.337743998 5 processes: pid: 1906 sim time: 8.66 seconds pid: 1907 sim time: 8.74 seconds pid: 1908 sim time: 8.75 seconds pid: 1905 
sim time: 8.79 seconds pid: 1909 sim time: 9.52 seconds pid: 1906 sim time: 7.72 seconds pid: 1908 sim time: 7.74 seconds pid: 1907 sim time: 8.26 seconds pid: 1905 sim time: 8.45 seconds pid: 1909 sim time: 9.25 seconds pid: 1908 sim time: 7.48 seconds pid: 1906 sim time: 8.4 seconds pid: 1907 sim time: 8.23 seconds pid: 1905 sim time: 8.33 seconds pid: 1909 sim time: 8.15 seconds pid: 1908 sim time: 7.47 seconds pid: 1906 sim time: 8.19 seconds pid: 1907 sim time: 8.21 seconds pid: 1905 sim time: 8.27 seconds pid: 1909 sim time: 8.1 seconds total: 35.1368539333 10 processes: pid: 1918 sim time: 8.79 seconds pid: 1920 sim time: 8.81 seconds pid: 1915 sim time: 14.78 seconds pid: 1916 sim time: 14.78 seconds pid: 1914 sim time: 14.81 seconds pid: 1922 sim time: 14.81 seconds pid: 1913 sim time: 14.98 seconds pid: 1921 sim time: 14.97 seconds pid: 1917 sim time: 15.13 seconds pid: 1919 sim time: 15.13 seconds pid: 1920 sim time: 8.26 seconds pid: 1918 sim time: 8.34 seconds pid: 1915 sim time: 9.03 seconds pid: 1921 sim time: 9.03 seconds pid: 1916 sim time: 9.39 seconds pid: 1913 sim time: 9.27 seconds pid: 1914 sim time: 12.12 seconds pid: 1922 sim time: 12.17 seconds pid: 1917 sim time: 12.15 seconds pid: 1919 sim time: 12.17 seconds total: 27.4067809582 20 processes: pid: 1941 sim time: 8.63 seconds pid: 1939 sim time: 10.32 seconds pid: 1931 sim time: 12.35 seconds pid: 1936 sim time: 12.23 seconds pid: 1937 sim time: 12.82 seconds pid: 1942 sim time: 12.73 seconds pid: 1932 sim time: 13.01 seconds pid: 1946 sim time: 13.0 seconds pid: 1945 sim time: 13.74 seconds pid: 1944 sim time: 14.03 seconds pid: 1929 sim time: 14.44 seconds pid: 1943 sim time: 14.75 seconds pid: 1935 sim time: 14.8 seconds pid: 1930 sim time: 14.79 seconds pid: 1927 sim time: 14.85 seconds pid: 1934 sim time: 14.8 seconds pid: 1928 sim time: 14.83 seconds pid: 1940 sim time: 14.88 seconds pid: 1933 sim time: 15.05 seconds pid: 1938 sim time: 15.06 seconds total: 15.1311581135 ``` What I 
do not understand is that *some* of the processes become much slower above a certain number of CPUs. I should add that nothing else is running on this machine. Is this expected? Am I doing something wrong?
2015/11/09
[ "https://Stackoverflow.com/questions/33617221", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5543796/" ]
Cores are a shared resource like anything else on a computer. The OS will usually balance the load, meaning it will spread threads across as many cores as possible.`*` The guiding metric is core load. So if the thread count is lower than the core count, some cores will sit idle. (The threading model prevents a single thread from being split across multiple cores.) If there are more threads than cores, the OS will assign several threads to a single core and multitask between those threads on that core. Switching from one thread to another on a single core has some cost associated with it. Shifting a task from one core to another has an even greater cost (quite significant in terms of both cores' resources), so the OS will generally avoid such actions. **So getting back to your story.** **Performance rose with thread count up to the core count** because there were idle cores that got new work. The last few cores, though, were busy with OS work anyway, so they added very little to actual performance. **Overall performance still improved** after the thread count passed the core count, simply because the OS can switch the active thread if the previous one gets stuck on a long-running task (like I/O), so another one can use the CPU time. **Performance would decrease** if the thread count significantly exceeded the core count, as too many threads would fight for the same resource (CPU time) and switching costs would grow to a substantial portion of the CPU cycles. However, from your listing that still hasn't happened. **As for the seemingly long execution times?** They were long! The threads just did not spend all of that time working. The OS switched them off and on to maximize CPU usage whenever any one of them got stuck on external work (I/O), and then did some more switching to spread CPU time more evenly across the threads assigned to each core. `*` The OS may also aim for least power usage, maximized I/O usage, etc. Linux in particular is very flexible here, but that's out of scope ;) Read up on the various schedulers in Linux if interested.
This is the best answer I could come up with after looking through different questions and documentation: It is pretty widely known that `multiprocessing` in general adds some sort of overhead when it comes to run time performance. This is/can be a result of a lot of different factors such as allocating RAM space, initializing the process, waiting for termination, [etc,etc,etc](https://stackoverflow.com/questions/26630245/testing-python-multiprocessing-low-speed-because-of-overhead). This explains the increase in time when switching from serial to parallel processing. The increase in time as the number of processes grows can partly be explained by the way that multiprocessing works. The comment by ali\_m [in this link](https://stackoverflow.com/questions/28789377/python-multiprocessing-run-time-per-process-increases-with-number-of-processes) was the best explanation I could find for why this happens: > > For starters, if your threads share CPU cache you're likely to suffer a lot more cache misses, which can cause a big degradation in performance > > > This is akin to trying to run a lot of different programs on your computer at once: your programs start to 'lag' and slow down because your CPU can only handle so many requests at a time. Another good link that I found was [this](https://community.oracle.com/message/3777500). Although it is a question about SQL servers and queries, the same idea applies (the overhead grows as the number of processes/queries increases). This is far from a complete answer, but it is my rough understanding of why you are getting the results you do. Conclusion? The results you are getting are both normal and expected for multiprocessing.
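As an aside (this sketch is mine, not from either answer): with a trivial per-task workload, nearly all elapsed time is pool startup/teardown and inter-process communication, which makes it easy to separate scheduling overhead from real computation when experimenting:

```python
import multiprocessing
import time

def square(x):
    return x * x

def timed_map(n_procs, n_tasks=8):
    # With near-zero per-task cost, the elapsed time is dominated by
    # pool startup/teardown and IPC overhead rather than computation.
    start = time.time()
    with multiprocessing.Pool(processes=n_procs) as pool:
        results = pool.map(square, range(n_tasks))
    return results, time.time() - start

if __name__ == "__main__":
    for n in (1, 2, 4):
        _, elapsed = timed_map(n)
        print("%d processes: %.3fs (mostly overhead)" % (n, elapsed))
```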
1,099
65,157,911
I"m struggling on how to count the letter, number, and special character in a string. I'm a beginner and exploring python. Thank you in advance guys! ``` string=input("Enter string: Abc123--- ") count1=0 count2=0 count3=0 count4=0 for i in string: if(i.isletter()): count1=count1+1 count2=count2+1 count3=count3+1 count4=count2+1 print("Letter count:") print(count1) print("Number count:") print(count2) print("Special Characters count:") print(count3) print("Total characters count:") print(count4) ```
2020/12/05
[ "https://Stackoverflow.com/questions/65157911", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8055559/" ]
* You should import your classes directly: ``` from Item import Item # Assuming the file name is Item.py from Inventory import Inventory # Assuming the file name is Inventory.py ``` and then you can do: ``` item1 = Item(0,"Hat", 14, 10.00) ``` * Then you use `Item` inside the `Inventory` class, but you didn't **import** `Item` there. Be careful and import it the same way with `from x import y`. * Also you have an error in the line: ``` addingItem = Item(pn,id,name,amt,cost) ``` What is `pn`? Remove it and it should work: ``` addingItem = Item(id,name,amt,cost) ``` * Also avoid using reserved names like `list`, `id`, etc., or you might end up with problems.
This error, `TypeError: 'module' object is not callable`, is raised when you confuse the class name with the module name. The problem is in the import line: you are importing a module, not a class. This happened because the module name and the class name are the same. If you have a class `MyClass` in a file called MyClass.py, then you should write: ``` from MyClass import MyClass ```
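For reference, a corrected sketch of the counting loop from the question at the top of this entry. The original `isletter()` call does not exist on `str` (the usual predicates are `isalpha()` and `isdigit()`), and each counter should only be incremented in its own branch:

```python
def count_characters(s):
    # Classify each character exactly once: letter, digit, or special.
    letters = digits = specials = 0
    for ch in s:
        if ch.isalpha():
            letters += 1
        elif ch.isdigit():
            digits += 1
        else:
            specials += 1
    return letters, digits, specials, len(s)

print(count_characters("Abc123---"))  # (3, 3, 3, 9)
```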
1,101
2,693,820
How might one extract all images from a pdf document, at native resolution and format? (Meaning extract tiff as tiff, jpeg as jpeg, etc. and without resampling). Layout is unimportant, I don't care were the source image is located on the page. I'm using python 2.7 but can use 3.x if required.
2010/04/22
[ "https://Stackoverflow.com/questions/2693820", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14420/" ]
Often in a PDF, the image is simply stored as-is. For example, a PDF with a jpg inserted will have a range of bytes somewhere in the middle that when extracted is a valid jpg file. You can use this to very simply extract byte ranges from the PDF. I wrote about this some time ago, with sample code: [Extracting JPGs from PDFs](http://nedbatchelder.com/blog/200712/extracting_jpgs_from_pdfs.html).
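A minimal sketch of that byte-scanning idea (simplified from the approach in the linked post; real PDFs can yield false positives, e.g. marker bytes inside compressed streams, so treat it as a starting point rather than a full parser):

```python
def extract_jpegs(pdf_bytes):
    """Find byte ranges that look like embedded JPEG streams.

    JPEG data begins with the SOI marker FF D8 FF and ends with the
    EOI marker FF D9; DCTDecode image streams in a PDF are often the
    raw JPEG bytes, so a plain scan frequently recovers them intact.
    """
    images = []
    pos = 0
    while True:
        start = pdf_bytes.find(b"\xff\xd8\xff", pos)
        if start == -1:
            break
        end = pdf_bytes.find(b"\xff\xd9", start)
        if end == -1:
            break
        images.append(pdf_bytes[start:end + 2])
        pos = end + 2
    return images
```

Each returned chunk can be written out as a standalone `.jpg` file.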
I added all of those together in PyPDFTK [here](https://github.com/ronanpaixao/PyPDFTK/blob/master/pdf_images.py). My own contribution is handling of `/Indexed` files as such: ``` for obj in xObject: if xObject[obj]['/Subtype'] == '/Image': size = (xObject[obj]['/Width'], xObject[obj]['/Height']) color_space = xObject[obj]['/ColorSpace'] if isinstance(color_space, pdf.generic.ArrayObject) and color_space[0] == '/Indexed': color_space, base, hival, lookup = [v.getObject() for v in color_space] # pg 262 mode = img_modes[color_space] if xObject[obj]['/Filter'] == '/FlateDecode': data = xObject[obj].getData() img = Image.frombytes(mode, size, data) if color_space == '/Indexed': img.putpalette(lookup.getData()) img = img.convert('RGB') img.save("{}{:04}.png".format(filename_prefix, i)) ``` Note that when `/Indexed` files are found, you can't just compare `/ColorSpace` to a string, because it comes as an `ArrayObject`. So, we have to check the array and retrieve the indexed palette (`lookup` in the code) and set it in the PIL Image object, otherwise it stays uninitialized (zero) and the whole image shows as black. My first instinct was to save them as GIFs (which is an indexed format), but my tests turned out that PNGs were smaller and looked the same way. I found those types of images when printing to PDF with Foxit Reader PDF Printer.
1,102
94,334
What is the best python framework to create distributed applications? For example to build a P2P app.
2008/09/18
[ "https://Stackoverflow.com/questions/94334", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You could checkout [pyprocessing](http://pyprocessing.berlios.de/) which will be included in the standard library as of 2.6. It allows you to run tasks on multiple processes using an API similar to threading.
You could download the source of BitTorrent for starters and see how they did it. <http://download.bittorrent.com/dl/>
1,112
51,817,237
I am working on a Flask project and I am using marshmallow to validate user input. Below is a code snippet: ``` def create_user(): in_data = request.get_json() data, errors = Userschema.load(in_data) if errors: return (errors), 400 fname = data.get('fname') lname = data.get('lname') email = data.get('email') password = data.get('password') cpass = data.get('cpass') ``` When I eliminate the `errors` part, the code works perfectly. When I run it as it is, I get the following error: > > builtins.ValueError > > > ValueError: too many values to unpack (expected 2) > > > Traceback (most recent call last) > > > File > "/home/..project-details.../venv3/lib/python3.6/site-packages/flask/app.py", > line 2000, in `__call__` > > > error = None > > > ctx.auto\_pop(error) > > > ``` def __call__(self, environ, start_response): """Shortcut for :attr:`wsgi_app`.""" return self.wsgi_app(environ, start_response) def __repr__(self): return '<%s %r>' % ( self.__class__.__name__, self.name, ``` Note: The var `in_data` is a dict. Any ideas??
2018/08/13
[ "https://Stackoverflow.com/questions/51817237", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10217900/" ]
I recommend you check your dependency versions. Per the [Marshmallow API reference](http://marshmallow.readthedocs.io/en/latest/api_reference.html#schema), schema.load returns: > > Changed in version 3.0.0b7: This method returns the deserialized data rather than a (data, errors) duple. A ValidationError is raised if invalid data are passed. > > > I suspect Python is trying to unpack the dict (returned as a single object) into two variables. Unpacking a dict yields its keys, so the exception says "too many values" because the returned dict has more than two keys. The below reproduces the error with any dict of more than two keys: ``` d = {'fname': 'a', 'lname': 'b', 'email': 'c'} a, b = d # ValueError: too many values to unpack (expected 2) ```
According to the documentation, in its most recent version (3.17.1) the way of handling validation errors is as follows: ``` from marshmallow import ValidationError try: result = UserSchema().load({"name": "John", "email": "foo"}) except ValidationError as err: print(err.messages) # => {"email": ['"foo" is not a valid email address.']} print(err.valid_data) # => {"name": "John"} ```
1,120
48,072,131
I am not sure what would be an appropriate heading for this question and this can be a repeated question as well. So please guide accordingly. I am new to python programming. I have this simple code to generate Fibonacci series. ``` 1: def fibo(n): 2: a = 0 3: b = 1 4: for x in range(n): 5: print (a, end=' ') 6: #a, b = b, a+b 7: a = b 8: b = a+b 9: print() 10: num = int(input("enter n value: ")) 11: print(fibo(num)) ``` If I execute the above code as-is, the result I get is as follows ``` enter n value: 10 0 1 2 4 8 16 32 64 128 256 ``` If I uncomment line #6 and comment lines #7 and #8, the result I get is the actual Fibonacci series. ``` enter n value: 10 0 1 1 2 3 5 8 13 21 34 ``` I would like to know what is the difference between ``` a, b = b, a + b ``` and ``` a = b b = a + b ``` Programming IDE used: PyCharm Community 2017.3
2018/01/03
[ "https://Stackoverflow.com/questions/48072131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1755089/" ]
``` a = b b = a + b ``` is actually: ``` a = b b = b + b ``` what you want is: ``` a = b b = old_value_of_a + b ``` When you do `a, b = b, a + b` it really is doing: ``` tmp_a = b tmp_b = a + b a = tmp_a b = tmp_b ``` which is what you want
In line 7, you've already assigned the value in `b` to `a`, so in line 8, new value for `b` is actually double the old b's value. While in line 6, the values on the right side of `=` will be using the old values, that's why you could get Fibo series.
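To make the difference concrete, the two variants can be put side by side (function names are mine, not from the question):

```python
def fib_tuple(n):
    # Simultaneous assignment: the right-hand side is evaluated first,
    # so a + b still uses the old value of a.
    out, a, b = [], 0, 1
    for _ in range(n):
        out.append(a)
        a, b = b, a + b
    return out

def fib_sequential(n):
    # Sequential assignment: by the time b = a + b runs, a has already
    # been overwritten with b, so this computes b = b + b (doubling).
    out, a, b = [], 0, 1
    for _ in range(n):
        out.append(a)
        a = b
        b = a + b
    return out

print(fib_tuple(6))       # [0, 1, 1, 2, 3, 5]
print(fib_sequential(6))  # [0, 1, 2, 4, 8, 16]
```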
1,121
43,566,044
Python does a lot with magic methods and most of these are part of some protocol. I am familiar with the "iterator protocol" and the "number protocol" but recently stumbled over the term ["sequence protocol"](https://docs.python.org/c-api/sequence.html#sequence-protocol). But even after some research I'm not exactly sure what the "sequence protocol" is. For example the C API function [`PySequence_Check`](https://docs.python.org/c-api/sequence.html#c.PySequence_Check) checks (according to the documentation) if some object implements the "sequence protocol". The [source code](https://github.com/python/cpython/blob/813f943c592cf225871b99cffc99304c8cbbee40/Objects/abstract.c#L1471) indicates that this is a class that's not a dict but implements a `__getitem__` method which is *roughly* identical to what the documentation on [`iter`](https://docs.python.org/library/functions.html#iter) also states: > > [...]must support the sequence protocol (the `__getitem__()` method with integer arguments starting at 0).[...] > > > But the requirement to start with `0` isn't something that's "implemented" in `PySequence_Check`. Then there is also the [`collections.abc.Sequence`](https://docs.python.org/library/collections.abc.html#collections-abstract-base-classes) type, which basically says the instance has to implement `__reversed__`, `__contains__`, `__iter__` and `__len__`. But by that definition a class implementing the "sequence protocol" isn't necessarily a Sequence, for example the ["data model"](https://docs.python.org/reference/datamodel.html#the-standard-type-hierarchy) and the abstract class guarantee that a sequence has a length. But a class just implementing `__getitem__` (passing the `PySequence_Check`) throws an exception when using `len(an_instance_of_that_class)`. Could someone please clarify for me the difference between a sequence and the sequence protocol (if there's a definition for the protocol besides reading the source code) and when to use which definition?
2017/04/23
[ "https://Stackoverflow.com/questions/43566044", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5393381/" ]
It's not really consistent. Here's [`PySequence_Check`](https://github.com/python/cpython/blob/3.6/Objects/abstract.c#L1460): ``` int PySequence_Check(PyObject *s) { if (PyDict_Check(s)) return 0; return s != NULL && s->ob_type->tp_as_sequence && s->ob_type->tp_as_sequence->sq_item != NULL; } ``` `PySequence_Check` checks if an object provides the C sequence protocol, implemented through a `tp_as_sequence` member in the `PyTypeObject` representing the object's type. This `tp_as_sequence` member is a pointer to a struct containing a bunch of functions for sequence behavior, such as `sq_item` for item retrieval by numeric index and `sq_ass_item` for item assignment. Specifically, `PySequence_Check` requires that its argument is not a dict, and that it provides `sq_item`. Types with a `__getitem__` written in Python will provide `sq_item` regardless of whether they're conceptually sequences or mappings, so a mapping written in Python that doesn't inherit from `dict` will pass `PySequence_Check`. --- On the other hand, `collections.abc.Sequence` only checks whether an object concretely inherits from `collections.abc.Sequence` or whether its class (or a superclass) is explicitly `register`ed with `collections.abc.Sequence`. If you just implement a sequence yourself without doing either of those things, it won't pass `isinstance(your_sequence, Sequence)`. Also, most classes registered with `collections.abc.Sequence` don't support all of `collections.abc.Sequence`'s methods. Overall, `collections.abc.Sequence` is a lot less reliable than people commonly expect it to be. --- As for what counts as a sequence in practice, it's usually anything that supports `__len__` and `__getitem__` with integer indexes starting at 0 and isn't a mapping. If the docs for a function say it takes any sequence, that's almost always all it needs. Unfortunately, "isn't a mapping" is hard to test for, for reasons similar to how "is a sequence" is hard to pin down.
For a type to be in accordance with the sequence protocol, these 4 conditions must be met: * Retrieve elements by index `item = seq[index]` * Find items by value `index = seq.index(item)` * Count items `num = seq.count(item)` * Produce a reversed sequence `r = reversed(seq)`
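A small sketch illustrating the old-style `__getitem__` protocol discussed in both answers: no `__iter__` or `__len__` anywhere, yet `iter()`, `for` loops, and `in` all work, while `len()` and the `Sequence` ABC check fail:

```python
from collections.abc import Sequence

class Squares:
    """Implements only __getitem__ with 0-based integer indexes.

    That alone is enough for iter(), for-loops and the `in` operator
    (which fall back to calling __getitem__ with 0, 1, 2, ... until
    IndexError), but not for len() or the Sequence ABC.
    """

    def __init__(self, n):
        self.n = n

    def __getitem__(self, index):
        if not 0 <= index < self.n:
            raise IndexError(index)
        return index * index

s = Squares(4)
print(list(s))                  # [0, 1, 4, 9]
print(4 in s)                   # True
print(isinstance(s, Sequence))  # False -- never registered with the ABC
try:
    len(s)
except TypeError:
    print("len() fails: no __len__ despite working iteration")
```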
1,129
26,199,376
I am trying to use LevelDB in my python project. I zeroed in on the python binding Plyvel <http://plyvel.readthedocs.org/en/latest/installation.html>, which seems to be the better maintained and documented python binding. However, installation fails for plyvel > > plyvel/\_plyvel.cpp:359:10: fatal error: 'leveldb/db.h' file not found > > > #include "leveldb/db.h" > > > So I believe I have to install leveldb on my machine. I did not find an installation guide for leveldb on Mac OS X. I have downloaded the tarball for leveldb, <https://code.google.com/p/leveldb/downloads/list>. The Makefile compiles the code, but plyvel still fails. How should I compile leveldb so that its binaries are made available to plyvel?
2014/10/05
[ "https://Stackoverflow.com/questions/26199376", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2089768/" ]
In OS X, it seems like `/usr/local/include`, where the leveldb headers (db.h) live, is not visible to gcc. You need to install the Apple command line tools: ``` xcode-select --install ``` plyvel will compile after that. [Link to GH issue](https://github.com/wbolster/plyvel/issues/34). Seems to be an OS X problem.
I'm not familiar with leveldb but most direct binary installations require you to run `./configure` then `make` then `make install` before the binary is actually installed. You should try that. Also, according to this github page you should be able to install it with `gem`: <https://github.com/DAddYE/leveldb>
1,130
47,944,927
I am trying to make a GET request to a shopify store, packershoes as follow: ``` endpoint = "http://www.packershoes.com" print session.get(endpoint, headers=headers) ``` When I run a get request to the site I get the following error: ``` File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 467, in get return self.request('GET', url, **kwargs) File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 455, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 558, in send r = adapter.send(request, **kwargs) File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 385, in send raise SSLError(e) requests.exceptions.SSLError: hostname 'packershoes.com' doesn't match either of '*.myshopify.com', 'myshopify.com' ``` When I request a different sites, it works fine.
2017/12/22
[ "https://Stackoverflow.com/questions/47944927", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8528309/" ]
This looks like more of an SSL problem than a Python problem. You haven't shown us your code, so I'm making some guesses here, but it looks as if the site to which you are connecting is presenting an SSL certificate that doesn't match the hostname you're using. The resolution here is typically: * See if there is an alternative hostname you should/could be using that would match the hostname in the certificate, * Report the problem to the site you're trying to access and see if there is a configuration error on their end, or * Disable certificate validation in your code. You don't really want to do that because it would open your code up to man-in-the-middle attacks. This is discussed [in the requests documentation](http://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification). **Update** Taking a closer look at your problem, I notice that I cannot reproduce this problem myself. The certificates presented by `www.packershoes.com` are clearly for `*.myshopify.com`, but I don't get any certificate errors presumably because that address is actually an alias for `packer-shoes.myshopify.com` ``` $ host www.packershoes.com www.packershoes.com is an alias for packer-shoes.myshopify.com. packer-shoes.myshopify.com is an alias for shops.myshopify.com. shops.myshopify.com has address 23.227.38.64 ``` I wonder if your issue isn't simply related to either the version of `requests` that you're using or something in your local DNS configuration. If you replace `www.packershoes.com` in your request with `packer-shoes.myshopify.com`, does it work correctly?
Requests verifies SSL certificates for HTTPS requests, just like a web browser. By default, SSL verification is enabled, and Requests will throw a `SSLError` if it's unable to verify the certificate, you have set verify to False: ``` session.get("http://www.packershoes.com", headers=headers, verify=False) ```
1,139
22,042,673
I've setup a code in python to search for tweets using the oauth2 and urllib2 libraries only. (I'm not using any particular twitter library) I'm able to search for tweets based on keywords. However, I'm getting zero number of tweets when I search for this particular keyword - "Jurgen%20Mayer-Hermann". (this is challenge because my ultimate goal is to search for this keyword only. On the other hand when I search for the same thing online (twitter interface, I'm getting enough tweets). - <https://twitter.com/search?q=Jurgen%20Mayer-Hermann&src=typd> Can someone please see if we can identify the issue? The code is as follows: ``` def getfeed(mystr, tweetcount): url = "https://api.twitter.com/1.1/search/tweets.json?q=" + mystr + "&count=" + tweetcount parameters = [] response = twitterreq(url, "GET", parameters) res = json.load(response) return res search_str = "Jurgen Mayer-Hermann" search_str = '%22'+search_str+'%22' search = search_str.replace(" ","%20") search = search.replace("#","%23") tweetcount = str(50) res = getfeed(search, tweetcount) ``` When I print the constructed url, I get ``` https://api.twitter.com/1.1/search/tweets.json?q=%22Jurgen%20Mayer-Hermann%22&count=50 ```
2014/02/26
[ "https://Stackoverflow.com/questions/22042673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2935885/" ]
Change `$(document).load(` to [`$(document).ready(`](http://learn.jquery.com/using-jquery-core/document-ready/) ``` $(document).ready(function() { var vague = $('.zero').Vague({ intensity: 3, forceSVGUrl: false }); vague.blur(); }); ``` or use ``` $(window).load(function(){ ``` or use Shorthand for `$( document ).ready()` ``` $(function(){ ```
Try to use: ``` $(window).load(function() { ``` or: ``` $(document).ready(function() { ``` instead of: ``` $(document).load(function() { ```
1,140
56,112,849
I have a Div containing 4 images of the same size, placed in a row. I want them to occupy all the space available in the div while staying in the same row, with the first image at the far left and the fourth image at the far right; they also have to be equally spaced. I can accomplish this by modifying the padding of each image, so I'm asking if there is a method to do it automatically.

```
<div> <img src="imgsrc\html5.svg" id="htmLogo" class="iconPgr"> <img src="imgsrc\java.svg" id="javaLogo" class="iconPgr"> <img src="imgsrc\python.svg" id="pythonLogo" class="iconPgr"> <img src="imgsrc\C++ icon.png" id="cLogo" class="iconPgr"> </div>
```

```
#htmLogo { padding-left: 35px; padding-right: 0px; /* I repeat the same for every ID with different padding values so the imgs result equally spaced with htmLogo in the far right and cLogo in the far left */
```
2019/05/13
[ "https://Stackoverflow.com/questions/56112849", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11426745/" ]
You can use flexbox. Read more about it here: <https://css-tricks.com/snippets/css/a-guide-to-flexbox/> ```css #my-container { display: flex; justify-content: space-between; } ``` ```html <div id="my-container"> <img src="https://placekitten.com/50/50" /> <img src="https://placekitten.com/50/50" /> <img src="https://placekitten.com/50/50" /> <img src="https://placekitten.com/50/50" /> </div> ```
if bootstrap is available and you can change the html then you can wrap each image in a div and use bootstrap's grid system ([here is a demo](https://codepen.io/carnnia/pen/ZNprZa)). ```css #container{ width: 100%; border: 1px solid red; } .row{ text-align: center; } ``` ```html <div class="container" id="container"> <div class="row"> <div class="col-lg-3"><img src="https://image.flaticon.com/icons/svg/23/23735.svg" width=50 height=50 /></div> <div class="col-lg-3"><img src="http://freevector.co/wp-content/uploads/2012/07/14675-telegram-logo1.png" width=50 height=50/></div> <div class="col-lg-3"><img src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTgWDI1YcQTcTa2IoQn_yuEXtWwuLy7KbkFZ5H-2F3554d_j29nAQ" width=50 height=50/></div> <div class="col-lg-3"><img src="https://image.flaticon.com/icons/png/512/130/130484.png" width=50 height=50/></div> </div><!-- row END --> </div> <!-- container END --> ```
1,143
8,051,506
am I going about this in the correct way? Ive never done anything like this before, so im not 100% sure on what I am doing. The code so far gets html and css files and that works fine, but images wont load, and will I have to create a new "if" for every different file type? or am I doing this a silly way...here is what I have: ``` import string,cgi,time from os import curdir, sep from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer import os import mimetypes #import pri port = 888 host = "0.0.0.0" class MyHandler(BaseHTTPRequestHandler): def do_GET(self): try: #RequestedURL = self.path mimeType = mimetypes.guess_type(self.path)[0] fileType = mimetypes.guess_extension(mimeType) infoList = [mimeType, fileType] if infoList[1] != ".py": self.send_response(200) self.send_header('Content-type', mimeType) self.end_headers() f = open(curdir + sep + self.path, "rb") self.wfile.write(f.read()) f.close() return if fileType == ".py": pythonFilename = self.path.lstrip("/") self.send_response(200) self.send_header('Content-type', 'text/html') self.end_headers() pyname = pythonFilename.replace("/", ".")[:-3] print pythonFilename print pyname temp1 = pyname.split(".") temp2 = temp1[-1] print temp2 module = __import__(root.index) self.wfile.write(module.root.index.do_work()) #module = __import__("test.index") #self.wfile.write( module.index.do_work()) return return except IOError: self.send_error(404,'File Not Found: %s' % self.path) def do_POST(self): global rootnode try: ctype, pdict = cgi.parse_header(self.headers.getheader('content-type')) if ctype == 'multipart/form-data': query=cgi.parse_multipart(self.rfile, pdict) self.send_response(301) self.end_headers() upfilecontent = query.get('upfile') print "filecontent", upfilecontent[0] self.wfile.write("<HTML>POST OK.<BR><BR>"); self.wfile.write(upfilecontent[0]); except : pass def main(): try: server = HTTPServer((host, port), MyHandler) print 'started httpserver:' print ("Host: " + (host)) print ("Port: " + str(port)) 
server.serve_forever() except KeyboardInterrupt: print '^C received, shutting down server' server.socket.close() if __name__ == '__main__': main() ``` html and css works, but png images do not load
2011/11/08
[ "https://Stackoverflow.com/questions/8051506", "https://Stackoverflow.com", "https://Stackoverflow.com/users/787367/" ]
You are on the right track with it, though your ifs are very redundant. I suggest you refactor the code to check for type using a loop and a dict:

```
mime = {"html": "text/html", "css": "text/css", "png": "image/png"}

if RequestedFileType in mime.keys():
    self.send_response(200)
    self.send_header('Content-type', mime[RequestedFileType])
    self.end_headers()
    print RequestedFileType
    f = open(curdir + sep + self.path)
    self.wfile.write(f.read())
    f.close()
    return
```

Also, you are sending binary files as text. Instead of `open(curdir + sep + self.path)` use `open(curdir + sep + self.path, "rb")` Gergely from toptal.com
As to a panoply of `if` statements, the usual approach is to have a file that handles the mapping between extensions and mime types (look here: [List of ALL MimeTypes on the Planet, mapped to File Extensions?](https://stackoverflow.com/questions/1735659/list-of-all-mimetypes-on-the-planet-mapped-to-file-extensions)). Read that into an appropriate data structure. You should probably be opening all files as binary unless they are a text/\* mime type; for those you should ensure that your line endings are as specified in the appropriate RFC (if any - it's been years since I have needed to consult those standards, on account of not writing a web server for deployment in anger). And a syntactical point: ``` >>> ('foo') 'foo' >>> ('foo',) ('foo',) >>> ``` Your brackets are redundant. You can index on the value you are extracting.
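For that extension-to-MIME mapping, the standard library's `mimetypes` module already ships a reasonable table, which also gives a clean way to decide text vs. binary file mode. A sketch of the idea (paths and the fallback choice are illustrative):

```python
import mimetypes

def serve_info(path):
    """Guess the MIME type for a path and pick a file mode from it."""
    mime, _encoding = mimetypes.guess_type(path)
    if mime is None:
        mime = "application/octet-stream"  # safe default for unknown types
    # Only serve text/* in text mode; everything else is opened binary.
    mode = "r" if mime.startswith("text/") else "rb"
    return mime, mode

print(serve_info("index.html"))  # ('text/html', 'r')
print(serve_info("logo.png"))    # ('image/png', 'rb')
```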
1,144
74,061,083
My python GUI has been working fine from VSCode for months now, but today (with no changes in the code that I can find) it has been throwing me an error in the form of: Exception has occurred: ModuleNotFoundError No module named '\_tkinter' This error occurs for any import that is not commented out. The GUI works as intended when ran from the terminal using "python3 filename.py",but the run/debug function in VSCode keeps throwing that same error. I'm relatively new here so I have no clue what the problem could be, any insight or things to try would be appreciated.
2022/10/13
[ "https://Stackoverflow.com/questions/74061083", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20234707/" ]
Press Ctrl+Shift+P and type "select interpreter", press Enter and select the python interpreter path that you want to use by default in current project. If currently selected one does not have some libraries installed, you may see error from Pylance.
The cause of this problem may be that there are multiple python versions on your machine, and the interpreter environment you are currently using is not the same environment where you installed the third-party library. **Solution:** 1. Use the following code to get the current interpreter path ``` import sys print(sys.executable) ``` [![enter image description here](https://i.stack.imgur.com/cvBJZ.png)](https://i.stack.imgur.com/cvBJZ.png) 2. Copy the resulting path, and then use the following commands to install third-party libraries for the current environment (using `numpy` as an example) ``` C:\Users\Admin\AppData\Local\Programs\Python\Python36\python.exe -m pip install numpy ``` [![enter image description here](https://i.stack.imgur.com/h9bgp.png)](https://i.stack.imgur.com/h9bgp.png)
1,145
35,127,452
Could someone please help me create a field in my model that generates a unique 8 character alphanumeric string (i.e. A#######) ID every time a user makes a form submission? My **models.py** form is currently as follows: ``` from django.db import models from django.contrib.auth.models import User class Transfer(models.Model): id = models.AutoField(primary_key=True) user = models.ForeignKey(User) timestamp = models.DateTimeField(auto_now_add=True, auto_now=False) ``` I have looked at pythons UUID feature but these identifiers are quite long and messy compared with what I am looking to generate. Any help would be hugely appreciated!
2016/02/01
[ "https://Stackoverflow.com/questions/35127452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2798841/" ]
Something like this:

```
import random
import string

''.join(random.choice(string.ascii_uppercase) for _ in range(8))
```
In order for the ID to be truly unique you have to keep track of previously generated unique IDs. This can be done with a simple sqlite DB. In order to generate a simple unique id use the following line: ``` import random import string u_id = ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(8)) ``` More on strings can be found in the [docs](https://docs.python.org/2/library/string.html#string-constants)
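To match the `A#######` shape from the question and guarantee uniqueness within one process, the same idea can be combined with a seen-set instead of a database; the helper name and the letter-plus-seven-digits format are assumptions for illustration:

```python
import random
import string

_seen = set()  # IDs handed out so far in this process

def unique_ref():
    """One uppercase letter followed by seven digits, e.g. 'A1234567'."""
    while True:
        ref = random.choice(string.ascii_uppercase) + "".join(
            random.choice(string.digits) for _ in range(7)
        )
        if ref not in _seen:      # retry on the (rare) collision
            _seen.add(ref)
            return ref

ids = [unique_ref() for _ in range(1000)]
print(ids[0])
```

For uniqueness across processes or restarts, the set would have to be replaced by persistent storage (a DB table, as suggested above), or by a database-level unique constraint.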
1,146
13,529,852
I have a python GUI and i want to run a shell command which you cannot do using windows cmd i have installed cygwin and i was wondering how i would go about running cygwin instead of the windows cmd. I am wanting to use subprocess and get the results of the .sh file but my code ``` subprocess.check_output("./listChains.sh < 2p31protein.pdb") ``` This will run it in cmd and as windows will not recognize it, it will not work, so how can i get it to run in cygwin instead of cmd
2012/11/23
[ "https://Stackoverflow.com/questions/13529852", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1810400/" ]
Execute a cygwin shell (e.g. `bash`) and have it run your script, instead of running your script directly: ``` subprocess.check_output('C:/cygwin/bin/bash.exe ./listChains.sh < 2p31protein.pdb', shell=True) ``` (`shell=True` is needed so the Windows shell handles the `<` input redirection; without it the `<` is passed to `bash.exe` as a literal argument.) Alternatively, associate the `.sh` filetype extension to open with `bash.exe`.
Using python sub-process to run a cygwin executable requires that the ./bin directory with `cygwin1.dll` be on the Windows path. `cygwin1.dll` exposes cygwin executables to Windows, allowing them to run in Windows command line and be called by Python sub-process.
1,149
74,581,136
I would like to do the same in python pandas as shown on the picture. [pandas image](https://i.stack.imgur.com/ZsHLT.png) This is a SUM function where the first cell is fixed and the formula calculates a "**continuous sum**" (a running total). I tried to create a pandas data frame, however I did not manage to do this exactly.
2022/11/26
[ "https://Stackoverflow.com/questions/74581136", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20557036/" ]
The phrasing, and the unqualified template, aren't super-helpful in figuring out the difference, but: `transform()` does "re-boxing" into an optional, but `and_then()` does not, expecting the function to returned a boxed value on its own. So, * `transform()` is for when you want to use a function like `T2 foo(T1 x)`. * `and_then()` is for when you want to use a function like `optional<T2> bar(T1 x)`. Both `my_optional.transform(foo)` and `my_optional.and_then(bar)` return a value of type `optional<T2>`. See also [this question](https://stackoverflow.com/questions/70606173/what-are-monadic-bind-and-monadic-return-for-c23-optional).
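The boxing difference is not C++-specific; a tiny Python model of an optional makes it concrete (the `Maybe` class and method names here are purely illustrative):

```python
class Maybe:
    """Toy optional: holds a value, or None for the empty state."""
    def __init__(self, value=None):
        self.value = value

    def transform(self, f):
        # f: T -> U; the result is re-boxed into a Maybe automatically
        return Maybe(f(self.value)) if self.value is not None else Maybe()

    def and_then(self, f):
        # f: T -> Maybe[U]; f does its own boxing (and may return empty)
        return f(self.value) if self.value is not None else Maybe()

a = Maybe(21).transform(lambda x: x * 2)        # boxed for you
b = Maybe(21).and_then(lambda x: Maybe(x * 2))  # f returns the box itself
c = Maybe().transform(lambda x: x * 2)          # empty stays empty
print(a.value, b.value, c.value)
```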
`and_then` is monadic `bind` aka `flatmap` aka `>>=` and `transform` is functorial `map`. One can express `map` in terms of `bind` generically, but not the other way around, because a functor is not necessarily a monad. Of course the particular monad of `std::optional` can be opened at any time, so both functions are expressible in terms of the ordinary pre-C++23 `std::optional` API. Thus the question of why the C++ standard defines both functions is no better than the question of why it defines either of the two. Perhaps the Standard wishes to give the programmer a standard functorial interface and a standard monadic interface independently. Either interface is useful and important in its own right.
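The "`map` is expressible via `bind`" direction is easy to spell out in any language; a hedged Python sketch with a toy optional (the `Opt` class is invented for illustration):

```python
class Opt:
    """Toy optional with only bind as a primitive."""
    def __init__(self, value=None):
        self.value = value

    def and_then(self, f):
        # monadic bind: f returns an Opt itself
        return f(self.value) if self.value is not None else Opt()

    def transform(self, f):
        # functorial map, derived entirely from bind plus the unit Opt(...)
        return self.and_then(lambda x: Opt(f(x)))

r = Opt(10).transform(lambda x: x + 1)  # derived map on a present value
e = Opt().transform(lambda x: x + 1)    # empty propagates through
print(r.value, e.value)
```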
1,150
60,155,460
I was planning to automate the manual steps to run the ssh commands using python. I developed the code that automatically executes the below command and log me in VM. The SSH command works fine whenever i run the code in spyder and conda prompt. The command works whenever I open the cmd and try the command directly where the key is, but fails and give error whenever i run the python script on cmd prompt ``` os.system('cmd /k "ssh -i <path to private key> <user>@<remotehost>"') ``` error: ``` 'ssh' is not recognized as an internal or external command, operable program or batch file. ``` How to solve this error to run the script on cmd? Note: The ssh commands works fine in cmd but not inside script when run on cmd
2020/02/10
[ "https://Stackoverflow.com/questions/60155460", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12873907/" ]
Just resolved the issue here. I updated to the latest version of all libs ``` "@react-navigation/bottom-tabs": "^5.0.1", "@react-navigation/core": "^5.1.0", "@react-navigation/material-top-tabs": "^5.0.1", "@react-navigation/native": "^5.0.1", "@react-navigation/stack": "^5.0.1", ``` and then I deleted my package-lock.json. In your terminal, go to the android folder and run `./gradlew clean`. After that, run `npx react-native run-android`, close your default Metro terminal, and then run `npx react-native start --reset-cache`. It worked well after doing this.
Make sure you have installed latest versions of `@react-navigation/native` and `@react-navigation/bottom-tabs`: ```sh npm install @react-navigation/native @react-navigation/bottom-tabs ``` Then clear the cache: ```sh npx react-native start --reset-cache ``` Or if using Expo: ```sh expo start -c ```
1,153
54,337,433
I have a list of tuples and need to delete tuples if its 1st item is matching with 1st item of other tuples in the list. 3rd item may or may not be the same, so I cannot use set (I have seen this question - [Grab unique tuples in python list, irrespective of order](https://stackoverflow.com/questions/35975441/grab-unique-tuples-in-python-list-irrespective-of-order) and this is not same as my issue) For eg if I got `a` as: ``` [(0, 13, 'order1'), (14, 27, 'order2'), (14, 27, 'order2.1'), (0, 13, 'order1'), (28, 41, 'order3')] ``` I want the output as: ``` [(14, 27, 'order2'), (0, 13, 'order1'), (28, 41, 'order3')] ``` I am getting the desired output using below code. ``` for e, i in enumerate(a): r = [True if i[0] == k[0] and e != j else False for j, k in enumerate(a)] if any(r): a.pop(e) pprint(a) ``` Is there a better or more pythonic way to achieve the same?
2019/01/23
[ "https://Stackoverflow.com/questions/54337433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2565385/" ]
The usual way is keying a dict off whatever you want to dedupe by, for example: ``` >>> a = [(0, 13, 'order1'), (14, 27, 'order2'), (14, 27, 'order2.1'), (0, 13, 'order1'), (28, 41, 'order3')] >>> print(*{tup[:2]: tup for tup in a}.values()) (0, 13, 'order1') (14, 27, 'order2.1') (28, 41, 'order3') ``` This is *O(n)* time complexity, superior to *O(n log n)* groupby based approaches.
You can get the first element of each group in a grouped, sorted list: ``` from itertools import groupby from operator import itemgetter a = [(0, 13, 'order1'), (14, 27, 'order2'), (14, 27, 'order2.1'), (0, 13, 'order1'), (28, 41, 'order3')] result = [list(g)[0] for k, g in groupby(sorted(a), key=itemgetter(0))] print(result) ```
1,154
54,028,502
I have this kind of list of dictionary in python ``` [ { "compania": "Fiat", "modelo": "2014", "precio": "1000" }, { "compania": "Renault", "modelo": "2014", "precio": "2000" }, { "compania": "Volkwagen", "modelo": "2014", "precio": "3000" }, { "compania": "Chevrolet", "modelo": "2014", "precio": "1000" }, { "compania": "Peugeot", "modelo": "2014", "precio": "2000" } ] ``` That I'd like to transform into this kind of list of list of dictionary ``` { "Fiat": { "modelo": "2014", "precio": "1000" }, "Renault": { "modelo": "2014", "precio": "2000" }, "Volkwagen": { "modelo": "2014", "precio": "3000" }, "Chevrolet": { "modelo": "2014", "precio": "1000" }, "Peugeot": { "modelo": "2014", "precio": "2000" } } ```
2019/01/03
[ "https://Stackoverflow.com/questions/54028502", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10864244/" ]
We can use dict comprehension ``` {a.get('compania'): {k: v for k, v in a.items() if k != 'compania'} for a in c} {'Fiat': {'modelo': '2014', 'precio': '1000'}, 'Renault': {'modelo': '2014', 'precio': '2000'}, 'Volkwagen': {'modelo': '2014', 'precio': '3000'}, 'Chevrolet': {'modelo': '2014', 'precio': '1000'}, 'Peugeot': {'modelo': '2014', 'precio': '2000'}} ``` where `c` is your original data
``` result = {} for d in l: # Store the value of the key 'compania' before popping it from the small dictionary d compania = d['compania'] d.pop('compania') # Construct new dictionary with key of the compania and value of the small dictionary without the compania key/value pair result[compania] = d print(result) ``` Output: ``` {'Chevrolet': {'modelo': '2014', 'precio': '1000'}, 'Fiat': {'modelo': '2014', 'precio': '1000'}, 'Peugeot': {'modelo': '2014', 'precio': '2000'}, 'Renault': {'modelo': '2014', 'precio': '2000'}, 'Volkwagen': {'modelo': '2014', 'precio': '3000'}} ```
1,157
55,655,666
Hello i m new at django. I installed all moduoles from anaconda. Then created a web application with ``` django-admin startproject ``` My project crated successfully. No problem Then i tried to run that project at localhost to see is everything okay or not. And i run that code in command line ``` python manage.py runserver ``` And i get that error: ``` Unhandled exception in thread started by <function check_errors. <locals>.wrapper at 0x00000221B6D45A60> Traceback (most recent call last): File "C:\Users\Sercan\Anaconda3\lib\site- packages\django\utils\autoreload.py", line 225, in wrapper fn(*args, **kwargs) File "C:\Users\Sercan\Anaconda3\lib\site- packages\django\core\management\commands\runserver.py", line 109, in inner_run autoreload.raise_last_exception() File "C:\Users\Sercan\Anaconda3\lib\site- packages\django\utils\autoreload.py", line 248, in raise_last_exception raise _exception[1] File "C:\Users\Sercan\Anaconda3\lib\site- packages\django\core\management\__init__.py", line 337, in execute autoreload.check_errors(django.setup)() File "C:\Users\Sercan\Anaconda3\lib\site- packages\django\utils\autoreload.py", line 225, in wrapper fn(*args, **kwargs) File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\__init__.py", line 24, in setup apps.populate(settings.INSTALLED_APPS) File "C:\Users\Sercan\Anaconda3\lib\site- packages\django\apps\registry.py", line 112, in populate app_config.import_models() File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\apps\config.py", line 198, in import_models self.models_module = import_module(models_module_name) File "C:\Users\Sercan\Anaconda3\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _ find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in 
_load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "C:\Users\Sercan\Anaconda3\lib\site- packages\django\contrib\auth\models.py", line 2, in <module> from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager File "C:\Users\Sercan\Anaconda3\lib\site- packages\django\contrib\auth\base_user.py", line 47, in <module> class AbstractBaseUser(models.Model): File "C:\Users\Sercan\Anaconda3\lib\site- packages\django\db\models\base.py", line 101, in __new__ new_class.add_to_class('_meta', Options(meta, app_label)) File "C:\Users\Sercan\Anaconda3\lib\site- packages\django\db\models\base.py", line 305, in add_to_class value.contribute_to_class(cls, name) File "C:\Users\Sercan\Anaconda3\lib\site- packages\django\db\models\options.py", line 203, in contribute_to_class self.db_table = truncate_name(self.db_table, connection.ops.max_name_length()) File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\__init__.py", line 33, in __getattr__ return getattr(connections[DEFAULT_DB_ALIAS], item) File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\utils.py", line 202, in __getitem__ backend = load_backend(db['ENGINE']) File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\utils.py", line 110, in load_backend return import_module('%s.base' % backend_name) File "C:\Users\Sercan\Anaconda3\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "C:\Users\Sercan\Anaconda3\lib\site- packages\django\db\backends\sqlite3\base.py", line 10, in <module> from sqlite3 import dbapi2 as Database File "C:\Users\Sercan\Anaconda3\lib\sqlite3\__init__.py", line 23, in <module> from sqlite3.dbapi2 import * File "C:\Users\Sercan\Anaconda3\lib\sqlite3\dbapi2.py", line 27, in <module> from _sqlite3 import * ImportError: DLL load failed: The specified module could not be found. 
``` Can someone tell me where do i make mistake and how can i fix this problem ?
2019/04/12
[ "https://Stackoverflow.com/questions/55655666", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4511476/" ]
I had this problem. I solved it by running it in the Anaconda shell. 1. Open **Anaconda Shell/terminal** by pressing your Windows key and searching Anaconda 2. Go to the directory you have your django project in 3. `python manage.py runserver`
It sounds like you need to install SQLite: <https://www.sqlite.org/download.html> Or you could change the database settings in your settings file to use some other database.
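Switching away from SQLite is just a change to the `DATABASES` block in `settings.py`; a hedged sketch of what that could look like for PostgreSQL (names and credentials are placeholders):

```python
# settings.py fragment -- switch from sqlite3 to PostgreSQL, for example
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",          # placeholder database name
        "USER": "myuser",        # placeholder credentials
        "PASSWORD": "secret",
        "HOST": "localhost",
        "PORT": "5432",
    }
}
print(DATABASES["default"]["ENGINE"])
```

This avoids the broken `_sqlite3` DLL entirely, at the cost of running a database server and installing its Python driver.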
1,167
55,779,936
I used pip to install keras and tensorflow, yet when I import subpackages from keras, my shell fails a check for PyBfloat16\_Type.tp\_base. I tried uninstalling and reinstalling tensorflow, but I don't know for certain what is causing this error. ``` from keras.models import Sequential from keras.layers import Dense ``` ``` 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)] Python Type "help", "copyright", "credits" or "license" for more information. >>>[evaluate machineLearning.py] Using TensorFlow backend. 2019-04-21 00:31:22.995541: F tensorflow/python/lib/core/bfloat16.cc:675] Check failed: PyBfloat16_Type.tp_base != nullptr aborted (disconnected) >>> ``` Can someone help me solve this issue?
2019/04/21
[ "https://Stackoverflow.com/questions/55779936", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11262404/" ]
You may try to downgrade python to 3.6 (I know some people have troubles with tensorflow and keras using python 3.7). One simple way is to download anaconda, create a new environment with python 3.6, then install tensorflow and keras. `conda create -n myenv python=3.6` `conda activate myenv` `pip3 install tensorflow` `pip3 install keras`
You have a few options to try: First, try to uninstall and re-install the TensorFlow and see whether the problem is resolved or not (replace `tensorflow` with `tensorflow-gpu` in the following commands if you have installed the GPU version): ``` pip uninstall tensorflow pip install --no-cache-dir tensorflow ``` If the problem is not resolved, try to do the same thing with `numpy`: ``` pip uninstall numpy pip install --no-cache-dir numpy ``` Hopefully, one of these two would resolve the problem.
1,172
28,023,697
I want to setup cronjobs on various servers at the same time for Data Mining. I was also already following the steps in [Ansible and crontabs](https://stackoverflow.com/questions/21787755/ansible-and-crontabs) but so far nothing worked. Whatever i do, i get the Error Message: ``` ERROR: cron is not a legal parameter at this level in an Ansible Playbook ``` I have: Ansible 1.8.1 And for some unknown reasons, my Modules are located in: `/usr/lib/python2.6/site-packages/ansible/modules/` I would like to know which precise steps i have to follow to let Ansible install a new cronjob in the crontab file. 1. How precisely must a playbook look like to install a cronjob? 2. What is the command line to start this playbook? I'm asking this odd question because the documentation of cron is insufficient and the examples are not working. Maybe my installation is wrong too, which I want to test out with a working example of cron.
2015/01/19
[ "https://Stackoverflow.com/questions/28023697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4469762/" ]
I've got (something very much like) this in a ./roles/cron/tasks/main.yml file: ``` - name: Creates weekly backup cronjob cron: minute="20" hour="5" weekday="sun" name="Backup mysql tables (weekly schedule)" cron_file="mysqlbackup-WeeklyBackups" user="root" job="/usr/local/bin/mysqlbackup.WeeklyBackups.sh" tags: - mysql - cronjobs ``` The shell script listed in the 'job' was created a little earlier in the main.yml file. This task will create a file in /etc/cron.d/mysqlbackup-WeeklyBackups: ``` #Ansible: Backup mysql tables (weekly schedule) 20 5 * * sun root /usr/local/bin/mysqlbackup.WeeklyBackups.sh ```
If you're setting it up to run on the Crontab of the user: ``` - name: Install Batchjobs on crontab cron: name: "Manage Disk Space" minute: "30" hour: "02" weekday: "0-6" job: "home/export/manageDiskSpace.sh > home/export/manageDiskSpace.sh.log 2>&1" #user: "admin" disabled: "no" become_user: "{{ admin_user }}" tags: - cronjobs ``` Reference [1]: <https://docs.ansible.com/ansible/latest/collections/ansible/builtin/cron_module.html#examples>
1,173
61,262,487
Having an issue with Django Allauth. When I log out of one user, and log back in with another, I get this issue, both locally and in production. I'm using the latest version of Allauth, Django 3.0.5, and Python 3.7.4. It seems like this is an Allauth issue, but I haven't seen it reported online anywhere else. So just wondering what I can do next to troubleshoot. Login works fine, less I just logged out of another user. ``` 'NoneType' object has no attribute 'append' Request Method: POST Request URL: http://127.0.0.1:8000/account/login/ Django Version: 3.0.5 Exception Type: AttributeError Exception Value: 'NoneType' object has no attribute 'append' Exception Location: /Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/allauth/account/adapter.py in authentication_failed, line 507 Python Executable: /Users/[USERDIR]/Sites/frontline/venv/bin/python Python Version: 3.7.4 Python Path: ['/Users/[USERDIR]/Sites/frontline', '/Users/[USERDIR]/Sites/frontline/venv/lib/python37.zip', '/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7', '/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/lib-dynload', '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7', '/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages', '/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/odf', '/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/odf', '/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/odf', '/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/odf', '/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/odf', '/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/odf', '/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/odf'] Server time: Thu, 16 Apr 2020 17:53:52 -0700 Environment: Request Method: POST Request URL: http://127.0.0.1:8000/account/login/ Django Version: 3.0.5 Python Version: 3.7.4 Installed Applications: ['django.contrib.admin', 
'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.humanize', 'django.contrib.sites', 'django.contrib.sitemaps', 'django.contrib.postgres', 'common', 'bootstrap4', 's3direct', 'bootstrap_datepicker_plus', 'import_export', 'tinymce', 'allauth', 'allauth.account', 'allauth.socialaccount', 'debug_toolbar', 'dashboard', 'marketing'] Installed Middleware: ('debug_toolbar.middleware.DebugToolbarMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', 'django.middleware.security.SecurityMiddleware') Traceback (most recent call last): File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 34, in inner response = get_response(request) File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/core/handlers/base.py", line 115, in _get_response response = self.process_exception_by_middleware(e, request) File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/core/handlers/base.py", line 113, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/views/generic/base.py", line 71, in view return self.dispatch(request, *args, **kwargs) File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/utils/decorators.py", line 43, in _wrapper return bound_method(*args, **kwargs) File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/views/decorators/debug.py", line 76, in sensitive_post_parameters_wrapper return view(request, *args, **kwargs) File 
"/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/allauth/account/views.py", line 138, in dispatch return super(LoginView, self).dispatch(request, *args, **kwargs) File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/allauth/account/views.py", line 81, in dispatch **kwargs) File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/views/generic/base.py", line 97, in dispatch return handler(request, *args, **kwargs) File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/allauth/account/views.py", line 103, in post if form.is_valid(): File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/forms/forms.py", line 180, in is_valid return self.is_bound and not self.errors File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/forms/forms.py", line 175, in errors self.full_clean() File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/forms/forms.py", line 377, in full_clean self._clean_form() File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/forms/forms.py", line 404, in _clean_form cleaned_data = self.clean() File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/allauth/account/forms.py", line 179, in clean **credentials) File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/allauth/account/adapter.py", line 497, in authenticate self.authentication_failed(request, **credentials) File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/allauth/account/adapter.py", line 507, in authentication_failed data.append(time.mktime(dt.timetuple())) Exception Type: AttributeError at /account/login/ Exception Value: 'NoneType' object has no attribute 'append' ```
2020/04/17
[ "https://Stackoverflow.com/questions/61262487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636064/" ]
You need to have the segue from LiveController, not from Navigation Controller
This could be a few things so try these fixes: 1. Clean and build your project. Then, run again. 2. Quit Xcode, open up project and run. 3. In the `Attribute Inspector`, remove `openWelcomePage` and leave it blank. Hope that either of these suggestions help.
1,176
8,651,095
How do you control the order in which PyYaml outputs key/value pairs when serializing a Python dictionary? I'm using Yaml as a simple serialization format in a Python script. My Yaml serialized objects represent a sort of "document", so for maximum user-friendliness, I'd like my object's "name" field to appear first in the file. Of course, since the value returned by my object's `__getstate__` is a dictionary, and Python dictionaries are unordered, the "name" field will be serialized to a random location in the output. e.g. ``` >>> import yaml >>> class Document(object): ... def __init__(self, name): ... self.name = name ... self.otherstuff = 'blah' ... def __getstate__(self): ... return self.__dict__.copy() ... >>> doc = Document('obj-20111227') >>> print yaml.dump(doc, indent=4) !!python/object:__main__.Document otherstuff: blah name: obj-20111227 ```
2011/12/28
[ "https://Stackoverflow.com/questions/8651095", "https://Stackoverflow.com", "https://Stackoverflow.com/users/247542/" ]
Took me a few hours of digging through PyYAML docs and tickets, but I eventually discovered [this comment](https://web.archive.org/web/20170308231702/http://pyyaml.org/ticket/29) that lays out some proof-of-concept code for serializing an OrderedDict as a normal YAML map (but maintaining the order). e.g. applied to my original code, the solution looks something like: ``` >>> import yaml >>> from collections import OrderedDict >>> def dump_anydict_as_map(anydict): ... yaml.add_representer(anydict, _represent_dictorder) ... >>> def _represent_dictorder( self, data): ... if isinstance(data, Document): ... return self.represent_mapping('tag:yaml.org,2002:map', data.__getstate__().items()) ... else: ... return self.represent_mapping('tag:yaml.org,2002:map', data.items()) ... >>> class Document(object): ... def __init__(self, name): ... self.name = name ... self.otherstuff = 'blah' ... def __getstate__(self): ... d = OrderedDict() ... d['name'] = self.name ... d['otherstuff'] = self.otherstuff ... return d ... >>> dump_anydict_as_map(Document) >>> doc = Document('obj-20111227') >>> print yaml.dump(doc, indent=4) !!python/object:__main__.Document name: obj-20111227 otherstuff: blah ```
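As a present-day footnote (these options did not exist when this answer was written): on Python 3.7+ plain `dict`s preserve insertion order, and PyYAML 5.1+ accepts `sort_keys=False` so `yaml.dump` keeps that order instead of alphabetizing keys, which often makes the custom representer unnecessary. A minimal sketch:

```python
import yaml

# Python 3.7+ dicts preserve insertion order; sort_keys=False (PyYAML 5.1+)
# stops yaml.dump from alphabetizing the keys on output.
state = {"name": "obj-20111227", "otherstuff": "blah"}
dumped = yaml.dump(state, sort_keys=False)
print(dumped)
```

With `sort_keys=False` the `name` key stays first because it was inserted first; with the default (`sort_keys=True`) it would be sorted after `otherstuff` alphabetically? No — `name` sorts before `otherstuff` alphabetically anyway, so swap the insertion order to observe the difference.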
The last time I checked, Python's dictionaries weren't ordered. If you really want them to be, I strongly recommend using a list of key/value pairs. ``` [ ('key', 'value'), ('key2', 'value2') ] ``` Alternatively, define a list with the keys and put them in the right order. ``` keys = ['key1', 'name', 'price', 'key2']; for key in keys: print obj[key] ```
1,177
52,528,911
In the [docs](https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-differences.html) under **Updating a Vertex Property**, it is mentioned that one can *"update a property value without adding an additional value to the set of values"* by doing `g.V('exampleid01').property(single, 'age', 25)` In **gremlin\_python**, I am unable to run a query like the above. I get the error: ``` update_prop_overwrite = g.V().hasLabel('placeholder-vertex').property(single,'maker','unknown').next() Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'single' is not defined ``` How can I resolve this so that I can replace a Vertex property value in Neptune? Without `single` the query will append the new property value to the property key if a value exists already.
2018/09/27
[ "https://Stackoverflow.com/questions/52528911", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4007615/" ]
You need to be sure to import `single`, which is defined [here in the code](https://github.com/apache/tinkerpop/blob/d1a3fa147d1f009ae57274827c9b59426dfc6e58/gremlin-python/src/main/jython/gremlin_python/process/traversal.py#L127) on the `Cardinality` enum. You can import the enum with: ``` from gremlin_python.process.traversal import Cardinality ``` and then reference it as `Cardinality.single`. However, TinkerPop documentation recommends importing all such classes with: ``` statics.load_statics(globals()) ``` after which the bare name `single` is available. You can read more about that [here](http://tinkerpop.apache.org/docs/current/reference/#_static_enums_and_methods).
``` from gremlin_python.process.traversal import Cardinality g.V().hasLabel('placeholder-vertex').property(Cardinality.single,'maker','unknown').next() ``` This should also work.
1,182
15,713,427
I want to remove rows from several data frames so that they are all length n. When I tried to use a -for- loop, the changes would not persist through the rest of the script. ``` n = 50 groups = [df1, df2, df3] for dataset in groups: dataset = dataset[:n] ``` Redefining names individually (e.g., `df1 = df1[:n]`) works, but what are some alternate ways? (Either through python or pandas) More importantly, why does the -for- loop not work here? pandas == 0.10.1 python == 2.7.3
2013/03/30
[ "https://Stackoverflow.com/questions/15713427", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1560238/" ]
This is a slight python misunderstanding, rather than a pandas-specific one. :) You're re-assigning the variable used in the iteration and not changing it in the list: ``` In [1]: L = [1, 2, 3] In [2]: for i in L: i = i + 1 In [3]: L Out[3]: [1, 2, 3] ``` You want to actually change the list: ``` In [4]: for i in range(len(L)): L[i] = L[i] + 1 In [5]: L Out[5]: [2, 3, 4] ``` Or, in nicer syntax, use `enumerate`: ``` In [6]: for i, x in enumerate(L): L[i] = x + 1 In [7]: L Out[7]: [3, 4, 5] ``` That is: ``` for i, dataset in enumerate(groups): groups[i] = dataset[:n] ```
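For this particular task, a list comprehension that rebuilds the list is an equally common idiom. The sketch below uses small made-up frames in place of the original CSV-backed ones:

```python
import pandas as pd

n = 3
df1 = pd.DataFrame({"a": range(10)})
df2 = pd.DataFrame({"a": range(7)})
groups = [df1, df2]

# Rebuild the list so each element is the truncated frame;
# rebinding the loop variable alone would not change `groups`.
groups = [df[:n] for df in groups]
print([len(df) for df in groups])   # [3, 3]
```

Note this leaves the original `df1`/`df2` names untouched — only the list holds the truncated frames, which is usually what you want when iterating over a collection of datasets.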
Your code creates (and discards) a new variable `dataset` in the for-loop. Try this: ``` n = 50 groups = [df1, df2, df3] for dataset in groups: dataset[:] = dataset[:n] ```
1,185
52,465,856
``` def frame_processing(frame): out_frame = np.zeros((frame.shape[0],frame.shape[1],4),dtype = np.uint8) b,g,r = cv2.split(frame) alpha = np.zeros_like(b , dtype=np.uint8) print(out_frame.shape) print(b.shape);print(g.shape);print(r.shape);print(alpha.shape) for i in range(frame.shape[0]): for j in range(frame.shape[1]): a = (frame[i,j,0],frame[i,j,1],frame[i,j,2]) b = (225,225,225) if all(i > j for i, j in zip(a,b)): #all(a>b) : alpha[i,j] = 0 else: alpha[i,j] = 255 out_frame[:,:,0] = b out_frame[:,:,1] = g out_frame[:,:,2] = r out_frame[:,:,3] = alpha #out_frame = cv2.merge((b,g,r,alpha)) return out_frame ``` Wanted to add an alpha channel; tried `cv2.merge()` and manual stacking of channels but failed. When using `cv2.merge()`: ``` error: OpenCV(3.4.2) C:\projects\opencv-python\opencv\modules\core\src\merge.cpp:458: error: (-215:Assertion failed) mv[i].size == mv[0].size && mv[i].depth() == depth in function 'cv::merge' ``` When manually adding channels: ``` ValueError: could not broadcast input array from shape (3) into shape (225,225) ```
2018/09/23
[ "https://Stackoverflow.com/questions/52465856", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9811461/" ]
Use `cv2.inRange` to find the mask, then merge them with `np.dstack`: ``` #!/usr/bin/python3 # 2018/09/24 11:51:31 (CST) import cv2 import numpy as np #frame = ... mask = cv2.inRange(frame, (225,225,225), (255,255,255)) #dst = np.dstack((frame, 255-mask)) dst = np.dstack((frame, mask)) cv2.imwrite("dst.png", dst) ``` To find the specific color, maybe you will be interested in this question: [Choosing the correct upper and lower HSV boundaries for color detection with `cv::inRange` (OpenCV)](https://stackoverflow.com/questions/10948589/choosing-the-correct-upper-and-lower-hsv-boundaries-for-color-detection-withcv/48367205#48367205)
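The same mask can also be built with plain NumPy comparisons, which is handy for checking the logic without OpenCV installed. The 2×2 `frame` below is a made-up toy image, not the asker's data:

```python
import numpy as np

# Toy 2x2 BGR image: one near-white pixel (all channels > 225), three others.
frame = np.array([[[230, 240, 250], [10, 20, 30]],
                  [[225, 225, 225], [0, 0, 0]]], dtype=np.uint8)

# alpha = 0 where every channel exceeds 225 (near-white), else 255 --
# the same rule as the per-pixel loop in the question, vectorized.
alpha = np.where((frame > 225).all(axis=2), 0, 255).astype(np.uint8)
out = np.dstack((frame, alpha))
print(out.shape)   # (2, 2, 4)
```

Only the first pixel satisfies "all channels > 225", so only it gets alpha 0; the (225, 225, 225) pixel stays opaque because the comparison is strict.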
It's a simple typo. You are changing the variable "b" in the for loop, and it conflicts with the blue-channel variable. Changing `b = (225,225,225)` to `threshold = (225, 225, 225)` and `zip(a,b)` to `zip(a, threshold)` should fix the problem. By the way, you can use this to create your alpha channel: ``` alpha = np.zeros(b.shape, dtype=b.dtype) ``` You can also fill your alpha channel like this if you need more speed (you can measure the time difference): ``` alpha[~((b[:,:]>threshold[0]) & (g[:,:]>threshold[1]) & (r[:,:]>threshold[2]))] = 255 ``` So your function becomes: ``` def frame_processing(frame): # split channels b,g,r = cv2.split(frame) # initialize alpha to zeros alpha = np.zeros(b.shape, dtype=b.dtype) # fill alpha values threshold = (225, 225, 225) alpha[~((b[:,:]>threshold[0]) & (g[:,:]>threshold[1]) & (r[:,:]>threshold[2]))] = 255 # merge all channels back out_frame = cv2.merge((b, g, r, alpha)) return out_frame ```
1,188
60,823,720
I have a really long ordered dict that looks similar to this: ``` OrderedDict([('JIRAUSER16100', {'name': 'john.smith', 'fullname': 'John Smith', 'email': 'John.Smith@domain.test', 'active': True}), ('JIRAUSER16300', {'name': 'susan.jones', 'fullname': 'Susan Jones', 'email': 'Susan.Jones@domain.test', 'active': True})]) ``` How can I search through this list to find a key value based on a key value match? For example, for Susan Jones, I'd like to find her email based on the name value? Is there a pythonic way to find that without just looping through the entire dictionary? Currently I'm just doing this below, but it seems inefficient when I have to go through the list a thousand times. I'm curious if there is a "find" method of some sort? ``` searchname = "susan.jones" for user in my_ordered_dict.items(): if user[1]["name"] == searchname: print(user[1]["email"]) ```
2020/03/24
[ "https://Stackoverflow.com/questions/60823720", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11483315/" ]
Two ways you could potentially improve on this. You say your `OrderedDict` is really long, so I'd recommend the first option, since it quickly becomes faster than the second as the size of your data grows. 1) **use [Pandas](https://pandas.pydata.org/)**: ``` In [1]: from collections import OrderedDict In [2]: import pandas as pd In [3]: d = OrderedDict([ ...: ('JIRAUSER16100', {'name': 'john.smith', ...: 'fullname': 'John Smith', ...: 'email': 'John.Smith@domain.test', ...: 'active': True}), ...: ('JIRAUSER16300', {'name': 'susan.jones', ...: 'fullname': 'Susan Jones', ...: 'email': 'Susan.Jones@domain.test', ...: 'active': True}) ...: ]) In [4]: df = pd.DataFrame(d).T In [5]: df Out[5]: name fullname email active JIRAUSER16100 john.smith John Smith John.Smith@domain.test True JIRAUSER16300 susan.jones Susan Jones Susan.Jones@domain.test True In [6]: df.loc[df['name'] == 'susan.jones', 'email'][0] Out[6]: 'Susan.Jones@domain.test' ``` On the scale of easy-to-learn-but-weak to hard-to-learn-but-powerful, `pandas` is fairly far toward the latter extreme. There's a decent amount to unpack here if you aren't familiar with `pandas`, so for the sake of brevity I won't go into it. But feel free to comment with any questions if more explanation would help. 2) **Use the built-in [`next`](https://docs.python.org/3/library/functions.html#next) function** This will allow you to avoid looping through the full dictionary. To make a long story really short, you can pass `next` a generator expression with a filtering condition, and it will run through the iterable until it finds the *first* item that satisfies the given condition. So in your case, ``` In [7]: next(entry['email'] for entry in d.values() if entry['name'] == 'susan.jones') Out[7]: 'Susan.Jones@domain.test' ``` would work. It will save you time versus looping through the entire dict, but unlike option 1, its speed will depend on where in your `OrderedDict` the entry you're trying to find is located.
Unless you for some reason need to stick exclusively to the standard library, Pandas will be much faster on any reasonably sized dataset. Hope this helps!
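Since the asker mentions running the same search a thousand times, a third option worth knowing is to build a one-off index keyed by the field being searched. One pass over the dict, then every lookup is O(1). The assumption here is that `name` values are unique:

```python
from collections import OrderedDict

d = OrderedDict([
    ('JIRAUSER16100', {'name': 'john.smith', 'fullname': 'John Smith',
                       'email': 'John.Smith@domain.test', 'active': True}),
    ('JIRAUSER16300', {'name': 'susan.jones', 'fullname': 'Susan Jones',
                       'email': 'Susan.Jones@domain.test', 'active': True}),
])

# One O(n) pass builds the index; every later lookup is O(1).
by_name = {entry['name']: entry for entry in d.values()}
print(by_name['susan.jones']['email'])   # Susan.Jones@domain.test
```

If names could repeat, you would instead map each name to a list of matching entries.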
If you are looking for a specific match, you will have to iterate through your structure until you find it — but by returning early you don't have to go through the entire dictionary. Something like: ``` In [19]: d = OrderedDict([('JIRAUSER16100', {'name': 'john.smith', 'fullname': 'John Smith', 'email': 'John.Smith@domain.test', ...: 'active': True}), ('JIRAUSER16300', {'name': 'susan.jones', 'fullname': 'Susan Jones', 'email': 'Susan.Jones@domain.tes ...: t', 'active': True})]) ...: In [20]: def find_entry_by_subkey(sub_key, sub_key_value, data): ...: for entry in data.values(): ...: if entry[sub_key] == sub_key_value: ...: return entry ...: In [21]: find_entry_by_subkey('email', 'Susan.Jones@domain.test', d) Out[21]: {'name': 'susan.jones', 'fullname': 'Susan Jones', 'email': 'Susan.Jones@domain.test', 'active': True} ```
1,189
58,983,828
I am using docplex in Google Colab with Python. For the following LP, some of the decision variables are predetermined, and the LP needs to be solved for that. It's a sequencing problem and the sequence is a set of given values. The other decision variables will be optimized based on this. ``` #Define the decision variables x = cost.continuous_var_dict(P, name='x') # The landing time of plane i alpha = cost.continuous_var_dict(P, name='alpha') # How much of deviation of landing before target landing time for plane i beta = cost.continuous_var_dict(P, name='beta') # How much of deviation of landing after target landing time for plane i delta = cost.binary_var_dict(plane_matrix,name="delta") # 1 if plane i lands before plane j; 0 o/w z = cost.binary_var_dict(plane_matrix, name="z") # 1 if plane i and j land on same runway; 0 o/w y = cost.binary_var_dict(plane_runway, name="y") # 1 if plane j lands on runway r; 0 o/w ``` So the given values are for delta, and there is a constraint to satisfy this, which is as follows: ``` # Constraint 2: either plane i lands before j or j lands before i cost.add_constraints(delta[i,j] + delta[j,i] == 1 for i in P for j in P if j!=i) ``` However, I get an error as follows: ``` DOcplexException Traceback (most recent call last) <ipython-input-23-441ca8cbb9d0> in <module>() 3 4 # #Constraint 2: either i lands before j or j lands before i ----> 5 cost.add_constraints(delta[i,j] + delta[j,i] == 1 for i in P for j in P if j!=i) 6 7 # #Constraint 3: Each plane can land on only one runway 4 frames /usr/local/lib/python3.6/dist-packages/docplex/mp/model.py in add_constraints(self, cts, names) 3514 return self._lfactory._new_constraint_block2(cts, names) 3515 else: -> 3516 return self._lfactory._new_constraint_block1(cts) 3517 3518 /usr/local/lib/python3.6/dist-packages/docplex/mp/mfactory.py in _new_constraint_block1(self, cts) 891 posted_cts.append(ct) 892 else: --> 893 checker.typecheck_constraint_seq(ctseq, check_linear=True, 
accept_range=True) 894 for ct in ctseq: 895 if filterfn(ct, ctname=None, check_for_trivial_ct=check_trivial, arg_checker=checker): /usr/local/lib/python3.6/dist-packages/docplex/mp/tck.py in typecheck_constraint_seq(self, cts, check_linear, accept_range) 354 for i, ct in enumerate(checked_cts_list): 355 if not isinstance(ct, AbstractConstraint): --> 356 self.fatal("Expecting sequence of constraints, got: {0!r} at position {1}", ct, i) 357 if check_linear: 358 if not ct.is_linear(): /usr/local/lib/python3.6/dist-packages/docplex/mp/tck.py in fatal(self, msg, *args) 229 230 def fatal(self, msg, *args): --> 231 self._logger.fatal(msg, args) 232 233 def error(self, msg, *args): # pragma: no cover /usr/local/lib/python3.6/dist-packages/docplex/mp/error_handler.py in fatal(self, msg, args) 208 resolved_message = resolve_pattern(msg, args) 209 docplex_error_stop_here() --> 210 raise DOcplexException(resolved_message) 211 212 def fatal_limits_exceeded(self): DOcplexException: Expecting sequence of constraints, got: True at position 0 ``` Please Help. I really can't figure out why that is an issue. Thank you
2019/11/21
[ "https://Stackoverflow.com/questions/58983828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3916398/" ]
`join()` doesn't do anything to the child thread -- all it does is block until the child thread has exited. It only has an effect on the calling thread (i.e. by blocking its progress). The child thread can keep running for as long as it wants (although typically you'd prefer it to exit quickly, so that the thread calling `join()` doesn't get blocked for a long time -- but that's up to you to implement)
> > And to my surprise, joining these alive threads does not remove them from list of threads that top is giving. Is this expected behaviour? > > > That suggests the thread(s) are still running. Calling `join()` on a thread doesn't have any impact on that running thread; simply the calling thread waits for the called-on thread to exit. > > found out the loop inside Threadpool destructor never moved further than first join > > > That means the first thread hasn't completed yet. So none of the other threads haven't been joined yet either (even if they have exited). However, if the thread function is implemented correctly, the first thread (and all other threads in the pool) should eventually complete and the `join()` calls should return (assuming the threads in the pool are supposed to exit - but this doesn't need to true in general. Depending on application, you could simply make the threads run forever too). So it appears there's some sort of deadlock or wait for some resource that's holding up one or more threads. So you need to run through a debugger. [Helgrind](http://valgrind.org/docs/manual/hg-manual.html) would be very useful. You could also try to reduce the number of threads (say 2) and to see if the problem becomes reproducible/obvious and then you could increase the threads.
1,190
54,440,762
I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model. If the model being requested has not yet been served, it is downloaded from a remote URL to a folder where the server's models are located. (The client does this). At this point I need to update the `model_config` and trigger the server to reload it. This functionality appears to exist (based on <https://github.com/tensorflow/serving/pull/885> and <https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/model_service.proto#L22>), but I can't find any documentation on how to actually use it. I am essentially looking for a python script with which I can trigger the reload from client side (or otherwise to configure the server to listen for changes and trigger the reload itself).
2019/01/30
[ "https://Stackoverflow.com/questions/54440762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/141789/" ]
So it took me ages of trawling through pull requests to finally find a code example for this. For the next person who has the same question as me, here is an example of how to do this. (You'll need the `tensorflow_serving` package for this; `pip install tensorflow-serving-api`). Based on this pull request (which at the time of writing hadn't been accepted and was closed since it needed review): <https://github.com/tensorflow/serving/pull/1065> ``` from tensorflow_serving.apis import model_service_pb2_grpc from tensorflow_serving.apis import model_management_pb2 from tensorflow_serving.config import model_server_config_pb2 import grpc def add_model_config(host, name, base_path, model_platform): channel = grpc.insecure_channel(host) stub = model_service_pb2_grpc.ModelServiceStub(channel) request = model_management_pb2.ReloadConfigRequest() model_server_config = model_server_config_pb2.ModelServerConfig() #Create a config to add to the list of served models config_list = model_server_config_pb2.ModelConfigList() one_config = config_list.config.add() one_config.name= name one_config.base_path=base_path one_config.model_platform=model_platform model_server_config.model_config_list.CopyFrom(config_list) request.config.CopyFrom(model_server_config) print(request.IsInitialized()) print(request.ListFields()) response = stub.HandleReloadConfigRequest(request,10) if response.status.error_code == 0: print("Reloaded successfully") else: print("Reload failed!") print(response.status.error_code) print(response.status.error_message) add_model_config(host="localhost:8500", name="my_model", base_path="/models/my_model", model_platform="tensorflow") ```
**Add a model** to TF Serving server and to the existing config file `conf_filepath`: Use arguments `name`, `base_path`, `model_platform` for the new model. Keeps the original models intact. Notice a small difference from @Karl 's answer - using `MergeFrom` instead of `CopyFrom` > > pip install tensorflow-serving-api > > > ``` import grpc from google.protobuf import text_format from tensorflow_serving.apis import model_service_pb2_grpc, model_management_pb2 from tensorflow_serving.config import model_server_config_pb2 def add_model_config(conf_filepath, host, name, base_path, model_platform): with open(conf_filepath, 'r+') as f: config_ini = f.read() channel = grpc.insecure_channel(host) stub = model_service_pb2_grpc.ModelServiceStub(channel) request = model_management_pb2.ReloadConfigRequest() model_server_config = model_server_config_pb2.ModelServerConfig() config_list = model_server_config_pb2.ModelConfigList() model_server_config = text_format.Parse(text=config_ini, message=model_server_config) # Create a config to add to the list of served models one_config = config_list.config.add() one_config.name = name one_config.base_path = base_path one_config.model_platform = model_platform model_server_config.model_config_list.MergeFrom(config_list) request.config.CopyFrom(model_server_config) response = stub.HandleReloadConfigRequest(request, 10) if response.status.error_code == 0: with open(conf_filepath, 'w+') as f: f.write(request.config.__str__()) print("Updated TF Serving conf file") else: print("Failed to update model_config_list!") print(response.status.error_code) print(response.status.error_message) ```
1,191
10,496,815
I have written a job server that runs 1 or more jobs concurrently (or simultaneously depending on the number of CPUs on the system). A lot of the jobs created connect to a SQL Server database, perform a query, fetch the results and write the results to a CSV file. For these types of jobs I use `pyodbc` and Microsoft SQL Server ODBC Driver 1.0 for Linux to connect, run the query, then disconnect. Each job runs as a separate process using the python multiprocessing module. The job server itself is kicked off as a double forked background process. This all ran fine until I noticed today that the first SQL Server job ran fine but the second seemed to hang (i.e. look as though it was running forever). On further investigation I noticed the process for this second job had become zombified so I ran a manual test as follows: ``` [root@myserver jobserver]# python Python 2.6.6 (r266:84292, Dec 7 2011, 20:48:22) [GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] on linux2 Type "help", "copyright", "credits" or "license" for more information. import pyodbc conn = pyodbc.connect('DRIVER={SQL Server Native Client 11.0};SERVER=MY-DATABASE-SERVER;DATABASE=MY-DATABASE;UID=MY-ID;PWD=MY-PASSWORD') c = conn.cursor() c.execute('select * from my_table') <pyodbc.Cursor object at 0x1d373f0> r = c.fetchall() len(r) 19012 c.close() conn.close() conn = pyodbc.connect('DRIVER={SQL Server Native Client 11.0};SERVER=MY-DATABASE-SERVER;DATABASE=MY-DATABASE;UID=MY-ID;PWD=MY-PASSWORD') Segmentation fault ``` So as you can see the first connection to the database works fine but any subsequent attempts to connect fail with a segmentation fault. I cannot for the life of me figure out why this has started happening or the solution, all worked fine before today and no code has been changed. Any help on this issue would be much appreciated.
2012/05/08
[ "https://Stackoverflow.com/questions/10496815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1328695/" ]
I had a very similar problem and in my case the solution was to upgrade the ODBC driver on the machine I was trying to make the connection from. I'm afraid I don't know much about why that fixed the problem. I suspect something was changed or upgraded on the database server I was trying to connect to. This answer might be too late for the OP but I wanted to share it anyway since I found this question while I was troubleshooting the problem and was a little discouraged when I didn't see any answers.
I also encounter this problem recently. My config includes unixODBC-2.3.0 plus MS ODBC Driver 1.0 for Linux. After some experiments, we speculate that the problem may arise due to database upgrade (to SQLServer 2008 SP1 in our case), thus triggering some bugs in the MS ODBC driver. The problem also occurs in this thread: <http://social.technet.microsoft.com/Forums/sqlserver/en-US/23fafa84-d333-45ac-8bd0-4b76151e8bcc/sql-server-driver-for-linux-causes-segmentation-fault?forum=sqldataaccess> I also tried upgrade my driver manager to unixODBC-2.3.2 but with no luck. My final solution is using FreeTDS 0.82.6+ with unixODBC-2.3.2. This version of FreeTDS driver goes badly along with unixODBC-2.3.0, for the manager keeps complaining about function non-support of the driver. Everything goes smooth if unixODBC is upgraded.
1,197
57,465,747
I do the following operations: 1. Convert string datetime in pandas dataframe to python datetime via `apply(strptime)` 2. Convert `datetime` to posix timestamp via `.timestamp()` method 3. If I revert posix back to `datetime` with `.fromtimestamp()` I obtain different datetime It differs by 3 hours which is my timezone (I'm at UTC+3 now), so I suppose it is a kind of timezone issue. Also I understand that in apply it implicitly converts to `pandas.Timestamp`, but I don't understand the difference in this case. What is the reason for such strange behavior and what should I do to avoid it? Actually in my project I need to compare this pandas timestamps with correct poxis timestamps and now it works wrong. Below is dummy reproducible example: ``` df = pd.DataFrame(['2018-03-03 14:30:00'], columns=['c']) df['c'] = df['c'].apply(lambda x: datetime.datetime.strptime(x, '%Y-%m-%d %H:%M:%S')) dt = df['c'].iloc[0] dt >> Timestamp('2018-03-03 14:30:00') datetime.datetime.fromtimestamp(dt.timestamp()) >> datetime.datetime(2018, 3, 3, 17, 30) ```
2019/08/12
[ "https://Stackoverflow.com/questions/57465747", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5331908/" ]
First, I suggest using the `np.datetime64` dtype when working with `pandas`. In this case it makes the reciprocity simple. ``` pd.to_datetime('2018-03-03 14:30:00').value #1520087400000000000 pd.to_datetime(pd.to_datetime('2018-03-03 14:30:00').value) #Timestamp('2018-03-03 14:30:00') ``` The issue with the other methods is that POSIX has UTC as the origin, but `fromtimestamp` returns the local time. If your system isn't UTC compliant, then we get issues. The following methods will work to remedy this: ``` from datetime import datetime import pytz dt #Timestamp('2018-03-03 14:30:00') # Seemingly problematic: datetime.fromtimestamp(dt.timestamp()) #datetime.datetime(2018, 3, 3, 9, 30) ``` --- ``` datetime.fromtimestamp(dt.timestamp(), tz=pytz.utc) #datetime.datetime(2018, 3, 3, 14, 30, tzinfo=<UTC>) datetime.combine(dt.date(), dt.timetz()) #datetime.datetime(2018, 3, 3, 14, 30) mytz = pytz.timezone('US/Eastern') # Use your own local timezone datetime.fromtimestamp(mytz.localize(dt).timestamp()) #datetime.datetime(2018, 3, 3, 14, 30) ```
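Another way to sidestep the local-time ambiguity entirely is to keep the timestamp timezone-aware on both legs of the round trip. A sketch using the question's example value (pinned to UTC here purely for illustration):

```python
import pandas as pd

ts = pd.Timestamp('2018-03-03 14:30:00', tz='UTC')
posix = ts.timestamp()               # seconds since the UTC epoch

# Reconstruct from POSIX seconds, stating UTC explicitly, so the result
# does not depend on the machine's local timezone.
back = pd.Timestamp(posix, unit='s', tz='UTC')
```

Because both ends of the conversion name their timezone, `back == ts` holds regardless of where the code runs.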
An answer with the [`to_datetime`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html) function: ```py df = pd.DataFrame(['2018-03-03 14:30:00'], columns=['c']) df['c'] = pd.to_datetime(df['c'].values, dayfirst=False).tz_localize('Your/Timezone') ``` When working with dates, you should always attach a timezone; it makes them easier to work with later. This does not explain the difference between the pandas `Timestamp` and the standalone `datetime`, but it avoids the ambiguity.
1,199
17,219,675
I am trying to use bash functions inside my python script to allow me to locate a specific directory and then grep a given file inside the directory. The catch is that I only have part of the directory name, so I need to use the bash function find to get the rest of the directory name (names are unique and will only ever return one folder). The code I have so far is as follows: ``` def get_tag(part_of_foldername): import subprocess import os p1 = subprocess.Popen(["find", "/path/to/directory", "-maxdepth", "1", "-name", "%s.*" % part_of_foldername, "-type", "d"], stdout=subprocess.PIPE) directory = p1.communicate()[0].strip('\n') os.chdir(directory) p2 = subprocess.Popen(["grep", "STUFF_", ".hgtags"], stdout=subprocess.PIPE) tag = p2.communicate()[0].strip('\n') return tag ``` Here is what's really strange. This code works when you enter it line by line into interactive, but not when it's run through a script. It also works when you import the script file into interactive and call the function, but not when it's called by the main function. The traceback I get from running the script straight is as follows: ``` Traceback (most recent call last): File "./integration.py", line 64, in <module> main() File "./integration.py", line 48, in main tag = get_tag(folder) File "./integration.py", line 9, in get_date os.chdir(directory) OSError: [Errno 2] No such file or directory: '' ``` And it's called in the main function like this: ``` if block_dict[block][0]=='0': tag = get_tag(folder) ``` with "folder" being previously defined as a string. Please note we use python 2.6 so I can't use the module check_output unfortunately.
2013/06/20
[ "https://Stackoverflow.com/questions/17219675", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2506070/" ]
The right way is checking the `Value` property of the item selected on the list control. You could use `SelectedValue` property, try something like this: ``` if(radioButtonList.SelectedValue == "Option 2") { Messagebox.Show("Warning: Selecting this option may release deadly neurotoxins") } ``` You also can check using `SelectedItem.Text` property. ``` if(radioButtonList.SelectedItem.Text == "Option 2") { Messagebox.Show("Warning: Selecting this option may release deadly neurotoxins") } ``` If you are on asp.net, you do not have Messagebox.Show, you should use a javascript alert.
Try this one. I hope it helps. ``` if(radioButtonList.SelectedValue == "Option 2") { string script = "alert('Warning: Selecting this option may release deadly neurotoxins');"; ClientScript.RegisterClientScriptBlock(this.GetType(), "Alert", script, true); } ```
1,200
39,175,648
I am using `datetime` in some Python udfs that I use in my `pig` script. So far so good. I use pig 12.0 on Cloudera 5.5 However, I also need to use the `pytz` or `dateutil` packages as well and they don't seem to be part of a vanilla python install. Can I use them in my `Pig` udfs in some way? If so, how? I think `dateutil` is installed on my nodes (I am not admin, so how can I actually check that is the case?), but when I type: ``` import sys #I append the path to dateutil on my local windows machine. Is that correct? sys.path.append('C:/Users/me/AppData/Local/Continuum/Anaconda2/lib/site-packages') from dateutil import tz ``` in my `udfs.py` script, I get: ``` 2016-08-30 09:56:06,572 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1121: Python Error. Traceback (most recent call last): File "udfs.py", line 23, in <module> from dateutil import tz ImportError: No module named dateutil ``` when I run my pig script. All my other python udfs (using `datetime` for instance) work just fine. Any idea how to fix that? Many thanks! **UPDATE** After playing a bit with the python path, I am now able to ``` import dateutil ``` (at least Pig does not crash). But if I try: ``` from dateutil import tz ``` I get an error. ``` from dateutil import tz File "/opt/python/lib/python2.7/site-packages/dateutil/tz.py", line 16, in <module> from six import string_types, PY3 File "/opt/python/lib/python2.7/site-packages/six.py", line 604, in <module> viewkeys = operator.methodcaller("viewkeys") AttributeError: type object 'org.python.modules.operator' has no attribute 'methodcaller' ``` How to overcome that? I use tz in the following manner ``` to_zone = dateutil.tz.gettz('US/Eastern') from_zone = dateutil.tz.gettz('UTC') ``` and then I change the timezone of my timestamps. Can I just import dateutil to do that? What is the proper syntax? 
**UPDATE 2** Following yakuza's suggestion, I am able to ``` import sys sys.path.append('/opt/python/lib/python2.7/site-packages') sys.path.append('/opt/python/lib/python2.7/site-packages/pytz/zoneinfo') import pytz ``` but now I get an error again ``` Caused by: Traceback (most recent call last): File "udfs.py", line 158, in to_date_local File "__pyclasspath__/pytz/__init__.py", line 180, in timezone pytz.exceptions.UnknownTimeZoneError: 'America/New_York' ``` when I define ``` to_zone = pytz.timezone('America/New_York') from_zone = pytz.timezone('UTC') ``` Found some hints here [UnknownTimezoneError Exception Raised with Python Application Compiled with Py2Exe](https://stackoverflow.com/questions/9158846/unknowntimezoneerror-exception-raised-with-python-application-compiled-with-py2e) What to do? Awww, I just want to convert timezones in Pig :(
2016/08/26
[ "https://Stackoverflow.com/questions/39175648", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1609428/" ]
Well, as you probably know, Python UDF functions are not executed by the Python interpreter, but by Jython, which is distributed with Pig. By default in 0.12.0 it should be [Jython 2.5.3](https://stackoverflow.com/questions/17711451/python-udf-version-with-jython-pig). Unfortunately the `six` package supports Python starting from [Python 2.6](https://pypi.python.org/pypi/six), and it is a package required by [`dateutil`](https://github.com/dateutil/dateutil/blob/master/setup.py). However `pytz` seems not to have such a dependency, and should support Python versions starting from [Python 2.4](https://pypi.python.org/pypi/pytz). So to achieve your goal you should distribute the `pytz` package to all your nodes for version 2.5, and in your Pig UDF add its path to `sys.path`. If you complete the same steps you did for `dateutil`, everything should work as you expect. We are using the very same approach with `pygeoip` and it works like a charm. How does it work ================ When you run a Pig script that references some Python UDF (more precisely, a Jython UDF), your script gets compiled to a map/reduce job, all `REGISTER`ed files are included in the JAR file, and are distributed to the nodes where the code is actually executed. Now when your code is executed, a Jython interpreter is started and executed from Java code. So when the Python code is executed on each node taking part in the computation, all Python imports are resolved locally on that node. Imports from standard libraries are taken from the Jython implementation, but all "packages" have to be installed some other way, as there is no `pip` for Jython. So to make external packages available to a Python UDF you have to install the required packages manually, using another `pip` or installing from source, but remember to download packages **compatible with Python 2.5**! Then in every single UDF file you have to append the `site-packages` directory on each node where you installed the packages (it's important to use the same directory on each node). 
For example: ``` import sys sys.path.append('/path/to/site-packages') # Imports of non-stdlib packages ``` Proof of concept ================ Let's assume we have the following files: `/opt/pytz_test/test_pytz.pig`: ``` REGISTER '/opt/pytz_test/test_pytz_udf.py' using jython as test; A = LOAD '/opt/pytz_test/test_pytz_data.csv' AS (timestamp:int); B = FOREACH A GENERATE test.to_date_local(timestamp); STORE B INTO '/tmp/test_pytz_output.csv' using PigStorage(','); ``` `/opt/pytz_test/test_pytz_udf.py`: ``` from datetime import datetime import sys sys.path.append('/usr/lib/python2.6/site-packages/') import pytz @outputSchema('date:chararray') def to_date_local(unix_timestamp): """ converts unix timestamp to a rounded date """ to_zone = pytz.timezone('America/New_York') from_zone = pytz.timezone('UTC') try: as_datetime = (datetime.utcfromtimestamp(unix_timestamp) .replace(tzinfo=from_zone).astimezone(to_zone) .date().strftime('%Y-%m-%d')) except: as_datetime = unix_timestamp return as_datetime ``` `/opt/pytz_test/test_pytz_data.csv`: ``` 1294778181 1294778182 1294778183 1294778184 ``` Now let's install `pytz` on our node (it has to be installed using a Python version that `pytz` is compatible with, i.e. 2.5-2.7; in my case I'll use Python 2.6): `sudo pip2.6 install pytz` **Please make sure that the file** `/opt/pytz_test/test_pytz_udf.py` **adds to `sys.path` a reference to the `site-packages` directory where `pytz` is installed.** Now once we run Pig with our test script: `pig -x local /opt/pytz_test/test_pytz.pig` we should be able to read the output of our job, which should list: ``` 2011-01-11 2011-01-11 2011-01-11 2011-01-11 ```
From the answer to [a different but related question](https://stackoverflow.com/questions/7831649/how-do-i-make-hadoop-find-imported-python-modules-when-using-python-udfs-in-pig), it seems that you should be able to use resources as long as they are available on each of the nodes. I think you can then add the path as described in [this answer regarding jython](https://stackoverflow.com/a/23807481/983722), and load the modules as usual. > > Append the location to the sys.path in the Python script: > > > > ``` > import sys > sys.path.append('/usr/local/lib/python2.7/dist-packages') > import happybase > > ``` > >
1,202
21,998,545
I have code like: ``` While 1: A = input() print A ``` How long can I expect this to run? How many times? Is there a way I can just throw away whatever I have in A once I have printed it? How does python deal with it? Will the program crash after a while? Thank you.
2014/02/24
[ "https://Stackoverflow.com/questions/21998545", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
When you reassign `A` to a new value, there is nothing left referring to the old value. [Garbage collection](https://stackoverflow.com/questions/4484167/details-how-python-garbage-collection-works) comes into play here, and the old object is automatically returned back to free memory. Thus you should never run out of memory with a simple loop like this.
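A small, self-contained way to watch this happen (the `Payload` class and `probe` name are mine, purely for illustration); `weakref` lets us observe the old object without keeping it alive:

```python
import weakref

class Payload:
    """Stand-in for the value bound to A on each loop iteration."""
    pass

a = Payload()
probe = weakref.ref(a)  # watches the first object without owning a reference
a = Payload()           # rebinding: nothing references the first object now

# The first Payload's reference count hit zero, so it was reclaimed and the
# weak reference is now dead.
print(probe() is None)  # -> True
```

Note that immediate reclamation is a CPython detail (reference counting); other interpreters may free the old object later, but it is unreachable either way and will be collected.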
You change the value of A, so memory wouldn't be an issue: Python garbage-collects the old value and returns the memory... so it can run forever.
1,203
41,934,574
I am new to C# programming, I migrated from python. I want to append two or more array (exact number is not known , depends on db entry) into a single array like the list.append method in python does. Here the code example of what I want to do ``` int[] a = {1,2,3}; int[] b = {4,5,6}; int[] c = {7,8,9}; int[] d; ``` I don't want to add all the arrays at a time. I need somewhat like this ``` // I know this not correct d += a; d += b; d += c; ``` And this is the final result I want ``` d = {{1,2,3},{4,5,6},{7,8,9}}; ``` it would be too easy for you guys but then I am just starting with c#.
2017/01/30
[ "https://Stackoverflow.com/questions/41934574", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7268744/" ]
Well, if you want a simple **1D** array, try `SelectMany`: ``` int[] a = { 1, 2, 3 }; int[] b = { 4, 5, 6 }; int[] c = { 7, 8, 9 }; // d == {1, 2, 3, 4, 5, 6, 7, 8, 9} int[] d = new[] { a, b, c } // initial jagged array .SelectMany(item => item) // flattened .ToArray(); // materialized as an array ``` if you want a *jagged* array (*array of arrays*) ``` // d == {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}} // notice the declaration - int[][] - array of arrays of int int[][] d = new[] { a, b, c }; ``` In case you want to append arrays conditionally, not in one go, *array* is not a collection type to choose for `d`; `List<int>` or `List<int[]>` will serve better: ``` // 1d array emulation List<int> d = new List<int>(); ... d.AddRange(a); d.AddRange(b); d.AddRange(c); ``` Or ``` // jagged array emulation List<int[]> d = new List<int[]>(); ... d.Add(a); //d.Add(a.ToArray()); if you want a copy of a d.Add(b); d.Add(c); ```
It seems from your code that `d` is not a single-dimensional array, but it seems to be a jagged array (and array of arrays). If so, you can write this: ``` int[][] d = new int[][] { a, b, c }; ``` If you instead want to concatenate all arrays to a new `d`, you can use: ``` int[] d = a.Concat(b).Concat(c).ToArray(); ```
1,206
11,477,643
So I'm contemplating what language to use in the development of an app that uses OpenCV. As a part of my decision, I'm interested in knowing how easy/difficult it is to include the opencv library in the final app. I'd really like to write this in python because the opencv bindings are great, python's easy, etc. But I haven't been able to find a clear answer on stuff like "does py2app automatically bundle opencv when it sees the import cv line" (I think not) and if not, then is there a known way to do this? In general, I would like to know the best way to distribute a python desktop app with opencv.
2012/07/13
[ "https://Stackoverflow.com/questions/11477643", "https://Stackoverflow.com", "https://Stackoverflow.com/users/132467/" ]
I've effectively packaged, deployed, and shipped a Python app using OpenCV with cxFreeze. <http://cx-freeze.sourceforge.net/> Yes, cxFreeze auto picked up the python extensions to OpenCV. I had to manually copy the OpenCV DLLs (Windows), but that was a minor issue that can be solved by a post-processing step in cxFreeze. It does pick up other DLLs, so I'm not sure what the issue was. In fact, it worked so well I was surprised. The ONLY thing it didn't pick up properly was a QT library. Basically here were my steps (I'm using QT so ignore that part if you aren't): 1. cxfreeze App.py --target-dir App --base-name Win32GUI --include-modules PySide.QtNetwork 2. Copy over the opencv\build\x86\vc9\bin files to the App directory. That's it.
You can have a look at <http://www.pyinstaller.org/> It's great and it packages the necessary DLLs for OpenCV. For OpenCV to work, py2exe also needs to package numpy. Make sure you add a dependency for it in case you are using py2exe.
1,209
13,053,429
I'm new to python. I am trying to create a script that gives me a different response when the same data is inputted more than once. The code goes like this : ``` def loop() : Repeat = 0 response = raw_input("enter something : ") if response == "hi" Repeat += 1 print "hello" loop() if Repeat > 2 : print "you have already said hi" loop() def main() : loop() raw_input() main() ``` The above code doesn't work. preferably I would like a statement that checks both conditions, but i'm not quite sure how this could be done.
2012/10/24
[ "https://Stackoverflow.com/questions/13053429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1636006/" ]
I would use a `dict` to store words/count. You can then inquire if the word is in the dictionary and update count ... ``` words = {} while True: word = raw_input("Say something:") if word in words: words[word] += 1 print "you already said ",words[word] continue else: words[word] = 0 #... ``` You could also do this with `try`/`except`, but I thought I'd keep it simple to start out...
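The same bookkeeping can be pulled out of the input loop so it is easy to exercise directly; `check_word` is a name of my own, not from the answer above:

```python
def check_word(words, word):
    """Record `word` in the `words` dict and report whether it was seen before."""
    if word in words:
        words[word] += 1
        return "you already said %s" % word
    words[word] = 0
    return word

seen = {}
print(check_word(seen, "hi"))   # -> hi
print(check_word(seen, "hi"))   # -> you already said hi
print(check_word(seen, "bye"))  # -> bye
```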
Your statement above is recursively calling itself. The new instance of loop does not have access to the calling value of Repeat and instead has its own local copy of Repeat. Also, you have `if Repeat > 2`. As written this means that it won't get to your other print statement until they input "hi" three times to get the counter up to 3. You probably want to make that `Repeat >= 2`. What you want is a while loop that tracks whether the input is repeated. In real life you probably want some condition to tell that while loop when to end, but you don't have one here, so you could use `while True:` to loop forever. Finally, your code only checks if they put in "hi" more than once. You could make it more general by tracking what they have already said instead, and get rid of the need to have a counter in the process. For a fast, sloppy version that I haven't tested, it might look like: ``` alreadySaid = set() #sets are efficient and only store a specific element once while True: #probably want an actual break condition here, but to do it forever this works response = raw_input("enter something : ") if response in alreadySaid: print 'You already said {}'.format(response) else: print response alreadySaid.add(response) ```
1,210
34,543,513
I'm looking for maximum absolute value out of chunked list. For example, the list is: ``` [1, 2, 4, 5, 4, 5, 6, 7, 2, 6, -9, 6, 4, 2, 7, 8] ``` I want to find the maximum with lookahead = 4. For this case, it will return me: ``` [5, 7, 9, 8] ``` How can I do simply in Python? ``` for d in data[::4]: if count < LIMIT: count = count + 1 if abs(d) > maximum_item: maximum_item = abs(d) else: max_array.append(maximum_item) if maximum_item > highest_line: highest_line = maximum_item maximum_item = 0 count = 1 ``` I know I can use for loop to check this. But I'm sure there is an easier way in python.
2015/12/31
[ "https://Stackoverflow.com/questions/34543513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/517403/" ]
Using standard Python: ``` [max(abs(x) for x in arr[i:i+4]) for i in range(0, len(arr), 4)] ``` This works also if the array cannot be evenly divided.
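A quick check of that last claim, using the question's data plus one extra element (the trailing `-10` is mine) so the final chunk is shorter than 4:

```python
arr = [1, 2, 4, 5, 4, 5, 6, 7, 2, 6, -9, 6, 4, 2, 7, 8, -10]  # 17 items
out = [max(abs(x) for x in arr[i:i + 4]) for i in range(0, len(arr), 4)]
print(out)  # -> [5, 7, 9, 8, 10] (the last chunk holds a single element)
```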
Map the `list` to `abs()`, then [chunk the `list`](https://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks-in-python) and send it to `max()`: ``` array = [1,2,4,5,4,5,6,7,2,6,-9,6,4,2,7,8] array = [abs(item) for item in array] # use linked question's answer to chunk # array = [[1,2,4,5], [4,5,6,7], [2,6,9,6], [4,2,7,8]] # chunked abs()'ed list values = [max(item) for item in array] ``` Result: ``` >>> values [5, 7, 9, 8] ```
1,212
57,673,070
I am logging into gmail via python and deleting emails. However when I do a search for two emails I get no results to delete. ``` mail.select('Inbox') result,data = mail.uid('search', None, '(FROM target.com)') ``` The above works and will find and delete any email that had target.com in the from address. However when I send in another email address I get nothing. ``` result,data = mail.uid('search', None, '(FROM "target.com" FROM "walmart.com")') ``` Yes I have both target.com and walmart.com emails in my inbox.
2019/08/27
[ "https://Stackoverflow.com/questions/57673070", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11951910/" ]
First of all I can confirm this behaviour for JavaFX 13 ea build 13. This was probably a very simplistic attempt to fix an old bug which the OP has already mentioned (image turning pink) which I reported a long time ago. The problem is that JPEGS cannot store alpha information and in the past the output was just garbled when an image with an alpha channel was written out as a JPEG. The fix now just refuses to write out the image at all instead of just ignoring the alpha channel. A workaround is to make a copy of the image where you explicitly specify a color model without alpha channel. Here is the original bug report which also contains the workaround: <https://bugs.openjdk.java.net/browse/JDK-8119048> Here is some more info to simplify the conversion: If you add this line to your code ``` BufferedImage awtImage = new BufferedImage((int)img.getWidth(), (int)img.getHeight(), BufferedImage.TYPE_INT_RGB); ``` and then call `SwingFXUtils.fromFXImage(img, awtImage)` with this as the second parameter instead of `null`, then the required conversion will be done automatically and the JPEG is written as expected.
Additionally to the answer of mipa and in the case that you do not have SwingFXUtils available, you could clone the BufferedImage into another BufferedImage without alpha channel: ``` BufferedImage withoutAlpha = new BufferedImage( (int) originalWithAlpha.getWidth(), (int) originalWithAlpha.getHeight(), BufferedImage.TYPE_INT_RGB); Graphics g = withoutAlpha.getGraphics(); g.drawImage(originalWithAlpha, 0, 0, null); g.dispose(); ```
1,217
52,753,613
I have a dataframe say `df`. `df` has a column `'Ages'` `>>> df['Age']` [![Age Data](https://i.stack.imgur.com/pcs2l.png)](https://i.stack.imgur.com/pcs2l.png) I want to group these ages and create a new column something like this ``` If age >= 0 & age < 2 then AgeGroup = Infant If age >= 2 & age < 4 then AgeGroup = Toddler If age >= 4 & age < 13 then AgeGroup = Kid If age >= 13 & age < 20 then AgeGroup = Teen and so on ..... ``` How can I achieve this using the Pandas library? I tried doing it something like this ``` X_train_data['AgeGroup'][ X_train_data.Age < 13 ] = 'Kid' X_train_data['AgeGroup'][ X_train_data.Age < 3 ] = 'Toddler' X_train_data['AgeGroup'][ X_train_data.Age < 1 ] = 'Infant' ``` but doing this i get this warning > > /Users/Anand/miniconda3/envs/learn/lib/python3.7/site-packages/ipykernel\_launcher.py:3: SettingWithCopyWarning: > A value is trying to be set on a copy of a slice from a DataFrame > See the caveats in the documentation: <http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy> > This is separate from the ipykernel package so we can avoid doing imports until > /Users/Anand/miniconda3/envs/learn/lib/python3.7/site-packages/ipykernel\_launcher.py:4: SettingWithCopyWarning: > A value is trying to be set on a copy of a slice from a DataFrame > > > How can I avoid this warning and do it in a better way?
2018/10/11
[ "https://Stackoverflow.com/questions/52753613", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4005417/" ]
Use [`pandas.cut`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html) with the parameter `right=False` so the rightmost edge of the bins is not included: ``` X_train_data = pd.DataFrame({'Age':[0,2,4,13,35,-1,54]}) bins= [0,2,4,13,20,110] labels = ['Infant','Toddler','Kid','Teen','Adult'] X_train_data['AgeGroup'] = pd.cut(X_train_data['Age'], bins=bins, labels=labels, right=False) print (X_train_data) Age AgeGroup 0 0 Infant 1 2 Toddler 2 4 Kid 3 13 Teen 4 35 Adult 5 -1 NaN 6 54 Adult ``` Last, to replace missing values, use [`add_categories`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.cat.add_categories.html) with [`fillna`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html): ``` X_train_data['AgeGroup'] = (X_train_data['AgeGroup'].cat.add_categories('unknown') .fillna('unknown')) print (X_train_data) Age AgeGroup 0 0 Infant 1 2 Toddler 2 4 Kid 3 13 Teen 4 35 Adult 5 -1 unknown 6 54 Adult ``` --- ``` bins= [-1,0,2,4,13,20, 110] labels = ['unknown','Infant','Toddler','Kid','Teen', 'Adult'] X_train_data['AgeGroup'] = pd.cut(X_train_data['Age'], bins=bins, labels=labels, right=False) print (X_train_data) Age AgeGroup 0 0 Infant 1 2 Toddler 2 4 Kid 3 13 Teen 4 35 Adult 5 -1 unknown 6 54 Adult ```
Just use: ``` X_train_data.loc[(X_train_data.Age < 13), 'AgeGroup'] = 'Kid' ```
1,218
65,513,452
I am trying to display results in a table format (with borders) in a cgi script written in python. How do I display table borders within python code? ``` check_list = elements.getvalue('check_list[]') mydata=db.get_data(check_list) print("_____________________________________") print("<table><th>",check_list[0],"</th> <th>",check_list[1],"</th>") i = 0 #here for data in mydata: print("<tr><td>",data[0],"</td> <td>",data[1],"</td></tr>") i += 1 #here if i == 2:#here break#here print("</table>") ``` The checklist elements represent column names from a sqlite table, and based on the selected columns the data will be shown. The result of this code just shows two records, with no borders, in a php page.
2020/12/30
[ "https://Stackoverflow.com/questions/65513452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12888614/" ]
Digging through issues on vercel's github I found this alternative that doesn't use next-i18next or any other nextjs magic: <https://github.com/Xairoo/nextjs-i18n-static-page-starter> It's just basic i18n using i18next that bundles all locale together with JS, so there are obvious tradeoffs but at least it works with SSG. You can build upon that to come up with something more elaborate. That's what I will do.
You can't use `export` with next.js i18n implementation. > > Note that Internationalized Routing does not integrate with next export as next export does not leverage the Next.js routing layer. Hybrid Next.js applications that do not use next export are fully supported. > > > [Next.js docs](https://nextjs.org/docs/advanced-features/i18n-routing#how-does-this-work-with-static-generation)
1,219
73,679,017
I'm very new to python, so as one of the first projects I decided to a simple log-in menu, however, it gives me a mistake shown at the bottom. The link to the tutorial I used: > > <https://www.youtube.com/watch?v=dR_cDapPWyY&ab_channel=techWithId> > > > This is the code to the log-in menu: ``` def welcome(): print("Welcome to your dashboard") def gainAccess(Username=None, Password=None): Username = input("Enter your username:") Password = input("Enter your Password:") if not len(Username or Password) < 1: if True: db = open("database.txt", "C:\Users\DeePak\OneDrive\Desktop\database.txt.txt") d = [] f = [] for i in db: a,b = i.split(",") b = b.strip() c = a,b d.append(a) f.append(b) data = dict(zip(d, f)) try: if Username in data: hashed = data[Username].strip('b') hashed = hashed.replace("'", "") hashed = hashed.encode('utf-8') try: if bcrypt.checkpw(Password.encode(), hashed): print("Login success!") print("Hi", Username) welcome() else: print("Wrong password") except: print("Incorrect passwords or username") else: print("Username doesn't exist") except: print("Password or username doesn't exist") else: print("Error logging into the system") else: print("Please attempt login again") gainAccess() # b = b.strip() # accessDb() def register(Username=None, Password1=None, Password2=None): Username = input("Enter a username:") Password1 = input("Create password:") Password2 = input("Confirm Password:") db = open("database.txt", "C:\Users\DeePak\OneDrive\Desktop\Name\database.txt") d = [] for i in db: a,b = i.split(",") b = b.strip() c = a,b d.append(a) if not len(Password1)<=8: db = open("database.txt", "C:\Users\DeePak\OneDrive\Desktop\Name\database.txt") if not Username ==None: if len(Username) <1: print("Please provide a username") register() elif Username in d: print("Username exists") register() else: if Password1 == Password2: Password1 = Password1.encode('utf-8') Password1 = bcrypt.hashpw(Password1, bcrypt.gensalt()) db = open("database.txt", 
"C:\Users\DeePak\OneDrive\Desktop\Name\database.txt") db.write(Username+", "+str(Password1)+"\n") print("User created successfully!") print("Please login to proceed:") # print(texts) else: print("Passwords do not match") register() else: print("Password too short") def home(option=None): print("Welcome, please select an option") option = input("Login | Signup:") if option == "Login": gainAccess() elif option == "Signup": register() else: print("Please enter a valid parameter, this is case-sensitive") # register(Username, Password1, Password2) # gainAccess(Username, Password1) home() ``` **When I run it, I get this issue:** ``` db = open("database.txt", "C:\Users\DeePak\OneDrive\Desktop\Name\database.txt") ^ SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape ```
2022/09/11
[ "https://Stackoverflow.com/questions/73679017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19810120/" ]
You can find the center of each region like this: ```py markers = cv2.watershed(img, markers) labels = np.unique(markers) for label in labels: y, x = np.nonzero(markers == label) cx = int(np.mean(x)) cy = int(np.mean(y)) ``` The result: [![enter image description here](https://i.stack.imgur.com/O2INd.png)](https://i.stack.imgur.com/O2INd.png) Complete example: ``` import cv2 import numpy as np img = cv2.imread("water_coins.jpg") gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU) # noise removal kernel = np.ones((3, 3), np.uint8) opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2) # sure background area sure_bg = cv2.dilate(opening, kernel, iterations=3) # Finding sure foreground area dist_transform = cv2.distanceTransform(opening, cv2.DIST_L2, 5) ret, sure_fg = cv2.threshold(dist_transform, 0.7 * dist_transform.max(), 255, 0) # Finding unknown region sure_fg = np.uint8(sure_fg) unknown = cv2.subtract(sure_bg, sure_fg) # Marker labelling ret, markers = cv2.connectedComponents(sure_fg) # Add one to all labels so that sure background is not 0, but 1 markers = markers + 1 # Now, mark the region of unknown with zero markers[unknown == 255] = 0 markers = cv2.watershed(img, markers) labels = np.unique(markers) for label in labels: y, x = np.nonzero(markers == label) cx = int(np.mean(x)) cy = int(np.mean(y)) color = (255, 255, 255) img[markers == label] = np.random.randint(0, 255, size=3) cv2.circle(img, (cx, cy), 2, color=color, thickness=-1) cv2.putText(img, f"{label}", (cx, cy), cv2.FONT_HERSHEY_SIMPLEX, 0.35, color, 1, cv2.LINE_AA) cv2.imwrite("out.jpg", img) ```
Erode every region independently with a structuring element that has the size of the region label. Then use any remaining pixel. In some cases (tiny regions), no pixel at all will remain. You have two options: * use a pixel from the "ultimate eroded"; * use some location near the region and a leader line (but avoiding collisions is not easy). --- You can also work with the inner distances of the regions and pick pixels with maximum distance.
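The "maximum inner distance" idea in the last line can be sketched without OpenCV; `farthest_inner_pixel` below is a hypothetical helper (a plain breadth-first distance-to-boundary), not a library call:

```python
from collections import deque

def farthest_inner_pixel(mask):
    """Return the pixel of `mask` (a set of (row, col)) deepest inside it."""
    dist = {}
    q = deque()
    # Seed with boundary pixels: those with at least one 4-neighbour outside.
    for (r, c) in mask:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (r + dr, c + dc) not in mask:
                dist[(r, c)] = 1
                q.append((r, c))
                break
    # BFS inward: the distance to the boundary grows by one per layer.
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (r + dr, c + dc)
            if n in mask and n not in dist:
                dist[n] = dist[(r, c)] + 1
                q.append(n)
    return max(dist, key=dist.get)

# A 5x5 square region: the unique deepest pixel is its centre.
region = {(r, c) for r in range(5) for c in range(5)}
print(farthest_inner_pixel(region))  # -> (2, 2)
```

In a real watershed output you would build one such `mask` per label; `cv2.distanceTransform` does the same computation much faster.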
1,224
54,768,539
While I iterate within a for loop I continually receive the same warning, which I want to suppress. The warning reads: `C:\Users\Nick Alexander\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\preprocessing\data.py:193: UserWarning: Numerical issues were encountered when scaling the data and might not be solved. The standard deviation of the data is probably very close to 0. warnings.warn("Numerical issues were encountered "` The code that is producing the warning is as follows: ``` def monthly_standardize(cols, df_train, df_train_grouped, df_val, df_val_grouped, df_test, df_test_grouped): # Disable the SettingWithCopyWarning warning pd.options.mode.chained_assignment = None for c in cols: df_train[c] = df_train_grouped[c].transform(lambda x: scale(x.astype(float))) df_val[c] = df_val_grouped[c].transform(lambda x: scale(x.astype(float))) df_test[c] = df_test_grouped[c].transform(lambda x: scale(x.astype(float))) return df_train, df_val, df_test ``` I am already disabling one warning. I don't want to disable all warnings, I just want to disable this warning. I am using python 3.7 and sklearn version 0.0
2019/02/19
[ "https://Stackoverflow.com/questions/54768539", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10973400/" ]
Try this at the beginning of the script to ignore specific warnings: ``` import warnings warnings.filterwarnings("ignore", message="Numerical issues were encountered ") ```
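A self-contained way to check that the filter actually matches (the warning text is copied from the question; the rest is test scaffolding, not part of the answer):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # The filter from the answer; filterwarnings() prepends, so it wins.
    warnings.filterwarnings("ignore", message="Numerical issues were encountered ")
    warnings.warn("Numerical issues were encountered when scaling the data")
    warnings.warn("an unrelated warning")

print(len(caught))  # -> 1: only the unrelated warning got through
```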
The Python contextlib has a context manager for this: [suppress](https://docs.python.org/3/library/contextlib.html#contextlib.suppress) ``` from contextlib import suppress with suppress(UserWarning): for c in cols: df_train[c] = df_train_grouped[c].transform(lambda x: scale(x.astype(float))) df_val[c] = df_val_grouped[c].transform(lambda x: scale(x.astype(float))) ```
1,225
60,094,997
Given a triangular matrix `m` in python how best to extract from it the value at row `i` column `j`? ``` m = [1,np.nan,np.nan,2,3,np.nan,4,5,6] m = pd.DataFrame(np.array(m).reshape((3,3))) ``` Which looks like: ``` 0 1 2 0 1.0 NaN NaN 1 2.0 3.0 NaN 2 4.0 5.0 6.0 ``` I can get lower elements easily `m[2,0]` returns `4`. But if I ask for `m[0,2]` I get `nan` when I would like `4` again. What is the best way of accomplishing this in python?
2020/02/06
[ "https://Stackoverflow.com/questions/60094997", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6948859/" ]
Use `pandas.DataFrame.fillna` with transpose: ``` m = m.fillna(m.T) print(m) ``` Output: ``` 0 1 2 0 1.0 2.0 4.0 1 2.0 3.0 5.0 2 4.0 5.0 6.0 m.loc[0,2] == m.loc[2,0] == 4 # True ``` --- In case there are column names (like `A`,`B`,`C`): ``` m.where(m.notna(), m.T.values) ``` Output: ``` A B C 0 1.0 2.0 4.0 1 2.0 3.0 5.0 2 4.0 5.0 6.0 ```
The easiest way I have found to solve this is to make the matrix symmetrical, I learnt how from [this](https://stackoverflow.com/a/2573982/6948859) answer. There are a couple of steps: 1. Convert nan to 0 ``` m0 = np.nan_to_num(m) ``` 2. Add the transpose of the matrix to itself ``` m = m0 + m0.T ``` 3. Subtract the diagonal ``` m = m - np.diag(m0.diagonal()) ``` Then `m[0,2]` and `m[2,0]` will both give you `4`. ``` 0 1 2 0 1.0 2.0 4.0 1 2.0 3.0 5.0 2 4.0 5.0 6.0 ```
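The same "mirror the index" symmetry can also be used without materialising the full matrix; this dict-based sketch (the layout and the `get` name are mine) always reads from the lower triangle:

```python
# Lower-triangle entries of the question's matrix, keyed by (row, col).
tri = {(0, 0): 1.0,
       (1, 0): 2.0, (1, 1): 3.0,
       (2, 0): 4.0, (2, 1): 5.0, (2, 2): 6.0}

def get(i, j):
    # Symmetric lookup: (i, j) and (j, i) resolve to the same stored entry.
    return tri[(max(i, j), min(i, j))]

print(get(0, 2), get(2, 0))  # -> 4.0 4.0
```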
1,230
48,924,491
Working in python 3. So I have a dictionary like this; `{"stripes": [1,0,5,3], "legs": [4,4,2,3], "colour": ['red', 'grey', 'blue', 'green']}` I know all the lists in the dictionary have the same length, but the may not contain the same type of element. Some of them may even be lists of lists. I want to return a dictionary like this; `$>>> Get_element(2) {"stripes": 5, "legs": 2, "colour": 'blue'}` I know that dictionary comprehension is a thing, but I'm a bit confused on how to use it. I'm not sure if it's the most elegant way to achieve my goal either, can I slice a dictionary?
2018/02/22
[ "https://Stackoverflow.com/questions/48924491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7690011/" ]
In this particular case, the answer to your question is to NOT use LINQ. My advice is to avoid abusing LINQ methods like `.ToList(), .ToArray()` etc., since they have a real impact on performance: they iterate through the collection and create a new collection. Very often a simple `foreach` is much more readable and understandable than a train of LINQ methods.
Linq is usually used to query and transform some data. In case you can change how `persons` are created, this will be the best approach: ``` Person[] persons = nums.Select(n => new Person { Age = n }).ToArray(); ``` If you already have the list of persons, go with the `foreach` loop. LINQ should only be used for querying, not for modifying.
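For the original dictionary question itself, a dict comprehension does the job — a minimal sketch (the `get_element` helper here mirrors the question's hypothetical `Get_element`):

```python
def get_element(d, i):
    """Return a dict mapping each key to the i-th element of its list."""
    return {key: values[i] for key, values in d.items()}

animals = {"stripes": [1, 0, 5, 3],
           "legs": [4, 4, 2, 3],
           "colour": ['red', 'grey', 'blue', 'green']}

print(get_element(animals, 2))  # {'stripes': 5, 'legs': 2, 'colour': 'blue'}
```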
1,231
14,631,708
I need to set up a private PyPI repository. I've realized there are a lot of them to choose from, and after surfing around, I chose [djangopypi2](http://djangopypi2.readthedocs.org/) since I found their installation instructions the clearest and the project is active. I have never used Django before, so the question might really be a Django question. I followed the instructions and started up the application with this command: ``` $ gunicorn_django djangopypi2.website.settings ``` The repository is working as I want. After configuring '~/.pypirc', I can upload packages using: ``` $ python setup.py sdist upload -r local ``` And after configuring '~/.pip/pip.conf' with 'extra-index-url' I can install packages using: ``` $ pip install <package-name> ``` However, anyone can browse and download my packages. Authentication seems to only be needed for uploading packages. I tried using this example to require login to all pages: [Best way to make Django's login\_required the default](https://stackoverflow.com/questions/2164069/best-way-to-make-djangos-login-required-the-default/2164224#2164224) And set this: ``` LOGIN_REQUIRED_URLS = ( r'/(.*)$', ) LOGIN_REQUIRED_URLS_EXCEPTIONS = ( r'/users/login(.*)$', r'/users/logout(.*)$', ) ``` Now the webgui requires login on all pages, so that part works as expected, but I am not able to use the pip and upload utilities from the command-line anymore. I tried 'pip install xxx' using the extra-index-url setting in 'pip.conf' like this: ``` extra-index-url = http://username:password@127.0.0.1:8000/simple/ ``` but it says 'No distributions at all found for xxx' 'python setup.py sdist upload' gives: ``` Submitting dist/xxx-0.0.1.tar.gz to http://127.0.0.1:8000/ Upload failed (302): FOUND ``` So the question is, how do I enable authentication to work from 'pip' and 'python setup.py register/upload'?
2013/01/31
[ "https://Stackoverflow.com/questions/14631708", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1919913/" ]
You could declare, *and implement*, a pure virtual destructor: ``` class ShapeF { public: virtual ~ShapeF() = 0; ... }; ShapeF::~ShapeF() {} ``` It's a tiny step from what you already have, and will prevent `ShapeF` from being instantiated directly. The derived classes won't need to change.
Try using a protected constructor
1,234
74,242,407
I have a code that looks like: ``` #!/usr/bin/env python '''Plot multiple DOS/PDOS in a single plot run python dplot.py -h for more usage ''' import argparse import sys import matplotlib.pyplot as plt import numpy as np parser = argparse.ArgumentParser() # parser.add_argument('--dos', help='Plot the dos', required=True) parser.add_argument('--dos', nargs='*', help='Files to plot') parser.add_argument('--label', nargs='*', help='Label of the files') parser.add_argument('--fermi', help='Fermi Energy') args = parser.parse_args() ``` If I run this code with `python foo.py -h`, I get the output: ``` usage: foo.py [-h] [--dos [DOS ...]] [--label [LABEL ...]] [--fermi FERMI] options: -h, --help show this help message and exit --dos [DOS ...] Files to plot --label [LABEL ...] Label of the files --fermi FERMI Fermi Energy ``` I know I can separately print the docstring using `print(__doc__)`. But, I want the `python foo.py -h` to print both the docstring together with the present `-h` output. That is, `python foo.py -h` should give: ``` Plot multiple DOS/PDOS in a single plot run python dplot.py -h for more usage usage: foo.py [-h] [--dos [DOS ...]] [--label [LABEL ...]] [--fermi FERMI] options: -h, --help show this help message and exit --dos [DOS ...] Files to plot --label [LABEL ...] Label of the files --fermi FERMI Fermi Energy ``` Is this possible?
2022/10/29
[ "https://Stackoverflow.com/questions/74242407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2005559/" ]
Use the `window` property `localStorage` to save the `theme` value across browser sessions. Try the following code: ``` const theme = localStorage.getItem('theme') || 'light' // fall back to 'light' on first visit localStorage.setItem('theme', theme) // persist the choice document.body.classList.add(theme) ```
Here is a way you could do it: cookies. ``` function toggleDarkMode() { let darkTheme = document.body; darkTheme.classList.toggle("darkMode"); document.cookie = "theme=dark"; } ``` ``` function toggleLightMode() { let lightTheme = document.body; lightTheme.classList.toggle("lightMode"); document.cookie = "theme=light"; } ``` and then to check: ``` let theme = document.cookie; if (theme === 'theme=light') { let lightTheme = document.body; lightTheme.classList.toggle("lightMode"); } else { let darkTheme = document.body; darkTheme.classList.toggle("darkMode"); } ```
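For the argparse question itself, one common approach (a sketch, not taken from the answers above) is to pass the module docstring to the parser as its `description`, with `RawDescriptionHelpFormatter` so the line breaks survive:

```python
import argparse

DOC = '''Plot multiple DOS/PDOS in a single plot
run python dplot.py -h for more usage
'''

# The description is shown by -h (after the usage line);
# RawDescriptionHelpFormatter keeps the docstring's formatting intact.
parser = argparse.ArgumentParser(
    description=DOC,
    formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument('--fermi', help='Fermi Energy')

help_text = parser.format_help()
print(help_text)
```

Note that `description` appears after the usage line; if the docstring must come strictly before it, one option is to subclass the parser and print the docstring inside `print_help` before calling the parent implementation.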
1,237
38,571,667
I'm trying to use Python to download an Excel file to my local drive from Box. Using the boxsdk I was able to authenticate via OAuth2 and successfully get the file id on Box. However when I use the `client.file(file_id).content()` function, it just returns a string, and if I use `client.file(file_id).get()` then it just gives me a `boxsdk.object.file.File`. Does anybody know how to write either of these to an Excel file on the local machine? Or a better method of using Python to download an excel file from Box. (I discovered that `boxsdk.object.file.File` has an option `download_to(writeable_stream)` [here](http://box-python-sdk.readthedocs.io/en/latest/boxsdk.object.html) but I have no idea how to use that to create an Excel file and my searches haven't been helpful).
2016/07/25
[ "https://Stackoverflow.com/questions/38571667", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6635737/" ]
It is correct that the documentation and source for `download_to` can be found [here](http://box-python-sdk.readthedocs.io/en/latest/boxsdk.object.html?highlight=download_to#boxsdk.object.file.File.download_to) and [here](http://box-python-sdk.readthedocs.io/en/latest/_modules/boxsdk/object/file.html#File.download_to). I learned about it from [this answer](https://stackoverflow.com/a/49315500/778533) myself. You just need to do the following ``` path = 'anyFileName.xlsx' with open(path, 'wb') as f: client.file(file_id).download_to(f) ```
You could use the python [csv library](https://docs.python.org/2/library/csv.html) with the `dialect='excel'` flag. It works really nicely for exporting data to Microsoft Excel. The main idea is to use `csv.writer` inside a loop, writing one line at a time. Try this, and if you can't get it working, post the code here.
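A minimal sketch of that csv approach, with made-up rows and a hypothetical `out.csv` output file:

```python
import csv

# Hypothetical data for illustration only.
rows = [
    ['name', 'value'],
    ['alpha', 1],
    ['beta', 2],
]

# dialect='excel' produces CSV output that Microsoft Excel opens cleanly;
# newline='' prevents blank lines between rows on Windows.
with open('out.csv', 'w', newline='') as f:
    writer = csv.writer(f, dialect='excel')
    for row in rows:
        writer.writerow(row)
```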
1,238
53,077,341
I am trying to find the proper python syntax to create a flag with a value of `yes` if `columnx` contains any of the following numbers: **`1, 2, 3, 4, 5`**. ``` def create_flag(df): if df['columnx'] in (1,2,3,4,5): return df['flag']=='yes' ``` I get the following error. > > TypeError: invalid type comparison > > > Is there an obvious mistake in my syntax?
2018/10/31
[ "https://Stackoverflow.com/questions/53077341", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6365890/" ]
Use `np.where` with pandas `isin` as: ``` df['flag'] = np.where(df['columnx'].isin([1,2,3,4,5]),'yes','no') ```
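A runnable sketch of that `np.where`/`isin` one-liner, on a tiny made-up frame:

```python
import numpy as np
import pandas as pd

# Hypothetical data: only 2 and 5 fall inside the flagged set.
df = pd.DataFrame({'columnx': [0, 2, 7, 5]})
df['flag'] = np.where(df['columnx'].isin([1, 2, 3, 4, 5]), 'yes', 'no')

print(df['flag'].tolist())  # ['no', 'yes', 'no', 'yes']
```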
You have a lot of problems in your code! I assume you want to try something like this ``` def create_flag(df): if df['columnx'] in [1,2,3,4,5]: df['flag']='yes' x = {"columnx":2,'flag':None} create_flag(x) print(x["flag"]) ```
1,240
25,299,625
I have implemented a REST Api (<http://www.django-rest-framework.org/>) as follows: ``` @csrf_exempt @api_view(['PUT']) def updateinfo(request, id, format=None): try: user = User.objects.get(id=id) except User.DoesNotExist: return HttpResponse(status=status.HTTP_404_NOT_FOUND) if request.method == 'PUT': serializer = UserSerializer(user, data=request.DATA) if serializer.is_valid(): serializer.save() return Response(serializer.data) return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) ``` which works fine when I update user info through the browser. But I have difficulties calling this Api using Requests (<http://docs.python-requests.org/en/latest/>). This is my code where I am calling the above api: ``` payload = {'id':id, ...} resp = requests.put(updateuserinfo_url, data=payload) ``` and this is the response that I receive: ``` resp.text {"id": ["This field is required."], ...} ``` I checked `request.DATA` and it seems that it is empty. I would appreciate it if someone could help me find what is wrong with my code, or tell me if I am missing some additional settings/arguments required to make this simple request.
2014/08/14
[ "https://Stackoverflow.com/questions/25299625", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3324751/" ]
You are missing the django-rest framework parser decorator; in your case, you need to use `@parser_classes((FormParser,))` to populate the request.DATA dict. [Read more here](http://www.django-rest-framework.org/api-guide/parsers) Try this: ``` from rest_framework.parsers import FormParser @parser_classes((FormParser,)) @csrf_exempt @api_view(['PUT']) def updateinfo(request, id, format=None): try: user = User.objects.get(id=id) except User.DoesNotExist: return HttpResponse(status=status.HTTP_404_NOT_FOUND) if request.method == 'PUT': serializer = UserSerializer(user, data=request.DATA) if serializer.is_valid(): serializer.save() return Response(serializer.data) return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) ```
Try doing everything using JSON. 1. Add JSONParser, like [levi](https://stackoverflow.com/users/1539655/levi) explained 2. Add custom headers to your request, like in [this example](http://docs.python-requests.org/en/latest/user/quickstart/#custom-headers) So for you, maybe something like: ``` >>> import json >>> payload = {'id':id, ...} >>> headers = {'content-type': 'application/json'} >>> r = requests.put(url, data=json.dumps(payload), headers=headers) ```
1,241