qid: int64 (46k – 74.7M)
question: string (54 – 37.8k chars)
date: string (10 chars)
metadata: list (3 items)
response_j: string (29 – 22k chars)
response_k: string (26 – 13.4k chars)
__index_level_0__: int64 (0 – 17.8k)
56,181,987
I installed PySpark on Amazon AWS using these instructions: <https://medium.com/@josemarcialportilla/getting-spark-python-and-jupyter-notebook-running-on-amazon-ec2-dec599e1c297> This works fine: ```py import pyspark as SparkContext ``` This gives an error: ``` sc = SparkContext() TypeError Traceback (most recent call last) <ipython-input-3-2dfc28fca47d> in <module> ----> 1 sc = SparkContext() TypeError: 'module' object is not callable ```
2019/05/17
[ "https://Stackoverflow.com/questions/56181987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11270319/" ]
You can just use the copy constructor of `ArrayList`, which accepts a `Collection<? extends E>`: ``` List<GtbEtobsOYenibelge> listOnayStatu = servis.listOnayStatus4Belge(user.getBirimId().getId()); List<GtbEtobsOYenibelge> cloneOnayStatu = new ArrayList<>(listOnayStatu); ``` That way you create a copy of `listOnayStatu`. Also, you should not rely on `clone()` anymore, as it is widely regarded as a flawed piece of API design.
The method `servis.listOnayStatus4Belge` returns a [Vector](https://docs.oracle.com/javase/8/docs/api/index.html). A `Vector` implements the `List` interface but is not an `ArrayList`, so you can't cast it to one. Looking at the problematic statement: ``` cloneOnayStatu = ((List) ((ArrayList) listOnayStatu).clone()); ``` You are copying a `Vector` and assigning it to `cloneOnayStatu`. You should be able to do it like this: ``` cloneOnayStatu = (List<GtbEtobsOYenibelge>) ((Vector<GtbEtobsOYenibelge>)listOnayStatu).clone(); ``` The `clone()` method call will return another `Vector`, but its declared return type is `Object`, so you need to cast it to a `List` for the assignment to work. However, `clone()` is not used much these days. You can have more control over what kind of `List` you want the result to be by using a constructor, such as the following: ``` cloneOnayStatu = new ArrayList<>(listOnayStatu); ```
12,705
41,492,878
I tried to install the "scholarly" package, but I keep receiving this error: ``` x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c build/temp.linux-x86_64-2.7/_openssl.c -o build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7/_openssl.o build/temp.linux-x86_64-2.7/_openssl.c:434:30: fatal error: openssl/opensslv.h: No such file or directory compilation terminated. error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 Command "/usr/bin/python -u -c "import setuptools,tokenize;__file__='/tmp/pip-build-0OXGEx/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-EdgZGB-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-0OXGEx/cryptography/ ``` I already tried the solutions in the following post, but they didn't work: [pip install lxml error](https://stackoverflow.com/questions/5178416/pip-install-lxml-error/5178444)
2017/01/05
[ "https://Stackoverflow.com/questions/41492878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5413088/" ]
I had the same problem. This one helped me: ``` sudo apt-get install build-essential libssl-dev libffi-dev python-dev ``` If you are using `python3`, try to replace `python-dev` with `python3-dev`
Install lib32ncurses5-dev: ``` sudo apt-get install lib32ncurses5-dev ```
12,707
39,983,159
This is the code that results in an error message: ``` import urllib import xml.etree.ElementTree as ET url = raw_input('Enter URL:') urlhandle = urllib.urlopen(url) data = urlhandle.read() tree = ET.parse(data) ``` The error: ![error msg screenshot](https://i.stack.imgur.com/eMKS2.png) I'm new to Python. I did read the documentation and a couple of tutorials, but clearly I've still done something wrong. I don't believe it is the XML file itself, because it does this with two different XML files.
2016/10/11
[ "https://Stackoverflow.com/questions/39983159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6938631/" ]
`data` is a reference to the XML content as a string, but the [`parse()`](https://docs.python.org/2.7/library/xml.etree.elementtree.html#xml.etree.ElementTree.parse) function expects a filename or [file object](https://docs.python.org/2/glossary.html#term-file-object) as its argument. That's why there is an error. `urlhandle` is a file object, so `tree = ET.parse(urlhandle)` should work for you.
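For completeness, here is the question's snippet with that one-line fix applied (a minimal sketch; Python 2, as in the question):

```py
import urllib
import xml.etree.ElementTree as ET

url = raw_input('Enter URL:')
urlhandle = urllib.urlopen(url)
tree = ET.parse(urlhandle)  # pass the file-like object, not the string it contains
root = tree.getroot()       # hypothetical follow-up, just to show the tree is usable
```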
The error message indicates that your code is trying to open a file whose name is stored in the variable `source`. It's failing to open that file (`IOError`) because the variable `source` contains a bunch of XML, not a file name.
12,713
55,436,590
I am a beginner trying to learn Python. I wrote a program using Geany and would like to build and execute it, but I keep getting this error: "The system cannot find the path specified". I believe I added the right info to the Path, though: ``` Compile C:\Python373\python -m py_compile "%f" Execute C:\Python373\python "%f" ``` This doesn't work. Can anyone help me figure it out? Thank you.
2019/03/30
[ "https://Stackoverflow.com/questions/55436590", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10286420/" ]
You can try this solution: First, open `sdkmanager.bat` with any text editor. Then find this line: ``` %JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %SDKMANAGER_OPTS% ``` And change it to this: ``` %JAVA_EXE%" %DEFAULT_JVM_OPTS% --add-modules java.xml.bind %JAVA_OPTS% %SDKMANAGER_OPTS% ``` I hope this solves your problem.
I had to do the following to fix this error on Windows 10: 1. Install JDK 8. I had JDK 12 installed, but it did not seem to work with that version. 2. Add Java to my environment variable `Path`. To add Java to your environment variable `Path`, do the following: Go to Computer -> Advanced system settings -> Environment variables -> PATH -> and add the path to your local Java bin directory. It looks like this: `C:\Program Files\Java\jdk-versionyouhave\bin`
12,716
17,260,338
I'm trying to deploy a Flask app to Heroku; however, upon pushing the code I get the error ``` 2013-06-23T11:23:59.264600+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch ``` I'm not sure what to try; I've tried changing the port from 5000 to 33507, but to no avail. My Procfile looks like this: ``` web: python main.py ``` `main.py` is the main Flask file which starts the server. Thanks.
2013/06/23
[ "https://Stackoverflow.com/questions/17260338", "https://Stackoverflow.com", "https://Stackoverflow.com/users/970323/" ]
In my Flask app hosted on Heroku, I use this code to start the server: ```py import os if __name__ == '__main__': # Bind to PORT if defined, otherwise default to 5000. port = int(os.environ.get('PORT', 5000)) app.run(host='0.0.0.0', port=port) ``` When developing locally, this will use port 5000; in production, Heroku will set the `PORT` environment variable. (Side note: by default, Flask is only accessible from your own computer, not from any other machine on the network; see the [Quickstart](https://flask.palletsprojects.com/en/2.0.x/quickstart/). Setting `host='0.0.0.0'` makes Flask available from the network.)
Your `main.py` script cannot bind to an arbitrary fixed port; it needs to bind to the port number set in the `$PORT` environment variable. Heroku sets the port it wants in that variable prior to invoking your application. The error you are getting suggests you are binding to a port that is not the one Heroku expects.
12,719
60,136,547
I can't figure out how to use multithreading/multiprocessing in Python to speed up this scraping process of getting all the usernames from the hashtag 'cats' on Instagram. My goal is to make this as fast as possible, because currently the process is kinda slow: ``` from instaloader import Instaloader HASHTAG = 'cats' loader = Instaloader(sleep=False) users = [] for post in loader.get_hashtag_posts(HASHTAG): if post.owner_username not in users: users.append(post.owner_username) print(post.owner_username) ```
2020/02/09
[ "https://Stackoverflow.com/questions/60136547", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12867155/" ]
The `LockedIterator` is inspired by [here](https://stackoverflow.com/questions/1131430/are-generators-threadsafe). ``` import threading from instaloader import Instaloader class LockedIterator(object): def __init__(self, it): self.lock = threading.Lock() self.it = it.__iter__() def __iter__(self): return self def __next__(self): self.lock.acquire() try: return self.it.__next__() finally: self.lock.release() HASHTAG = 'cats' posts = Instaloader(sleep=False).get_hashtag_posts(HASHTAG) posts = LockedIterator(posts) users = set() def worker(): try: for post in posts: print(post.owner_username) users.add(post.owner_username) except Exception as e: print(e) raise threads = [] for i in range(4): t = threading.Thread(target=worker) threads.append(t) t.start() for t in threads: t.join() ```
**The goal is to have an input file and separate output .txt files; maybe you can help me here too.** I think the problem is somewhere around line 45. I'm not really advanced, so my attempt may contain some wrong code, I don't know. As example hashtags for input.txt I used: *wqddt & d2deltas* ``` from instaloader import Instaloader import threading import io import time import sys class LockedIterator(object): def __init__(self, it): self.lock = threading.Lock() self.it = it.__iter__() def __iter__(self): return self def __next__(self): self.lock.acquire() try: return self.it.__next__() finally: self.lock.release() f = open('input.txt','r',encoding='utf-8') HASHTAG = f.read() p = HASHTAG.split('\n') PROFILE = p[:] for ind in range(len(PROFILE)): pro = PROFILE[ind] posts = Instaloader(sleep=False).get_hashtag_posts(pro) posts = LockedIterator(posts) users = set() start_time = time.time() PROFILE = p[:] def worker(): for ind in range(len(PROFILE)): pro = PROFILE[ind] try: filename = 'downloads/'+pro+'.txt' fil = open(filename,'a',newline='',encoding="utf-8") for post in posts: hashtags = post.owner_username fil.write(str(hashtags)+'\n') except: print('Skipping',pro) threads = [] for i in range(4): #Input Threads t = threading.Thread(target=worker) threads.append(t) t.start() for t in threads: t.join() end_time = time.time() print("Done") print("Time taken : " + str(end_time - start_time) + "sec") ```
12,725
20,763,448
EDITED HEAVILY with some new information (and a bounty) I am trying to create a plug-in in Python for GIMP (on Windows). This page <http://gimpbook.com/scripting/notes.html> suggests running it from the shell, or looking at ~/.xsession-errors; neither works. I am able to run it from the cmd shell, as > > gimp-2.8.exe -c --verbose ## (as suggested by <http://gimpchat.com/viewtopic.php?f=9&t=751> ) > > > This causes the output from "pdb.gimp\_message(...)" to go to a terminal. BUT!!! this only works when everything is running as expected; I get no output on crashes. I've tried print statements; they go nowhere. This other guy had a similar problem, but the discussion got sidetracked: [Plugins usually don't work, how do I debug?](https://stackoverflow.com/questions/18969820/plugins-usually-dont-work-how-do-i-debug) --- In some places I saw recommendations to run it from within the python-fu console. This gets me nowhere: I need to comment out import gimpfu, as it raises errors, and I don't get gtk working. --- My current problem is that even if the plugin registers and shows up on the menu, when there is some error and it does not behave as expected, I don't know where to start looking for hints. (I've tried clicking in all sorts of contexts, with and without a selection, with and without an image.) I was able to copy and execute example plugins from <http://gimpbook.com/scripting/> and got them working, but when a change I make breaks something, I know not what, and morphing an existing program line by line is tedious (GIMP has to be shut down and restarted each time). --- So to sum up: 1- Can I refresh a plugin without restarting GIMP? (so at least my slow-morph will be faster) 2- Can I run plug-ins from the python-fu shell (as opposed to just importing them to make sure they parse)? 3- Is there an error log I am missing, or something to that effect? 4- Is there a way to run GIMP on Windows from a shell to see output? (Am I better off under cygwin (or virtualbox..)?) 5- I haven't yet looked up how to connect winpdb to an existing process. How would I go about connecting it to a Python process that runs inside GIMP? Thanks
2013/12/24
[ "https://Stackoverflow.com/questions/20763448", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1456530/" ]
> > 1- can I refresh a plugin without restarting gimp? (so at least my slow-morph will be faster) > > > You must restart GIMP when you add a script or change register(). There is no need to restart when changing other parts of the script -- it runs as a separate process and will be re-read from disk each time. Helpful source: <http://gimpbook.com/scripting/notes.html> > > 2- can I run plug-ins from the python-fu shell? (as opposed to just importing them to make sure they parse.) > > > Yes, you can access your registered plug-in in the `python-fu` console as: ``` >>> pdb.name_of_registered_plugin ``` And you can call it like: ``` >>> pdb.name_of_registered_plugin(img, arg1, arg2, ...) ``` Also, in the `python-fu` console dialog, you can click the `Browse...` option, find your registered plug-in, and then click `Apply` to import it into the `python-fu` console. Helpful source: <http://registry.gimp.org/node/28434> > > 3- is there an error-log I am missing, or something to that effect? > > > To log, you can define a function like this: ``` def gimp_log(text): pdb.gimp_message(text) ``` And use it in your code whenever you want. To see the log, in `gimp`, open `Error Console` from `Dockable Dialogs` in the `Windows` menu; otherwise a message box will pop up every time you log something. Also, you can redirect `stderr` and `stdout` to files: ``` import sys sys.stderr = open('err.txt', 'a') sys.stdout = open('log.txt', 'a') ``` When you do that, all exceptions will go to `err.txt` and all print output will go to `log.txt`. Note: open the files with the `a` option instead of `w` to keep the log files across runs. Helpful sources: [How do I output info to the console in a Gimp python script?](https://stackoverflow.com/questions/9955834/how-do-i-output-info-to-the-console-in-a-gimp-python-script) <http://www.exp-media.com/content/extending-gimp-python-python-fu-plugins-part-2> > > 4- is there a way to run gimp on windows from a shell to see output? (am I better off under cygwin (or virtualbox..))? > > > I got some errors trying that, but may try again ... > > 5- I haven't yet looked up how to connect winpdb to an existing process. how would I go about connecting it to a python process that runs inside gimp? > > > First install [winpdb](http://winpdb.org/download/), and also [wxPython](http://www.wxpython.org/) (the Winpdb GUI depends on wxPython). Note that `GIMP` has its own Python interpreter, and you may want to install `winpdb` either into your default Python interpreter or into GIMP's Python interpreter. If you install `winpdb` into your default Python interpreter, you then need to copy the installed `rpdb2.py` file into `..\Lib\site-packages` of the GIMP Python interpreter path. After that you should be able to import the `rpdb2` module from the `Python-Fu` console of GIMP: ``` GIMP 2.8.10 Python Console Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] >>> import rpdb2 >>> ``` Now in your plug-in code, for example in your main function, add the following code: ``` import rpdb2 # may be placed outside the function rpdb2.start_embedded_debugger("pass") # a password that will be asked for by winpdb ``` Next, go to GIMP and run your Python plug-in; when you run it, it will execute and then wait when it reaches the above code. Now, to open the `Winpdb GUI`, go to `..\PythonXX\Scripts` and run `winpdb_.pyw`. (Note that when using Winpdb for remote debugging, make sure any [firewall](http://winpdb.org/docs/requirements/) on the way has TCP port 51000 open.
Note that if port 51000 is taken, Winpdb will search for an alternative port between 51000 and 51023.) Then in the `Winpdb GUI`, from the `File` menu select `Attach` and give `pass` as the password; you will then see your plug-in script in the list. Select it and start debugging step by step. ![debug python gimp plugin with Winpdb](https://i.stack.imgur.com/1TuJL.jpg) Helpful resource: [Installing PyGIMP on Windows](https://stackoverflow.com/questions/14592607/installing-pygimp-on-windows) Useful sources: <http://wiki.gimp.org/index.php/Hacking:Plugins> <http://www.gimp.org/docs/python/index.html> <http://wiki.elvanor.net/index.php/GIMP_Scripting> <http://www.exp-media.com/gimp-python-tutorial> <http://coderazzi.net/python/gimp/pythonfu.html> <http://www.ibm.com/developerworks/opensource/library/os-autogimp/os-autogimp-pdf.pdf>
As noted in [How do I output info to the console in a Gimp python script?](https://stackoverflow.com/questions/9955834/how-do-i-output-info-to-the-console-in-a-gimp-python-script/15637932#15637932), add ``` import sys sys.stderr = open( 'c:\\temp\\gimpstderr.txt', 'w') sys.stdout = open( 'c:\\temp\\gimpstdout.txt', 'w') ``` at the beginning of the plug-in file.
12,726
55,841,631
So I have an exercise that asks me to create a matrix, but I'm unsure why the values are shared. I'm not sure if it's due to the sequence being a reference type or not. If you run this code in pythontutor, you'll find that the elements of the main tuple all point to the same 'row' tuple, i.e. the row is shared. I understand that if I did `return row*n` it'd be shared, but **why is it that when you concatenate tuples, or append lists, the row would still be shared (referring to the same memory address)**? ``` def make_matrix(n): row = (0, )*n board = () for i in range(n): board += (row,) return board matrix = make_matrix(4) print(matrix) ``` As compared to this code, where each row is separately (0,0,0,0) and not shared: ``` def make_board(n): return tuple(tuple(0 for i in range(n)) for i in range(n)) matrix = make_board(4) print(matrix) ```
2019/04/25
[ "https://Stackoverflow.com/questions/55841631", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11245768/" ]
The reason why your query isn't working as expected is that you are not actually targeting the specific array element you want to update. Here's how I would write the query: ``` patients.findOneAndUpdate( {_id: "5cb939a3ba1d7d693846136c"}, {$set: {"myArray.$[el].value": 424214 } }, { arrayFilters: [{ "el.treatment": "beauty" }], new: true } ) ``` To break down what's happening: 1) First we're looking for the patient by ID. 2) Then, using `$set`, we specify which updates are being applied. Notice the `$[el]` syntax; this refers to an individual element in the array. You can name it whatever you want, but it must match the variable name used in `arrayFilters`. 3) Then, in the configuration object, we specify `arrayFilters`, which says to only target array elements with "treatment" equal to "beauty". Note, though, that it will target and update all array elements with treatment equal to "beauty". If you haven't already, you'll want to ensure that the treatment value is unique. [`$addToSet`](https://docs.mongodb.com/manual/reference/operator/update/addToSet/) could be useful here, and would be used when **adding** elements to the array.
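For readers following the same pattern from Python, an equivalent update with pymongo might look like this (a sketch, not from the original thread; the connection details and database name are assumptions, and `array_filters` requires MongoDB 3.6+ and pymongo 3.6+):

```py
from pymongo import MongoClient, ReturnDocument

client = MongoClient()                    # assumes a local MongoDB instance
patients = client["clinic"]["patients"]   # hypothetical database name

updated = patients.find_one_and_update(
    {"_id": "5cb939a3ba1d7d693846136c"},
    {"$set": {"myArray.$[el].value": 424214}},
    array_filters=[{"el.treatment": "beauty"}],  # only elements whose treatment is "beauty"
    return_document=ReturnDocument.AFTER,        # like {new: true} in the answer above
)
print(updated)
```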
OK, I found it out and managed to update, but the answer from Frank Rose is the right one: it worked in my other projects but not in the current one, because I was using version 4.4 of Mongoose, and only version 5 and above can use `arrayFilters`. For Mongoose version < 5: ``` patients.findOneAndUpdate( { _id: "5cb939a3ba1d7d693846136c", 'images.treatment': "beauty" }, {$set: {"myArray.$.value": 424214 } }, { new: true } ) ```
12,728
50,913,172
Big hello to the Stackoverflow community, I am trying to read in a .csv file with 1370 rows and two columns: `Time` and `Speed`. ``` Time Speed 0 1 1 4 2 7 3 8 ``` I want to find the difference in `Speed` between two time steps (e.g. `Time` `2` and `1`, which is `3`) for the entire length of the data. I want to add a new column `dS` with the previously calculated difference. The data would now look like: ``` Time Speed dS 0 1 NaN 1 4 3 2 7 3 3 8 1 ``` The code I am using is as follows: ``` import pandas as pd from pandas import read_csv df2 = pd.read_csv ('speed.csv') dVV = [] for i, row in df2.iterrows(): dVV.append(df2.iloc[i+1,1] - df2.iloc[i,1]) break df2['dVV']=dVV ``` The error I am getting is: ``` ValueError Traceback (most recent call last) <ipython-input-29-4ed9fde37ff9> in <module>() 14 break 15 ---> 16 df2['dVV']=dVV 17 18 #df2.to_csv('udds_test.csv', index=False, header=True) ~\Anaconda3\lib\site-packages\pandas\core\frame.py in __setitem__(self, key, value) 2517 else: 2518 # set column -> 2519 self._set_item(key, value) 2520 2521 def _setitem_slice(self, key, value): ~\Anaconda3\lib\site-packages\pandas\core\frame.py in _set_item(self, key, value) 2583 2584 self._ensure_valid_index(value) -> 2585 value = self._sanitize_column(key, value) 2586 NDFrame._set_item(self, key, value) 2587 ~\Anaconda3\lib\site-packages\pandas\core\frame.py in _sanitize_column(self, key, value, broadcast) 2758 2759 # turn me into an ndarray -> 2760 value = _sanitize_index(value, self.index, copy=False) 2761 if not isinstance(value, (np.ndarray, Index)): 2762 if isinstance(value, list) and len(value) > 0: ~\Anaconda3\lib\site-packages\pandas\core\series.py in _sanitize_index(data, index, copy) 3119 3120 if len(data) != len(index): -> 3121 raise ValueError('Length of values does not match length of ' 'index') 3122 3123 if isinstance(data, PeriodIndex): ValueError: Length of values does not match length of index ``` I am guessing that the code is breaking after the last 1370th row. How can I tackle this?
2018/06/18
[ "https://Stackoverflow.com/questions/50913172", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9957516/" ]
You can just use [`pd.Series.diff`](http://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.Series.diff.html): ``` df['ds'] = df['Speed'].diff() print(df) Time Speed ds 0 0 1 NaN 1 1 4 3.0 2 2 7 3.0 3 3 8 1.0 ``` The loop method you've attempted is not recommended when vectorised solutions such as `pd.Series.diff` are available.
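Tying this back to the script in the question, the whole task reduces to a few lines (a minimal sketch; `speed.csv` and `udds_test.csv` are the filenames the question itself uses):

```py
import pandas as pd

df2 = pd.read_csv('speed.csv')
df2['dS'] = df2['Speed'].diff()  # vectorised difference between consecutive rows
df2.to_csv('udds_test.csv', index=False, header=True)
```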
Use: ``` df['Speed_avg'] = df['Speed'].rolling(2, min_periods=2).mean() df['ds'] = df['Speed'].diff() ``` Output: ``` Time Speed Speed_avg ds 0 0 1 NaN NaN 1 1 4 2.5 3.0 2 2 7 5.5 3.0 3 3 8 7.5 1.0 ```
12,729
46,501,292
I'm building a data extract using [scrapy](https://scrapy.org/) and want to normalize a raw string pulled out of an HTML document. Here's an example string: ``` Sapphire RX460 OC 2/4GB ``` Notice the two groups of two whitespace characters: one preceding the string literal and one between `OC` and `2`. Python provides ways to trim whitespace, as described in [How do I trim whitespace with Python?](https://stackoverflow.com/questions/1185524/how-do-i-trim-whitespace-with-python) But that won't handle the two spaces between `OC` and `2`, which I need collapsed into a single space. I've tried using [`normalize-space()`](http://devdocs.io/xslt_xpath/xpath/functions/normalize-space) from XPath while extracting data with my [scrapy Selector](https://doc.scrapy.org/en/latest/topics/selectors.html), and that works, but the assignment is verbose with strong rightward drift: ``` product_title = product.css('h3').xpath('normalize-space((text()))').extract_first() ``` Is there an elegant way to normalize whitespace using Python? If not a one-liner, is there a way I can break the above line into something easier to read without throwing an indentation error, e.g. ``` product_title = product.css('h3') .xpath('normalize-space((text()))') .extract_first() ```
2017/09/30
[ "https://Stackoverflow.com/questions/46501292", "https://Stackoverflow.com", "https://Stackoverflow.com/users/712334/" ]
You can use: ``` " ".join(s.split()) ``` where `s` is your string.
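A quick sanity check with the example string from the question (a hypothetical snippet): `split()` with no arguments splits on any run of whitespace and discards leading/trailing whitespace, so rejoining with single spaces collapses everything:

```py
s = "  Sapphire RX460 OC  2/4GB"
print(" ".join(s.split()))  # -> "Sapphire RX460 OC 2/4GB"
```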
You can use a function like the one below, with a regular expression that scans for consecutive spaces and replaces them with a single space: ``` import re def clean_data(data): return re.sub(" {2,}", " ", data.strip()) product_title = clean_data(product.css('h3::text').extract_first()) ``` You can then improve the `clean_data` function however you like.
12,730
37,336,875
I have 2 sets of data I crawled from an HTML table using regex expressions. Data: ``` <div class = "info"> <div class="name"><td>random</td></div> <div class="hp"><td>123456</td></div> <div class="email"><td>random@mail.com</td></div> </div> <div class = "info"> <div class="name"><td>random123</td></div> <div class="hp"><td>654321</td></div> <div class="email"><td>random123@mail.com</td></div> </div> ``` Regex: ``` matchname = re.search('\<div class="name"><td>(.*?)</td>' , match3).group(1) matchhp = re.search('\<div class="hp"><td>(.*?)</td>' , match3).group(1) matchemail = re.search('\<div class="email"><td>(.*?)</td>' , match3).group(1) ``` So using the regex I can take out ``` random 123456 random@mail.com ``` After saving this set of data into my database, I want to save the next set. How do I get the next set of data? I tried using findall and then inserting into my db, but everything was in 1 line. I need the data to be in the db set by set. I'm new to Python; please comment on which part is unclear and I will try to edit.
2016/05/20
[ "https://Stackoverflow.com/questions/37336875", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3797825/" ]
You should not be parsing HTML with regex; it's just a mess. Do it with BS4. Doing it the right way: ``` from bs4 import BeautifulSoup soup = BeautifulSoup(match3, "html.parser") names = [] allTds = soup.find_all("td") for i,item in enumerate(allTds[::3]): # firstname hp email names.append((item.text, allTds[(i*3)+1].text, allTds[(i*3)+2].text)) ``` And for the sake of answering the question asked, I guess I'll include a horrible ugly regex that you should never use, ESPECIALLY because it's HTML; don't ever use regex for parsing HTML. (please don't use this) ``` import re for thisMatch in re.findall(r"<td>(.+?)</td>.+?<td>(.+?)</td>.+?<td>(.+?)</td>", match3, re.DOTALL): print(thisMatch[0], thisMatch[1], thisMatch[2]) ```
As @Racialz pointed out, you should look into [using HTML parsers instead of regular expressions](https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags). Let's take [`BeautifulSoup`](https://www.crummy.com/software/BeautifulSoup/bs4/doc/), as @Racialz did, but build a more robust solution. Find all `info` elements and locate all fields inside, producing a list of dictionaries in the output: ``` from pprint import pprint from bs4 import BeautifulSoup data = """ <div> <div class = "info"> <div class="name"><td>random</td></div> <div class="hp"><td>123456</td></div> <div class="email"><td>random@mail.com</td></div> </div> <div class = "info"> <div class="name"><td>random123</td></div> <div class="hp"><td>654321</td></div> <div class="email"><td>random123@mail.com</td></div> </div> </div> """ soup = BeautifulSoup(data, "html.parser") fields = ["name", "hp", "email"] result = [ {field: info.find(class_=field).get_text() for field in fields} for info in soup.find_all(class_="info") ] pprint(result) ``` Prints: ``` [{'email': 'random@mail.com', 'hp': '123456', 'name': 'random'}, {'email': 'random123@mail.com', 'hp': '654321', 'name': 'random123'}] ```
12,733
39,679,940
I have two lists: ``` list1=['lo0','lo1','te123','te234'] list2=['lo0','first','lo1','second','lo2','third','te123','fourth'] ``` I want to write Python code that prints the next element of list2 where an item of list1 is present in list2, and otherwise writes "no-match", i.e. I want the output as: ``` first second no-match fourth ``` I came up with the following code: ``` for i1 in range(len(list2)): for i2 in range(len(list1)): if list1[i2]==list2[i1]: desc.write(list2[i1+1]) desc.write('\n') ``` but it gives the output as: ``` first second fourth ``` and I cannot figure out how to produce "no-match" where the elements aren't present in list2. Please guide! Thanks in advance.
2016/09/24
[ "https://Stackoverflow.com/questions/39679940", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6708941/" ]
You're absolutely right - `messagePolling` is a function. However, `messagePolling()` is *not* a function. You can see that right in your console: ``` // assume messagePolling is a function that doesn't return anything messagePolling() // -> undefined ``` So, when you do this: ``` setTimeout(messagePolling(), 1000) ``` You're really doing this: ``` setTimeout(undefined, 1000) ``` But when you do this: ``` setTimeout(messagePolling, 1000) ``` You're *actually passing the function* to `setTimeout`. Then `setTimeout` will know to run the function you passed - `messagePolling` - later on. It won't work if it decides to call `undefined` (the result of `messagePolling()`) later, right?
Written as `setTimeout(messagePolling(),1000)` the function is executed **immediately** and a `setTimeout` is set to call `undefined` (the value returned by your function) after one second. (this should actually throw an error if ran inside Node.js, as `undefined` is not a valid function) Written as `setTimeout(messagePolling,1000)` the `setTimeout` is set to call your function after one second.
12,734
52,608,069
A Python newbie here. I am currently trying to figure out how to parse all the .msg files I have stored in a specific folder and then save the body text to a csv file. ``` import win32com.client outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI") msg = outlook.OpenSharedItem(r"C:\Users\XY\Documents\Email Reader\test.msg") print(msg.Body) del outlook, msg ``` So far, I have only found a way to open one specific .msg file, but not all the files stored in my folder. I think I should be able to handle storing the data in a csv file, but I just can't figure out how to read multiple .msg files. Hope you can help me! Cheers
2018/10/02
[ "https://Stackoverflow.com/questions/52608069", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10445933/" ]
You can try something like this to iterate through every file with '.msg' extension in a directory: ``` import os pathname = os.fsencode('Pathname as string') for file in os.listdir(pathname): filename = os.fsdecode(file) if filename.endswith(".msg"): #Do something continue else: continue ``` Hope this helps!
You can use `pathlib` to iterate over the contents of the directory. Try this: ``` from pathlib import Path import win32com.client outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI") # Assuming \Documents\Email Reader is the directory containing the files for p in Path(r'C:\Users\XY\Documents\Email Reader').iterdir(): if p.is_file() and p.suffix == '.msg': msg = outlook.OpenSharedItem(str(p)) # pass a plain string; the COM call may not accept a Path object print(msg.Body) ```
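Since the stated goal is to save the body text to a CSV file, the pieces could be combined like this (a minimal sketch; the output filename and the two-column layout are assumptions, not something given in the question):

```py
import csv
from pathlib import Path

import win32com.client

outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
folder = Path(r'C:\Users\XY\Documents\Email Reader')

with open('bodies.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['file', 'body'])  # hypothetical header row
    for p in folder.glob('*.msg'):
        msg = outlook.OpenSharedItem(str(p))
        writer.writerow([p.name, msg.Body])
        del msg  # release the COM object before opening the next file
```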
12,739
39,280,060
So I was messing around in Python and ran into a problem. I start out with a string like the following: ``` a = "1523467aa252aaa98a892a8198aa818a18238aa82938a" ``` For every number, you have to add it to a `sum` variable. Also, with every encounter of a letter, the index iterator must move back 2. My program keeps crashing at `isinstance()`. This is the code I have so far: ``` def sum(): a = '93752aaa746a27a1754aa90a93aaaaa238a44a75aa08750912738a8461a8759383aa328a4a4935903a6a55503605350' z = 0 for i in a: if isinstance(a[i], int): z = z + a[i] elif isinstance(a[i], str): a = a[:i] + a[(i+1):] i = i - 2 continue print z return z sum() ```
2016/09/01
[ "https://Stackoverflow.com/questions/39280060", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6421595/" ]
This part is not doing what you think: ``` for i in a: if isinstance(a[i], int): ``` Here `i` is not an index; it is each character of the string itself, so there is no need for `a[i]` (indexing a string with a string will actually raise a `TypeError`). Also, since `a` is a string, no element of it will be an `int`; they will all be strings. You want something like this: ``` for i in a: if i.isdigit(): z += int(i) ``` ***EDIT:*** removing elements of an iterable while iterating over it is a common problem on SO. I would recommend creating a new string with only the elements you want to keep: ``` z = 0 b = '' for i in a: if i.isdigit(): z += int(i) b += i a = b # set a back to b, so the "original string" becomes the string with all non-numeric characters removed. ```
You have a few problems with your code. You don't seem to understand how `for... in` loops work, but @Will already addressed that problem in his answer. Furthermore, you have a misunderstanding of how `isinstance()` works. As the numbers are characters of a string, when you iterate over that string each character will also be a (one-length) string. `isinstance(a[i], int)` will fail for every character regardless of whether or not it can be converted to an `int`. What you actually want to do is just try converting each character to an `int` and adding it to the total. If it works, great, and if not just catch the exception and keep on going. You don't need to worry about non-numeric characters because when each one raises a `ValueError` it will simply be ignored and the next character in the string will be processed. ``` string = '93752aaa746a27a1754aa90a93aaaaa238a44a75aa08750912738a8461a8759383aa328a4a4935903a6a55503605350' def sum_(string): total = 0 for c in string: try: total += int(c) except ValueError: pass return total sum_(string) ``` Furthermore, this function is equivalent to the following one-liners: ``` sum(int(c) for c in string if c.isdigit()) ``` Or the functional style... ``` sum(map(int, filter(str.isdigit, string))) ```
12,740
8,329,601
I am a beginner in Python and can't understand why this is happening: ``` from math import * print "enter the number" n=int(raw_input()) d=2 s=0 while d<n : if n%d==0: x=math.log(d) s=s+x print d d=d+1 print s,n,float(n)/s ``` Running it in Python and inputting a non-prime gives the error ``` Traceback (most recent call last): File "C:\Python27\mit ocw\pset1a.py", line 28, in <module> x=math.log(d) NameError: name 'math' is not defined ```
2011/11/30
[ "https://Stackoverflow.com/questions/8329601", "https://Stackoverflow.com", "https://Stackoverflow.com/users/855763/" ]
Change ``` from math import * ``` to ``` import math ``` Using `from X import *` is generally not a good idea as it uncontrollably pollutes the global namespace and could present other difficulties.
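Applied to the snippet from the question, the fix looks like this (a minimal sketch; Python 2, as in the original):

```py
import math

print "enter the number"
n = int(raw_input())
d = 2
s = 0
while d < n:
    if n % d == 0:
        s = s + math.log(d)  # 'math' is now a defined name, so math.log works
        print d
    d = d + 1
print s, n, float(n) / s
```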
You need to `import math` rather than `from math import *`.
12,741
49,709,826
I am on Windows 10, and I run the following Python file: ``` import subprocess subprocess.call("dir") ``` But I get the following error: ``` File "A:/python-tests/subprocess_test.py", line 10, in <module> subprocess.call(["dir"]) File "A:\anaconda\lib\subprocess.py", line 267, in call with Popen(*popenargs, **kwargs) as p: File "A:\anaconda\lib\site-packages\spyder\utils\site\sitecustomize.py", line 210, in __init__ super(SubprocessPopen, self).__init__(*args, **kwargs) File "A:\anaconda\lib\subprocess.py", line 709, in __init__ restore_signals, start_new_session) File "A:\anaconda\lib\subprocess.py", line 997, in _execute_child startupinfo) FileNotFoundError: [WinError 2] The system cannot find the file specified ``` Note that I am only using `dir` as an example here. I actually want to run a more complicated command, but I am getting the same error in that case too. Note that I am not using `shell=True`, so the answer to this question is not applicable: [Cannot find the file specified when using subprocess.call('dir', shell=True) in Python](https://stackoverflow.com/q/20330385/3486684) This is line 997 of `subprocess.py`: ``` hp, ht, pid, tid = _winapi.CreateProcess(executable, args, # no special security None, None, int(not close_fds), creationflags, env, os.fspath(cwd) if cwd is not None else None, startupinfo) ``` When I run the debugger to check out the arguments being passed to CreateProcess, I notice that `executable` is `None`. Is that normal?
2018/04/07
[ "https://Stackoverflow.com/questions/49709826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3486684/" ]
`dir` is a command built into cmd.exe, so there is no dir.exe Windows executable. You must invoke the command through cmd: ``` subprocess.call(['cmd', '/c', 'dir']) ```
You ***must*** set `shell=True` when calling `dir`, since `dir` isn't an executable (there's no such thing as dir.exe); `dir` is an [internal command](https://www.computerhope.com/jargon/i/intecomm.htm) built into cmd.exe. As you can see from the [documentation](https://docs.python.org/dev/library/subprocess.html): > > On Windows with `shell=True`, the `COMSPEC` environment variable specifies > the default shell. The only time you need to specify `shell=True` on > Windows is when the command you wish to execute is built into the > shell (e.g. **dir** or copy). You do not need `shell=True` to run a batch > file or console-based executable. > > >
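Putting the two suggestions side by side (a minimal sketch of the question's `dir` example):

```py
import subprocess

# Option 1: let cmd.exe interpret the shell built-in.
subprocess.call("dir", shell=True)

# Option 2: invoke cmd.exe explicitly, as the other answer suggests.
subprocess.call(["cmd", "/c", "dir"])
```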
12,746
48,213,605
So I have a csv file that looks like this: ``` 1 a 2 b 3 c ``` And I want to make it look like this: ``` 1 2 3 a b c ``` I'm at a loss for how to do this with python3; anyone have any ideas? Really appreciate it.
2018/01/11
[ "https://Stackoverflow.com/questions/48213605", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8761965/" ]
Are you reading the csv with pandas? You can always use numpy or pandas transpose: ``` import numpy as np ar1 = np.array([[1,2,3], ['a','b','c']]) ar2 = np.transpose(ar1) Out[22]: array([['1', 'a'], ['2', 'b'], ['3', 'c']], dtype='<U11') ```
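If you'd rather not pull in numpy or pandas at all, plain `zip(*rows)` does the transpose with just the standard library (a minimal sketch; the filenames and the whitespace delimiter are assumptions based on the sample shown):

```py
# Transpose a small whitespace-delimited file using only built-ins.
with open('input.csv') as f:
    rows = [line.split() for line in f if line.strip()]

with open('output.csv', 'w') as f:
    for column in zip(*rows):  # zip(*rows) flips rows and columns
        f.write(' '.join(column) + '\n')
```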
As others have mentioned, `pandas` and `transpose()` are the way to go here. Here is an example: ``` import pandas as pd input_filename = "path/to/file" # I am using whitespace as the sep because that is what you have shown in the example # Also, you need header=None since your file doesn't have a header df = pd.read_csv(input_filename, header=None, sep="\s+") # read into dataframe output_filename = "path/to/output" df.transpose().to_csv(output_filename, index=False, header=False) ``` **Explanation**: `read_csv()` loads the contents of your file into a `dataframe`, which I called `df`. This is what `df` looks like: ``` >>> print(df) 0 1 0 1 a 1 2 b 2 3 c ``` You want to switch the rows and columns, and we can do that by calling `transpose()`. Here is what the transposed `dataframe` looks like: ``` >>> print(df.transpose()) 0 1 2 0 1 2 3 1 a b c ``` Now write the transposed `dataframe` to a file with the `to_csv()` method. By specifying `index=False` and `header=False`, we avoid writing the header row and the index column.
12,747
5,188,285
I need some debugging libraries/tools that can print stack trace information to stdout. Python's [traceback](http://docs.python.org/library/traceback.html) library is an example of what I mean. What would be the C++ equivalent of Python's traceback library?
2011/03/04
[ "https://Stackoverflow.com/questions/5188285", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
This is platform-specific, and also depends on how you're compiling code. If you compile code with gcc using `-fomit-frame-pointer` it's very hard to get a useful backtrace, generally requiring heuristics. If you're using any libraries that use that flag you'll also run into problems--it's often used for heavily optimized libraries (eg. nVidia's OpenGL libraries). This isn't a self-contained solution, as it's part of a larger engine, but the code is helpful: * <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Unix/Backtrace.cpp> (Linux, OSX) * <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Win32/Crash.cpp> (CrashHandler::do\_backtrace for Win32) * <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Darwin/DarwinThreadHelpers.cpp> (OSX) This includes backtracing with the frame pointer with gcc when available and heuristic backtracing when it isn't; this can tend to give spurious entries in the trace, but for getting a backtrace for a crash report it's much better than losing the trace entirely. There's other related code in those directories you'd want to look at to make use of that code (symbol lookups, signal handling); those links are a good starting point.
Try [google core dumper](http://code.google.com/p/google-coredumper/); it will give you a core dump when you need it.
12,748
14,962,289
I am running a Django app with nginx & uwsgi. Here's how I run uwsgi: ``` sudo uwsgi -b 25000 --chdir=/www/python/apps/pyapp --module=wsgi:application --env DJANGO_SETTINGS_MODULE=settings --socket=/tmp/pyapp.socket --cheaper=8 --processes=16 --harakiri=10 --max-requests=5000 --vacuum --master --pidfile=/tmp/pyapp-master.pid --uid=220 --gid=499 ``` & the nginx configuration: ``` server { listen 80; server_name test.com; root /www/python/apps/pyapp/; access_log /var/log/nginx/test.com.access.log; error_log /var/log/nginx/test.com.error.log; # https://docs.djangoproject.com/en/dev/howto/static-files/#serving-static-files-in-production location /static/ { alias /www/python/apps/pyapp/static/; expires 30d; } location /media/ { alias /www/python/apps/pyapp/media/; expires 30d; } location / { uwsgi_pass unix:///tmp/pyapp.socket; include uwsgi_params; proxy_read_timeout 120; } # what to serve if upstream is not available or crashes #error_page 500 502 503 504 /media/50x.html; } ``` Here comes the problem. When running "ab" (ApacheBench) against the server I get the following results (nginx version: nginx/1.2.6, uwsgi version: 1.4.5): ``` Server Software: nginx/1.0.15 Server Hostname: pycms.com Server Port: 80 Document Path: /api/nodes/mostviewed/8/?format=json Document Length: 8696 bytes Concurrency Level: 100 Time taken for tests: 41.232 seconds Complete requests: 1000 Failed requests: 0 Write errors: 0 Total transferred: 8866000 bytes HTML transferred: 8696000 bytes Requests per second: 24.25 [#/sec] (mean) Time per request: 4123.216 [ms] (mean) Time per request: 41.232 [ms] (mean, across all concurrent requests) Transfer rate: 209.99 [Kbytes/sec] received ``` While running at a concurrency level of 500: ``` Concurrency Level: 500 Time taken for tests: 2.175 seconds Complete requests: 1000 Failed requests: 50 (Connect: 0, Receive: 0, Length: 50, Exceptions: 0) Write errors: 0 Non-2xx responses: 950 Total transferred: 629200 bytes HTML transferred: 476300 bytes Requests per second: 459.81 [#/sec] (mean) Time per request: 1087.416 [ms] (mean) Time per request: 2.175 [ms] (mean, across all concurrent requests) Transfer rate: 282.53 [Kbytes/sec] received ``` As you can see... all requests on the server fail with either timeout errors or "Client prematurely disconnected" or: ``` writev(): Broken pipe [proto/uwsgi.c line 124] during GET /api/nodes/mostviewed/9/?format=json ``` Here's a little bit more about my application: basically, it's a collection of models that reflect MySQL tables which contain all the content. At the frontend, I have django-rest-framework, which serves JSON content to the clients. I've installed django-profiling & django-debug-toolbar to see what's going on. On django-profiling, here's what I get when running a single request: ``` Instance wide RAM usage Partition of a set of 147315 objects. Total size = 20779408 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 63960 43 5726288 28 5726288 28 str 1 36887 25 3131112 15 8857400 43 tuple 2 2495 2 1500392 7 10357792 50 dict (no owner) 3 615 0 1397160 7 11754952 57 dict of module 4 1371 1 1236432 6 12991384 63 type 5 9974 7 1196880 6 14188264 68 function 6 8974 6 1076880 5 15265144 73 types.CodeType 7 1371 1 1014408 5 16279552 78 dict of type 8 2684 2 340640 2 16620192 80 list 9 382 0 328912 2 16949104 82 dict of class <607 more rows. Type e.g.
'_.more' to view.> CPU Time for this request 11068 function calls (10158 primitive calls) in 0.064 CPU seconds Ordered by: cumulative time ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/django/views/generic/base.py:44(view) 1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/django/views/decorators/csrf.py:76(wrapped_view) 1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/rest_framework/views.py:359(dispatch) 1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/rest_framework/generics.py:144(get) 1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/rest_framework/mixins.py:46(list) 1 0.000 0.000 0.038 0.038 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:348(data) 21/1 0.000 0.000 0.038 0.038 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:273(to_native) 21/1 0.000 0.000 0.038 0.038 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:190(convert_object) 11/1 0.000 0.000 0.036 0.036 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:303(field_to_native) 13/11 0.000 0.000 0.033 0.003 /usr/lib/python2.6/site-packages/django/db/models/query.py:92(__iter__) 3/1 0.000 0.000 0.033 0.033 /usr/lib/python2.6/site-packages/django/db/models/query.py:77(__len__) 4 0.000 0.000 0.030 0.008 /usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py:794(execute_sql) 1 0.000 0.000 0.021 0.021 /usr/lib/python2.6/site-packages/django/views/generic/list.py:33(paginate_queryset) 1 0.000 0.000 0.021 0.021 /usr/lib/python2.6/site-packages/django/core/paginator.py:35(page) 1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/core/paginator.py:20(validate_number) 3 0.000 0.000 0.020 0.007 /usr/lib/python2.6/site-packages/django/core/paginator.py:57(_get_num_pages) 4 0.000 0.000 0.020 0.005 /usr/lib/python2.6/site-packages/django/core/paginator.py:44(_get_count) 1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/query.py:340(count) 1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/sql/query.py:394(get_count) 1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/query.py:568(_prefetch_related_objects) 1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/query.py:1596(prefetch_related_objects) 4 0.000 0.000 0.020 0.005 /usr/lib/python2.6/site-packages/django/db/backends/util.py:36(execute) 1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/sql/query.py:340(get_aggregation) 5 0.000 0.000 0.020 0.004 /usr/lib64/python2.6/site-packages/MySQLdb/cursors.py:136(execute) 2 0.000 0.000 0.020 0.010 /usr/lib/python2.6/site-packages/django/db/models/query.py:1748(prefetch_one_level) 4 0.000 0.000 0.020 0.005 /usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py:112(execute) 5 0.000 0.000 0.019 0.004 /usr/lib64/python2.6/site-packages/MySQLdb/cursors.py:316(_query) 60 0.000 0.000 0.018 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:231(iterator) 5 0.012 0.002 0.015 0.003 /usr/lib64/python2.6/site-packages/MySQLdb/cursors.py:278(_do_query) 60 0.000 0.000 0.013 0.000 /usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py:751(results_iter) 30 0.000 0.000 0.010 0.000 /usr/lib/python2.6/site-packages/django/db/models/manager.py:115(all) 50 0.000 0.000 0.009 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:870(_clone) 51 0.001 0.000 0.009 0.000 
/usr/lib/python2.6/site-packages/django/db/models/sql/query.py:235(clone) 4 0.000 0.000 0.009 0.002 /usr/lib/python2.6/site-packages/django/db/backends/__init__.py:302(cursor) 4 0.000 0.000 0.008 0.002 /usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py:361(_cursor) 1 0.000 0.000 0.008 0.008 /usr/lib64/python2.6/site-packages/MySQLdb/__init__.py:78(Connect) 910/208 0.003 0.000 0.008 0.000 /usr/lib64/python2.6/copy.py:144(deepcopy) 22 0.000 0.000 0.007 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:619(filter) 22 0.000 0.000 0.007 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:633(_filter_or_exclude) 20 0.000 0.000 0.005 0.000 /usr/lib/python2.6/site-packages/django/db/models/fields/related.py:560(get_query_set) 1 0.000 0.000 0.005 0.005 /usr/lib64/python2.6/site-packages/MySQLdb/connections.py:8() ``` ..etc However, django-debug-toolbar shows the following: ``` Resource Usage Resource Value User CPU time 149.977 msec System CPU time 119.982 msec Total CPU time 269.959 msec Elapsed time 326.291 msec Context switches 11 voluntary, 40 involuntary and 5 queries in 27.1 ms ``` The problem is that "top" shows the load average rising quickly, and ApacheBench, which I ran both on the local server and from a remote machine within the network, shows that I am not serving many requests per second. What is the problem? This is as far as I could get when profiling the code, so it would be appreciated if someone could point out what I am doing wrong here. **Edit (23/02/2013): Adding more details based on Andrew Alcock's answer:** The points that require my attention / answer are: (3)(3) I've executed "show global variables" on MySQL and found out that the MySQL configuration has 151 for the max\_connections setting, which is more than enough to serve the workers I am starting for uwsgi. (3)(4)(2) The single request I am profiling is the heaviest one. It executes 4 queries according to django-debug-toolbar. What happens is that the queries run in 3.71, 2.83, 0.88, and 4.84 ms respectively. (4) Here you're referring to memory paging? If so, how could I tell? (5) On 16 workers, at a concurrency rate of 100 and 1000 requests, the load average goes up to ~12. I ran the tests with different numbers of workers (concurrency level is 100): 1. 1 worker, load average ~ 1.85, 19 reqs / second, Time per request: 5229.520, 0 non-2xx 2. 2 workers, load average ~ 1.5, 19 reqs / second, Time per request: 516.520, 0 non-2xx 3. 4 workers, load average ~ 3, 16 reqs / second, Time per request: 5929.921, 0 non-2xx 4. 8 workers, load average ~ 5, 18 reqs / second, Time per request: 5301.458, 0 non-2xx 5. 16 workers, load average ~ 19, 15 reqs / second, Time per request: 6384.720, 0 non-2xx As you can see, the more workers we have, the more load we have on the system. I can see in uwsgi's daemon log that the response time in milliseconds increases when I increase the number of workers. On 16 workers, running 500 concurrency-level requests, uwsgi starts logging the errors: ``` writev(): Broken pipe [proto/uwsgi.c line 124] ``` Load goes up to ~10 as well, and the tests don't take much time, because 923 out of 1000 responses are non-2xx, which is why the responses here are quite fast: they're almost empty. This is also a reply to your point #4 in the summary. Assuming that what I am facing here is OS latency due to I/O and networking, what is the recommended action to scale this up? New hardware? A bigger server? Thanks
2013/02/19
[ "https://Stackoverflow.com/questions/14962289", "https://Stackoverflow.com", "https://Stackoverflow.com/users/202690/" ]
**EDIT 1** Having seen the comment that you have 1 virtual core, I'm adding commentary on all relevant points **EDIT 2** More information from Maverick, so I'm eliminating ideas ruled out and developing the confirmed issues. **EDIT 3** Filled out more details about the uwsgi request queue and scaling options. Improved grammar. **EDIT 4** Updates from Maverick and minor improvements Comments are too small, so here are some thoughts: 1. Load average is basically how many processes are running on or waiting for CPU attention. For a perfectly loaded system with 1 CPU core, the load average should be 1.0; for a 4 core system, it should be 4.0. The moment you run the web test, the threading rockets and you have a *lot* of processes waiting for CPU. Unless the load average exceeds the number of CPU cores by a significant margin, it is not a concern 2. The first 'Time per request' value of 4s correlates to the length of the request queue - 1000 requests dumped on Django nearly instantaneously and took on average 4s to service, about 3.4s of which were waiting in a queue. This is due to the very heavy mismatch between the number of requests (100) vs. the number of processors (16) causing 84 of the requests to be waiting for a processor at any one moment. 3. Running at a concurrency of 100, the tests take 41 seconds at 24 requests/sec. You have 16 processes (threads), so each request is processed in about 700ms. Given your type of transaction, that is a *long* time per request. This may be because: 1. The CPU cost of each request is high in Django (which is highly unlikely given the low CPU value from the debug toolbar) 2. The OS is task switching a lot (especially if the load average is higher than 4-8), and the latency is purely down to having too many processes. 3. There are not enough DB connections serving the 16 processes, so processes are waiting for one to become available. Do you have at least one connection available per process? 4. There is *considerable* latency around the DB, either: 1. Tens of small requests each taking, say, 10ms, most of which is networking overhead. If so, can you introduce caching or reduce the SQL calls to a smaller number? Or 2. One or a couple of requests are taking 100's of ms. To check this, run profiling on the DB. If so, you need to optimise that request. 4. The split between system and user CPU cost is unusually high in system, although the total CPU is low. This implies that most of the work in Django is kernel related, such as networking or disk. In this scenario, it might be network costs (e.g. receiving and sending HTTP requests and receiving and sending requests to the DB). Sometimes this will be high because of *paging*. If there's no paging going on, then you probably don't have to worry about this at all. 5. You have set the processes at 16, but have a high load average (how high you don't state). Ideally you should always have at least *one* process waiting for CPU (so that CPUs don't spin idly). Processes here don't seem CPU bound, but have a significant latency, so you need more processes than cores. How many more? Try running uwsgi with different numbers of processes (1, 2, 4, 8, 12, 16, 24, etc) until you have the best throughput. If the latency of the average process changes, you will need to adjust this again. 6. The 500 concurrency level definitely is a problem, but is it the client or the server? The report says 50 (out of 100) had the incorrect content-length, which implies a server problem. The non-2xx also seems to point there.
Is it possible to capture the non-2xx responses for debugging? Stack traces or the specific error messages would be incredibly useful (EDIT: this is caused by the uwsgi request queue running with its default value of 100). So, in summary: ![enter image description here](https://i.stack.imgur.com/hIK9U.png) 1. Django seems fine 2. Mismatch between concurrency of load test (100 or 500) vs. processes (16): You're pushing way too many concurrent requests into the system for the number of processes to handle. Once you are above the number of processes, all that will happen is that you will lengthen the HTTP request queue in the web server 3. There is a large latency, so either 1. Mismatch between processes (16) and CPU cores (1): If the load average is >3, then it's probably too many processes. Try again with a smaller number of processes 1. Load average > 2 -> try 8 processes 2. Load average > 4 -> try 4 processes 3. Load average > 8 -> try 2 processes 2. If the load average is <3, it may be in the DB, so profile the DB to see whether there are loads of small requests (additively causing the latency) or one or two SQL statements are the problem 4. Without capturing the failed responses, there's not much I can say about the failures at 500 concurrency **Developing ideas** Your load averages >10 on a single-core machine are *really* nasty and (as you observe) lead to a lot of task switching and general slow behaviour. I personally don't remember seeing a machine with a load average of 19 (which you have for 16 processes) - congratulations for getting it so high ;) The DB performance is great, so I'd give that an all-clear right now. **Paging**: To answer your question on how to see paging - you can detect OS paging in several ways. For example, in top, the header has page-ins and -outs (see the last line): ``` Processes: 170 total, 3 running, 4 stuck, 163 sleeping, 927 threads 15:06:31 Load Avg: 0.90, 1.19, 1.94 CPU usage: 1.37% user, 2.97% sys, 95.65% idle SharedLibs: 144M resident, 0B data, 24M linkedit. MemRegions: 31726 total, 2541M resident, 120M private, 817M shared. PhysMem: 1420M wired, 3548M active, 1703M inactive, 6671M used, 1514M free. VM: 392G vsize, 1286M framework vsize, 1534241(0) pageins, 0(0) pageouts. Networks: packets: 789684/288M in, 912863/482M out. Disks: 739807/15G read, 996745/24G written. ``` **Number of processes**: In your current configuration, the number of processes is *way* too high. **Scale the number of processes back to 2.** We might bring this value up later, depending on shifting further load off this server. **Location of Apache Benchmark**: The load average of 1.85 for one process suggests to me that you are running the load generator on the same machine as uwsgi - is that correct? If so, you really need to run this from another machine, otherwise the test runs are not representative of actual load - you're taking memory and CPU from the web processes for use in the load generator. In addition, the load generator's 100 or 500 threads will generally stress your server in a way that does not happen in real life. Indeed this might be the reason the whole test fails. **Location of the DB**: The load average for one process also suggests that you are running the DB on the same machine as the web processes - is this correct? If I'm correct about the DB, then the first and best way to start scaling is to move the DB to another machine. We do this for a couple of reasons: 1. A DB server needs a different hardware profile from a processing node: 1.
Disk: DB needs a lot of fast, redundant, backed-up disk, and a processing node needs just a basic disk 2. CPU: A processing node needs the fastest CPU you can afford whereas a DB machine can often make do without (often its performance is gated on disk and RAM) 3. RAM: a DB machine generally needs as much RAM as possible (and the fastest DB has *all* its data in RAM), whereas many processing nodes need much less (yours needs about 20MB per process - very small) 4. Scaling: **Atomic** DBs scale best by having monster machines with many CPUs whereas the web tier (not having state) can scale by plugging in many identical small boxen. 2. CPU affinity: It's better for the CPU to have a load average of 1.0 and processes to have affinity to a single core. Doing so maximizes the use of the CPU cache and minimizes task switching overheads. By separating the DB and processing nodes, you are enforcing this affinity in HW. **500 concurrency with exceptions** The request queue in the diagram above is at most 100 - if uwsgi receives a request when the queue is full, the request is rejected with a 5xx error. I think this was happening in your 500 concurrency load test - basically the queue filled up with the first 100 or so threads, then the other 400 threads issued the remaining 900 requests and received immediate 5xx errors. To handle 500 requests per second you need to ensure two things: 1. The Request Queue size is configured to handle the burst: Use the `--listen` argument to `uwsgi` 2. The system can handle a throughput above 500 requests per second if 500 is a normal condition, or a bit below if 500 is a peak. See scaling notes below. I imagine that uwsgi has the queue set to a smaller number to better handle DDoS attacks; if placed under huge load, most requests immediately fail with almost no processing, allowing the box as a whole to still be responsive to the administrators. **General advice for scaling a system** Your most important consideration is probably to **maximize throughput**. Another possible goal is to minimize response time, but I won't discuss this here. In maximizing throughput, you are trying to maximize the *system*, not individual components; some local performance decreases might improve overall system throughput (for example, making a change that happens to add latency in the web tier *in order to improve performance of the DB* is a net gain). Onto specifics: 1. **Move the DB to a separate machine**. After this, profile the DB during your load test by running `top` and your favorite MySQL monitoring tool. You need to be able to profile the DB in isolation. Moving the DB to a separate machine will introduce some additional latency (several ms) per request, so expect to slightly increase the number of processes at the web tier to keep the same throughput. 2. Ensure that the `uwsgi` request queue is large enough to handle a burst of traffic using the `--listen` argument. This should be several times the maximum steady-state requests-per-second your system can handle. 3. On the web/app tier: **Balance the number of processes with the number of CPU cores** and the inherent latency in the process. Too many processes slow performance, too few mean that you'll never fully utilize the system resources. There is no fixed balancing point, as every application and usage pattern is different, so benchmark and adjust. As a guide, use the processes' latency: if each task has: * 0% latency, then you need 1 process per core * 50% latency (i.e. 
the CPU time is half the actual time), then you need 2 processes per core * 67% latency, then you need 3 processes per core 4. Check `top` during the test to ensure that you are above 90% CPU utilisation (for every core) *and* you have a load average a little above 1.0. If the load average is higher, scale back the processes. If all goes well, at some point you won't be able to achieve this target, and the DB might now be the bottleneck 5. At some point you will need more power in the web tier. You can either choose to add more CPU to the machine (relatively easy) and so add more processes, **and/or** you can add in more processing nodes (horizontal scalability). The latter can be achieved in uwsgi using the method discussed [here](https://stackoverflow.com/a/15050495/1395668) by [Łukasz Mierzwa](https://stackoverflow.com/users/1154047/ukasz-mierzwa)
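
To make the latency rule of thumb above concrete, here is a minimal Python sketch of the arithmetic (my own helper, not part of the original answer; the timing numbers are made-up examples):

```py
# Rough sizing helper for the "processes per core" rule of thumb.
# request_ms: measured average wall-clock time per request
# cpu_ms: the CPU portion of that time (e.g. from the debug toolbar)
def processes_per_core(request_ms, cpu_ms):
    # 0% latency -> 1 process/core, 50% -> 2, 67% -> 3, ...
    # which works out to request_ms / cpu_ms processes per core.
    return max(1, int(round(float(request_ms) / cpu_ms)))

print(processes_per_core(100, 100))  # pure CPU -> 1 process per core
print(processes_per_core(100, 50))   # 50% latency -> 2 processes per core
print(processes_per_core(700, 50))   # mostly waiting (e.g. on the DB) -> 14
```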
Adding more workers and getting fewer r/s means that your request "is pure CPU" and there are no IO waits that another worker could use to serve another request. If you want to scale, you will need to use another server with more (or faster) CPUs. However, this is a synthetic test; the number of r/s you get is the upper bound for the exact request that you are testing. Once in production there are many more variables that can affect performance.
12,756
47,943,854
I'm new to the waf build tool and I've googled for answers, but found only a few unhelpful links. Does anyone know how to detect the operating system in a wscript? As a wscript is essentially a Python script, I suppose I could use the `os` package?
2017/12/22
[ "https://Stackoverflow.com/questions/47943854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5556905/" ]
Don't use the `os` module, instead use the `DEST_*` variables: ```py ctx.load('compiler_c') print (ctx.env.DEST_OS, ctx.env.DEST_CPU, ctx.env.DEST_BINFMT) ``` On my machine this would print `('linux', 'x86_64', 'elf')`. Then you can dispatch on that.
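
For illustration, a minimal sketch of what such a dispatch could look like inside a wscript's `configure` (the `PLATFORM_*` define names are my own made-up examples, not waf built-ins):

```py
def configure(ctx):
    ctx.load('compiler_c')
    if ctx.env.DEST_OS == 'win32':
        ctx.env.append_value('DEFINES', ['PLATFORM_WINDOWS'])
    elif ctx.env.DEST_OS == 'darwin':
        ctx.env.append_value('DEFINES', ['PLATFORM_MACOS'])
    else:
        # covers 'linux' and other elf platforms
        ctx.env.append_value('DEFINES', ['PLATFORM_POSIX'])
```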
You can use `import` at every point where you could use it in any other Python script. I prefer using `platform` to write an OS-agnostic function instead of evaluating some attributes of `os`. Writing the [Build-related commands](https://waf.io/book/#_build_related_commands) example from the [waf book](https://waf.io/book/) OS-agnostically could look something like this: ```py import platform top = '.' out = 'build_directory' def configure(ctx): pass def build(ctx): if platform.system().lower().startswith('win'): cp = 'copy' else: cp = 'cp' ctx(rule=cp+' ${SRC} ${TGT}', source='foo.txt', target='bar.txt') ```
12,759
37,400,078
I am trying to translate an if-else statement written in c++ to a corresponding chunk of python code. For a C++ map dpt2, I am attempting to translate: ``` if (dpt2.find(key_t) == dpt2.end()) { dpt2[key_t] = rat; } else { dpt2.find(key_t) -> second = dpt2.find(key_t) -> second + rat; } ``` I'm not super familiar with C++, but my understanding is that the -> operator is equivalent to a method call for a class that is being referenced by a pointer. My question is how do I translate this code into something that can be handled by an OrderedDict() object in python?
2016/05/23
[ "https://Stackoverflow.com/questions/37400078", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3396878/" ]
First of all, in C++ you'd write that as: ``` dpt[key_t] += rat; ``` That will do only one map lookup - as opposed to the code you wrote which does 2 lookups in the case that `key_t` isn't in the map and 3 lookups in the case that it is. --- And in Python, you'd write it much the same way - assuming you declare `dpt` to be the right thing: ``` dpt = collections.defaultdict(int) ... dpt[key_t] += rat ```
Something like this? ``` dpt2[key_t] = dpt2.get(key_t, 0) + rat ```
12,760
17,093,322
I have a large data set of urls and I need a way to parse words from the urls eg: ``` realestatesales.com -> {"real","estate","sales"} ``` I would prefer to do it in python. This seems like it should be possible with some kind of english language dictionary. There might be some ambiguous cases, but I feel like there should be a solution out there somewhere.
2013/06/13
[ "https://Stackoverflow.com/questions/17093322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1893354/" ]
This is a problem of word segmentation, and an efficient dynamic programming solution exists. [This](http://thenoisychannel.com/2011/08/08/retiring-a-great-interview-problem/) page discusses how you could implement it. I have also answered this question on SO before, but I can't find a link to the answer. Please feel free to edit my post if you do.
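
To sketch the idea (my own minimal illustration, not the linked implementation): keep, for every prefix of the string, the best segmentation found so far, and extend it one dictionary word at a time.

```py
def segment(text, words):
    # best[i] holds a segmentation of text[:i] with the fewest pieces, or None
    best = [None] * (len(text) + 1)
    best[0] = []
    for i in range(1, len(text) + 1):
        for j in range(i):
            if best[j] is not None and text[j:i] in words:
                candidate = best[j] + [text[j:i]]
                if best[i] is None or len(candidate) < len(best[i]):
                    best[i] = candidate
    return best[len(text)]

print(segment("realestatesales", {"real", "estate", "sales", "sale", "state"}))
# -> ['real', 'estate', 'sales']
```

A real URL segmenter would score candidates by word frequency rather than piece count, but the dynamic programming skeleton is the same.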
This might be of use to you: <http://www.clips.ua.ac.be/pattern> It's a set of modules which, depending on your system, might already be installed. It does all kinds of interesting stuff, and even if it doesn't do exactly what you need it might get you started on the right path.
12,761
14,441,412
I have python scripts and shell scripts in the same folder which both need configuration. I currently have a config.py for my python scripts but I was wondering if it is possible to have a single configuration file which can be easily read by both python scripts and also shell scripts. Can anyone give an example of the format of a configuration file best suited to being read by both python and shell.
2013/01/21
[ "https://Stackoverflow.com/questions/14441412", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1738522/" ]
I think the simplest solution will be: ``` key1="value1" key2="value2" key3="value3" ``` in [shell](/questions/tagged/shell "show questions tagged 'shell'") you just have to source this env file, and in Python it's easy to parse. Spaces are not allowed around `=`. For Python, see this post: [Emulating Bash 'source' in Python](https://stackoverflow.com/questions/3503719/emulating-bash-source-in-python)
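
As a minimal illustration of the Python side (my own sketch; the filename `config.env` is just an example), without emulating `source`:

```py
config = {}
with open('config.env') as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blanks and comments
        key, _, value = line.partition('=')
        config[key] = value.strip('"')

print(config)  # {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}
```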
This is valid in both shell and python: ``` NUMBER=42 STRING="Hello there" ``` what else do you need?
12,763
680,320
Consider the following skeleton of a models.py for a space conquest game: ``` class Fleet(models.Model): game = models.ForeignKey(Game, related_name='planet_set') owner = models.ForeignKey(User, related_name='planet_set', null=True, blank=True) home = models.ForeignKey(Planet, related_name='departing_fleet_set') dest = models.ForeignKey(Planet, related_name='arriving_fleet_set') ships = models.IntegerField() class Planet(models.Model): game = models.ForeignKey(Game, related_name='planet_set') owner = models.ForeignKey(User, related_name='planet_set', null=True, blank=True) name = models.CharField(max_length=250) ships = models.IntegerField() ``` I have many such data models for a project I'm working on, and I change the state of the game based on somewhat complicated interactions between various data objects. I want to avoid lots of unnecessary calls to the database, so once per turn, I do something like 1. Query all the fleets, planets, and other objects from the database and cache them as python objects 2. Process the game objects, resolving the state of the game 3. Save them back in the database This model seems to totally break down when using ForeignKey objects. For example, when a new fleet departs a planet, I have a line that looks something like this: ``` fleet.home.ships -= fleet.ships ``` After this line runs, I have other code that alters the number of ships at each of the planets, including the planet fleet.home. Unfortunately, the changes made in the above line are not reflected in the QuerySet of planets that I obtained earlier, so that when I save all the planets at the end of the turn, the changes to fleet.home's ships get overwritten. Is there some better way of handling this situation? Or is this just how all ORMs are?
2009/03/25
[ "https://Stackoverflow.com/questions/680320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/51100/" ]
Django's ORM does not implement an [identity map](http://en.wikipedia.org/wiki/Identity_map) (it's in the [ticket tracker](http://code.djangoproject.com/ticket/17), but it isn't clear if or when it will be implemented; at least one core Django committer has [expressed opposition to it](http://spreadsheets.google.com/ccc?key=pSqnCvef6OXmGWQ9qbEVMeA)). This means that if you arrive at the same database object through two different query paths, you are working with different Python objects in memory. This means that your design (load everything into memory at once, modify a lot of things, then save it all back at the end) is unworkable using the Django ORM. First because it will often waste lots of memory loading in duplicate copies of the same object, and second because of "overwriting" issues like the one you're running into. You either need to rework your design to avoid these issues (either be careful to work with only one QuerySet at a time, saving anything modified before you make another query; or if you load several queries, look up all relations manually, don't ever traverse ForeignKeys using the convenient attributes for them), or use an alternative Python ORM that implements identity map. [SQLAlchemy](http://www.sqlalchemy.org/) is one option. Note that this doesn't mean Django's ORM is "bad." It's optimized for the case of web applications, where these kinds of issues are rare (I've done web development with Django for years and never once had this problem on a real project). If your use case is different, you may want to choose a different ORM.
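
As an illustration of the "look up all relations manually" option (a sketch built on the question's models, assuming a `game` instance; it is not an official Django pattern): load each table once, then resolve foreign keys through the raw `*_id` attributes so every planet exists exactly once in memory.

```py
# One query per table; planets keyed by primary key.
planets = {p.pk: p for p in Planet.objects.filter(game=game)}
fleets = list(Fleet.objects.filter(game=game))

for fleet in fleets:
    # fleet.home would fetch a *second* Python copy of the planet;
    # use the cached object via the FK's raw id attribute instead.
    home = planets[fleet.home_id]
    home.ships -= fleet.ships

# All mutations hit the same objects, so saving is safe.
for planet in planets.values():
    planet.save()
```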
This is perhaps what you are looking for: <https://web.archive.org/web/20121126091406/http://simonwillison.net/2009/May/7/mmalones/>
12,769
41,931,719
I am learning Python and I am reading the "Think Python" and doing some simple exercises included in the book. I am asked "Define a new function called do\_four that takes a function object and a value and calls the function four times, passing the value as a parameter." I am trying to compose this function with one statement by calling a function already defined called do\_twice() and test it with a function called print\_double(). Here is the code: ``` def do_twice(f, x): f(x) f(x) def do_four(f, v): do_twice(do_twice(f, v), v) def print_twice(s): print s print s s = 'abc' do_four(print_twice, s) ``` This code produces an error: ``` abc abc abc abc --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-41-95b513e5e0ee> in <module>() ----> 1 do_four(print_twice, s) <ipython-input-40-100f8587f50a> in do_four(f, v) 1 def do_four(f, v): ----> 2 do_twice(do_twice(f, v), v) <ipython-input-38-7143620502ce> in do_twice(f, x) 1 def do_twice(f, x): ----> 2 f(x) 3 f(x) TypeError: 'NoneType' object is not callable ``` In trying to understand what is happening I tried to construct a Stack Diagram as described in the book. Here it is: [![enter image description here](https://i.stack.imgur.com/EsMCt.png)](https://i.stack.imgur.com/EsMCt.png) Could you explain the error message and comment on the Stack Diagram? Your advice will be appreciated.
2017/01/30
[ "https://Stackoverflow.com/questions/41931719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7128498/" ]
`do_twice` gets a function as its first argument, and doesn't return anything. So there is no reason to pass `do_twice` the result of `do_twice`. You need to pass it a function. This would do what you meant: ``` def do_four(f, v): do_twice(f, v) do_twice(f, v) ``` Very similar to how you defined `do_twice` in terms of `f`
> > > ``` > do_twice(do_twice(f, v), v) > ^^^^^^^^^^^^^^ > > ``` > > Slightly rewritten: ``` result = do_twice(f, v) do_twice(result, v) ``` You're passing the return value of `do_twice(...)` as the first parameter to `do_twice(...)`. That parameter is supposed to be a function object. `do_twice` does not *return* anything, so `result` is `None`, which you're passing instead of the expected function object. There's no point in nesting the two `do_twice` in any way here.
12,770
24,029,634
I ran into this today and can't figure out why. I have several functions chained together that perform some time consuming operations as part of a larger pipeline. I've included these here, pared down to a test example, as best as I could. The issue is that when I call a function directly, I get the expected output (e.g., 5 different trees). However, when I call the same function in a multiprocessing pool with apply\_async (or apply, doesn't matter), I get 5 trees, but they are all the same. I've documented this in an IPython notebook, which can be viewed here: <http://nbviewer.ipython.org/gist/cfriedline/0e275d528ff1a8d674c6> In cell 91, I create 5 trees (each with 10 tips), and return two lists. The first containing the non-multiprocessing trees, and the second from apply\_async. In cell 92, you can see the results of creating trees without multiprocessing, and in 93, with multiprocessing. What I expect is that there would be a total of 10 different trees between the two tests, but instead all of the multiprocessing trees are identical. Makes little sense to me. Relevant versions of things: * Linux 2.6.18-238.12.1.el5 x86\_64 GNU/Linux * Python 2.7.6 :: Anaconda 1.9.2 (64-bit) * IPython 2.0.0 * Rpy2 2.3.9 Thanks! Chris
2014/06/04
[ "https://Stackoverflow.com/questions/24029634", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1027577/" ]
I solved this one, with a point in the right direction from @mgilson. In fact, it was a random number problem, just not in python - in R (sigh). The state of R is copied when the Pool is created, meaning so is its random seed. To fix, just a little rpy2 as below calling R's set.seed function (with some process specific stuff for good measure): ``` def create_tree(num_tips, type): """ creates the taxa tree in R @param num_tips: number of taxa to create @param type: type for naming (e.g., 'taxa') @return: a dendropy Tree @rtype: dendropy.Tree """ r = rpy2.robjects.r set_seed = r('set.seed') set_seed(int((time.time()+os.getpid()*1000))) rpy2.robjects.globalenv['numtips'] = num_tips rpy2.robjects.globalenv['treetype'] = type name = _get_random_string(20) if type == "T": r("%s = rtree(numtips, rooted=T, tip.label=paste(treetype, seq(1:(numtips)), sep=''))" % name) else: r("%s = rtree(numtips, rooted=F, tip.label=paste(treetype, seq(1:(numtips)), sep=''))" % name) tree = r[name] return ape_to_dendropy(tree) ```
I'm not 100% familiar with these libraries, however, on Linux, (IIRC) `multiprocessing` uses `os.fork`. This means that the state of the random module (which you're using) will also be forked and that each of your processes will generate *the same sequence of random numbers* resulting in a not-so-random `_get_random_string` function. If I'm right, and you make the pool smaller than the number of trees that you want, you should see that you get groups of N identical trees (where N is the number of pools). I think that probably the ideal solution is to [re-seed](https://docs.python.org/2/library/random.html#random.seed) the random number generator inside of each of the processes. It's unlikely that they'll run at *exactly* the same time, so you should get differing results.
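
A minimal sketch of that re-seeding idea for Python's own `random` module, using a `Pool` initializer so each forked worker diverges (my own illustration; for the R generator you would additionally call `set.seed` through rpy2, as the other answer shows):

```py
import os
import random
from multiprocessing import Pool

def init_worker():
    # fresh entropy per process instead of the state inherited via fork
    random.seed(os.urandom(16))

def sample(n):
    return [round(random.random(), 3) for _ in range(n)]

if __name__ == '__main__':
    pool = Pool(4, initializer=init_worker)
    print(pool.map(sample, [3, 3, 3, 3]))  # four differing triples
    pool.close()
    pool.join()
```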
12,771
1,770,789
I'm trying to write a python script which follows the common unix command line pattern of accepting input from stdin if no file name is given. This is what I've been using: ``` if __name__ == "__main__": if len(sys.argv) > 1: stream = open(sys.argv[1]) else: stream = sys.stdin ``` Is there a more pythonic way to do that?
2009/11/20
[ "https://Stackoverflow.com/questions/1770789", "https://Stackoverflow.com", "https://Stackoverflow.com/users/170340/" ]
The [fileinput](http://docs.python.org/library/fileinput.html) module is perfect for this.
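
For example, a minimal sketch (my own, not from the module docs) that echoes each input line, reading the files named on the command line or stdin when none are given:

```py
import fileinput

# fileinput.input() iterates over sys.argv[1:], falling back to stdin.
for line in fileinput.input():
    print(line.rstrip('\n'))
```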
similar but one-line solution ``` stream = sys.argv[1] if len(sys.argv)>1 else sys.stdin ```
12,772
45,703,959
When trying to deploy an Flask application to my LAMP server, I got an error from [flipflop](https://github.com/Kozea/flipflop), a FastCGI/WSGI gateway which enables my application to speak the FastCGI protocol. > > ~/minimal/run.py > > > ``` from flask import Flask from flipflop import WSGIServer app = Flask(__name__) @app.route('/') def hello_world(): return 'hello, world' if __name__ == '__main__': WSGIServer(app).run() ``` Relevant part of the Apache configuration file, i.e. `/etc/httpd/conf/httpd.conf`: ``` <VirtualHost *:80> ScriptAlias / /home/apps/minimal/run.py ErrorLog /var/log/httpd/error_log </VirtualHost> ``` Error report by Apache/2.2.15: ``` [apps@kernod0 ~]$ sudo head -n 20 /var/log/httpd/error_log [sudo] password for apps: [Wed Aug 16 16:39:16 2017] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Wed Aug 16 16:39:16 2017] [notice] Digest: generating secret for digest authentication ... [Wed Aug 16 16:39:16 2017] [notice] Digest: done [Wed Aug 16 16:39:16 2017] [notice] Apache/2.2.15 (Unix) DAV/2 mod_fcgid/2.3.9 configured -- resuming normal operations [Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] Traceback (most recent call last): [Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] File "/home/apps/minimal/run.py", line 12, in <module> [Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] WSGIServer(app).run() [Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] File "/home/apps/minimal/flask/lib/python2.6/site-packages/flipflop.py", line 938, in run [Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] sock.getpeername() [Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] socket.error: [Errno 88] Socket operation on non-socket [Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] Premature end of script headers: run.py [Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] Traceback (most recent call last): [Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] File "/home/apps/minimal/run.py", line 12, in <module> [Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] WSGIServer(app).run() [Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] File "/home/apps/minimal/flask/lib/python2.6/site-packages/flipflop.py", line 938, in run [Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] sock.getpeername() [Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] socket.error: [Errno 88] Socket operation on non-socket [Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] Premature end of script headers: run.py [Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.205] Traceback (most recent call last): [Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.205] File "/home/apps/minimal/run.py", line 12, in <module> ``` --- In addition, even without using `flipflop`, it still doesn't work: > > ~/minimal/run.py > > > ``` from flask import Flask app = Flask(__name__) @app.route('/') def hello_world(): return 'hello, world' if __name__ == '__main__': app.run() ``` Error output: ``` [apps@kernod0 ~]$ sudo cat /var/log/httpd/error_log [Wed Aug 16 20:47:24 2017] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Wed Aug 16 20:47:24 2017] [notice] Digest: generating secret for digest authentication ... 
[Wed Aug 16 20:47:24 2017] [notice] Digest: done [Wed Aug 16 20:47:24 2017] [notice] Apache/2.2.15 (Unix) DAV/2 mod_fcgid/2.3.9 configured -- resuming normal operations [Wed Aug 16 20:47:33 2017] [error] [client 100.116.226.182] * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] Traceback (most recent call last): [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/run.py", line 11, in <module> [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] app.run() [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/flask/app.py", line 841, in run [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] run_simple(host, port, self, **options) [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 739, in run_simple [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] inner() [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 699, in inner [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] fd=fd) [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 593, in make_server [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] passthrough_errors, ssl_context, fd=fd) [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 504, in __init__ [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] HTTPServer.__init__(self, (host, int(port)), handler) [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/usr/lib64/python2.6/SocketServer.py", line 412, in __init__ [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] self.server_bind() [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/usr/lib64/python2.6/BaseHTTPServer.py", line 108, in server_bind [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] SocketServer.TCPServer.server_bind(self) [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/usr/lib64/python2.6/SocketServer.py", line 423, in server_bind [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] self.socket.bind(self.server_address) [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "<string>", line 1, in bind [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] socket [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] . 
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] error [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] : [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] [Errno 98] Address already in use [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] [Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] Premature end of script headers: run.py [Wed Aug 16 20:48:33 2017] [warn] [client 100.116.226.182] Timeout waiting for output from CGI script /home/apps/minimal/run.py [Wed Aug 16 20:48:33 2017] [error] [client 100.116.226.182] Script timed out before returning headers: run.py [Wed Aug 16 20:49:33 2017] [warn] [client 100.116.226.182] Timeout waiting for output from CGI script /home/apps/minimal/run.py ```
2017/08/16
[ "https://Stackoverflow.com/questions/45703959", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5399734/" ]
I've managed to run your example, but there is some tweaking involved to make it work. You might need to change paths on your system, because from your logs it seems that you're using a system that runs `python2.6` and an older `apache` version which still uses the `httpd.conf` file. If it is possible I would advise you to upgrade your environment. **Here is a step-by-step working solution:** 1. Install `virtualenvwrapper`: ``` sudo -EH pip2 install virtualenvwrapper ``` 2. Activate it: ``` source /usr/local/bin/virtualenvwrapper.sh ``` 3. Create a virtual env: ``` mkvirtualenv minimal ``` 4. Install `flask` and `flup`: ``` pip install -U flask flup ``` `flipflop` is not working for me, but as its README states > > This module is a simplified fork of flup, written by Allan Saddi. It only has the FastCGI part of the original module. > > > so you can safely use it. 5. Install `apache2`: ``` sudo apt-get install apache2 ``` 6. Install `libapache2-mod-fastcgi`: ``` sudo apt-get install libapache2-mod-fastcgi ``` 7. Create `/var/www/minimal/run.py`: ``` from flask import Flask app = Flask(__name__) @app.route('/') def hello_world(): return 'hello, world' ``` 8. Create `/var/www/minimal/minimal.fcgi`: ``` #!/usr/bin/python import sys import logging logging.basicConfig(stream=sys.stderr) activate_this = '/home/some_user/.virtualenvs/minimal/bin/activate_this.py' execfile(activate_this, dict(__file__=activate_this)) sys.path.insert(0,"/var/www/minimal/") from flup.server.fcgi import WSGIServer from run import app if __name__ == '__main__': WSGIServer(app).run() ``` 9. Make `minimal.fcgi` executable: ``` sudo chmod +x minimal.fcgi ``` 10. Create a `minimal.conf` file (in `/etc/apache2/sites-available` on my server): ``` FastCgiServer /var/www/minimal/minimal.fcgi -idle-timeout 300 -processes 5 <VirtualHost *:80> ServerName YOUR_IP_ADDRESS DocumentRoot /var/www/minimal/ AddHandler fastcgi-script fcgi ScriptAlias / /var/www/minimal/minimal.fcgi/ <Location /> SetHandler fastcgi-script </Location> </VirtualHost> ``` 11. Enable the new site: ``` sudo a2ensite minimal.conf ``` 12. Change `/var/www/` ownership to the `www-data` user: ``` sudo chown -R www-data:www-data /var/www/ ``` 13. Restart `apache2`: ``` sudo /etc/init.d/apache2 restart ``` And voila! 
:) If you visit your server address you should see `hello, world` in your browser: [![enter image description here](https://i.stack.imgur.com/F1GQN.png)](https://i.stack.imgur.com/F1GQN.png) Also when restarting `apache` you can view `FastCGI` starting in apache's `error.log`: ``` [Thu Aug 24 16:33:09.354544 2017] [mpm_event:notice] [pid 17375:tid 139752788969344] AH00491: caught SIGTERM, shutting down [Thu Aug 24 16:33:10.414829 2017] [mpm_event:notice] [pid 17548:tid 139700962228096] AH00489: Apache/2.4.18 (Ubuntu) mod_fastcgi/mod_fastcgi-SNAP-0910052141 configured -- resuming normal operations [Thu Aug 24 16:33:10.415033 2017] [core:notice] [pid 17548:tid 139700962228096] AH00094: Command line: '/usr/sbin/apache2' [Thu Aug 24 16:33:10.415651 2017] [:notice] [pid 17551:tid 139700962228096] FastCGI: process manager initialized (pid 17551) [Thu Aug 24 16:33:10.416135 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17556) [Thu Aug 24 16:33:11.416571 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17618) [Thu Aug 24 16:33:12.422058 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17643) [Thu Aug 24 16:33:13.422763 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17651) [Thu Aug 24 16:33:14.423536 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17659) ```
You can't run the fastcgi script from the terminal. This script is supposed to be executed by Apache. Typically you have it configured in a `ScriptAlias` directive in your Apache config file.
12,778
33,874,089
I am trying to integrate the Alipay Gateway with my website using [this](https://github.com/liuyug/django-alipay). I am getting the payment form, but on redirecting to Alipay's website I am getting the `ILLEGAL_PARTNER_EXTERFACE` error (pic attached). [![enter image description here](https://i.stack.imgur.com/3ppU1.png)](https://i.stack.imgur.com/3ppU1.png) [A few responses](https://wordpress.org/plugins/alipay-for-woocommerce/faq/) to the error online say the `payment type` is different for the testing environment. Can anyone give any pointers on how to solve this? Any other kit for Alipay integration with a `Django(python)` based website?
2015/11/23
[ "https://Stackoverflow.com/questions/33874089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3442820/" ]
According to the official documentation [here](https://cshall.alipay.com/support/help_detail.htm?help_id=397107), the possible reasons for that error code are: * You did not apply for this particular payment gateway type * You did apply for this payment gateway type, but it has not been approved yet * You did apply for this payment gateway type, but it has been suspended due to violation of ToS In your case, I guess it should be the first one. There are several gateway types: * Alipay\_Express (Alipay Express Checkout) * Alipay\_Secured (Alipay Secured Checkout) * Alipay\_Dual (Alipay Dual Function Checkout) * ... You need to make sure your AliPay account is a business one, because only with a business account can you use the Alipay Express gateway type. Regarding examples, you can check `liuyug/django-alipay`, which is pretty similar to `spookylukey/django-paypal`, assuming you have experience in integration with PayPal. *OT: Sorry for not providing the direct links to the GitHub repos mentioned above. StackOverflow kept saying that I need at least 10 reputation to post more than 2 links.*
Which Alipay gateway API are you using? It appears you have not applied for the relevant interface privilege, or the **partner\_id** param is incorrect. Whatever language you use, the integration is just based on a common HTTP request. Alipay provides a sandbox environment, but it uses a common **partner\_id**. As far as I know, no official Alipay Python SDK is provided.
12,781
56,768,320
It often occurs to me when I try to manipulate data, for example **"UnicodeDecodeError: 'gbk' codec can't decode byte 0x91 in position 2196: illegal multibyte sequence".** I have found a way to bypass this error, but my curiosity drives me to investigate what is in position 2196. ### **Here comes the question**: How do I understand the number 2196? I mean, what encoding should I use when counting 1, 2, ..., 2196? utf-8? gbk? binary? hex or something else? And how can I see the byte at that position without throwing an error? **Here is a code portion as an example:** ``` with open(r"G:\ETCData\6aMTC\2019-06-01.txt", "r") as fp: for i, line in enumerate(fp): if i == 6: pass UnicodeDecodeError Traceback (most recent call last) <ipython-input-2-6810d8c84b34> in <module>() 1 with open(r"G:\ETCData\6aMTC\2019-06-01.txt", "r") as fp: ----> 2 for i, line in enumerate(fp): 3 if i == 6: 4 pass UnicodeDecodeError: 'gbk' codec can't decode byte 0x91 in position 2196: illegal multibyte sequence ```
2019/06/26
[ "https://Stackoverflow.com/questions/56768320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6632083/" ]
You need to subscribe to the post observable returned by the `method` function. It is done like this. ``` this.method().subscribe( res => { // Handle success response here }, err => { // Handle error response here } ); ```
You are getting the 400 Bad Request error because the payload keys are mismatched with the middleware. Please pass the correct params into the Request object.
12,782
3,331,850
I generated a SQL script from a C# application on Windows 7. The name entries have utf8 characters. It works fine on a Windows machine where I use a python script to populate the db. Now the same script fails on a Linux platform, complaining about those special characters. Similar things happened when I generated an XML file containing utf chars on Windows 7, but it failed to show up in browsers (IE, Firefox.). I used to generate such scripts on Windows XP and they worked perfectly everywhere.
2010/07/26
[ "https://Stackoverflow.com/questions/3331850", "https://Stackoverflow.com", "https://Stackoverflow.com/users/243655/" ]
Please give a small example of a script with "utf8 characters" in the "name entries". Are you sure that they are `utf8` and not some windows encoding like `cp1252`? What makes you sure? Try this in Python at the command prompt: ``` ... python -c "print repr(open('small_script.sql', 'rb').read())" ``` The interesting parts of the output are where it uses `\xhh` (where h is any hex digit) to represent non-ASCII characters, e.g. `\xc3\xa2` is the UTF-8 encoding of the small a with circumflex accent. Show us a representative sample of such output. Also tell us the exact error message(s) that you get from that sample script. **Update:** It appears that you have data encoded in `cp1252` or similar (`Latin1` aka `ISO-8859-1` is as rare as hen's teeth on Windows). To get that into `UTF-8` using Python, you'd do `fixed_data = data.decode('cp1252').encode('utf8')`; I can't help you with C# -- you may like to ask a separate question about that.
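
For example, a small sketch of that conversion applied to a whole file (the filenames are made up):

```py
# Read the raw bytes, decode as cp1252, re-encode as UTF-8.
with open('script_cp1252.sql', 'rb') as src:
    data = src.read()

fixed_data = data.decode('cp1252').encode('utf8')

with open('script_utf8.sql', 'wb') as dst:
    dst.write(fixed_data)
```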
Assuming you're using python, make sure you are using [Unicode strings](http://evanjones.ca/python-utf8.html). For example: ``` s = "Hello world" # Regular String u = u"Hello Unicode world" # Unicode String ``` Edit: Here's an example of reading from a UTF-8 file from the linked site: ``` import codecs fileObj = codecs.open( "someFile", "r", "utf-8" ) u = fileObj.read() # Returns a Unicode string from the UTF-8 bytes in the file ```
12,784
63,397,618
I'm currently trying to run an application using Docker but get the following error message when I start the application: ```py error while loading shared libraries: libopencv_highgui.so.4.4: cannot open shared object file: No such file or directory ``` I assume that something is going wrong in the docker file and that the installation is not complete or correct. Therefore I have added the section about OpenCV at the end of the post. Did I miss an important step or an error in the dockerfile? ```py FROM nvidia/cuda:10.2-devel-ubuntu18.04 as TOOLKITS RUN apt-get update && apt-get install -y apt-utils # Install additional packages RUN apt-get install -y \ build-essential \ bzip2 \ checkinstall \ cmake \ curl \ gcc \ gfortran \ git \ pkg-config \ python3-pip \ python3-dev \ python3-numpy \ nano \ openexr \ unzip \ wget \ yasm FROM TOOLKITS as GIT_PULLS WORKDIR / RUN git clone https://github.com/opencv/opencv.git RUN git clone https://github.com/opencv/opencv_contrib.git FROM GIT_PULLS as OPENCV_PREPERATION RUN apt-get install -y \ libgtk-3-dev \ libavcodec-dev \ libavformat-dev \ libswscale-dev \ libv4l-dev \ libxvidcore-dev \ libx264-dev \ libjpeg-dev \ libpng-dev \ libtiff-dev \ libatlas-base-dev \ libtbb2 \ libtbb-dev \ libdc1394-22-dev FROM OPENCV_PREPERATION as OPENCV_CMAKE WORKDIR / RUN mkdir /opencv/build WORKDIR /opencv/build RUN cmake \ -DCMAKE_BUILD_TYPE=RELEASE \ -DCMAKE_INSTALL_PREFIX=/usr/local \ -DINSTALL_C_EXAMPLES=ON \ -DINSTALL_PYTHON_EXAMPLES=ON \ -DWITH_TBB=ON \ -DWITH_V4L=ON \ -DOPENCV_GENERATE_PKGCONFIG=ON \ -DWITH_OPENGL=ON \ -DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \ -DOPENCV_PC_FILE_NAME=opencv.pc \ -DBUILD_EXAMPLES=ON .. FROM OPENCV_CMAKE as BUILD_OPENCV_MAKE RUN make -j $(nproc) RUN make install FROM TOOLKITS COPY --from=XXX /opencv /opencv COPY --from=XXX /opencv_contrib /opencv_contrib ```
2020/08/13
[ "https://Stackoverflow.com/questions/63397618", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13460282/" ]
I was facing the same issue before when installing OpenCV in Docker with a Python image. You probably don't need this many dependencies, but it's an option. I also have a lightweight version that fits my case. Please give the following code a try: **Heavy-loaded version:** ``` FROM python:3.7 RUN apt-get update \ && apt-get install -y \ build-essential \ cmake \ git \ wget \ unzip \ yasm \ pkg-config \ libswscale-dev \ libtbb2 \ libtbb-dev \ libjpeg-dev \ libpng-dev \ libtiff-dev \ libavformat-dev \ libpq-dev \ && rm -rf /var/lib/apt/lists/* RUN pip install numpy WORKDIR / ENV OPENCV_VERSION="4.1.1" # install opencv-python from its source RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip \ && unzip ${OPENCV_VERSION}.zip \ && mkdir /opencv-${OPENCV_VERSION}/cmake_binary \ && cd /opencv-${OPENCV_VERSION}/cmake_binary \ && cmake -DBUILD_TIFF=ON \ -DBUILD_opencv_java=OFF \ -DWITH_CUDA=OFF \ -DWITH_OPENGL=ON \ -DWITH_OPENCL=ON \ -DWITH_IPP=ON \ -DWITH_TBB=ON \ -DWITH_EIGEN=ON \ -DWITH_V4L=ON \ -DBUILD_TESTS=OFF \ -DBUILD_PERF_TESTS=OFF \ -DCMAKE_BUILD_TYPE=RELEASE \ -DCMAKE_INSTALL_PREFIX=$(python3.7 -c "import sys; print(sys.prefix)") \ -DPYTHON_EXECUTABLE=$(which python3.7) \ -DPYTHON_INCLUDE_DIR=$(python3.7 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \ -DPYTHON_PACKAGES_PATH=$(python3.7 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \ .. \ && make install \ && rm /${OPENCV_VERSION}.zip \ && rm -r /opencv-${OPENCV_VERSION} RUN ln -s \ /usr/local/python/cv2/python-3.7/cv2.cpython-37m-x86_64-linux-gnu.so \ /usr/local/lib/python3.7/site-packages/cv2.so RUN apt-get --fix-missing update && apt-get --fix-broken install && apt-get install -y poppler-utils && apt-get install -y tesseract-ocr && \ apt-get install -y libtesseract-dev && apt-get install -y libleptonica-dev && ldconfig && apt install -y libsm6 libxext6 && apt install -y python-opencv ``` **Lightweight version:** ``` FROM python:3.7 RUN apt-get update -y RUN apt update && apt install -y libsm6 libxext6 ``` For my case, I ended up using the heavy-loaded version just to save some hassle, and both versions should work fine. For your reference, please also see [this link](https://stackoverflow.com/questions/63197519/tesseractnotfound-issue-when-containerizing-in-docker), and thanks to Neo Anderson's great help.
```sh
apt-get update -y
apt install -y libsm6 libxext6
apt update
pip install pyglview
apt install -y libgl1-mesa-glx
```
12,785
40,828,531
This is a little bit of a newbie question, I know, but I couldn't find an answer to it. I have made some websites that leverage the functionality of automatic emailing. I have made these websites using PHP. In every website I do, in the mailing part, I come across some "redundancies". Let me give an example, from the examples of the [PHPMailer](https://github.com/PHPMailer/PHPMailer) library: ``` $mail = new PHPMailer; $mail->isSMTP(); $mail->Host = 'mail.domail.com'; $mail->SMTPAuth = true; $mail->Username = 'someuser@domain.com'; // SMTP username $mail->Password = 'secret'; $mail->Port = 587; $mail->setFrom('someuser@domain.com', 'Mailer'); $mail->addAddress('to@gmail.com', 'Joe User'); // Add a recipient $mail->isHTML(true); $mail->Subject = 'Here is the subject'; $mail->Body = 'This is the HTML message body <b>in bold!</b>'; $mail->AltBody = 'This is the body in plain text for non-HTML mail clients'; ``` These two statements are where I thought there were redundancies: `$mail->Username = "someuser@domain.com; $mail->Password = 'secret';"` and `$mail->setFrom('someuser@domain.com')`. Here is my question: Why do I need to provide a "from" address if I have already given a username and password? Shouldn't it simply log in to my email account and send it? If I provide a username, why do I also provide a "from" address? And vice versa. Could someone explain the reason why mailing systems work like this? I have also seen a similar structure in python's standard mailing library.
2016/11/27
[ "https://Stackoverflow.com/questions/40828531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4966877/" ]
First comments on your net's way of working: * there is no arrow back to the `off` state. So once you switch on your washing machine, will you never be able to switch it off again? * `drain` and `dry` both conduct back to `idle`. But when idle has a token, it will either go to delicate or to T1. The conditions ("program" chosen by the operator) don't vanish, so they would be triggered again and again. Considering the last point, I'd suggest having a different idle for the end of the program to avoid this cycling. If you have to pass several times through the same state but take different actions depending on the progress, you have to work with more tokens. Some remarks about the net's form: * you don't need to put the 1 on every arc. You could make this more readable by leaving the 1 out and indicating a number on an arc only when more than one token would be needed. * usually, the transitions are not aligned with the arcs (although nothing forbids it) but rather perpendicular to the flow (here, horizontal) * In principle, "places" (nodes) represent states or resources, and "transitions" (rectangles) represent an event that changes the state (or an action that consumes resources). Your naming convention should better reflect this
Apparently you're missing some condition to stop the process. As it stands, once you start, your washing will continue in an endless loop.
12,787
48,108,469
I am doing some PCA using sklearn.decomposition.PCA. I found that if the input matrix X is big, the results of two different PCA instances for PCA.transform will not be the same. For example, when X is a 100x200 matrix, there will not be a problem. When X is a 1000x200 or a 100x2000 matrix, the results of two different PCA instances will be different. I am not sure what the cause of this is: I suppose there are no random elements in sklearn's PCA solver? I am using sklearn version 0.18.1 with python 2.7. The script below illustrates the issue. ``` import numpy as np import sklearn.linear_model as sklin from sklearn.decomposition import PCA n_sample,n_feature = 100,200 X = np.random.rand(n_sample,n_feature) pca_1 = PCA(n_components=10) pca_1.fit(X) X_transformed_1 = pca_1.transform(X) pca_2 = PCA(n_components=10) pca_2.fit(X) X_transformed_2 = pca_2.transform(X) print(np.sum(X_transformed_1 == X_transformed_2) ) print(np.mean((X_transformed_1 - X_transformed_2)**2) ) ```
2018/01/05
[ "https://Stackoverflow.com/questions/48108469", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7439635/" ]
There's a `svd_solver` param in PCA and by default it has value "auto". Depending on the input data size, it chooses most efficient solver. Now as for your case, when size is larger than 500, it will choose `randomized`. > > svd\_solver : string {‘auto’, ‘full’, ‘arpack’, ‘randomized’} > > > **auto** : > > > the solver is selected by a default policy based on X.shape and n\_components: if the input data is larger than 500x500 and the > number of components to extract is lower than 80% of the smallest > dimension of the data, then the more efficient ‘randomized’ method is > enabled. Otherwise the exact full SVD is computed and optionally > truncated afterwards. > > > To control how the randomized solver behaves, you can set `random_state` param in PCA which will control the random number generator. Try using ``` pca_1 = PCA(n_components=10, random_state=SOME_INT) pca_2 = PCA(n_components=10, random_state=SOME_INT) ```
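
A quick sketch to confirm the behaviour (my own check, using a matrix large enough to trigger the randomized solver):

```py
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(1000, 200)  # >500 rows, so 'auto' picks 'randomized'

a = PCA(n_components=10, random_state=42).fit(X).transform(X)
b = PCA(n_components=10, random_state=42).fit(X).transform(X)

print(np.allclose(a, b))  # True once random_state is pinned
```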
I had a similar problem: even with the same trial number, I was getting different results on different machines. Setting the svd\_solver to '`arpack`' solved the problem.
12,789
45,890,001
I want to capture only the lines that end with two asterisks using the following code: ``` import re total_lines = 0 processed_lines = 0 regexp = re.compile(r'[*][\s]+[*]$') for line in open('testfile.txt', 'r'): total_lines += 1 if regexp.search(line): print'Line not parsed. Format not defined yet' else: processed_lines += 1 print "Total lines: {} - Processed lines: {}".format(total_lines, processed_lines) ``` On Windows it works fine. But when I use the code on CentOS, the regex does not work. This is the output for `testfile.txt` (a file with 40 lines). Windows `re.__version__ = '2.2.1'`: ``` Line not parsed. Format not defined yet Line not parsed. Format not defined yet Line not parsed. Format not defined yet Line not parsed. Format not defined yet Line not parsed. Format not defined yet Total lines: 40 - Processed lines: 35 ``` Linux `re.__version__='2.2.1'`: ``` Total lines: 40 - Processed lines: 40 ``` Both OSes use the same python version. You can find the `testfile.txt` [here](http://txt.do/d6jb8) and [here](http://m.uploadedit.com/bbtc/1503698518189.txt):
2017/08/25
[ "https://Stackoverflow.com/questions/45890001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2579896/" ]
Open the file in universal newline mode `rU` to support I/O on files which have a newline format that is not the native format on the platform in python 2.x, then the $ in your regex will match the EOL. ``` import re total_lines = 0 processed_lines = 0 regexp = re.compile(r'[*][\s]+[*]$') for line in open('testfile.txt', 'rU'): total_lines += 1 if regexp.search(line): print'Line not parsed. Format not defined yet' else: processed_lines += 1 print "Total lines: {} - Processed lines: {}".format(total_lines, processed_lines) ``` [PEP278](http://www.python.org/dev/peps/pep-0278/) explained what `rU` stands for: > > In a Python with universal newline support open() the mode parameter > can also be "U", meaning "open for input as a text file with universal > newline interpretation". Mode "rU" is also allowed, for symmetry with > "rb". > > >
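
Alternatively — a small sketch of my own, not from the PEP — you can keep the default file mode and strip the carriage return yourself before matching, so `$` anchors at the real end of the line:

```py
import re

regexp = re.compile(r'[*][\s]+[*]$')
total_lines = 0
processed_lines = 0
for line in open('testfile.txt', 'r'):
    total_lines += 1
    # drop trailing CR/LF by hand so a Windows-format file behaves on Linux
    if regexp.search(line.rstrip('\r\n')):
        print('Line not parsed. Format not defined yet')
    else:
        processed_lines += 1
print('Total lines: {} - Processed lines: {}'.format(total_lines, processed_lines))
```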
The test file you offered doesn't contain any lines that end with two asterisks? This regex should match all lines that end with two asterisks: .\*\\*{2}$
12,790
62,131,355
I am trying to create all subset of a given string **recursively**. Given string = 'aab', we generate all subsets for the characters being distinct. The answer is: `["", "b", "a", "ab", "ba", "a", "ab", "ba", "aa", "aa", "aab", "aab", "aba", "aba", "baa", "baa"]`. I have been looking at several solutions such as [this one](https://stackoverflow.com/questions/24318311/generate-all-subsets-of-a-string-using-recursion) but I am trying to make the function accept a single variable- only the string and work with that, and can't figure out how. I have been also looking at [this](https://stackoverflow.com/questions/26332412/python-recursive-function-to-display-all-subsets-of-given-set) solution of a similar problem, but as it deals with lists and not strings I seem to have some trouble transforming that to accept and generate strings. Here is my code, in this example I can't connect the str to the list. Hence my question. **I edited the input and the output.** ``` def gen_all_strings(word): if len(word) == 0: return '' rest = gen_all_strings(word[1:]) return rest + [[ + word[0]] + dummy for dummy in rest] ```
2020/06/01
[ "https://Stackoverflow.com/questions/62131355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11469782/" ]
``` from itertools import * def recursive_product(s,r=None,i=0): if r is None: r = [] if i>len(s): return r for c in product(s, repeat=i): r.append("".join(c)) return recursive_product(s,r,i+1) print(recursive_product('ab')) print(recursive_product('abc')) ``` Output: `['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']` `['', 'a', 'b', 'c', 'aa', 'ab', 'ac', 'ba', 'bb', 'bc', 'ca', 'cb', 'cc', 'aaa', 'aab', 'aac', 'aba', 'abb', 'abc', 'aca', 'acb', 'acc', 'baa', 'bab', 'bac', 'bba', 'bbb', 'bbc', 'bca', 'bcb', 'bcc', 'caa', 'cab', 'cac', 'cba', 'cbb', 'cbc', 'cca', 'ccb', 'ccc']` To be honest it feels really forced to use recursion in this case, a much simpler version that has the same results: ``` nonrecursive_product = lambda s: [''.join(c)for i in range(len(s)+1) for c in product(s,repeat=i)] ```
This is the [powerset](https://stackoverflow.com/questions/1482308/how-to-get-all-subsets-of-a-set-powerset) of the set of characters in the string. ``` from itertools import chain, combinations s = set('ab') #split string into a set of characters # combinations gives the elements of the powerset of a given length r # from_iterable puts all these into an 'iterable' # which is converted here to a list list(chain.from_iterable(combinations(s, r) for r in range(len(s)+1))) ```
12,791
40,279,577
Using the python package "xlsxwriter", I want to highlight cells in the following conditional range: value > 1 or value < -1. However, some cells have -inf/inf values and it fill-colors them too (yellow). Is there any way to unhighlight them? I tried the "conditional\_format" function to uncolor them, but it doesn't work. [output example](https://i.stack.imgur.com/kMqhb.png) ``` format1 = workbook.add_format({'bg_color':'#FFBF00'}) #yellow format2 = workbook.add_format({'bg_color':'#2E64FE'}) #blue format3 = workbook.add_format({'bg_color':'#FFFFFF'}) #white c_fold=[data.columns.get_loc(col) for col in data.columns if col.startswith("fold")] c_fold.sort() l=len(data)+1 worksheet.conditional_format(1,c_fold[0],l,c_fold[-1], {'type':'cell', 'criteria' : '>', 'value':1, 'format':format1, }) worksheet.conditional_format(1,c_fold[0],l,c_fold[-1], {'type':'cell', 'criteria' : '<', 'value':-1, 'format':format2, }) worksheet.conditional_format(1,c_fold[0],l,c_fold[-1], {'type':'text', 'criteria' : 'begins with', 'value':"-inf", 'format':format3, }) ``` Thanks in advance
2016/10/27
[ "https://Stackoverflow.com/questions/40279577", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7079128/" ]
required\_param means that the parameter must exist (or Moodle will throw an immediate, fatal error). If the parameter is optional, then use optional\_param('name of param', 'default value', PARAM\_TEXT) instead. Then you can check to see if this has the 'default value' (I usually use null as the default value). In either case, isset() does not make sense, as the variable always has a value assigned to it.
You should compare the result of `required_param('LType',PARAM_ALPHA)` with the value you expect, instead of using isset. For example: ``` if(required_param('LType',PARAM_ALPHA) != 'some value'){ echo "salaam";exit; } ``` Or: ``` if(required_param('LType',PARAM_ALPHA) === false){ echo "salaam";exit; } ```
12,793
54,360,408
I am writing a Python application that is continuously sending UDP messages to a predefined network with other hosts and fixed IPs. I wrote the Python application and dockerized it. The application works fine in the docker, no problems there. Unfortunately I am failing to send the UDP messages from my docker to the host so they will be sent to the other hosts in the network. The same goes for receiving messages. Right now I don't know how to set up my docker so it is receiving a UDP message from a host with a fixed IP address in the network. I tried to set up my docker network with `--net host` and I sent all the UDP messages from my docker container via localhost to my host. This worked fine, too. I am missing the link where I can send the messages on to the "outside world". I tried to make a picture of my problem. [![Docker host communication problem](https://i.stack.imgur.com/a1ohK.jpg)](https://i.stack.imgur.com/a1ohK.jpg) My Question: How do I have to set up the network communication for my docker/host so it can receive messages via UDP from other hosts in the network? Thanks
2019/01/25
[ "https://Stackoverflow.com/questions/54360408", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7864140/" ]
So I experimented a lot and figured out that I just need to run the docker container with the network configuration set to host. The UDP socket in my container is bound to the IP address of my host and therefore just needs to be linked to the network of the host. For everyone who is struggling with the same issue, just run ``` docker run --network=host <YOURCONTAINER> ```
Build your own bridge --------------------- 1. Configure the new bridge. ``` $ sudo ip link add name bridge0 type bridge $ sudo ip addr add 192.168.5.1/24 dev bridge0 $ sudo ip link set dev bridge0 up ``` Confirm the new bridge's settings. ``` $ ip addr show bridge0 4: bridge0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state UP group default link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff inet 192.168.5.1/24 scope global bridge0 valid_lft forever preferred_lft forever ``` 2. Configure Docker to use the new bridge by setting the option in the daemon.json file, which is located in `/etc/docker/` on Linux or `C:\ProgramData\docker\config\` on Windows Server. On Docker for Mac or Docker for Windows, click the Docker icon, choose **Preferences**, and go to **Daemon**. If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following contents: ``` { "bridge": "bridge0" } ``` Restart Docker for the changes to take effect. 3. Confirm that the new outgoing NAT masquerade is set up. ``` $ sudo iptables -t nat -L -n Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- 192.168.5.0/24 0.0.0.0/0 ``` 4. Remove the now-unused `docker0` bridge. ``` $ sudo ip link set dev docker0 down $ sudo ip link del name docker0 $ sudo iptables -t nat -F POSTROUTING ``` 5. Create a new container, and verify that it is in the new IP address range. ([ref](https://docs.docker.com/v17.09/engine/userguide/networking/default_network/build-bridges/).)
12,794
54,524,124
I put together a VAE using Dense Neural Networks in Keras. During `model.fit` I get a dimension mismatch, but not sure what is throwing the code off. Below is what my code looks like ``` from keras.layers import Lambda, Input, Dense from keras.models import Model from keras.datasets import mnist from keras.losses import mse, binary_crossentropy from keras.utils import plot_model from keras import backend as K import keras import numpy as np import matplotlib.pyplot as plt import argparse import os (x_train, y_train), (x_test, y_test) = mnist.load_data() image_size = x_train.shape[1] original_dim = image_size * image_size x_train = np.reshape(x_train, [-1, original_dim]) x_test = np.reshape(x_test, [-1, original_dim]) x_train = x_train.astype('float32') / 255 x_test = x_test.astype('float32') / 255 # network parameters input_shape = (original_dim, ) intermediate_dim = 512 batch_size = 128 latent_dim = 2 epochs = 50 x = Input(batch_shape=(batch_size, original_dim)) h = Dense(intermediate_dim, activation='relu')(x) z_mean = Dense(latent_dim)(h) z_log_sigma = Dense(latent_dim)(h) def sampling(args): z_mean, z_log_sigma = args #epsilon = K.random_normal(shape=(batch, dim)) epsilon = K.random_normal(shape=(batch_size, latent_dim)) return z_mean + K.exp(z_log_sigma) * epsilon # note that "output_shape" isn't necessary with the TensorFlow backend # so you could write `Lambda(sampling)([z_mean, z_log_sigma])` z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_sigma]) decoder_h = Dense(intermediate_dim, activation='relu') decoder_mean = Dense(original_dim, activation='sigmoid') h_decoded = decoder_h(z) x_decoded_mean = decoder_mean(h_decoded) print('X Decoded Mean shape: ', x_decoded_mean.shape) # end-to-end autoencoder vae = Model(x, x_decoded_mean) # encoder, from inputs to latent space encoder = Model(x, z_mean) # generator, from latent space to reconstructed inputs decoder_input = Input(shape=(latent_dim,)) _h_decoded = decoder_h(decoder_input) _x_decoded_mean = decoder_mean(_h_decoded) generator = Model(decoder_input, _x_decoded_mean) def vae_loss(x, x_decoded_mean): xent_loss = keras.metrics.binary_crossentropy(x, x_decoded_mean) kl_loss = - 0.5 * K.mean(1 + z_log_sigma - K.square(z_mean) - K.exp(z_log_sigma), axis=-1) return xent_loss + kl_loss vae.compile(optimizer='rmsprop', loss=vae_loss) print('X train shape: ', x_train.shape) print('X test shape: ', x_test.shape) vae.fit(x_train, x_train, shuffle=True, epochs=epochs, batch_size=batch_size, validation_data=(x_test, x_test)) ``` Here is the stack trace that I see when `model.fit` is called. 
```
File "/home/asattar/workspace/projects/keras-examples/blogautoencoder/VariationalAutoEncoder.py", line 81, in <module>
    validation_data=(x_test, x_test))
File "/usr/local/lib/python2.7/dist-packages/Keras-2.2.4-py2.7.egg/keras/engine/training.py", line 1047, in fit
    validation_steps=validation_steps)
File "/usr/local/lib/python2.7/dist-packages/Keras-2.2.4-py2.7.egg/keras/engine/training_arrays.py", line 195, in fit_loop
    outs = fit_function(ins_batch)
File "/usr/local/lib/python2.7/dist-packages/Keras-2.2.4-py2.7.egg/keras/backend/tensorflow_backend.py", line 2897, in __call__
    return self._call(inputs)
File "/usr/local/lib/python2.7/dist-packages/Keras-2.2.4-py2.7.egg/keras/backend/tensorflow_backend.py", line 2855, in _call
    fetched = self._callable_fn(*array_vals)
File "/home/asattar/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1439, in __call__
    run_metadata_ptr)
File "/home/asattar/.local/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [128,784] vs. [96,784]
    [[{{node training/RMSprop/gradients/loss/dense_5_loss/logistic_loss/mul_grad/BroadcastGradientArgs}} = BroadcastGradientArgs[T=DT_INT32, _class=["loc:@train...ad/Reshape"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](training/RMSprop/gradients/loss/dense_5_loss/logistic_loss/mul_grad/Shape, training/RMSprop/gradients/loss/dense_5_loss/logistic_loss/mul_grad/Shape_1)]]
```

Please note the "Incompatible shapes: [128,784] vs. [96,784]" towards the end of the trace.
2019/02/04
[ "https://Stackoverflow.com/questions/54524124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1491639/" ]
According to [Keras: What if the size of data is not divisible by batch\_size?](https://stackoverflow.com/questions/37974340/keras-what-if-the-size-of-data-is-not-divisible-by-batch-size), it is better to use `model.fit_generator` rather than `model.fit` here. To use `model.fit_generator`, one should define one's own generator object.

Following is an example:

```
from keras.utils import Sequence
import math
import numpy as np

class Generator(Sequence):
    # Class is a dataset wrapper for better training performance
    def __init__(self, x_set, y_set, batch_size=256):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.indices = np.arange(self.x.shape[0])

    def __len__(self):
        return math.floor(self.x.shape[0] / self.batch_size)

    def __getitem__(self, idx):
        inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_x = self.x[inds]
        batch_y = self.y[inds]
        return batch_x, batch_y

    def on_epoch_end(self):
        np.random.shuffle(self.indices)

train_datagen = Generator(x_train, x_train, batch_size)
test_datagen = Generator(x_test, x_test, batch_size)

vae.fit_generator(train_datagen,
                  steps_per_epoch=len(x_train)//batch_size,
                  validation_data=test_datagen,
                  validation_steps=len(x_test)//batch_size,
                  epochs=epochs)
```

Code adapted from [How to shuffle after each epoch using a custom generator?](https://github.com/keras-team/keras/issues/9707).
Just tried to replicate and found out that when you define `x = Input(batch_shape=(batch_size, original_dim))` you're setting the batch size and it's causing a mismatch when it starts to validate. Change to ``` x = Input(shape=input_shape) ``` and you should be all set.
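For intuition on where the 96 in the error comes from, a quick check (MNIST has 60000 training images):

```python
batch_size = 128
n_train = 60000                # MNIST training images

print(n_train // batch_size)   # 468 full batches per epoch
print(n_train % batch_size)    # 96 -> the final partial batch, which collides
                               # with the fixed batch_shape of 128
```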
12,795
30,005,876
When creating a derived class, what is actually being inherited from `pygame.sprite.Sprite`? It's something that doesn't need to be set up anywhere else in a class, so what is it? Are there actual methods included with it, or does python/pygame just know what to do with it?
2015/05/02
[ "https://Stackoverflow.com/questions/30005876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4515529/" ]
[Use the source, Luke!!!](https://www.youtube.com/watch?v=o2we_B6hDrY) @ [pygame.sprite.Sprite](https://bitbucket.org/pygame/pygame/src/dc57da440ac3415ff679c0e9a1d6d75d949b2db9/lib/sprite.py?at=default#cl-106) inherits `object` ![enter image description here](https://i.stack.imgur.com/pIAT7.jpg)
Look it up on the original pygame website: <http://www.pygame.org/docs/ref/sprite.html#pygame.sprite.Sprite>
12,796
14,521,414
I'm currently working on a small python script for controlling my home PC (really just a hobby project - nothing serious). Inside the script, there are two threads running at the same time using thread (I might start using threading instead) like this:

```
thread.start_new_thread( Function, (Args) )
```

It works as intended when testing the script... but after compiling the code using Pyinstaller there are two processes (one for each thread, I think). How do I fix this?
2013/01/25
[ "https://Stackoverflow.com/questions/14521414", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1995290/" ]
Just kill the loader from the main program if it really bothers you. Here's one way to do it. ``` import os import win32com.client proc_name = 'MyProgram.exe' my_pid = os.getpid() wmi = win32com.client.GetObject('winmgmts:') all_procs = wmi.InstancesOf('Win32_Process') for proc in all_procs: if proc.Properties_("Name").Value == proc_name: proc_pid = proc.Properties_("ProcessID").Value if proc_pid != my_pid: print "killed my loader %s\n" % (proc_pid) os.kill(proc_pid, 9) ```
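An alternative sketch of the same idea using `psutil` instead of WMI — assuming `psutil` is installed; the process name is a placeholder:

```python
import os
import psutil

proc_name = 'MyProgram.exe'   # placeholder: your frozen executable's name
my_pid = os.getpid()

for proc in psutil.process_iter(['pid', 'name']):
    # kill every process with our name except ourselves
    if proc.info['name'] == proc_name and proc.info['pid'] != my_pid:
        print("killed my loader %s" % proc.info['pid'])
        proc.kill()
```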
Python code does not need to be "compiled with pyinstaller".

Products like "Pyinstaller" or "py2exe" are useful for creating a single executable file that you can distribute to third parties, or relocate inside your computer without worrying about the Python installation - however, they don't add "speed", nor is the resulting binary file any more "finished" than your original .py (or .pyw on Windows) file.

What these products do is create another copy of the Python interpreter, along with all the modules your program uses, and pack them inside a single file.

It is likely that Pyinstaller keeps a second process running to check things on the main script (like launching it; maybe there are options on it to keep the script running and so on). This is not part of a standard Python program.

It is not likely Pyinstaller splits the threads into 2 separate processes, as that would cause compatibility problems - threads run in the same process and can transparently access the same data structures.

How a "canonical" Python program runs: the main process, as seen by the O.S., is the Python binary (python.exe on Windows) - it finds the Python script it was called for - if there is a ".pyc" file for it, that is loaded - else, it loads your ".py" file and compiles it to Python bytecode (not to a Windows executable). This compilation is automatic and transparent to people running the program. It is analogous to a Java compile from a .java file to a .class - but there is no explicit step needed by the programmer or user - it is made in place - and other factors control whether Python will store the resulting bytecode as a .pyc file or not.

To sum up: there is no performance impact in running the ".py" script directly instead of generating an .exe file with Pyinstaller or another product. You do have a disk-space usage impact if you do, though, as you will have one copy of the Python interpreter and libraries for each of your scripts.

The URL pointed to by Janne Karila in the comment nails it - it's even worse than I thought: in order to run your script, pyinstaller unpacks Python DLLs and modules in a temporary directory. The time and system resources needed to do that, compared with a single script run, are non-trivial.

<http://www.pyinstaller.org/export/v2.0/project/doc/Manual.html?format=raw#how-one-file-mode-works>
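As a small illustration of the bytecode step described above, the standard library exposes it directly (shown for Python 2, which the question uses; Python 3 places the file under `__pycache__/` instead):

```python
import py_compile

# explicitly compile a script to bytecode; on Python 2 this writes
# myscript.pyc next to the source file
py_compile.compile('myscript.py')
```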
12,797
49,992,781
I have the following code in Python 2. I wanted to know whether inheritance works, and whether a basic class works, if we don't pass 'self' or don't have an `__init__` method in the class. Here is the code:

```
class Animal:
    def whoAmi():
        print "Animal"

>>> class Dog(Animal): pass
...
>>> d= Dog()
>>> d.whoAmi
<bound method Dog.whoAmi of <__main__.Dog instance at 0x0000000004ED3348>>
>>> d.whoAmi()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: whoAmi() takes no arguments (1 given)
>>> d.whoAmi
<bound method Dog.whoAmi of <__main__.Dog instance at 0x0000000004ED3348>>
```

Why doesn't it print "Animal" here?
2018/04/24
[ "https://Stackoverflow.com/questions/49992781", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7406832/" ]
Let's first tackle why it doesn't print "Animal". The clue is in the error message:

> TypeError: whoAmi() takes no arguments (**1 given**)

When you do `d.whoAmi()`, really what Python is doing is `Dog.whoAmi(d)`. Since your method does not take any arguments, you get that exception.

By convention (as is the case with many style "rules" in Python), for those methods of classes that work on instances, the first argument is called *`self`*. However, it can be called anything you want. The key thing to remember is that there **must be at least one argument**. You can name it whatever you want, but the agreement in the Python community is to call it `self`. Here is an example showing that it really doesn't matter what you call it:

```
>>> class Foo():
...     def whoami(blah):
...         print "Boo"
...
>>> a = Foo()
>>> a.whoami()
Boo
```

Inheritance works fine even if you don't have methods with `self`, as it is perfectly normal to have class-level methods in Python. All methods that have double underscores (sometimes called "dunder" methods), like `__init__`, are optional. You don't have to define them if the default functionality works for you.

The key thing to remember here is that the argument to `self` is passed implicitly by Python. You don't really "pass" `self`. Python knows that the method is being called on an instance, and passes the instance as the first argument to the method.
Since you're effectively instantiating Dog, you're creating a `self`. So when you write `d.whoAmi()`, the interpreter inserts `self` as a function argument.

If you tried:

```
d = Dog
d.whoAmi()
```

it should work as expected.

By the way, you should put the decorator `@staticmethod` at the top of your `whoAmi` function for it to work the way you did.
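A minimal sketch of the `@staticmethod` suggestion, using the question's classes (written with `print()` so it runs on both Python 2 and 3):

```python
class Animal:
    @staticmethod
    def whoAmi():
        print("Animal")

class Dog(Animal):
    pass

d = Dog()
d.whoAmi()  # prints "Animal" -- no implicit self is passed to a static method
```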
12,798
47,261,255
I'm trying to execute a DAG which needs to be run only once, so I set the DAG's schedule interval to '@once'. However, I'm getting the error mentioned in this link - <https://issues.apache.org/jira/browse/AIRFLOW-1400>

Now I'm trying to pass the exact date of execution as below:

```
default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2017,11,13),
    'email': ['airflow@airflow.com'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(seconds=5)
}

dag = DAG(
    dag_id='dagNameTest', default_args=default_args,
    schedule_interval='12 09 13 11 2017', concurrency=1)
```

This throws the following error:

```
File "/usr/lib/python2.7/site-packages/croniter/croniter.py", line 543, in expand
    expr_format))
CroniterBadCronError: [12 09 13 11 2017] is not acceptable, out of range
```

Can someone help resolve this?

Thanks,
Arjun
2017/11/13
[ "https://Stackoverflow.com/questions/47261255", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7229291/" ]
Grouping by either "TransactionCategory" (the foreign-key column on Transactions) or "TranCatID" will give you the desired result, as follows:

```
SELECT TransactionCategory.TransCatName, SUM( `Value`) AS Value
FROM Transactions
JOIN TransactionCategory on Transactions.TransactionCategory = TransactionCategory.TranCatID
GROUP BY Transactions.TransactionCategory;

or

SELECT TransactionCategory.TransCatName, SUM( `Value`) AS Value
FROM Transactions
JOIN TransactionCategory on Transactions.TransactionCategory = TransactionCategory.TranCatID
GROUP BY TransactionCategory.TranCatID;
```
This should do the trick:

```
SELECT TransactionCategory.TransCatName, SUM(Transactions.Value) as Value
FROM Transactions
LEFT JOIN TransactionCategory
ON TransactionCategory.TranCatID = Transactions.TransactionCategory
GROUP BY TransactionCategory.TransCatName
```
12,799
54,376,661
**To those who voted to close because it's unclear what I'm asking, here are the questions in my post:**

1. Can anyone tell me what's the result of `y`?
2. Is there anything called sum product in Mathematics?
3. Is `x` subject to broadcasting?
4. Why is `y` a column/row vector?
5. What if `x=np.array([[7],[2],[3]])`?

```
w=np.array([[1,2,3],[4,5,6],[7,8,9]])
x=np.array([7,2,3])
y=np.dot(w,x)
```

Can anyone tell me what's the result of `y`?

[![enter image description here](https://i.stack.imgur.com/KzZnm.png)](https://i.stack.imgur.com/KzZnm.png)

I deliberately mosaicked the screenshot so that you can pretend you are in a test and cannot run Python to get the result.

<https://docs.scipy.org/doc/numpy-1.15.4/reference/generated/numpy.dot.html#numpy.dot> says

> If a is an N-D array and b is a 1-D array, it is a sum product over the last axis of a and b.

Is there anything called **sum product** in Mathematics? Is `x` subject to broadcasting? Why is `y` a column/row vector? What if `x=np.array([[7],[2],[3]])`?
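For readers who do want to check their answer afterwards, a quick worked computation (spoiler: this reveals the result):

```python
import numpy as np

w = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
x = np.array([7, 2, 3])

# "sum product over the last axis": each row of w dotted with x,
# e.g. row 1 gives 1*7 + 2*2 + 3*3 = 20
y = np.dot(w, x)
print(y, y.shape)   # [20 56 92] (3,)  -- a 1-D array, neither row nor column

# a (3, 1) column input keeps the extra axis in the result
x_col = np.array([[7], [2], [3]])
print(np.dot(w, x_col).shape)   # (3, 1)
```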
2019/01/26
[ "https://Stackoverflow.com/questions/54376661", "https://Stackoverflow.com", "https://Stackoverflow.com/users/746461/" ]
Since you mentioned you can use **lodash** you can use [`merge`](https://lodash.com/docs/4.17.11#merge) like so: `_.merge(obj1, obj2)` to get your desired result. See working example below: ```js const a = { 1: { foo: 1 }, 2: { bar: 2, fooBar: 3 }, 3: { fooBar: 3 }, }, b = { 1: { foo: 1, bar: 2 }, 2: { bar: 2 }, 4: {foo: 1} }, res = _.merge(a, b); console.log(res); ``` ```html <script src="https://cdn.jsdelivr.net/lodash/4.16.4/lodash.min.js"></script> ```
You can use Object.assign and assign object properties to an empty object.

```
var a = {books: 2};
var b = {notebooks: 1};
var c = Object.assign( {}, a, b );
console.log(c);
```

Or you could use the merge method from the Lodash library. You can check it here: <https://www.npmjs.com/package/lodash>
12,801
37,297,472
I use Linux Mint 17 'Qiana' and I want to install Watchman to use with Ember.js later. Here were my steps:

```
$ git clone https://github.com/facebook/watchman.git
```

then

```
$ cd watchman
$ ./autogen.sh
$ ./configure.sh
```

and, when I ran `make` to compile files, it returned the following error:

```
pywatchman/bser.c:31:20: fatal error: Python.h: no such file or directory
 #include <Python.h>
                    ^
compilation terminated.
error: command 'i686-linux-gnu-gcc' failed with exit status 1
make[1]: *** [py-build] Error 1
make[1]: Leaving the directory `/home/alex/watchman'
make: *** [all] Error 2
```

I tried to run

```
$ sudo apt-get install python3-dev
```

but it appears to be already installed on my system. What have I done wrong?
2016/05/18
[ "https://Stackoverflow.com/questions/37297472", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5846366/" ]
Usually it's the `python-dev` libs that are missing. Are you sure the configure step uses Python 3 instead of Python 2? If it actually builds against Python 2, you should install `python-dev` instead of `python3-dev`.
The same problem occurs if you build watchman under Raspbian on a Raspberry Pi. Install "python-dev":

```
git clone https://github.com/facebook/watchman.git
cd watchman
./autogen.sh
./configure
make
sudo make install
```
12,811
51,811,662
In a Python program I have a list that I would like to modify:

```
a = [1,2,3,4,5,1,2,3,1,4,5]
```

Say every time I see 1 in the list, I would like to replace it with 10, 9, 8. My goal is to get:

```
a = [10,9,8,2,3,4,5,10,9,8,2,3,10,9,8,4,5]
```

What's a good way to program this? Currently I have to do a 'replace' and two 'inserts' every time I see a 1 in the list. Thank you!
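One straightforward approach is to build a new list instead of splicing in place (a minimal sketch):

```python
a = [1, 2, 3, 4, 5, 1, 2, 3, 1, 4, 5]
replacement = [10, 9, 8]

result = []
for item in a:
    if item == 1:
        result.extend(replacement)  # splice in all three values at once
    else:
        result.append(item)

print(result)
# [10, 9, 8, 2, 3, 4, 5, 10, 9, 8, 2, 3, 10, 9, 8, 4, 5]
```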
2018/08/12
[ "https://Stackoverflow.com/questions/51811662", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2759486/" ]
You **cannot** modify your state object or any of the objects it contains directly; you must instead use `setState`. And when you're setting state based on existing state, you must use the callback version of it; [details](https://reactjs.org/docs/state-and-lifecycle.html#state-updates-may-be-asynchronous). So in your case, if you want to **add to** the existing `this.state.errors` array, then: ``` let errors = []; if(this.state.daysOfWeek.length < 1) { errors.push('Must select at least 1 day to perform workout'); } if(this.state.workoutId === '') { errors.push('Please select a workout to edit'); } if (errors.length) { this.setState(prevState => ({errors: [...prevState.errors, ...errors]})); } ``` If you want to **replace** the `this.state.errors` array, you don't need the callback form: ``` let errors = []; if(this.state.daysOfWeek.length < 1) { errors.push('Must select at least 1 day to perform workout'); } if(this.state.workoutId === '') { errors.push('Please select a workout to edit'); } if (errors.length) { // Or maybe you don't want this check and want to // set it anyway, to clear existing errors this.setState({errors}); } ```
In React, you must never assign to `this.state` directly. Use `this.setState()` instead. The reason is that otherwise React would not know you had changed the state. The only exception to this rule where you assign directly to `this.state` is in your component's constructor.
12,816
19,882,594
I am trying to pull company information from the following website: <http://www.theglobeandmail.com/globe-investor/markets/stocks/summary/?q=T-T>

I see from their page source that there are nested span statements like:

```
<li class="clearfix">
<span class="label">Low</span>
<span class="giw-a-t-sc-data">36.39</span>
</li>
<li class="clearfix">
<span class="label">Bid<span class="giw-a-t-sc-bidSize smallsize">x0</span></span>
<span class="giw-a-t-sc-data">36.88</span>
</li>
```

The code I wrote will grab (Low, 36.39) without problem. I have spent hours reading this forum and others trying to get bs4 to also break out (Bid, 36.88). The problem is, Bid comes out as "None" because of the nested span tags. I am an old "c" programmer (GNU Cygwin) and this Python/BeautifulSoup stuff is new to me. I love it though - awesome potential for interesting and time-saving scripts. Can anyone help with this question? I hope I have posed it well enough. Please keep it simple because I am definitely a newbie. Thanks in advance.
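For what it's worth, a minimal sketch of one way to pull only the direct text of the outer span with BeautifulSoup 4, ignoring the nested `<span>` (the HTML is the fragment from the question):

```python
from bs4 import BeautifulSoup

html = """
<li class="clearfix">
<span class="label">Bid<span class="giw-a-t-sc-bidSize smallsize">x0</span></span>
<span class="giw-a-t-sc-data">36.88</span>
</li>
"""
soup = BeautifulSoup(html, "html.parser")

label = soup.find("span", class_="label")
# recursive=False returns only the direct text child ("Bid"),
# skipping the text of the nested bid-size span
name = label.find(text=True, recursive=False)
value = soup.find("span", class_="giw-a-t-sc-data").get_text(strip=True)
print(name, value)  # Bid 36.88
```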
2013/11/09
[ "https://Stackoverflow.com/questions/19882594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2974790/" ]
I have faced this problem. The solution is very simple (after a lot of trial and error): you must add an id attribute to your tag, for instance:

```
<p:calendar id="date_selector" value="#{dpnl.fechaHasta}" pattern="dd/MM/yyyy" />
```
In the facet named "output", use the `h:outputText` tag instead of `p:inputText`.
12,817
53,474,065
I am trying to `upgrade` `matplotlib`. I'm doing this via `!pip` and it seems to work. When I check the list in the `IPython console`: ``` !pip list ``` It returns the latest version of `matplotlib` ``` matplotlib 3.0.2 ``` But when I check the version in the editor it returns ``` 2.2.2 ``` The very first line in the text editor shows ``` #!/usr/bin/env python3 ``` When inserting `!which pip` and `!which python` into the `IPython` `console` it returns the following: ``` !which python = /Users/XXXX/anaconda/bin/python !which pip = /Users/XXXX/anaconda/bin/pip ```
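A quick way to check whether the editor and the IPython console are actually running the same interpreter is to run this in both places:

```python
import sys
import matplotlib

print(sys.executable)          # which Python binary is running this code
print(matplotlib.__version__)  # which matplotlib that interpreter sees
```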
2018/11/26
[ "https://Stackoverflow.com/questions/53474065", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Using `str.len`:

```
df[df.iloc[:,0].astype(str).str.len()!=7]

          A
1  1.222222
2  1.222200
```

Sample data:

```
df=pd.DataFrame({'A':[1.22222,1.222222,1.2222]})
```
See if this works `df1 = df['ZipCode'].astype(str).map(len)==5`
12,819
62,823,948
I have a dataframe with two levels of column index.

Reproducible Dataset.
---------------------

```
df = pd.DataFrame(
    [ ['Gaz','Gaz','Gaz','Gaz'],
      ['X','X','X','X'],
      ['Y','Y','Y','Y'],
      ['Z','Z','Z','Z']],
    columns=pd.MultiIndex.from_arrays([['A','A','C','D'],
                                       ['Name','Name','Company','Company']]))
```

![df1](https://i.stack.imgur.com/ytkpi.png)

I want to rename the duplicated MultiIndex columns, only when level-0 and level-1 combined are duplicated, and then add a suffix number to the end, like the one below.

![df2](https://i.stack.imgur.com/PIloj.png)

Below is a solution I found, but it only works for a single-level column index:

```
class renamer():
    def __init__(self):
        self.d = dict()

    def __call__(self, x):
        if x not in self.d:
            self.d[x] = 0
            return x
        else:
            self.d[x] += 1
            return "%s_%d" % (x, self.d[x])

df = df.rename(columns=renamer())
```

I think the above method can be modified to support the multi-level situation, but I am too new to pandas/python. Thanks in advance.

@Datanovice This is to clarify what output I need.

I have the snippet below:

```
import pandas as pd
import numpy as np

df = pd.DataFrame(
    [ ['Gaz','Gaz','Gaz','Gaz'],
      ['X','X','X','X'],
      ['Y','Y','Y','Y'],
      ['Z','Z','Z','Z']],
    columns=pd.MultiIndex.from_arrays([
        ['A','A','C','A'],
        ['A','A','C','A'],
        ['Company','Company','Company','Name']]))

s = pd.DataFrame(df.columns.tolist())
cond = s.groupby(0).cumcount()
s = [np.where(cond.gt(0),s[i] + '_' + cond.astype(str),s[i]) for i in range(df.columns.nlevels)]
s = pd.DataFrame(s)
#print(s)

df.columns = pd.MultiIndex.from_arrays(s.values.tolist())
print(df)
```

The current result is-

[![current output](https://i.stack.imgur.com/ZAz9c.png)](https://i.stack.imgur.com/ZAz9c.png)

What I need is that the last piece of the column index should not be counted as duplicated, as "A-A-Name" is not the same as the first two.

Thank you again.
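For reference, a minimal sketch that suffixes a column only when the *full* tuple of levels repeats (only the last level gets the `_n` suffix here; adjust as needed):

```python
from collections import Counter
import pandas as pd

df = pd.DataFrame(
    [['Gaz', 'Gaz', 'Gaz', 'Gaz']],
    columns=pd.MultiIndex.from_arrays([
        ['A', 'A', 'C', 'A'],
        ['A', 'A', 'C', 'A'],
        ['Company', 'Company', 'Company', 'Name']]))

seen = Counter()
new_cols = []
for tup in df.columns:
    n = seen[tup]           # how many times this exact tuple has appeared
    seen[tup] += 1
    if n:
        tup = tup[:-1] + ('%s_%d' % (tup[-1], n),)
    new_cols.append(tup)

df.columns = pd.MultiIndex.from_tuples(new_cols)
print(df.columns.tolist())
# [('A','A','Company'), ('A','A','Company_1'), ('C','C','Company'), ('A','A','Name')]
```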
2020/07/09
[ "https://Stackoverflow.com/questions/62823948", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13875213/" ]
If I understand you correctly, you are looking for a mechanism that allows you to display a terminal on a web server, and then you want to run an interactive python script in that terminal, right?

So in the end, the solution to share a terminal does not necessarily have to be written in python, right? (Though I must admit that I prefer python solutions if I find them, but sometimes being pragmatic isn't a bad idea.)

You might google for http and terminal emulators. Perhaps ttyd fits the bill: <https://github.com/tsl0922/ttyd>

Building on linux could be done with

```
sudo apt-get install build-essential cmake git libjson-c-dev libwebsockets-dev
git clone https://github.com/tsl0922/ttyd.git
cd ttyd && mkdir build && cd build
cmake ..
make && make install
```

Usage would be something like `ttyd -p 8888 yourpythonscript.py`, and then you could connect with a web browser at `http://hostip:8888`.

You might of course 'hide' this url behind a reverse proxy and add authentication to it, or add options like `--credential username:password` to password-protect the url.

**Addendum:**

If you want to share multiple scripts with different people and the sharing is more of an on-the-fly thing, then you might look at tty-share ( <https://github.com/elisescu/tty-share> ) and tty-server ( <https://github.com/elisescu/tty-server> ).

tty-server can be run in a docker container. tty-share can be used to run a script on your machine in one of your terminals. It will output a url that you can give to the person you want to share the specific session with. If you think that's interesting I might elaborate on this one.
*>> Insert security disclaimer here <<*

The easiest, most hacktastic way to do it is to create a `div` element where you'll store your output and an `input` element to enter commands. Then you can ajax `POST` the command to a back-end controller. The controller would take the command and run it, capturing the output of the command and sending it back to the web page for it to render in the `div`.

In python I use this to capture command output:

```py
from subprocess import Popen, STDOUT, PIPE

def run_command(args, cwd):
    # run the command, merging stderr into stdout, and capture everything
    proc = Popen(args, stdout=PIPE, stderr=STDOUT, cwd=cwd)
    proc.wait()
    return proc.stdout.read()

output = run_command(['ls', '-l'], '/working/directory')
```
12,822
64,267,498
I try to upload a big file (4GB) with a PUT on a DRF viewset. During the upload my memory is stable. At 100%, the python runserver process takes more and more RAM and is killed by the kernel. I have a logging line in the `put` method of this `APIView`, but the process is killed before this method is called.

I use this setting to force file usage: `FILE_UPLOAD_HANDLERS = ["django.core.files.uploadhandler.TemporaryFileUploadHandler"]`

Where does this memory peak come from? I guess it tries to load the file content in memory, but why (and where)?

More information:

* I tried DEBUG true and false
* The runserver is in a docker behind a traefik, but there is no limitation in traefik AFAIK and the upload reaches 100%
* I do not know yet if I would get the same behavior with `daphne` instead of runserver
* EDIT: the front end uses a `Content-Type: multipart/form-data`
* EDIT: I have tried `FileUploadParser` and `(FormParser, MultiPartParser)` for parser\_classes in my `APIView`
2020/10/08
[ "https://Stackoverflow.com/questions/64267498", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5877122/" ]
TL;DR: ------ Neither a DRF nor a Django issue, it's a [2.5 years known Daphne issue](https://github.com/django/daphne/issues/126). The solution is to use uvicorn, hypercorn, or something else for the time being. Explanations ------------ What you're seeing here is not coming from Django Rest Framework as: * The FileUploadParser is meant to handle large file uploads, as [it reads the file chunk by chunk](https://github.com/encode/django-rest-framework/blob/335054a5d36b352a58286b303b608b6bf48152f8/rest_framework/parsers.py#L177-L183); * Your view not being executed rules out the parsers [which aren't executed until you access the `request.FILES`](https://github.com/encode/django-rest-framework/blob/5828d8f7ca167b11296733a2b54f9d6fca29b7b0/rest_framework/request.py#L436-L443) property The fact that you're mentioning Daphne reminds me of this [SO answer](https://stackoverflow.com/a/55237320/2441358) which mentions a similar problem and points to a code that Daphne doesn't handle large file uploads as **it loads the whole body** in RAM before passing it to the view. (The code is still present in their master branch at the time of writing) You're seeing the same behavior with `runserver` because when installed, Daphne replaces the initial runserver command with itself to provide WebSockets support for dev purposes. To make sure that it's the real culprit, try to disable Channels/run the default Django runserver and see for yourself if your app is killed by the OOM Killer.
I don't know if it works with Django REST Framework, but you can try to chunk the file.

```
[...]
anexo_files = request.FILES.getlist('anexo_file_'+str(k))
index = 0
for file in anexo_files:
    index = index + 1
    extension = os.path.splitext(str(file))[1]
    nome_arquivo_anexo = 'media/uploads/' + os.path.splitext(str(file))[0] + "_" + str(index) + datetime.datetime.now().strftime("%m%d%Y%H%M%S") + extension
    handle_uploaded_file(file, nome_arquivo_anexo)

    AnexoProjeto.objects.create(
        projeto=projeto,
        arquivo_anexo = nome_arquivo_anexo
    )
[...]
```

Where handle\_uploaded\_file is

```
def handle_uploaded_file(f, nome_arquivo):
    with open(nome_arquivo, 'wb+') as destination:
        for chunk in f.chunks():
            destination.write(chunk)
```
12,823
57,420,008
Recently I came across logging in python. I have the following code in a test.py file:

```
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler())
logger.debug("test Message")
```

Now, is there any way I can print the resulting `LogRecord` object generated by `logger.debug("test Message")`? I ask because it's stated in the documentation that

> LogRecord instances are created automatically by the Logger every time something is logged

<https://docs.python.org/3/library/logging.html#logrecord-objects>

I tried saving the result of `debug` into a variable and printing it:

```
test = logger.debug("test Message")
print(test)
```

but the output is `None`.

My goal is to check/view the final `LogRecord` object generated by `logger.debug(...)` in the same test.py by using `print()`. This is for my own understanding.

```
print(LogrecordObject.__dict__)
```

So how do I get hold of the `LogRecord` object generated by `logger.debug("test Message")`?
2019/08/08
[ "https://Stackoverflow.com/questions/57420008", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2897115/" ]
There is no return in `debug()`:

```
# Here is the snippet for the source code
def debug(self, msg, *args, **kwargs):
    if self.isEnabledFor(DEBUG):
        self._log(DEBUG, msg, args, **kwargs)
```

If you want to get the LogRecord back, you need to redefine `debug()`; you can override it like this:

```
import logging
import sys

DEBUG_LEVELV_NUM = 9
logging.addLevelName(DEBUG_LEVELV_NUM, "MY_DEBUG")

def _log(self, level, msg, args, exc_info=None, extra=None, stack_info=False):
    sinfo = None
    fn, lno, func = "(unknown file)", 0, "(unknown function)"
    if exc_info:
        if isinstance(exc_info, BaseException):
            exc_info = (type(exc_info), exc_info, exc_info.__traceback__)
        elif not isinstance(exc_info, tuple):
            exc_info = sys.exc_info()
    record = self.makeRecord(self.name, level, fn, lno, msg, args,
                             exc_info, func, extra, sinfo)
    self.handle(record)
    return record

def my_debug(self, message, *args, **kws):
    if self.isEnabledFor(DEBUG_LEVELV_NUM):
        # Yes, logger takes its '*args' as 'args'.
        record = self._log(DEBUG_LEVELV_NUM, message, args, **kws)
        return record

logger = logging.getLogger(__name__)
logging.Logger.my_debug = my_debug
logging.Logger._log = _log
logger.setLevel(DEBUG_LEVELV_NUM)
logger.addHandler(logging.StreamHandler())

test = logger.my_debug('test custom debug')
print(test)
```

Reference: [How to add a custom loglevel to Python's logging facility](https://stackoverflow.com/questions/2183233/how-to-add-a-custom-loglevel-to-pythons-logging-facility)
You can create a handler that instead of formatting the LogRecord instance to a string, just save it in a list to be viewed and inspected later: ``` import logging import sys # A new handler to store "raw" LogRecords instances class RecordsListHandler(logging.Handler): """ A handler class which stores LogRecord entries in a list """ def __init__(self, records_list): """ Initiate the handler :param records_list: a list to store the LogRecords entries """ self.records_list = records_list super().__init__() def emit(self, record): self.records_list.append(record) # A list to store the "raw" LogRecord instances logs_list = [] # Your logger logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) # Add the regular stream handler to print logs to the console, if you like logger.addHandler(logging.StreamHandler(sys.stdout)) # Add the RecordsListHandler to store the log records objects logger.addHandler(RecordsListHandler(logs_list)) if __name__ == '__main__': logger.debug("test Message") print(logs_list) ``` Output: ``` test Message [<LogRecord: __main__, 10, C:/Automation/Exercises/222.py, 36, "test Message">] ```
12,824
69,497,348
I'm new to Python and I have a question that might be easy, but I can't figure it out. I wanted to make a program where the user gives an email as the username and a password; the program should check whether the email is in the correct format, and if it's not, it **should print something and ask for the email again**. So I used a regex. (I'm storing these inputs in a database and thought about using a LIKE query, but I don't think that would help.) So what's the problem with my code? It keeps accepting wrong emails.

```
import re

regex = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b'

def check(email):
    if (re.fullmatch(regex, email)):
        return
    else:
        print("corect format is like amireza@gmail.com")
        return

while __name__ == '__main__':
    username = input()
    check(username)
    password = input()
```
2021/10/08
[ "https://Stackoverflow.com/questions/69497348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16260312/" ]
Here is working code for you:

```py
import re

regex = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b'

def check(email):
    if (re.fullmatch(regex, email)):
        return True
    else:
        print("invalid input! correct format is like amireza@gmail.com")
        return False

while __name__ == '__main__':
    while True:
        username = input("please enter email\n")
        if check(username) is True:
            break
    password = input("please enter password\n")
    break

print("username: %s, password: %s" % (username, password))
```

Key correction:

* your helper should return a boolean that lets you know whether the input is legit or not; thus I return a boolean to the outside scope.

One more thing: since you run it as a standalone script, the outermost while condition (`while __name__ == '__main__'`) will always be `True`, which means you have to `break` out of it when you want to end your program execution. For simplicity I'd suggest using `if __name__ == '__main__'` instead.
You could change your function `check` to return a boolean output that tells you whether the check was successful, as in ``` def check(email): if (re.fullmatch(regex, email)): return True else: print("corect format is like amireza@gmail.com") return False ``` And then add a loop to your main code: ``` if __name__ == '__main__': username = input() while not check(username): username = input() ``` Note that I did not actually run your code, but this should work. EDIT: Heh, right, as the other answer explains, you should change your `while` into an `if`, I edited my code correspondingly.
12,825
58,752,089
I was writing in Visual Studio Code, but I keep getting the same error message:

> selenium.common.exceptions.WebDriverException: Message: 'chromedriver.exe' executable needs to be in PATH.

Is it simply because you just can't run webdriver in VS Code? I've already tried

```
from selenium import webdriver
driver=webdriver.Chrome(executable_path=r"C:Users/.../chromedriver.exe")
```

```
driver=webdriver.Chrome("C:Users/.../chromedriver.exe")
```

and basically every solution you can find online regarding this problem. I've downloaded chromedriver from here: <https://chromedriver.chromium.org/>. I've also added the file to PATH by clicking "system" >> "environment variables", and added the downloaded folder containing chromedriver.exe to both the user variables and system variables of PATH. I've also tried copying the chromedriver.exe file into the python3.7/Scripts folder, then added the file manually to PATH, then restarted my computer.

Can someone please help me with this matter? Or just recommend some place where I can successfully run the webdriver?
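As an aside, one quick sanity check that rules out a bad path before blaming PATH itself (a minimal sketch; the path shown is a placeholder — note that a Windows path needs a separator after the drive letter, i.e. `C:\Users\...`, not `C:Users/...`):

```python
import os
from selenium import webdriver

driver_path = r"C:\Users\...\chromedriver.exe"  # placeholder: fill in the real path
assert os.path.isfile(driver_path), "chromedriver.exe is not at this path"

driver = webdriver.Chrome(executable_path=driver_path)
```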
2019/11/07
[ "https://Stackoverflow.com/questions/58752089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8836876/" ]
I had the same problem, and there are two ways to solve this issue.

The main reason for this "*unreachable*" path is that Visual Studio Code doesn't have permission to run from the PATH environment, unlike other system-installed programs. So installing VS Code, but using the **System Installer** version, would be enough.

The other way is by putting the chromedriver.exe file into the */Scripts* folder of your Python installation folder (*i.e. ...\AppData\Local\Programs\Python\Python37\Scripts*).

---

I did both things and it worked for me.
If you're on Windows, go into CMD (Command Prompt) and type in "chromedriver.exe". If chromedriver is executable from PATH, the system will print out "Starting ChromeDriver [version]...". Otherwise, you need to add chromedriver to PATH.

Then again, it could just be a fault of the IDE; try using Python's built-in IDLE...
12,826
3,778,486
I have visited the Vim website's script section and found several syntax checkers for Python. But which one should I choose? I would prefer something that supports Python 3 as well, even though I code in Python 2.6 currently. Do all these checkers need a module like pychecker or pyflakes?

I could install the most popular one from the scripts database, but I thought I'd first get some recommendations here on what you consider the best and why. The script will have to work on macOS, Windows and Ubuntu, with macOS being my highest priority.

In case you are wondering, I am looking for syntax checking like that used by PyDev in the Eclipse IDE, which underlines all errors with a red wavy line as you type.
2010/09/23
[ "https://Stackoverflow.com/questions/3778486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/453642/" ]
These two websites really boosted my Vim productivity with all languages: <http://nvie.com/posts/how-i-boosted-my-vim/> <http://stevelosh.com/blog/2010/09/coming-home-to-vim/>
Whether or not wavy red lines are displayed is related to the theme you're using, not the syntax checker or language. So long as your syntax file (try <http://www.vim.org/scripts/script.php?script_id=790> ) checks for errors, you can show the errors with something like: ``` :hi Error guifg=#ff0000 gui=undercurl ```
12,827
35,224,675
I'm preparing a toy `spark.ml` example. `Spark version 1.6.0`, running on top of `Oracle JDK version 1.8.0_65`, pyspark, ipython notebook. First, it hardly has anything to do with [Spark, ML, StringIndexer: handling unseen labels](https://stackoverflow.com/questions/34681534/spark-ml-stringindexer-handling-unseen-labels). The exception is thrown while fitting a pipeline to a dataset, not transforming it. And suppressing the exception might not be a solution here, since, I'm afraid, the dataset gets messed pretty bad in this case. My dataset is about 800Mb uncompressed, so it might be hard to reproduce (smaller subsets seem to dodge this issue). The dataset looks like this: ``` +--------------------+-----------+-----+-------+-----+--------------------+ | url| ip| rs| lang|label| txt| +--------------------+-----------+-----+-------+-----+--------------------+ |http://3d-detmold...|217.160.215|378.0| de| 0.0|homwillkommskip c...| | http://3davto.ru/| 188.225.16|891.0| id| 1.0|оформить заказ пе...| | http://404.szm.com/| 85.248.42| 58.0| cs| 0.0|kliknite tu alebo...| | http://404.xls.hu/| 212.52.166|168.0| hu| 0.0|honlapkészítés404...| |http://a--m--a--t...| 66.6.43|462.0| en| 0.0|back top archiv r...| |http://a-wrf.ru/c...| 78.108.80|126.0|unknown| 1.0| | |http://a-wrf.ru/s...| 78.108.80|214.0| ru| 1.0|установк фаркопна...| +--------------------+-----------+-----+-------+-----+--------------------+ ``` The value being predicted is `label`. The whole pipeline applied to it: ```python from pyspark.ml import Pipeline from pyspark.ml.feature import VectorAssembler, StringIndexer, OneHotEncoder, Tokenizer, HashingTF from pyspark.ml.classification import LogisticRegression train, test = munge(src_dataframe).randomSplit([70., 30.], seed=12345) pipe_stages = [ StringIndexer(inputCol='lang', outputCol='lang_idx'), OneHotEncoder(inputCol='lang_idx', outputCol='lang_onehot'), Tokenizer(inputCol='ip', outputCol='ip_tokens'), HashingTF(numFeatures=2**10, inputCol='ip_tokens', outputCol='ip_vector'), Tokenizer(inputCol='txt', outputCol='txt_tokens'), HashingTF(numFeatures=2**18, inputCol='txt_tokens', outputCol='txt_vector'), VectorAssembler(inputCols=['lang_onehot', 'ip_vector', 'txt_vector'], outputCol='features'), LogisticRegression(labelCol='label', featuresCol='features') ] pipe = Pipeline(stages=pipe_stages) pipemodel = pipe.fit(train) ``` And here is the stacktrace: ``` Py4JJavaError: An error occurred while calling o10793.fit. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 18 in stage 627.0 failed 1 times, most recent failure: Lost task 18.0 in stage 627.0 (TID 23259, localhost): org.apache.spark.SparkException: Unseen label: pl-PL. 
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:157) at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:153) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalExpr2$(Unknown Source) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source) at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:51) at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:49) at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:282) at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171) at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78) at org.apache.spark.rdd.RDD.iterator(RDD.scala:268) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:89) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1952) at 
org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1025) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) at org.apache.spark.rdd.RDD.reduce(RDD.scala:1007) at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1.apply(RDD.scala:1136) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) at org.apache.spark.rdd.RDD.treeAggregate(RDD.scala:1113) at org.apache.spark.ml.classification.LogisticRegression.train(LogisticRegression.scala:271) at org.apache.spark.ml.classification.LogisticRegression.train(LogisticRegression.scala:159) at org.apache.spark.ml.Predictor.fit(Predictor.scala:90) at org.apache.spark.ml.Predictor.fit(Predictor.scala:71) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) at py4j.Gateway.invoke(Gateway.java:259) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:209) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.spark.SparkException: Unseen label: pl-PL. at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:157) at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:153) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalExpr2$(Unknown Source) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source) at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:51) at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:49) at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:282) at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171) at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78) at org.apache.spark.rdd.RDD.iterator(RDD.scala:268) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:89) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ... 1 more

```

The most interesting line is:

```
org.apache.spark.SparkException: Unseen label: pl-PL.
```

No idea how `pl-PL`, which is a value from the `lang` column, could have gotten mixed up in the `label` column, which is a `float`, not a `string`.

edited: some hasty conclusions, corrected thanks to @zero323

I've looked further into it and found that `pl-PL` is a value from the testing part of the dataset, not training. So now I don't even know where to look for the culprit: it might easily be the `randomSplit` code, not `StringIndexer`, and who knows what else. How do I investigate this?
2016/02/05
[ "https://Stackoverflow.com/questions/35224675", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3868574/" ]
Okay, I think I got this. At least I got it working.

Caching the dataframe (including the train/test parts) solves the problem. That's what I found in this JIRA issue: <https://issues.apache.org/jira/browse/SPARK-12590>.

So it's not a bug, just the fact that `randomSample` might yield a different result on the same, but differently partitioned, dataset. And apparently, some of my munging functions (or the `Pipeline`) involve repartitioning; therefore, the results of recomputing the train set from its definition might diverge.

What still interests me is the reproducibility: it's always the 'pl-PL' row that gets mixed into the wrong part of the dataset, i.e. it's not a random repartition. It's deterministic, just inconsistent. I wonder how exactly that happens.
`Unseen label` [is a generic message which doesn't correspond to a specific column](https://github.com/apache/spark/blob/branch-1.6/mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala#L157). Most likely problem is with a following stage: ``` StringIndexer(inputCol='lang', outputCol='lang_idx') ``` with `pl-PL` present in `train("lang")` and not present in `test("lang")`. You can correct it using `setHandleInvalid` with `skip`: ```py from pyspark.ml.feature import StringIndexer train = sc.parallelize([(1, "foo"), (2, "bar")]).toDF(["k", "v"]) test = sc.parallelize([(3, "foo"), (4, "foobar")]).toDF(["k", "v"]) indexer = StringIndexer(inputCol="v", outputCol="vi") indexer.fit(train).transform(test).show() ## Py4JJavaError: An error occurred while calling o112.showString. ## : org.apache.spark.SparkException: Job aborted due to stage failure: ## ... ## org.apache.spark.SparkException: Unseen label: foobar. indexer.setHandleInvalid("skip").fit(train).transform(test).show() ## +---+---+---+ ## | k| v| vi| ## +---+---+---+ ## | 3|foo|1.0| ## +---+---+---+ ``` or, in the latest versions, `keep`: ```py indexer.setHandleInvalid("keep").fit(train).transform(test).show() ## +---+------+---+ ## | k| v| vi| ## +---+------+---+ ## | 3| foo|0.0| ## | 4|foobar|2.0| ## +---+------+---+ ```
12,836
38,385,983
As a beginner creating a simple Python text editor, I have encountered a confusing bug: I am able to print out the text file with the read\_file() function when I first open it, but after I amend the text file using write\_file(), reading the file again simply returns whitespace. Additionally, any critique of my code would be appreciated. Thank you.

```
import os

def main():

    file = open_file()

    quit = False
    while quit == False:
        print('Current file open is {}'.format(file.name))
        print('(\'read\', \'write\', \'rename\', \'change file\', \'quit\',)')
        action = raw_input('> ')

        if str(action) == 'read':
            read_file(file)
        elif str(action) == 'write':
            file = write_file(file)
        elif str(action) == 'rename':
            file = rename(file)
        elif str(action) == 'change file':
            file.close()
            open_file()
        elif str(action) == 'quit':
            break
        else:
            print('Incorrect action.')

def open_file():
    print('Create/open a file')
    filename = raw_input('Filename: ')
    try:
        file = open(str(filename), 'r+')
        return file
    except:
        print('An error occured')
        return open_file()

def read_file(file):
    try:
        print('{}, {}'.format(file.name, file))
        print(file.read())
    except:
        print('An error occured')
        return None

def write_file(file):
    print('Type to start writing to your file.')
    #read_file(file)
    add_text = raw_input('> ')
    file.write(str(add_text))
    return file

def rename(file):
    new_name = raw_input('New file name: ')
    os.rename(file.name, str(new_name))
    return file

main()
```
2016/07/15
[ "https://Stackoverflow.com/questions/38385983", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6335429/" ]
First, **file** is a predefined name (the built-in file type in Python 2); please don't use it for a variable name, or you may have trouble getting to some of its facilities. Try **my\_file** or just the C-language **fp** (for "file pointer").

After you write new information to the file, your position pointer (bookmark) is likely at the end of the file. Reading more will get you nowhere. You need to either close and reopen the file, or call fp.seek() to get to the desired location. For instance, **fp.seek(0)** will reset the pointer to the start of the file.
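A minimal sketch of the seek-then-read pattern described above (assuming the file already exists):

```python
fp = open("notes.txt", "r+")

fp.write("hello")
fp.seek(0)           # rewind before reading back
print(fp.read())     # now shows the contents instead of nothing

fp.close()
```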
When it comes to reading and writing files in Python, if you do not call the method `close()` on the file after making a change, the change may not be saved yet - the data can still be sitting in a buffer, because Python thinks you're still a) writing to it or b) still reading it! Hope this helps!
12,837
2,301,163
I am looking for a way to create html files dynamically in python. I am writing a gallery script, which iterates over directories, collecting file meta data. I intended to then use this data to automatically create a picture gallery, based on html. Something very simple, just a table of pictures. I really don't think writing to a file manually is the best method, and the code may be very long. So is there a better way to do this, possibly html specific?
2010/02/20
[ "https://Stackoverflow.com/questions/2301163", "https://Stackoverflow.com", "https://Stackoverflow.com/users/106534/" ]
I think, if I understand you correctly, you can look [here, "Templating in Python"](http://wiki.python.org/moin/Templating).
Use a templating engine such as [Genshi](http://genshi.edgewall.org/) or [Jinja2](https://jinja.palletsprojects.com/en/2.11.x/).
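For instance, a minimal Jinja2 sketch of the gallery table described in the question (the file names are made up):

```python
from jinja2 import Template

template = Template("""
<table>
{% for pic in pictures %}
  <tr><td><img src="{{ pic.path }}" alt="{{ pic.name }}"></td></tr>
{% endfor %}
</table>
""")

# in practice, this list would come from walking the directories
pictures = [
    {"path": "img/sunset.jpg", "name": "sunset"},
    {"path": "img/cat.jpg", "name": "cat"},
]

with open("gallery.html", "w") as f:
    f.write(template.render(pictures=pictures))
```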
12,838
37,536,868
Not a maths major or a cs major, I just fool around with python (usually making scripts for simulations/theorycrafting on video games) and I discovered just how bad random.randint is performance-wise. It got me wondering why random.randint and random.randrange are made the way they are. I made a function that produces (for all intents and actual purposes) identical results to random.randint:

```
big_bleeping_float= (2**64 - 2)/(2**64 - 2)

def fastrandint(start, stop):
    return start + int(random.random() * (stop - start + big_bleeping_float))
```

There is a massive 180% speed boost using that to generate an integer in the range (inclusive) 0-65 compared to random.randrange(0, 66), the next fastest method.

```
>>> timeit.timeit('random.randint(0, 66)', setup='from numpy import random', number=10000)
0.03165552873121058

>>> timeit.timeit('random.randint(0, 65)', setup='import random', number=10000)
0.022374771118336412

>>> timeit.timeit('random.randrange(0, 66)', setup='import random', number=10000)
0.01937231027605435

>>> timeit.timeit('fastrandint(0, 65)', setup='import random; from fasterthanrandomrandom import fastrandint', number=10000)
0.0067909916844523755
```

Furthermore, the adaptation of this function as an alternative to random.choice is 75% faster, and I'm sure adding larger-than-one stepped ranges would be faster too (although I didn't test that).

For almost double the speed boost of the fastrandint function, you can simply write it inline:

```
>>> timeit.timeit('int(random.random() * (65 + big_bleeping_float))', setup='import random; big_bleeping_float= (2**64 - 2)/(2**64 - 2)', number=10000)
0.0037642723021917845
```

So in summary: am I wrong that my function is better, why is it faster if it isn't better, and is there a yet even faster way to do what I'm doing?
2016/05/31
[ "https://Stackoverflow.com/questions/37536868", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5511209/" ]
`random.randint()` and others are calling into `random.getrandbits()`, which may be less efficient than direct calls to `random()`, but for good reason.

It is actually more correct to use a `randint` that calls into `random.getrandbits()`, as it can be done in an unbiased manner.

You can see that using random.random to generate values in a range ends up being biased, since there are only M floating point values between 0 and 1 (for M pretty large). Take an N that doesn't divide M, and write M = kN + r for `0<r<N`. At best, using `random.random() * (N+1)` we'll get `r` numbers coming out with probability (k+1)/M and `N-r` numbers coming out with probability `k/M`. (This is *at best*, using the pigeonhole principle - in practice I'd expect the bias to be even worse.)

Note that this bias is only noticeable for

* a large number of samples
* where N is a large fraction of M, the number of floats in (0,1]

So it probably won't matter to you, unless you know you need unbiased values - such as for scientific computing etc.

In contrast, a value from `randint(0,N)` can be made unbiased by using rejection sampling on repeated calls to `random.getrandbits()`. Of course managing this can introduce additional overhead.

**Aside**

If you end up using a custom random implementation then, from the [python 3 docs](https://docs.python.org/3/library/random.html):

> Almost all module functions depend on the basic function random(), which generates a random float uniformly in the semi-open range [0.0, 1.0).

This suggests that `randint` and others may be implemented using `random.random`. If this is the case I would expect them to be slower, incurring at least one additional function call of overhead per call.

Looking at the code referenced in <https://stackoverflow.com/a/37540577/221955> you can see that this will happen if the random implementation doesn't provide a `getrandbits()` function.
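A minimal sketch of the rejection-sampling idea described above (unbiased, at the cost of occasionally looping):

```python
import random

def unbiased_randint(n):
    """Uniform integer in [0, n], inclusive, via rejection sampling."""
    k = n.bit_length()                # smallest k such that 2**k - 1 >= n
    while True:
        r = random.getrandbits(k)     # uniform over [0, 2**k - 1]
        if r <= n:                    # reject out-of-range values and retry
            return r

print(unbiased_randint(65))
```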
This is probably rarely a problem but `randint(0,10**1000)` works while `fastrandint(0,10**1000)` crashes. The slower time is probably the price you need to pay to have a function that works for all possible cases...
12,847
38,101,112
I'm trying to create an iOS Titanium Module using a pre-compiled CommonJS module. As the README file says:

> All JavaScript files in the assets directory are IGNORED except if you create a file named "com.moduletest.js" in this directory in which case it will be wrapped by native code, compiled, and used as your module. This allows you to run pure JavaScript modules that are pre-compiled.

I've created the file like this:

```
function ModuleTest(url){
    if(url){
        return url;
    }
}
exports.ModuleTest = ModuleTest;
```

I'm using the 5.1.2.GA SDK (also tried with 5.3.0.GA) and I can build the module successfully either with `python build.py` or `titanium build --platform iOS --build-only`. Then, in my test app, doing:

```
var test = require('com.moduletest');
var url = new test.ModuleTest('http://url');
```

gives me this error:

[![undefined is not a constructor](https://i.stack.imgur.com/D2sQ2.png)](https://i.stack.imgur.com/D2sQ2.png)

undefined is not a constructor. I've been trying a lot of alternatives but nothing seems to work, and I didn't find any help in the documentation about pre-compiled JS modules for iOS. Actually, the same process works great for Android! Do you have any idea why?

My environment:

XCode 7.3.1
Operating System Name - Mac OS X Version - 10.11.5 Architecture - 64bit # CPUs - 8 Memory - 16.0GB
Node.js Node.js Version - 0.12.7 npm Version - 2.11.3
Appcelerator CLI Installer - 4.2.6 Core Package - 5.3.0
Titanium CLI CLI Version - 5.0.9 node-appc Version - 0.2.31

Maybe this is something related to my Node version or appc CLI, not sure =/

Thank you!
2016/06/29
[ "https://Stackoverflow.com/questions/38101112", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1272263/" ]
There are 2 solutions. 1) Don't put it in assets, but in the `/app/lib` folder as others have mentioned. 2) wrap it as an actual commonjs module, like the [module I wrote](http://github.com/Topener/To.ImageCache) In both cases, you can just use `require('modulename')`. In case 2 you will need to add it to the `tiapp.xml` file just like any other module. The path of your file will come in `/modules/commonjs/modulename/version/module.js` or something similar. My linked module will show you the requirements and paths needed.
I use a slightly different pattern that works excellent: First a small snippet from my "module": ``` Stopwatch = function(listener) { this.totalElapsed = 0; // * elapsed number of ms in total this.listener = (listener != undefined ? listener : null); // * function to receive onTick events }; Stopwatch.prototype.getElapsed = function() { return this.totalElapsed; }; module.exports = Stopwatch; ``` And then this is the way I use it: ``` var StopWatch = require('utils/StopWatch'); var stopWatch = new StopWatch(listenerFunction); console.log('elapsed: ' + stopWatch.getElapsed()); ```
12,849
71,821,635
I have installed two frameworks of Python 3.10. There is `wxPython310` for 64-bit Python. But there aren't any `wxPython` for 32-bit Python. I tried to install `wxPython` with `https://wxpython.org/Phoenix/snapshot-builds/wxPython-4.1.2a1.dev5259+d3bdb143.tar.gz`, but it shows me the error code like this. ``` Running setup.py install for wxPython ... error error: subprocess-exited-with-error × Running setup.py install for wxPython did not run successfully. │ exit code: 1 ╰─> [22 lines of output] C:\Users\tiger\AppData\Local\Programs\Python\Python310-32\lib\site-packages\setuptools\dist.py:717: UserWarning: Usage of dash-separated 'license-file' will not be supported in future versions. Please use the underscore name 'license_file' instead warnings.warn( C:\Users\tiger\AppData\Local\Programs\Python\Python310-32\lib\site-packages\setuptools\dist.py:294: DistDeprecationWarning: use_2to3 is ignored. warnings.warn(f"{attr} is ignored.", DistDeprecationWarning) running install running build C:\Users\tiger\AppData\Local\Temp\pip-req-build-b6xigzyz\build.py:42: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives from distutils.dep_util import newer, newer_group Traceback (most recent call last): File "C:\Users\tiger\AppData\Local\Temp\pip-req-build-b6xigzyz\build.py", line 49, in <module> from buildtools.wxpysip import sip_runner File "C:\Users\tiger\AppData\Local\Temp\pip-req-build-b6xigzyz\buildtools\wxpysip.py", line 20, in <module> from sipbuild.code_generator import (set_globals, parse, generateCode, ModuleNotFoundError: No module named 'sipbuild' WARNING: Building this way assumes that all generated files have been generated already. If that is not the case then use build.py directly to generate the source and perform the build stage. You can use --skip-build with the bdist_* or install commands to avoid this message and the wxWidgets and Phoenix build steps in the future. "C:\Users\tiger\AppData\Local\Programs\Python\Python310-32\python.exe" -u build.py build Command '"C:\Users\tiger\AppData\Local\Programs\Python\Python310-32\python.exe" -u build.py build' failed with exit code 1. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> wxPython note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. ```
2022/04/11
[ "https://Stackoverflow.com/questions/71821635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15512931/" ]
There are some issues with Python 3.10. The easiest way to deal with this situation is to downgrade your python version to 3.9.13. The last wxPython came before Python 3.10 if I am not mistaken. I was going through the same situation and tried a couple of solutions because I did not want to downgrade my python version, but I was wasting too much time that I did not have. So for now just downgrade and wait for new releases, or use Tkinter.
A common problem when installing various versions is which Python interpreter is used for the installation. Make sure you use a compatible version of Python to install wxPython310. Which IDE do you use? In any case, I would recommend making sure the installation is done with the right Python version; if you don't know, reinstall the required versions and make sure the versions you installed support the package. Tutorial for installing: [Link](https://www.tutorialspoint.com/wxpython/wxpython_environment.htm)
12,850
13,827,543
I like to know what was the local variable names when they are passed to a function. I'm not sure whether this is possible at all. Let's consider this example: function definition: ``` def show(x): print(x) ``` usage: ``` a = 10 show(a) ``` this prints 10. But I like to print "a = 10". Is this possible in python?
2012/12/11
[ "https://Stackoverflow.com/questions/13827543", "https://Stackoverflow.com", "https://Stackoverflow.com/users/624074/" ]
I like the [answer to this question](http://docs.python.org/2/faq/programming.html#how-can-my-code-discover-the-name-of-an-object) that's found in the Python programming FAQ, quoting Fredrik Lundh: > > The same way as you get the name of that cat you found on your porch: > the cat (object) itself cannot tell you its name, and it > doesn’t really care – so the only way to find out what it’s called is > to ask all your neighbours (namespaces) if it’s their cat (object)... > > > ....and don’t be surprised if you’ll find that it’s known by many names, or no name at all! > > >
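To see the "ask all your neighbours" idea in code, here is a toy sketch that scans a namespace dictionary for names bound to the object (the function name `names_of` is my own, and this is illustrative only — it will happily report several names, or none):

```python
def names_of(obj, namespace):
    """Return every name in the given namespace bound to exactly this object."""
    return [name for name, value in namespace.items() if value is obj]

a = 10
b = a  # the same object may be known by many names
print(names_of(a, globals()))  # e.g. ['a', 'b'] (small ints are interned in CPython)
```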
Here's an answer that only became possible as of Python 3.6 with f-strings: ``` x = 10 print(f'{x=}') # Outputs x=10 ```
12,851
60,877,741
I'm trying to write a script with python/numpy/scipy for data manipulation, fitting and plotting of angle dependent magnetoresistance measurements. I'm new to Python, got the frame code from my PhD advisor, and managed to add few hundred lines of code to the frame. After a while I noticed that some measurements had multiple blunders, and since the script should do all the manipulation automatically, I tried to mask those points and fit the curve to the unmasked points (the curve is a sine squared superposed on a linear function, so numpy.ma.polyfit isn't really a choice). However, after masking both x and y coordinates of the problematic points, the fitting would still take them into consideration, even though they wouldn't be shown in the plot. The example is simplified, but the same is happening; ``` import numpy.ma as ma import matplotlib.pyplot as plt from scipy.optimize import curve_fit def Funk(x, k, y0): return k*x + y0 fig,ax= plt.subplots() x=ma.masked_array([1,2,3,4,5,6,7,8,9,10],mask=[0,0,0,0,0,0,1,1,1,1]) y=ma.masked_array([1,2,3,4,5,30,35,40,45,50], mask=[0,0,0,0,0,1,1,1,1,1]) fitParamsFunk, fitCovariancesFunk = curve_fit(Funk, x, y) ax.plot(x, Funk(x, fitParamsFunk[0], fitParamsFunk[1])) ax.errorbar(x, y, yerr = None, ms=3, fmt='-o') plt.show() ``` [The second half of the points is masked and not shown in the plot, but still taken into consideration.](https://i.stack.imgur.com/GwYSs.jpg) While writing the post I figured out that I can do this: ``` def Funk(x, k, y0): return k*x + y0 fig,ax= plt.subplots() x=np.array([1,2,3,4,5,6,7,8,9,10]) y=np.array([1,2,3,4,5,30,35,40,45,50]) mask=np.array([0,0,0,0,0,1,1,1,1,1]) fitParamsFunk, fitCovariancesFunk = curve_fit(Funk, x[mask], y[mask]) ax.plot(x, Funk(x, fitParamsFunk[0], fitParamsFunk[1])) ax.errorbar(x, y, yerr = None, ms=3, fmt='-o') plt.show() ``` [What I actually wanted](https://i.stack.imgur.com/eAeWf.jpg) I guess that scipy curve\_fit isn't meant to deal with masked arrays, but I still would like to know whether there is any workaround for this (I need to work with masked arrays because the number of data points is >10e6, but I'm only plotting 100 at once, so I would need to take the mask of the part of the array that I want to plot and assign it to another array, while copying the values of the array to another or setting the original mask to False)? Thanks for any suggestions
2020/03/26
[ "https://Stackoverflow.com/questions/60877741", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13131921/" ]
If you only want to consider the valid entries, you can use the inverse of the mask as an index: ``` x = ma.masked_array([1,2,3,4,5,6,7,8,9,10], mask=[0,0,0,0,0,1,1,1,1,1]) # changed mask y = ma.masked_array([1,2,3,4,5,30,35,40,45,50], mask=[0,0,0,0,0,1,1,1,1,1]) fitParamsFunk, fitCovariancesFunk = curve_fit(Funk, x[~x.mask], y[~y.mask]) ``` PS: Note that both arrays need to have the same number of valid entries.
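As a side note, masked arrays also offer `compressed()`, which returns a plain 1-D ndarray of only the unmasked values, so the same fit can be written without indexing by the mask (a small sketch, reusing `x`, `y` and `Funk` as defined above):

```python
import numpy.ma as ma
from scipy.optimize import curve_fit

# compressed() drops masked entries, returning ordinary 1-D ndarrays
x_valid = x.compressed()
y_valid = y.compressed()
fitParamsFunk, fitCovariancesFunk = curve_fit(Funk, x_valid, y_valid)
```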
The use of a mask in numerical calculus is equivalent to the use of the Heaviside step function in analytical calculus. For example, this becomes very simple when applied to piecewise linear regression: [![enter image description here](https://i.stack.imgur.com/Zg2Z3.gif)](https://i.stack.imgur.com/Zg2Z3.gif) There are several examples of piecewise linear regression in the paper: <https://fr.scribd.com/document/380941024/Regression-par-morceaux-Piecewise-Regression-pdf> Using the method shown in this paper, the very simple calculus below leads to the expected form of result: [![enter image description here](https://i.stack.imgur.com/q7AL2.gif)](https://i.stack.imgur.com/q7AL2.gif) Note: in case of a large number of points, if there were several points with slightly different abscissae in the transition area, it would be more accurate to apply the case considered on pages 29-31 of the paper referenced above.
12,861
46,497,838
We have a Python client that connects to the Amazon S3 via a VPC endpoint. Our code uses boto and we are able to connect and download from S3. After migration from boto to boto3, we noticed that the VPC endpoint connection no longer works. Below is a copy snippet that can reproduce the problem. ```sh python -c "import boto3; s3 = boto3.resource('s3', aws_access_key_id='foo', aws_secret_access_key='bar'); s3.Bucket('some-bucket').download_file('hello-remote.txt', 'hello-local.txt')" ``` got the below error: ``` Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Python27\lib\site-packages\boto3-1.4.0-py2.7.egg\boto3\s3\inject.py", line 163, in bucket_download_file ExtraArgs=ExtraArgs, Callback=Callback, Config=Config) File "C:\Python27\lib\site-packages\boto3-1.4.0-py2.7.egg\boto3\s3\inject.py", line 125, in download_file extra_args=ExtraArgs, callback=Callback) File "C:\Python27\lib\site-packages\boto3-1.4.0-py2.7.egg\boto3\s3\transfer.py ", line 269, in download_file future.result() File "build\bdist.win32\egg\s3transfer\futures.py", line 73, in result File "build\bdist.win32\egg\s3transfer\futures.py", line 233, in result botocore.vendored.requests.exceptions.ConnectionError: ('Connection aborted.', e rror(10060, 'A connection attempt failed because the connected party did not pro perly respond after a period of time, or established connection failed because c onnected host has failed to respond')) ``` Does anyone know if boto3 support connection to S3 via VPC endpoint and/or was able to get it to work? We are using boto3-1.4.0.
2017/09/29
[ "https://Stackoverflow.com/questions/46497838", "https://Stackoverflow.com", "https://Stackoverflow.com/users/456481/" ]
This is most likely a configuration error in your VPC endpoint policies. If your policies are correct, then Boto3 never knows exactly how it's able to reach the S3 location; it really is up to the policies to allow or forbid this type of traffic. Here's a quick walkthrough of what you can do for troubleshooting: <https://aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint/> Other relevant docs: * <https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html> * <https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-access.html#vpc-endpoint-policies> * <https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html#vpc-endpoints-policies-s3>
It depends on your AWS policies and the roles defined. The shortest way to make your code run is to make the S3 bucket public [not recommended]; otherwise, add your IP to the security policies and then re-run the code. Details can be found here: <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html> Use IP whitelisting to secure your AWS Transfer for SFTP servers: <https://aws.amazon.com/blogs/storage/use-ip-whitelisting-to-secure-your-aws-transfer-for-sftp-servers/>
12,863
50,648,152
I want to install a rpm package, (e.g. python 3), and all of its dependencies in a linux server that does not have internet connection. How can I do that?
2018/06/01
[ "https://Stackoverflow.com/questions/50648152", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2128078/" ]
Assuming you have already downloaded the package from another machine that has internet access and FTPed the files to your server, you can use the following command to install an rpm: ``` rpm -ivh package_name_x86_64.rpm ``` options: * i = This installs a new package. * v = Print verbose information. * h = Print 50 hash marks as the package archive is unpacked. You can also check the rpm manual for more options and details.
There is a way, but it is quite tricky and might mess up your servers, so be **very careful**. Nomenclature: * **online** : your system that is connected to the repositories * **offline**: your system that is not connected Steps: Compress your rpm database from the **offline** system and transfer it to the **online** system: ``` cd /var/lib/rpm/ tar -cvzf /tmp/rpmdb.tgz * scp /tmp/rpmdb.tgz root@online:/tmp ``` on your **online** system; replace your rpm db with the one from the **offline** system: ``` cp -r /var/lib/rpm{,.bak} # back up your rpmdb from your online system. Make sure not to lose this!! rm -rf /var/lib/rpm/* cd /var/lib/rpm tar -xvf /tmp/rpmdb.tgz # now your online system pretends to have the rpm database from the offline system. Don't start really installing / uninstalling rpms or you'll break everything ``` now simulate your update with download-only (I didn't run this with yum but with zypper, but it should be similar): ``` zypper up --download-only ``` Now you can fetch all the downloaded packages and they should suffice for updating your offline system And now restore your **online** machine: ``` rm -rf /var/lib/rpm cp -r /var/lib/rpm{.bak,} ```
12,864
71,300,876
Using python elasticsearch-dsl: ``` class Record(Document): tags = Keyword() tags_suggest = Completion(preserve_position_increments=False) def clean(self): self.tags_suggest = { "input": self.tags } class Index: name = 'my-index' settings = { "number_of_shards": 2, } ``` When I index ``` r1 = Record(tags=['my favourite tag', 'my hated tag']) r2 = Record(tags=['my good tag', 'my bad tag']) ``` And when I try to use autocomplete with the word in the middle: ``` dsl = Record.search() dsl = dsl.suggest("auto_complete", "favo", completion={"field": "tags_suggest"}) search_response = dsl.execute() for option in search_response.suggest.auto_complete[0].options: print(option.to_dict()) ``` It won't return anything, but it will when I search "my favo". Any good practices to fix that (make it return 'my favourite tag' when I request suggestions for "favo")?
2022/02/28
[ "https://Stackoverflow.com/questions/71300876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9016861/" ]
With `bash` version >= 3.0 and a regex: ``` [[ "$string" =~ _(.+)\. ]] && echo "${BASH_REMATCH[1]}" ```
This is easy, except that it includes the initial underscore: ``` ls | grep -o "_[^.]*" ```
12,867
49,889,323
I have a script named `patchWidth.py` and it parses command line arguments with `argparse`: ``` # read command line arguments -- the code is able to process multiple files parser = argparse.ArgumentParser(description='angle simulation trajectories') parser.add_argument('filenames', metavar='filename', type=str, nargs='+') parser.add_argument('-vec', metavar='v', type=float, nargs=3) ``` Suppose this script is run with the following: ``` >>> python patchWidth.py file.dat -vec 0. 0. 1. ``` Is there a way to get this entire thing as a string in python? I would like to be able to print to the output file what command was run with what arguments.
2018/04/18
[ "https://Stackoverflow.com/questions/49889323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2112406/" ]
Yes, you can use the sys module: ``` import sys str(sys.argv) # arguments as string ``` Note that `argv[0]` is the script name. For more information, take a look at the [sys module documentation](https://docs.python.org/3/library/sys.html#sys.argv).
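If the goal is to write a string to the output file that can be pasted back into a shell, a small sketch (Python 3; `shlex.quote` handles arguments containing spaces) might look like this:

```python
import sys
import shlex

# Reconstruct a shell-safe version of the invocation for logging
command_line = ' '.join(shlex.quote(arg) for arg in sys.argv)
print(command_line)  # e.g. patchWidth.py file.dat -vec 0. 0. 1.
```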
I do not know if it would be the best option, but... ``` import sys " ".join(sys.argv) ``` Will return a string like `/the/path/of/file/my_file.py arg1 arg2 arg3`
12,877
70,899,538
Right now I have an Arraylist in java. When I call ``` myarraylist.get(0) myarraylist.get(1) myarraylist.get(2) [0, 5, 10, 16] [24, 29, 30, 35, 41, 45, 50] [0, 6, 41, 45, 58] ``` are all different lists. What I need to do is get the first and second element of each of these lists, and put it in a list, like so: ``` [0,5] [24,29] [0,6] ``` I have tried different for loops and it seems like there is an easy way to do this in python but not in java.
2022/01/28
[ "https://Stackoverflow.com/questions/70899538", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17215849/" ]
`List<Integer> sublist = myarraylist.subList(0, 2);` For `List#subList(int fromIndex, int toIndex)` the `toIndex` is exclusive. Therefore, to get the first two elements (indexes 0 and 1), the `toIndex` value has to be 2.
Try reading about Java 8 Stream API, specifically: * [map method](https://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html#map-java.util.function.Function-) * [collect method](https://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html#collect-java.util.stream.Collector-) This should help you achieve what you need.
12,880
24,090,225
Best way to remove all characters of a string until new line character is met python? ``` str = 'fakeline\nfirstline\nsecondline\nthirdline' into str = 'firstline\nsecondline\nthirdline' ```
2014/06/06
[ "https://Stackoverflow.com/questions/24090225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3388884/" ]
Get the index of the newline and use it to [slice](https://stackoverflow.com/questions/509211/pythons-slice-notation) the string: ``` >>> s = 'fakeline\nfirstline\nsecondline\nthirdline' >>> s[s.index('\n')+1:] # add 1 to get the character after the newline 'firstline\nsecondline\nthirdline' ``` Also, don't name your string `str` as it shadows the built in `str` function. **Edit:** Another way (from Valentin Lorentz's comment): ``` s.split('\n', 1)[1] ``` I like this better than my answer. It's splits the string just once and grabs the latter half of the split.
`str.split("\n")` gives a list of all the newline-delimited segments. You can simply rejoin the ones you want afterwards. For your case, you can use a slice and rejoin with `"\n"` so the remaining line breaks are preserved: ``` newstr = "\n".join(str.split("\n")[1:]) ```
12,882
42,136,707
Hello I'm trying to make an live info screen to a school project, I'm reading through a file which does a lot of different thing which depending of what line it's reading. ``` dclist = [] interface = "" vrfmem = "" db = sqlite3.connect('data/main.db') cursor = db.cursor() cursor.execute('''SELECT r1 FROM routers''') all_rows = cursor.fetchall() for row in all_rows: dclist.append(row[0]) for items in dclist: f = open('data/'+ items + '.txt', 'r+') for line in f: if 'interface Vlan' in line: interface = re.search(r'(?<=\interface Vlan).*', line).group(0) if 'vrf member' in line.next(): vrfmem = interface = re.search(r'(?<=\vrf member).*', line).group(0) else: vrfmem = "default" if 'ip address' in line: print(items + interface + vrfmem + "ip her" ) db.commit() db.close() ``` As seen in the code, every line in my document i want to check the next line because if it matches a certain string, i set a variable. from what i could read myself to, python has a built in function next() that is suppost to be able to do the job for me. But when i run my code im presented with `AttributeError: 'str' object has no attribute 'next' `
2017/02/09
[ "https://Stackoverflow.com/questions/42136707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4448852/" ]
You install `gulp` globally so you can use the simple `gulp` command in your terminal, and you install `gulp` locally (as a `package.json` dependency) so you don't lose the dependency: you can move your project to any computer, call `npm i`, and access `gulp` with `./node_modules/.bin/gulp` without any additional installations.
You don't even need to have installed `gulp` globally. Just have it locally and put gulp commands in `package.json` scripts like this: ``` "scripts": { "start": "gulp", "speed-test": "gulp speed-test -v", "build-prod": "gulp build-prod", "test": "NODE_ENV=test jasmine JASMINE_CONFIG_PATH=spec/support/jasmine.json" }, ``` Then everyone working on the same project can just `npm install` and start running commands without even having gulp globally installed. * `npm start` will run `gulp` * `npm run speed-test` will run `gulp speed-test -v` * `npm run build-prod` will run `gulp build-prod` And of course add as many commands as you want there. And if someone on the team has or wants to have `gulp` globally, then they can run `gulp` commands directly from the terminal.
12,884
60,445,740
I have an Excel file which generates a chart based on the data available; the chart name is `thisChart`. I want to copy `thisChart` from the Excel file to the PPT file. Now I know 2 ways to do that, i.e. VBA and Python (using win32com.client). The problem with VBA is that it's really time consuming and it randomly crashes, thus needing constant supervision, so I planned to do the same using Python. After researching I found out about `win32com.client` in Python which allowed me to do the same. I used the following script to do so.

```
# Grab the Active Instance of Excel.
ExcelApp = win32com.client.GetActiveObject("Excel.Application")
ExcelApp.Visible = True

# Grab the workbook with the charts.
xlWorkbook = ExcelApp.Workbooks.Open(r'C:\Users\prashant.kumar\Desktop\testxl.xlsx')

# Create a new instance of PowerPoint and make sure it's visible.
PPTApp = win32com.client.gencache.EnsureDispatch("PowerPoint.Application")
PPTApp.Visible = True

# Add a presentation to the PowerPoint Application, returns a Presentation Object.
PPTPresentation = PPTApp.Presentations.Add()

# Loop through each Worksheet.
for xlWorksheet in xlWorkbook.Worksheets:

    # Grab the ChartObjects Collection for each sheet.
    xlCharts = xlWorksheet.ChartObjects()

    # Loop through each Chart in the ChartObjects Collection.
    for index, xlChart in enumerate(xlCharts):

        # Each chart needs to be on it's own slide, so at this point create a new slide.
        PPTSlide = PPTPresentation.Slides.Add(Index=index + 1, Layout=12)  # 12 is a blank layout

        # Display something to the user.
        print('Exporting Chart {} from Worksheet {}'.format(xlChart.Name, xlWorksheet.Name))

        # Copy the chart.
        xlChart.Copy()

        # Paste the Object to the Slide
        PPTSlide.Shapes.PasteSpecial(DataType=1)

# Save the presentation.
PPTPresentation.SaveAs(r"C:\Users\prashant.kumar\Desktop\outppt")
```

but it pastes the chart as an image, whereas I want the chart to be pasted as an interactive chart (just how it is in the Excel file). The reason is that the quality deteriorates and it does not give me much flexibility to add minor modifications to the chart in the PPT in future when needed. Here is a comparison of the 2 outputs [![enter image description here](https://i.stack.imgur.com/6ljWE.png)](https://i.stack.imgur.com/6ljWE.png) The quality difference can be seen here, and it gets worse when I zoom in. Now my question is: is there any way to paste the chart from Excel to PPT in the chart format using Python, or any other way which is faster than VBA? PS. I don't want to read the Excel file data and generate the chart in Python and then paste it to PPT, since the actual charts are really complicated and would probably be very hard to make.
2020/02/28
[ "https://Stackoverflow.com/questions/60445740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6372189/" ]
You can use `CASE` clause to differential which Unit need to be displayed. For example: ``` SELECT (CASE WHEN price_col >= 1000000 THEN CONCAT(price_col/100000,'B') WHEN price_col >= 100000 THEN CONCAT(price_col/100000,'M') WHEN price_col >= 1000 THEN CONCAT(price_col/1000,'K') ELSE price_col END) as new_price_col FROM Table ```
```
SELECT (CASE
          WHEN length(price) = 9 THEN CONCAT(price/100000, 'M')
          ELSE (CASE WHEN length(price) = 10 THEN CONCAT(price/1000000, 'B') END)
        END) AS price
```
12,885
49,059,461
I have the following Python dict: ``` { 'parameter_010': False, 'parameter_009': False, 'parameter_008': False, 'parameter_005': 'C<sub>MAX</sub>', 'parameter_004': 'L', 'parameter_007': False, 'parameter_006': 'R', 'parameter_001': 'Foo', 'id': 7542, 'parameter_003': 'D', 'parameter_002': 'M' } ``` As seen, there are a number of fields named `parameter_nnn` where `nnn` is a sequential number. Some are `False` and others have values populated. I would like to generate a list with just the `parameter_nnn` field values, but only the ones which contain a given value, sorted by number from `001` upwards. So in this specific case the desired output is: ``` ["Foo", "M", "D", "L", "CMAX", "R"] ``` What would be the pythonic way of doing this? I obviously can start iterating, but I am wondering if there is something better than that. Python 2.7
2018/03/01
[ "https://Stackoverflow.com/questions/49059461", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5328289/" ]
So, assuming you know that you are working with a JSON and how to deserialize: ``` >>> import json >>> s = """{ ... "parameter_010": false, ... "parameter_009": false, ... "parameter_008": false, ... "parameter_005": "CMAX", ... "parameter_004": "L", ... "parameter_007": false, ... "parameter_006": "R", ... "parameter_001": "Foo", ... "id": 7542, ... "parameter_003": "D", ... "parameter_002": "M" ... }""" >>> d = json.loads(s) ``` If your `parameter_nnn` always and strictly follow this format, you can simply sort the items filtered by your requirements (since lexicographical sorting is what you want!): ``` >>> sorted([(k,v) for k, v in d.items() if v and k.startswith('parameter')]) [('parameter_001', 'Foo'), ('parameter_002', 'M'), ('parameter_003', 'D'), ('parameter_004', 'L'), ('parameter_005', 'CMAX'), ('parameter_006', 'R')] ``` If you just want the values, just do another pass: ``` >>> [v for _,v in sorted([(k,v) for k, v in d.items() if v and k.startswith('parameter')])] ['Foo', 'M', 'D', 'L', 'CMAX', 'R'] >>> ``` Note, you are going to have to loop somehow... A more readable version: ``` >>> selection = [(k,v) for k, v in d.items() if v and k.startswith('parameter')] >>> [v for _,v in sorted(selection)] ['Foo', 'M', 'D', 'L', 'CMAX', 'R'] ``` ### EDIT: Major Caveat Note, if the values can be `0` or any other falsy value that you actually want, then this won't work, so for example: ``` >>> pprint(d) {'id': 7542, 'parameter_001': 'Foo', 'parameter_002': 'M', 'parameter_003': 'D', 'parameter_004': 'L', 'parameter_005': 'CMAX', 'parameter_006': 'R', 'parameter_007': False, 'parameter_008': False, 'parameter_009': False, 'parameter_010': False, 'parameter_011': 0} >>> selection = [(k,v) for k, v in d.items() if v and k.startswith('parameter')] >>> [v for _, v in sorted(selection)] ['Foo', 'M', 'D', 'L', 'CMAX', 'R'] ``` So if you want to filter instances of *`False`* specifically (and not `0`) then you have to use *`is`*: ``` >>> selection = [(k,v) for k, v in d.items() if v is not False and k.startswith('parameter')] >>> [v for _, v in sorted(selection)] ['Foo', 'M', 'D', 'L', 'CMAX', 'R', 0] ```
Here is one solution: ``` list(zip(*sorted(i for i in d.items() if i[0].startswith('parameter') and i[1])))[1] # ('Foo', 'M', 'D', 'L', 'C<sub>MAX</sub>', 'R') ``` **Explanation** * We filter for 2 conditions: key starts with 'parameter' and value is truthy. * `sorted` on `d.items()` returns a list of tuples sorted by dictionary key. * `list(zip(*..))[1]` returns a tuple of values after the previous filtering and sorting. * I haven't dealt with `<sub></sub>` as I have no idea where this is from and what logic should be applied to remove this (and other?) tagging.
12,886
13,637,150
I am trying to call an .exe file that's not in my local Python directory using `subprocess.call()`. The command (as I type it into cmd.exe) is exactly as follows: `"C:\Program Files\R\R-2.15.2\bin\Rscript.exe" --vanilla C:\python\buyback_parse_guide.r` The script runs, does what I need to do, and I have confirmed the output is correct. Here's my python code, which I thought would do the exact same thing: ``` ## Set Rcmd Rcmd = r'"C:\Program Files\R\R-2.15.2\bin\Rscript.exe"' ## Set Rargs Rargs = r'--vanilla C:\python\buyback_parse_guide.r' retval = subprocess.call([Rcmd,Rargs],shell=True) ``` When I call `retval` in my Python console, it returns `1` and the .R script doesn't run, but I get no errors. I'm pretty sure this is a really simple syntax error... help? Much thanks!
2012/11/30
[ "https://Stackoverflow.com/questions/13637150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/489426/" ]
To quote [the docs](http://docs.python.org/2/library/subprocess.html#popen-constructor): > > If shell is True, it is recommended to pass args as a string rather than as a sequence. > > > Splitting it up (either manually, or via `shlex`) just so `subprocess` can recombine them so the shell can split them again is silly. I'm not sure why you think you need `shell=True` here. (If you don't have a good reason, you generally don't want it…) But even without `shell=True`: > > On Windows, if args is a sequence, it will be converted to a string in a manner described in Converting an argument sequence to a string on Windows. This is because the underlying CreateProcess() operates on strings. > > > So, just give the shell the command line: ``` Rcmd = r'"C:\Program Files\R\R-2.15.2\bin\Rscript.exe" --vanilla C:\python\buyback_parse_guide.r' retval = subprocess.call(Rcmd, shell=True) ```
According to [the docs](http://stat.ethz.ch/R-manual/R-patched/library/utils/html/Rscript.html), Rscript: > > … is an alternative front end for use in #! scripts and other scripting applications. > > > … is convenient for writing #! scripts… (The standard Windows command line has no concept of #! scripts, but Cygwin shells do.) > > > … is only supported on systems with the execv system call. > > > So, it is not the way to run R scripts from another program under Windows. [This answer](https://stackoverflow.com/questions/3412911/difference-between-r-exe-rcmd-exe-rscript-exe-and-rterm-exe) says: > > Rscript.exe is your friend for batch scripts… For everything else, there's R.exe > > > So, unless you have some good reason to be using Rscript outside of a batch script, you should switch to R.exe. You may wonder why it works under cmd.exe, but not from Python. I don't know the answer to that, and I don't think it's worth digging through code or experimenting to find out, but I can make some guesses. One possibility is that when you're running from the command line, that's a `cmd.exe` that controls a terminal, while when you're running from `subprocess.call(shell=True)` or `os.system`, that's a headless `cmd.exe`. Running a .bat/.cmd batch file gets you a non-headless `cmd`, but running `cmd` directly from another app does not. R has historically had all kinds of complexities dealing with the Windows terminal, which is why they used to have separate Rterm.exe and Rcmd.exe tools. Nowadays, those are both merged into R.exe, and it should work just fine either way. But if you try doing things the docs say not to do, that may not be tested, it's perfectly reasonable that it may not work. At any rate, it doesn't really matter why it works in some situations even though it's not documented to. That certainly doesn't mean it should work in other situations it's not documented to work in, or that you should try to force it to do so. Just do the right thing and run `R.exe` instead of `Rscript.exe`. Unless you have some information that contradicts everything I've found in the documentation and everywhere else I can find, I'm placing my money on Rscript.exe itself being the problem. You'll have to read the documentation on the invocation differences between `Rscript.exe` and `R.exe`, but they're not identical. According to [the intro docs](http://cran.r-project.org/doc/manuals/R-intro.html#Scripting-with-R),: > > If you just want to run a file foo.R of R commands, the recommended way is to use R CMD BATCH foo.R > > > According to your comment above: > > When I type "C:\R\R-2.15.2\bin\i386\R.exe" CMD BATCH C:\python\buyback\_parse\_guide.r into cmd.exe, the .R script runs successfully. What's the proper syntax for passing this into python? > > > That depends on the platform. On Windows, a list of arguments gets turned into a string, so you're better off just using a string so you don't have to debug the joining; on Unix, a string gets split into a list of arguments, so you're better off using a list so you don't have to debug the joining. Since there are no spaces in the path, I'd take the quotes out. So: ``` rcmd = r'C:\R\R-2.15.2\bin\i386\R.exe CMD BATCH C:\python\buyback_parse_guide.r' retval = subprocess.call(rcmd) ```
12,888
63,106,413
I'm trying to find a python solution to extract the length of a specific sequence within a fasta file using the full header of the sequence as the query. The full header is stored as a variable earlier in the pipeline (i.e. "CONTIG"). I would like to save the output of this script as a variable to then use later on in the same pipeline. Below is an updated version of the script using code provided by Lucía Balestrazzi. Additional information: The following with-statement is nested inside a larger for-loop that cycles through subsamples of an original genome. The first subsample fasta in my directory has a single sequence ">chr1:0-40129801" with a length of 40129801. I'm trying to write out a text file "OUTPUT" that has some basic information about each subsample fasta. This text file will be used as an input for another program downstream. Header names in the original fasta file are chr1, chr2, etc... while the header names in the subsample fastas are something along the lines of: batch1.fa >chr1:0-40k batch2.fa >chr1:40k-80k ...etc... ``` import Bio.SeqIO as IO record_dict = IO.to_dict(IO.parse(ORIGINAL_GENOME, "fasta")) #not the subsample with open(GENOME_SUBSAMPLE, 'r') as FIN: for LINE in FIN: if LINE.startswith('>'): #Example of "LINE"... >chr1:0-40129801 HEADER = re.sub('>','',LINE) #HEADER = chr1:0-40129801 HEADER2 = re.sub('\n','',HEADER) #HEADER2 = chr1:0-40129801 (no return character on the end) CONTIG = HEADER2.split(":")[0] #CONTIG = chr1 PART2_HEADER = HEADER2.split(":")[1] #PART2_HEADER = 0-40129801 START = int(PART2_HEADER.split("-")[0]) #START = 0 END = int(PART2_HEADER.split("-")[1]) #END = 40129801 LENGTH = END-START #LENGTH = 40129801 minus 0 = 40129801 #This is where I'm stuck... ORIGINAL_CONTIG_LENGTH = len(record_dict[CONTIG]) #This returns "KeyError: 1" #ORIGINAL_CONTIG_LENGTH = 223705999 (this is from the full genome, not the subsample). OUTPUT.write(str(START) + '\t' + str(HEADER2) + '\t' + str(LENGTH) + '\t' + str(CONTIG) + '\t' + str(ORIGINAL_CONTIG_LENGTH) + '\n') #OUTPUT = 0 chr1:0-40129801 40129801 chr1 223705999 OUTPUT.close() ``` I'm relatively new to bioinformatics. I know I'm messing up on how I'm using the dictionary, but I'm not quite sure how to fix it. Any advice would be greatly appreciated. Thanks!
2020/07/26
[ "https://Stackoverflow.com/questions/63106413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7614836/" ]
You can do it this way: ``` import Bio.SeqIO as IO record_dict = IO.to_dict(IO.parse("genome.fa", "fasta")) print(len(record_dict["chr1"])) ``` or ``` import Bio.SeqIO as IO record_dict = IO.to_dict(IO.parse("genome.fa", "fasta")) seq = record_dict["chr1"] print(len(seq)) ``` EDIT: Alternative code

```
import Bio.SeqIO as IO
record_dict = IO.to_dict(IO.parse("genome.fa", "fasta"))
names = record_dict.keys()

for HEADER in names: #HEADER = chr1:0-40129801
    ORIGINAL_CONTIG_LENGTH = len(record_dict[HEADER])
    CONTIG = HEADER.split(":")[0] #CONTIG = chr1
    PART2_HEADER = HEADER.split(":")[1] #PART2_HEADER = 0-40129801
    START = int(PART2_HEADER.split("-")[0])
    END = int(PART2_HEADER.split("-")[1])
    LENGTH = END-START
```

The idea is that you define the dict once, get its keys (all the contig headers) and store them as a variable, and then loop through the headers extracting the info you need. No need to loop through the file. Cheers
This works, just changed the "CONTIG" variable to a string. Thanks Lucía for all your help the last couple of days! ``` import Bio.SeqIO as IO record_dict = IO.to_dict(IO.parse(ORIGINAL_GENOME, "fasta")) #not the subsample with open(GENOME_SUBSAMPLE, 'r') as FIN: for LINE in FIN: if LINE.startswith('>'): #Example of "LINE"... >chr1:0-40129801 HEADER = re.sub('>','',LINE) #HEADER = chr1:0-40129801 HEADER2 = re.sub('\n','',HEADER) #HEADER2 = chr1:0-40129801 (no return character on the end) CONTIG = HEADER2.split(":")[0] #CONTIG = chr1 PART2_HEADER = HEADER2.split(":")[1] #PART2_HEADER = 0-40129801 START = int(PART2_HEADER.split("-")[0]) #START = 0 END = int(PART2_HEADER.split("-")[1]) #END = 40129801 LENGTH = END-START #LENGTH = 40129801 minus 0 = 40129801 #This is where I'm stuck... ORIGINAL_CONTIG_LENGTH = len(record_dict[str(CONTIG)]) #ORIGINAL_CONTIG_LENGTH = 223705999 (this is from the full genome, not the subsample). OUTPUT.write(str(START) + '\t' + str(HEADER2) + '\t' + str(LENGTH) + '\t' + str(CONTIG) + '\t' + str(ORIGINAL_CONTIG_LENGTH) + '\n') #OUTPUT = 0 chr1:0-40129801 40129801 chr1 223705999 OUTPUT.close() ```
12,889
35,846,943
I was creating a function to compute trimmed mean. To do this I removed highest and lowest percent of data and then the mean is computed as usual. What I have so far is : ``` def trimmed_mean(data, percent): from numpy import percentile if percent < 50: data_trimmed = [i for i in data if i > percentile(data, percent) and i < percentile(data, 100-percent)] else: data_trimmed = [i for i in data if i < percentile(data, percent) and i > percentile(data, 100-percent)] return sum(data_trimmed) / float(len(data_trimmed)) ``` But I do get the wrong result. So, for `[37, 33, 33, 32, 29, 28, 28, 23, 22, 22, 22, 21, 21, 21, 20, 20, 19, 19, 18, 18, 18, 18, 16, 15, 14, 14, 14, 12, 12, 9, 6]` by 10% mean should be `20.16` while I get `20.0`. Is there any other way to do removing top and bottom data in python? Or is there anything else that I have done wrong?
2016/03/07
[ "https://Stackoverflow.com/questions/35846943", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4408820/" ]
You can take a look at this related question: [Trimmed Mean with Percentage Limit in Python?](https://stackoverflow.com/questions/19441730/trimmed-mean-with-percentage-limit-in-python) In short, for scipy version > 0.14.0 the following does the job: ``` from scipy import stats m = stats.trim_mean(X, percentage) ``` If you do not want to have a dependency on an external library, then you can of course revert to an approach as shown in Chip Grandits' answer.
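For reference, applying this to the data from the question reproduces the expected value; with a cut proportion of 0.1, `trim_mean` drops floor(31 * 0.1) = 3 values from each tail before averaging:

```python
from scipy import stats

data = [37, 33, 33, 32, 29, 28, 28, 23, 22, 22, 22, 21, 21, 21, 20, 20,
        19, 19, 18, 18, 18, 18, 16, 15, 14, 14, 14, 12, 12, 9, 6]

# trims 3 values from each end of the sorted data, then averages the rest
print(stats.trim_mean(data, 0.1))  # 20.16
```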
I would suggest sorting the array first and then just taking a "slice in the middle." ``` #some "fancy" numpy sort or even just plain old sorted() sorted_data = sorted(data) n = len(sorted_data) outliers = n * percent // 100 #integer division; may want some rounding logic if n is small trimmed_data = sorted_data[outliers: n-outliers] ```
12,890
73,603,035
We know we can use `sep.join()` or `+=` to concatenate strings. For example, ``` a = ["123f", "asd", "y"] print("".join(a)) # output: 123fasdy ``` In Java, StringBuilder would create a new string and put the two strings on both sides of the plus together, so it will cost `O(n^2)`. But in Python, how does the `join` method do a multiway merge? A similar question is [How python implements concatenation?](https://stackoverflow.com/questions/56522126/how-python-implements-concatenation) It explains `+=` for a two-way merge.
2022/09/04
[ "https://Stackoverflow.com/questions/73603035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16395591/" ]
For CPython version 3.x you can see the source code [here](https://github.com/python/cpython/blob/main/Objects/stringlib/join.h), and it does indeed calculate the total length beforehand and only does 1 allocation. On a side note, if your application is limited by the speed of joining strings to the point that you have to think about the join implementation, then you shouldn't be using Python; go for C++ instead.
The operation is O(n). `join` takes an iterable. If it's not already a sequence, `join` will create one. Then, using the size of the separator and the size of each string in the list, a new string object is created. A series of `memcpy` calls then fills in the object. Creating the list, getting the sizes and doing the `memcpy` are all linear. `+=` is faster and still O(n). A new object the size of the two strings to be concatenated is created, and 2 `memcpy` calls do the work. Of course it is only concatenating two strings. If you want to do more, `join` soon becomes the better option.
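A quick, informal way to see the difference is to time both approaches. This is just a sketch using `timeit`; exact numbers will vary by machine and Python version:

```python
import timeit

parts = ["x"] * 10_000

def with_join():
    # one pass: total size computed up front, single allocation
    return "".join(parts)

def with_concat():
    s = ""
    for p in parts:
        s += p  # CPython can sometimes resize in place, but this is not guaranteed
    return s

print(timeit.timeit(with_join, number=100))
print(timeit.timeit(with_concat, number=100))
```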
12,893
20,169,509
Can I import a Word document into a Python program so that its content can be read and questions can be answered using the data in the document? What would be the procedure for using the data in the file? ``` with open('animal data.txt', 'r') ``` I used this but it is not working
2013/11/24
[ "https://Stackoverflow.com/questions/20169509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2994135/" ]
XE16 supports OpenGL in live cards. Use the class GlRenderer: <https://developers.google.com/glass/develop/gdk/reference/com/google/android/glass/timeline/GlRenderer>
I would look at your app and determine if you want to have more user input or not and whether you want it to live in a specific part of your Timeline or just have it be launched when the user wants it. Specifically, since Live Cards live in the Timeline, they will not be able to capture the swipe backward or swipe forwards gestures since those navigate the Timeline. See the "When to use Live Cards" section of: <https://developers.google.com/glass/develop/gdk/ui/index> If you use an Immersion, however you will be able to use those swipe backwards and forwards gestures as well as these others: <https://developers.google.com/glass/develop/gdk/input/touch> This will give you complete control over the UI and touchpad, with the exception that swipe down should exit your Immersion. The downside is that once the user exits your Immersion, they will need to start it again likely with a Voice Trigger, whereas a Live Card can live on in part of your Timeline. You should be able to do your rendering in both a Surface, which a LiveCard can use or in whatever View you choose to put in your Activity which is what an Immersion is. GLSurfaceView for example may be what you need and that internally uses a Surface: <http://developer.android.com/guide/topics/graphics/opengl.html> Note that you'll want to avoid RemoteViews but I think you already figured that out.
12,894
10,443,295
So I have a set of data which I am able to convert to form separate numpy arrays of R, G, B bands. Now I need to combine them to form an RGB image. I tried 'Image' to do the job but it requires 'mode' to be attributed. I tried to do a trick. I would use Image.fromarray() to take the array to an image, but it attains 'F' mode by default, while Image.merge requires 'L' mode images to merge. If I declare the mode attribute in fromarray() as 'L' in the first place, all the R G B images become distorted. But, if I save the images and then open them and then merge, it works fine. Image reads the image with 'L' mode. Now I have two issues. First, I don't think it is an elegant way of doing the work. So if anyone knows a better way of doing it, please tell me. Secondly, Image.SAVE is not working properly. Following are the errors I face: ``` In [7]: Image.SAVE(imagefile, 'JPEG') ---------------------------------------------------------------------------------- TypeError Traceback (most recent call last) /media/New Volume/Documents/My own works/ISAC/SAMPLES/<ipython console> in <module>() TypeError: 'dict' object is not callable ``` Please suggest solutions. And please mind that the image is around a 4000x4000 size array.
2012/05/04
[ "https://Stackoverflow.com/questions/10443295", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1372149/" ]
Your distortion, I believe, is caused by the way you are splitting your original image into its individual bands and then resizing it again before putting it into merge:

```
image=Image.open("your image")
print(image.size) #size is inverted, i.e. columns first, rows second, e.g.: 500,250

#convert to array
li_r=list(image.getdata(band=0))
arr_r=np.array(li_r,dtype="uint8")
li_g=list(image.getdata(band=1))
arr_g=np.array(li_g,dtype="uint8")
li_b=list(image.getdata(band=2))
arr_b=np.array(li_b,dtype="uint8")

# reshape
reshaper=arr_r.reshape(250,500) #size flipped so it reshapes correctly
reshapeb=arr_b.reshape(250,500)
reshapeg=arr_g.reshape(250,500)

imr=Image.fromarray(reshaper,mode=None) # mode I
imb=Image.fromarray(reshapeb,mode=None)
img=Image.fromarray(reshapeg,mode=None)

#merge
merged=Image.merge("RGB",(imr,img,imb))
merged.show()
```

This works well!
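If the bands are already NumPy arrays, a shorter sketch is to stack them and convert once. This assumes three equally shaped `uint8` arrays named `arr_r`, `arr_g`, `arr_b` as in the snippet above:

```python
import numpy as np
from PIL import Image

# Stack the three 2-D bands into one (H, W, 3) array, then convert in one step
rgb = np.dstack((arr_r.reshape(250, 500),
                 arr_g.reshape(250, 500),
                 arr_b.reshape(250, 500))).astype("uint8")
merged = Image.fromarray(rgb, mode="RGB")
merged.show()
```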
If you are using a PIL Image, convert it to an array and then proceed as below; with matplotlib or cv2 you can work on the array directly. ``` image = cv2.imread('')[:,:,::-1] image_2 = image[10:150,10:100] print(image_2.shape) img_r = image_2[:,:,0] img_g = image_2[:,:,1] img_b = image_2[:,:,2] image_2 = img_r*0.2989 + 0.587*img_g + 0.114*img_b image[10:150,10:100,0] = image_2 image[10:150,10:100,1] = image_2 image[10:150,10:100,2] = image_2 plt.imshow(image,cmap='gray') ```
12,895
58,642,357
I am trying to automate the login to the following page using selenium: <https://services.cal-online.co.il/Card-Holders/SCREENS/AccountManagement/Login.aspx?ReturnUrl=%2fcard-holders%2fScreens%2fAccountManagement%2fHomePage.aspx> Trying to find the elements of username and password using both id, css selector and xpath didn't work. ``` self._web_driver.find_element_by_xpath('//*[@id="txt-login-username"]') self._web_driver.find_element_by_id("txt-login-password") self._web_driver.find_element_by_css_selector('#txt-login-username') ``` For all three I get NoSuchElement exception I tried the following JS script: `document.getElementById('txt-login-username')` when I run this script in selenium or in firefox it returns Null but when I run it in chrome console I get a result I can use. Is there any way to make it work from the python code or to run this on the chrome console itself and not from the python execute\_script?
2019/10/31
[ "https://Stackoverflow.com/questions/58642357", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9608607/" ]
To automate the login to the [page](https://services.cal-online.co.il/Card-Holders/SCREENS/AccountManagement/Login.aspx?ReturnUrl=%2fcard-holders%2fScreens%2fAccountManagement%2fHomePage.aspx) using [Selenium](https://stackoverflow.com/questions/54459701/what-is-selenium-and-what-is-webdriver/54482491#54482491) as the the desired elements are within an `<iframe>` so you have to: * Induce *WebDriverWait* for the desired *frame to be available and switch to it*. * Induce *WebDriverWait* for the desired *element to be clickable*. * You can use the following solution: + Code Block: ``` from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC options = webdriver.ChromeOptions() options.add_argument("start-maximized") driver = webdriver.Chrome(options=options, executable_path=r'C:\Utility\BrowserDrivers\chromedriver.exe') driver.get("https://services.cal-online.co.il/Card-Holders/SCREENS/AccountManagement/Login.aspx?ReturnUrl=%2fcard-holders%2fScreens%2fAccountManagement%2fHomePage.aspx") WebDriverWait(driver, 10).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,"//iframe[@id='calconnectIframe']"))) WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//input[@id='txt-login-username']"))).send_keys("ariel6653") driver.find_element_by_xpath("//input[@id='txt-login-password']").send_keys("ariel6653") ``` Browser Snapshot: [![cal-online](https://i.stack.imgur.com/DJ6PV.png)](https://i.stack.imgur.com/DJ6PV.png) > > Here you can find a relevant discussion on [Ways to deal with #document under iframe](https://stackoverflow.com/questions/53203417/ways-to-deal-with-document-under-iframe) > > >
Found a solution to the problem. The problem really was that the object is inside an iframe. I tried to use the solution suggested in [Get element from within an iFrame](https://stackoverflow.com/questions/1088544/get-element-from-within-an-iframe) but got a security error. The solution is to switch frames the following way: `driver.switch_to.frame("iframe")` and now you can use the normal find element calls.
12,905
29,574,698
I'm looking to split a given string into a list with elements of equal length, I have found a code segment that works in versions earlier than python 3 which is the only version I am familiar with. ``` string = "abcdefghijklmnopqrstuvwx" string = string.Split(0 - 3) print(string) >>> ["abcd", "efgh", "ijkl", "mnop", "qrst", "uvwx"] ``` When run in python 3 it returns the following error message: ``` TypeError: Can't convert 'int' object to str implicitly ``` What changes can I make to make this compatible with python 3?
2015/04/11
[ "https://Stackoverflow.com/questions/29574698", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4776196/" ]
You could try the `.clip()` function. You can use `.save()` to save the state to `.restore()` after the clip so it isn't destructive. You can set the path to whatever you would like and it will create a vector mask of that shape. ``` var canvas = document.getElementById('myCanvas'); var context = canvas.getContext('2d'); var img = document.createElement('IMG'); img.onload = function () { context.save(); context.beginPath(); context.moveTo(0, 0); context.lineTo(0, 100); context.lineTo(70, 200); context.lineTo(100, 0); context.closePath(); context.clip(); context.drawImage(img, 0, 0); context.restore(); } img.src = "https://www.noao.edu/image_gallery/images/d2/m51-400.jpg"; ``` [See Fiddle here](http://jsfiddle.net/p7b0qasm/2/).
Try something like this ``` context.fillStyle = "rgba(255, 255, 255, 1)"; context.fillRect(0, 100, 400, 400); context.fillStyle = "rgba(255, 255, 255, 1)"; context.fillRect(100, 0, 400, 400); ``` <http://jsfiddle.net/xqzxawyb/1/>
12,906
68,402,859
I am using python API to save and download model from MinIO. This is a MinIO installed on my server. The data is in binary format. ``` a = 'Hello world!' a = pickle.dumps(a) client.put_object( bucket_name='my_bucket', object_name='my_object', data=io.BytesIO(a), length=len(a) ) ``` I can see object saved through command line : ``` mc cat origin/my_bucket/my_object Hello world! ``` However, when i try to get it through Python API : ``` response = client.get_object( bucket_name = 'my_bucket', object_name= 'my_object' ) ``` response is a urllib3.response.HTTPResponse object here. I am trying to read it as : ``` response.read() b'' ``` I get a blank binary string. How can I read this object? It won't be possible for me to know its length at the time of reading it. and here is `response.__dict__` : {'headers': HTTPHeaderDict({'Accept-Ranges': 'bytes', 'Content-Length': '27', 'Content-Security-Policy': 'block-all-mixed-content', 'Content-Type': 'application/octet-stream', 'ETag': '"75687-1"', 'Last-Modified': 'Fri, 16 Jul 2021 14:47:35 GMT', 'Server': 'MinIO/DEENT.T', 'Vary': 'Origin', 'X-Amz-Request-Id': '16924CCA35CD', 'X-Xss-Protection': '1; mode=block', 'Date': 'Fri, 16 Jul 2021 14:47:36 GMT'}), 'status': 200, 'version': 11, 'reason': 'OK', 'strict': 0, 'decode\_content': True, 'retries': Retry(total=5, connect=None, read=None, redirect=None, status=None), 'enforce\_content\_length': False, 'auto\_close': True, '\_decoder': None, '\_body': None, '\_fp': <http.client.HTTPResponse object at 01e50>, '\_original\_response': <http.client.HTTPResponse object at 0x7e50>, '\_fp\_bytes\_read': 0, 'msg': None, '\_request\_url': None, '\_pool': <urllib3.connectionpool.HTTPConnectionPool object at 0x790>, '\_connection': None, 'chunked': False, 'chunk\_left': None, 'length\_remaining': 27}
2021/07/16
[ "https://Stackoverflow.com/questions/68402859", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8797308/" ]
Try with `response.data.decode()`.
The response is a `urllib3.response.HTTPResponse` object. See [urllib3 Documentation](https://urllib3.readthedocs.io/en/latest/reference/urllib3.response.html): > > Backwards-compatible with http.client.HTTPResponse but the response body is loaded and decoded on-demand when the data property is accessed. > > > Specifically, you should read the answer like this: ```py response.data # len(response.data) ``` Or, if you want to stream the object, you have examples on the `minio-py` repository: [examples/get\_objects](https://github.com/minio/minio-py/blob/release/examples/get_object.py).
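Since the object in the question was written with `pickle.dumps`, a small sketch of the full round trip might look like this; the `close()`/`release_conn()` calls follow the pattern shown in the minio-py examples:

```python
import pickle

response = client.get_object(bucket_name='my_bucket', object_name='my_object')
try:
    obj = pickle.loads(response.data)  # bytes -> original Python object
    print(obj)  # 'Hello world!'
finally:
    response.close()
    response.release_conn()
```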
12,911
45,406,332
I am very new to SQLAlchemy. I am having some difficulty setting up a one to many relationship between two models in my application. I have two models User `Photo'. A user has only one role associated with it and a role has many users associated with it. This is the code that I have in my data\_generator.py file: ``` # coding=utf-8 from sqlalchemy import Column, Integer, String, BigInteger,Date, Enum, ForeignKey from sqlalchemy.ext.declarative import declarative_base import time from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker, relationship import datetime Base = declarative_base() class User(Base): __tablename__ = 'users' id = Column(Integer(), primary_key=True) username = Column(String(30), unique=True, nullable=False) password = Column(String, default='123456', nullable=False) name = Column(String(30), nullable=False) grade = Column(String(30)) emp_no = Column(BigInteger, unique=True, nullable=False) roles = relationship('Role', back_populates='users') class Scene(Base): __tablename__ = 'scenes' id = Column(Integer, primary_key=True) scene_name = Column(String(30), nullable=False) life_time = Column(Date, nullable=False, default=datetime.datetime.strptime( time.strftime("%Y-%m-%d", time.localtime(time.time() + (12 * 30 * 24 * 3600))),'%Y-%m-%d').date()) scene_description = Column(String(150), default="") class Gateway(Base): __tablename__ = 'gateways' id = Column(Integer, primary_key=True) gateway_name = Column(String(30), nullable=False) gateway_api_key = Column(String(100), nullable=False, unique=True) gateway_type = Column(Enum('up', 'down', 'soft', name="gateway_type"), nullable=False) class Role(Base): __tablename__ = 'roles' id = Column(Integer, primary_key=True) role_name = Column(String(30), unique=True, nullable=False) users = relationship('User', back_populates='roles') def __repr__(self): return self.role_name engine = create_engine('sqlite:///memory:') Session = sessionmaker() Session.configure(bind=engine) session = Session() Base.metadata.create_all(engine) ed_user = User(name='ed', username='jack', password='123', emp_no=1, grade='1', roles=1) example_scene = Scene(scene_name='example_1', scene_description='example_description') example_gateway = Gateway(gateway_name='example_1',gateway_api_key='11111',gateway_type='up') # session.add(example_gateway) # session.commit() def init_user(flag, number): while number >= 1: if flag == 1: ed_user = User(name='ed', username='jack', password='123', emp_no=1, grade='1') pass if flag == 2: # TODO admin pass if flag == 3: # TODO teacher pass number -= 1 def init_scene(number): while number >= 1: number -= 1 # TODO scene def init_gateway(api_key, number): # TODO gateway pass if __name__ == '__main__': with session.no_autoflush: c = session.query(Gateway).all() print c[0].id ``` The error that I keep encountering is shown below: ``` /usr/bin/python2.7 /home/pajamas/PycharmProjects/untitled5/data_generator.py Traceback (most recent call last): File "/home/pajamas/PycharmProjects/untitled5/data_generator.py", line 73, in <module> ed_user = User(name='ed', username='jack', password='123', emp_no=1, grade='1', roles=1) File "<string>", line 2, in __init__ File "/home/pajamas/.local/lib/python2.7/site-packages/sqlalchemy/orm/instrumentation.py", line 347, in _new_state_if_none state = self._state_constructor(instance, self) File "/home/pajamas/.local/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 764, in __get__ obj.__dict__[self.__name__] = result = self.fget(obj) File 
"/home/pajamas/.local/lib/python2.7/site-packages/sqlalchemy/orm/instrumentation.py", line 177, in _state_constructor self.dispatch.first_init(self, self.class_) File "/home/pajamas/.local/lib/python2.7/site-packages/sqlalchemy/event/attr.py", line 256, in __call__ fn(*args, **kw) File "/home/pajamas/.local/lib/python2.7/site-packages/sqlalchemy/orm/mapper.py", line 3088, in _event_on_first_init configure_mappers() File "/home/pajamas/.local/lib/python2.7/site-packages/sqlalchemy/orm/mapper.py", line 2984, in configure_mappers mapper._post_configure_properties() File "/home/pajamas/.local/lib/python2.7/site-packages/sqlalchemy/orm/mapper.py", line 1810, in _post_configure_properties prop.init() File "/home/pajamas/.local/lib/python2.7/site-packages/sqlalchemy/orm/interfaces.py", line 184, in init self.do_init() File "/home/pajamas/.local/lib/python2.7/site-packages/sqlalchemy/orm/relationships.py", line 1658, in do_init self._setup_join_conditions() File "/home/pajamas/.local/lib/python2.7/site-packages/sqlalchemy/orm/relationships.py", line 1733, in _setup_join_conditions can_be_synced_fn=self._columns_are_mapped File "/home/pajamas/.local/lib/python2.7/site-packages/sqlalchemy/orm/relationships.py", line 1991, in __init__ self._determine_joins() File "/home/pajamas/.local/lib/python2.7/site-packages/sqlalchemy/orm/relationships.py", line 2096, in _determine_joins "specify a 'primaryjoin' expression." % self.prop) sqlalchemy.exc.NoForeignKeysError: Could not determine join condition between parent/child tables on relationship User.roles - there are no foreign keys linking these tables. Ensure that referencing columns are associated with a ForeignKey or ForeignKeyConstraint, or specify a 'primaryjoin' expression. Process finished with exit code 1 ``` Can someone assist me with this? Help would be greatly appreciated.
2017/07/31
[ "https://Stackoverflow.com/questions/45406332", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7954998/" ]
There are three possible relationships between User and Role:

* One to One (one user has one role)
* Many to One (one user has many roles)
* Many to Many (many users have many roles)

For One to One:

```
class Role(Base):
    id = Column(Integer, primary_key=True)
    # ...
    user_id = Column(Integer, ForeignKey("user.id"))
    user = relationship("User", back_populates="role")


class User(Base):
    id = Column(Integer, primary_key=True)
    # ...
    role = relationship("Role", back_populates="user", uselist=False)
```

For Many to One:

```
class Role(Base):
    id = Column(Integer, primary_key=True)
    # ...
    user_id = Column(Integer, ForeignKey("user.id"))
    user = relationship("User", back_populates="roles")


class User(Base):
    id = Column(Integer, primary_key=True)
    # ...
    roles = relationship("Role", back_populates="user")
```

For Many to Many (in this relation, we need an association table):

```
roles_users = Table("roles_users", Base.metadata,
                    Column("role_id", Integer, ForeignKey("role.id")),
                    Column("user_id", Integer, ForeignKey("user.id")))


class Role(Base):
    id = Column(Integer, primary_key=True)
    # ...
    users = relationship("User", back_populates="roles", secondary=roles_users)


class User(Base):
    id = Column(Integer, primary_key=True)
    # ...
    roles = relationship("Role", back_populates="users", secondary=roles_users)
```
I made a low-level mistake because of my limited knowledge of databases and SQLAlchemy. First of all, this is a typical "one to many" problem. The relationship connects rows from the two tables through a foreign key on the users table: `role_id` is defined as the foreign key, which builds the connection. The `"roles.id"` argument to `ForeignKey()` clarifies that this column references the `id` column of the `roles` table, and `relationship()`'s `backref` exposes the role on the user model.
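A minimal sketch of the fix described above, trimmed to the two models involved (column names follow the original code; the other models and columns are omitted):

```py
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()


class Role(Base):
    __tablename__ = 'roles'
    id = Column(Integer, primary_key=True)
    role_name = Column(String(30), unique=True, nullable=False)


class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(30), nullable=False)
    # the foreign key lives on the "many" side (users)...
    role_id = Column(Integer, ForeignKey('roles.id'))
    # ...and backref makes the reverse collection (role.users) available
    role = relationship('Role', backref='users')
```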
12,912
10,213,509
I have a **Django** site, hosted on **Heroku**. One of the models has an image field, that takes uploaded images, resizes them, and pushes them to Amazon S3 so that they can be stored persistently. This is working well, using **PIL** ``` def save(self, *args, **kwargs): # Save this one super(Product, self).save(*args,**kwargs) # resize on file system size = 200, 200 filename = str(self.thumbnail.path) image = Image.open(filename) image.thumbnail(size, Image.ANTIALIAS) image.save(filename) # send to amazon and remove from ephemeral file system if put_s3(filename): os.remove(filename) return True ``` However, PIL seems to work fine for PNGs and GIFs, but is not compliled with **libjpeg**. On a local development environment or a fully controlled 'nix server, it is simply a case of installing the jpeg extension. But does anyone know whether Jpeg manipulation is possible using the Cedar Heroku stack? Is there something else that can be added to requirements.txt? Among other unrelated packages, the requirements.txt for this virtualenv includes: ``` Django==1.3.1 PIL==1.1.7 distribute==0.6.24 django-queued-storage==0.5 django-storages==1.1.4 psycopg2==2.4.4 python-dateutil==1.5 wsgiref==0.1.2 ``` Thanks
2012/04/18
[ "https://Stackoverflow.com/questions/10213509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/267757/" ]
I use this PIL fork in requirements.txt: ``` -e hg+https://bitbucket.org/etienned/pil-2009-raclette/#egg=PIL ``` and can use JPEG without issues: ``` -------------------------------------------------------------------- PIL 1.2a0 SETUP SUMMARY -------------------------------------------------------------------- version 1.2a0 platform Python 2.7.2 (default, Oct 31 2011, 16:22:04) [GCC 4.4.3] on linux2 -------------------------------------------------------------------- *** TKINTER support not available --- JPEG support available *** WEBP support not available --- ZLIB (PNG/ZIP) support available --- FREETYPE2 support available --- LITTLECMS support available -------------------------------------------------------------------- ```
Also please consider using [Pillow](https://pypi.python.org/pypi/Pillow), the "friendly" PIL fork which offers: * Setuptools compatibility * Python 3 compatibility * Frequent release cycle * Many bug fixes
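A quick way to confirm JPEG support once Pillow is installed — a minimal sketch where `test.jpg` is a placeholder file, mirroring the resize in the question:

```py
# pip install Pillow
from PIL import Image

im = Image.open("test.jpg")   # raises IOError if JPEG support is missing
im.thumbnail((200, 200), Image.ANTIALIAS)
im.save("thumb.jpg", "JPEG")
```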
12,913
25,433,921
I need to run this file: ``` from apps.base.models import Event from apps.base.models import ProfileActiveUntil from django.template import Context from django.db.models import Q import datetime from django.core.mail import EmailMultiAlternatives from bonzer.settings import SITE_HOST import smtplib from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText from bonzer.settings import send_mail, BONZER_MAIL, BONZER_MAIL_SMTP, BONZER_MAIL_USER, BONZER_MAIL_PASS, BONZER_MAIL_USETLS today = datetime.date.today() monthAgo = today + datetime.timedelta(days=1) monthAgoMinusOneDay = today + datetime.timedelta(days=2) events = Event.objects.all() ProfileActiveUntils = ProfileActiveUntil.objects.filter(Q(active_until__range=(monthAgo, monthAgoMinusOneDay))) msg = MIMEMultipart('alternative') msg['Subject'] = "Novim dogodivscinam naproti" msg['From'] = BONZER_MAIL msg['To'] = 'jjag3r@gmail.com' text = u'bla' html = u'bla' send_mail(msg_to=msg['To'], msg_subject=msg['Subject'], msg_html=html, msg_text=text) ``` I execute it like this: `*/2 * * * * /usr/local/bin/python2.7 /home/nezap/webapps/bonzer/bonzer/apps/base/alert.py` But I get error: No module named apps.base.models. Important fact is that I can't install virtualenv on server because I don't have permissions. Also I'm kind of newbie on this stuff so I don't have a lot of skills on servers or python. Thank you.
2014/08/21
[ "https://Stackoverflow.com/questions/25433921", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3216697/" ]
`cron` does not read rc shell files, so you need to define the environment variable PYTHONPATH to include the location of the `apps` package and all other module files that are required by the script. ```
PYTHONPATH=/home/nezap/webapps/bonzer/bonzer:/usr/local/lib/python2.7:/usr/lib/python2.7

*/2 * * * * /usr/local/bin/python2.7 /home/nezap/webapps/bonzer/bonzer/apps/base/alert.py
```
I would assume this is a problem with your cwd (current working directory). An easy way to test this would be to go to the root (`cd /`) then run: ```
python2.7 /home/nezap/webapps/bonzer/bonzer/apps/base/alert.py
``` You should get the same error. The path you will want to use will depend on the place where you normally run the script from. I would guess it would be either: /home/nezap/webapps/bonzer/bonzer/apps/base or /home/nezap/webapps/bonzer/bonzer/ So your solution would be either: ```
*/2 * * * * cd /home/nezap/webapps/bonzer/bonzer/apps/base && /usr/local/bin/python2.7 ./alert.py
``` or ```
*/2 * * * * cd /home/nezap/webapps/bonzer/bonzer && /usr/local/bin/python2.7 ./apps/base/alert.py
``` Basically, you are telling cron to change directory to that path, then if that works (the `&&`), run the following command.
12,914
25,496,012
The answers to [this question](https://stackoverflow.com/questions/14043886/python-2-3-convert-integer-to-bytes-cleanly) make it seem like there are two ways to convert an integer to a `bytes` object in Python 3. They show `s = str(n).encode()` and ``` n = 5 bytes( [n] ) ``` Being the same. However, testing that shows the values returned are different: ``` print(str(8).encode()) #Prints b'8' ``` but ``` print(bytes([8])) #prints b'\x08' ``` I know that the first method changes the `int 8` into a string (`utf-8` I believe) which has the hex value of 56, but what does the second one print? Is that just the hex value of 8? (a `utf-8` value of backspace?) Similarly, are both of these one byte in size? It seems like the second one has two characters == two bytes but I could be wrong there...
2014/08/25
[ "https://Stackoverflow.com/questions/25496012", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3291506/" ]
Those two examples are *not* equivalent. `str(n).encode()` takes whatever you give it, turns it into its string representation, and then encodes using a character codec like utf8. `bytes([..])` will form a bytestring with the byte values of the array given. The representation `\xFF` is in fact the hexadecimal representation of a *single* byte value. ``` >>> str(8).encode() b'8' >>> b'8' == b'\x38' True ```
`b'8'` is a `bytes` object which contains a single byte with the value of the character `'8'`, which is equal to `56`. `b'\x08'` is a `bytes` object which contains a single byte with the value `8`, which is the same as `0x8`.
12,915
30,637,387
I'm running a Notebook server on a remote machine and want to somehow protect it. Unfortunately I cannot use password authentication (because if I do so then I can't use `ein`, an emacs package for ipython notebooks). The other obvious solution is to make IPython Notebook accept connections only from my local machine's ip, but it seems that there is no regular way to do this with ipython configs. Maybe I'm missing something? Or maybe there is another way to achieve my goal? UPDATE: here's a bug about it posted to EIN tracker: <https://github.com/millejoh/emacs-ipython-notebook/issues/57>. UPDATE2: thanks to millejoh (`ein` developer), now `ein` should work with password-protected notebooks, so the question is no longer relevant. Thanks everyone for your replies!
2015/06/04
[ "https://Stackoverflow.com/questions/30637387", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2500596/" ]
You can set the port for iPython to a port that will only be used by iPython, and then restrict access to that port to only your local machine's IP. To set the port: Edit the ipython\_notebook\_config.py file and insert or edit the line: ```
c.NotebookApp.port = 7777
``` where you change 7777 to the port of your choice. Assuming the remote machine is running Linux, you can then use iptables to block access to that port except from your local machine: ```
iptables -I INPUT \! --src 1.2.3.4 -m tcp -p tcp --dport 7777 -j DROP # if it's not 1.2.3.4, drop it
``` where you change 1.2.3.4 to your local ip and 7777 to the port you have iPython working on. Sources: For iPython configs: [Docs](http://ipython.org/ipython-doc/1/interactive/public_server.html) For blocking IPs : [StackExchange](https://serverfault.com/questions/146569/iptables-how-to-allow-only-one-ip-through-specific-port) [cyberciti](http://www.cyberciti.biz/tips/linux-iptables-6-how-to-block-outgoing-access-to-selectedspecific-ip-address.html)
I didn't find anything about other authentication methods on the IPython website, so you may be right. Here <http://ipython.org/ipython-doc/3/notebook/security.html> is some documentation about IPython trust. Maybe it will be sufficient for you.
12,916
56,476,940
I have successfully installed z3 on a remote server where I am not root. When I try to run my Python code I get: ```
ModuleNotFoundError: No module named 'z3'
``` I understand that I have to add it to PYTHONPATH in order for it to work, and so I went ahead and did that like this: > > export PYTHONPATH=$HOME/usr/lib/python-2.7/site-packages:$PYTHONPATH > > > I still get the same issue though. How can I verify that it was correctly added to the environment variables? What am I doing wrong?
2019/06/06
[ "https://Stackoverflow.com/questions/56476940", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10881142/" ]
Did you pass the `--python` flag when you called `scripts/mk_make.py`? See the instructions on <https://github.com/Z3Prover/z3/blob/master/README.md> for exactly how to enable Python (almost all the way down the page). Here's an example invocation: ```
python scripts/mk_make.py --prefix=/home/leo --python --pypkgdir=/home/leo/lib/python-2.7/site-packages
``` Change the directories appropriately, of course.
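Once the build and install finish, a quick sanity check that the bindings are importable (assuming PYTHONPATH now includes the `--pypkgdir` location):

```py
import z3

print(z3.get_version_string())
x = z3.Int('x')
z3.solve(x > 2, x < 5)  # prints a model such as [x = 3]
```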
For Windows users who just downloaded and unzipped the compiled Z3 binary into some arbitrary directory, adding the location of the python directory in the directory where Z3 was installed to PYTHONPATH did the trick, i.e. in Cygwin: `$ export PYTHONPATH=<location of z3>/bin/python:$PYTHONPATH` (or the equivalent in a Windows command shell)
12,917
43,168,078
I am trying to extract how many songs are release in every year from csv. my data looks like this ``` no,artist,name,year "1","Bing Crosby","White Christmas","1942" "2","Bill Haley & his Comets","Rock Around the Clock","1955" "3","Sinead O'Connor","Nothing Compares 2 U","1990","35.554" "4","Celine Dion","My Heart Will Go On","1998","35.405" "5","Bryan Adams","(Everything I Do) I Do it For You","1991" "6","The Beatles","Hey Jude","1968" "7","Whitney Houston","I Will Always Love You","1992","34.560" "8","Pink Floyd","Another Brick in the Wall (part 2)","1980" "9","Irene Cara","Flashdance... What a Feeling","1983" "10","Elton John","Candle in the Wind '97","1992" ``` my files consists of 3000 lines data with additional fields but i am interested to extract how many songs are released in every year i tried to extract the year and songs and my code is here, but I am new in python and therefore I don't know how to solve my problem. my code is ``` from itertools import islice import csv filename = '/home/rob/traintask/top3000songs.csv' data = csv.reader(open(filename)) # Read the column names from the first line of the file fields = data.next()[3] // I tried to read the year columns print fields count = 0 for row in data: # Zip together the field names and values items = zip(fields, row) item = {} \\ here I am lost, i think i should make a dict and set year as key and no of songs as values, but I don't know how to do it # Add the value to our dictionary for (name, value) in items: item[name] = value.strip() print 'item: ', item ``` I am doing it completely wrong. but If somebody give me some hints or help that how i can count no of songs released in a year. i will be thankful.
2017/04/02
[ "https://Stackoverflow.com/questions/43168078", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7438144/" ]
2 very simple lines of code: ``` import pandas as pd my_csv=pd.read_csv(filename) ``` and to get the number of songs per year: ``` songs_per_year= my_csv.groupby('year')['name'].count() ```
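If you want to inspect or reuse the result, a possible follow-up (the column name `songs` is just an illustrative choice):

```py
print(songs_per_year)
# turn the Series back into a plain DataFrame if needed
per_year = songs_per_year.reset_index(name='songs')
```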
You can use a `Counter` object from the [`collections`](https://docs.python.org/2/library/collections.html) module: ```
>>> from collections import Counter
>>> from csv import reader
>>>
>>> YEAR = 3
>>> with open('file.txt') as f:
...     next(f, None)  # discard header
...     year2rel = Counter(int(line[YEAR]) for line in reader(f))
...
>>> year2rel
Counter({1992: 2, 1942: 2, 1955: 1, 1990: 1, 1991: 1, 1968: 1, 1980: 1, 1983: 1})
```
12,918
70,946,840
Is it possible to make a dot function that is var.function() that changes var? I realise that i can do: ``` class Myclass: def function(x): return 2 Myclass.function(1): ``` But i want to change it like the default python function. ``` def function(x): return(3) x=1 x.function() print(x) ``` and it returns ``` >>> 3 ```
2022/02/01
[ "https://Stackoverflow.com/questions/70946840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18093990/" ]
You can use Pandas `.shift()` to compare the values of the series with the next row, build up a session value based on the "hops", and then group by that session value. ``` import pandas as pd df = pd.DataFrame({ 'name' : ['John', 'John', 'John', 'John', 'John', 'Emily', 'Emily', 'John'], 'app' : ['Excel','Excel','Spotify','Excel','Spotify','Excel', 'Excel', 'Excel'], 'duration':[3,2,1,1,2,4,1,3]}) session = ((df.name != df.name.shift()) | (df.app != df.app.shift())).cumsum() df2 = df.groupby(['name', 'app', session], as_index=False, sort=False)['duration'].sum() print(df2) ``` Output: ``` name app duration 0 John Excel 5 1 John Spotify 1 2 John Excel 1 3 John Spotify 2 4 Emily Excel 5 5 John Excel 3 ```
One solution would be to add a column to define hops, then group by that column: ```
hop_id = 1
for i in df.index:
    df.loc[i, 'hop_id'] = hop_id
    # guard the look-ahead so the last row doesn't raise a KeyError
    if i + 1 in df.index and ((df.loc[i, 'Name'] != df.loc[i + 1, 'Name'])
                              or (df.loc[i, 'Application'] != df.loc[i + 1, 'Application'])):
        hop_id = hop_id + 1

df.groupby('hop_id')['Duration'].sum()
```
12,919
67,415,482
I created a Sudoku class in python, I want to solve the board and also keep an instance variable with the original board, but when I use the `solve()` method which uses the recursive backtracking algorithm `self.board` changes together with `self.solved_board` why is that, and how can I keep a variable with the original copy? ``` grid = [ [3, 0, 6, 5, 0, 8, 4, 0, 0], [5, 2, 0, 0, 0, 0, 0, 0, 0], [0, 8, 7, 0, 0, 0, 0, 3, 1], [0, 0, 3, 0, 1, 0, 0, 8, 0], [9, 0, 0, 8, 6, 3, 0, 0, 5], [0, 5, 0, 0, 9, 0, 6, 0, 0], [1, 3, 0, 0, 0, 0, 2, 5, 0], [0, 0, 0, 0, 0, 0, 0, 7, 4], [0, 0, 5, 2, 0, 6, 3, 0, 0] ] class Sudoku: def __init__(self, board): self.board = board self.solved_board = board[:] #<--- I used the [:] because I thought this will create a new list def get_board(self): return self.board def set_board(self, board): self.board = board self.solved_board = board def print_original_board(self): self.print(self.board) def print_solved_board(self): self.print(self.solved_board) def print(self, board): """Receiving a matrix and printing a board with seperation""" for i in range(len(board)): if i % 3 == 0 and i!=0: print('---------------------------------') for j in range(len(board[i])): if j%3==0 and j!=0: print(" | ", end='') print(" " + str(board[i][j]) + " ", end='') print('') def find_empty(self,board): """Receiving a matrix, loops through it, and return a tuple with the row and the column of the free stop in the matrix""" for i in range(len(board)): for j in range(len(board[i])): if board[i][j]==0: return (i,j) return None def is_valid(self, board, num, pos): """Receiving matrix, a number we want to insert, and a tuple with the row and col and will check if the row, col, and box are valid so we can place the number in the position""" # Check row for i in range(len(board[pos[0]])): if pos[0] != i and board[pos[0]][i] == num: return False # Check col for i in range(len(board)): if pos[1] != i and board[i][pos[1]] == num: return False pos_row = pos[0] // 3 pos_col = pos[1] // 3 for i in range(pos_row*3 ,pos_row*3 + 3): for j in range(pos_col * 3, pos_col*3 + 3): if (i,j) != pos and board[i][j] == num: return False return True def solve(self): """Using backtracking algorithm to solve the solved_board variable""" find = self.find_empty(self.solved_board) if not find: return True else: row, col = find for i in range(1,10): if(self.is_valid(self.solved_board, i, (row, col))): self.solved_board[row][col] = i if self.solve(): return self.solved_board self.solved_board[row][col] = 0 return False sudoku = Sudoku(grid) sudoku.print_original_board() print(" ") sudoku.solve() sudoku.print_original_board() # <---- This prints the solved board ```
2021/05/06
[ "https://Stackoverflow.com/questions/67415482", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7311644/" ]
`self.solved_board = board[:]` does indeed create a new list, but it references the same inner lists as `board`. You need to go one level deeper: ``` self.solved_board = [row[:] for row in board] ```
Yeah, `board[:]` does create a new list -- of all those old inner lists: ```py In [23]: board = [[1], [2]] In [24]: board2 = board[:] In [25]: board2[0] is board[0] Out[25]: True In [26]: board2[0][0] += 10 In [28]: board Out[28]: [[11], [2]] ``` You'd need to deepcopy it; e.g., ```py solved_board = [row[:] for row in board] ```
12,920
308,254
I am running an Ubuntu 8.10, using Python 2.5 out of the box. This is fine from the system point of view, but I need Python2.4 since I dev on Zope / Plone. Well, installing python2.4 is no challenge, but I can't find a (clean) way to make iPython use it : no option in the man nor in the config file. Before, there was a ipython2.4 package but it is deprecated.
2008/11/21
[ "https://Stackoverflow.com/questions/308254", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9951/" ]
Ok, I answer my own question : I'm dumb :-) ``` ls /usr/bin/ipython* /usr/bin/ipython /usr/bin/ipython2.4 /usr/bin/ipython2.5 ``` Now it's built-in...
To complement on @Peter's answer, I might add that the ipython "executables" you run are simply Python scripts that launch the IPython shell. So a solution that worked for me was to change the Python version that runs that script: ```bash
$ cp ipython ipython3
$ nano ipython3
``` Here is what the script looks like: ```py
#!/usr/bin/python
# -*- coding: utf-8 -*-
import re
import sys

from IPython import start_ipython

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(start_ipython())
``` You can now replace this first line with your correct version of Python. For example, you may want to use Python 3: ```bash
$ which python3
/usr/bin/python3
``` so you would simply replace `/usr/bin/python` with `/usr/bin/python3` in your newly created ipython3 file!
12,923
5,784,791
I installed MySQL on my Mac OS 10.6 about a week ago, and, after some playing around, got it to work just fine. It integrated with python MySQLdb and I also got Sequel Pro to connect to the database. However, php wouldn't access the server. Even after I added a php.ini file to /etc/ and directed it toward the same sock that Sequel Pro was using: /tmp/mysql.sock. But now I can't access the local server at all. As far as I can tell, there is no mysql.sock file anywhere on my computer, not in /tmp/ or anywhere else. I can start the mysql server from Terminal, but it logs me out automatically after a minute: ``` 110425 17:36:18 mysqld_safe Logging to '/usr/local/mysql/data/dn0a208bf7.sunet.err'. 110425 17:36:18 mysqld_safe Starting mysqld daemon with databases from /usr/local/mysql/data 110425 17:37:58 mysqld_safe mysqld from pid file /usr/local/mysql/data/dn0a208bf7.sunet.pid ended ``` If I try to call "mysql" from the command line (which worked perfectly earlier today): ``` ERROR 2002 (HY000): Can\'t connect to local MySQL server through socket '/tmp/mysql.sock' (2) ``` The PHP error is of course similar: ``` PHP Warning: mysql_real_escape_string(): [2002] No such file or directory (trying to connect via unix:///tmp/mysql.sock) ``` Also, there is no "my.cnf" file in my mysql installation directory: /usr/local/mysql. There are my.cnf files for the mysql installations that come along with XAMPP. Those also have the default socket listed as '/tmp/mysql.sock', but I had to change them manually. Any ideas what's going on? Why would modifying the php.ini file have produced a change for Sequel Pro as well?
2011/04/26
[ "https://Stackoverflow.com/questions/5784791", "https://Stackoverflow.com", "https://Stackoverflow.com/users/321838/" ]
I was also getting this error on a fresh install of XAMPP. For those not comfortable with the command line, there is another way. Based on the advice above (thank you), I used my old standard "Easy Find" to locate the latest version of my.cnf. Upon opening the file in an editor I discovered that the socket file was pointing to: ```
socket = /Applications/XAMPP/xamppfiles/var/mysql/mysql.sock
``` I updated Navicat's Advanced properties / "Use socket file" setting to the path above and bingo. Hope this helps someone.
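If you connect from Python rather than a GUI client, the same socket path can be passed explicitly — a minimal sketch with placeholder credentials:

```py
import MySQLdb

# unix_socket must match the path found in my.cnf
conn = MySQLdb.connect(
    unix_socket="/Applications/XAMPP/xamppfiles/var/mysql/mysql.sock",
    user="root", passwd="secret", db="test")
```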
If you have installed MySQL through Homebrew, simply running `brew services restart mysql` may help.
12,933
61,261,306
I recently started exploring VS Code for developing Python code and I’m running into an issue when I try to import a module from a subfolder. The exact same code runs perfectly when I execute it in a Jupyter notebook (the subfolders contain the `__init__.py` files etc.) I believe I followed the instructions for setting up the VS Python extension correctly. Everything else except this one import command works well, but I haven’t been able to figure what exactly is going wrong. The structure of the project is as follows: The root folder, which is set as the `cwd` contains two subfolders (`src` and `bld`). `src` contains the `py`-file that imports a module that is saved in `foo.py`in the `bld`-folder using `from bld.foo import foo_function` When running the file, I get the following error: `ModuleNotFoundError: No module named ‘bld'`. I have several Anaconda Python environments installed and get the same problem with each of them. When copying `foo.py` to the `src` directory and using `from foo import foo_function` everything works. My `launch.json` file is as follows: ``` { "version": "0.2.0", "configurations": [ { "name": "Python: Current File (Integrated Terminal)", "type": "python", "request": "launch", "program": "${file}", "cwd": "${workspaceFolder}", "env": {"PYTHONPATH": "${workspaceFolder}:${workspaceFolder}/bld"}, "console": "integratedTerminal" } ] } ``` Any ideas or help would be greatly appreciated!
2020/04/16
[ "https://Stackoverflow.com/questions/61261306", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1952633/" ]
Stefan's method worked for me. Taking this example file layout: workspaceFolder/folder/subfolder1/subfolder2/bar.py I wasn't able to import subfolders like: `from folder.subfolder1.subfolder2 import bar` It said: `ModuleNotFoundError: No module named 'folder'` I added to .vscode/settings.json the following: ```
"terminal.integrated.env.osx": {
        "PYTHONPATH": "${workspaceFolder}"
    }
``` I also added at the beginning of my code: ```
import sys
#[... more imports ...]

# absolute path of the workspace root (sys.path needs a string, so
# define it explicitly — ${workspaceFolder} is not available in Python)
workspace_folder = "/path/to/workspaceFolder"
sys.path.append(workspace_folder)

# and then, the subfolder import:
from folder.subfolder1.subfolder2 import bar
``` Now, it works. Note: all my folders and subfolders have an empty file named `__init__.py`. I still had to do the steps described above. VSCode version: 1.52.0 (from 10-dec-2020)
I think I finally figured out the answer myself: The integrated terminal does not scan the `PYTHONPATH` from the `.env`-file. When running the file in an integrated window, the `PYTHONPATH` is correctly taken from `.env`, however. So in order to run my script in the terminal I had to add the `terminal.integrated.env.*` line in my `settings.json` as follows: ``` { "python.pythonPath": "/anaconda3/envs/py36/bin/python", "python.linting.enabled": true, "python.linting.pylintEnabled": true, "python.linting.flake8Enabled": false, "python.envFile": "${workspaceFolder}/.env", "terminal.integrated.env.osx": { "PYTHONPATH": "${workspaceFolder}" } } ```
12,943
61,279,933
In python, I run this simple code: ```py print('number is %.15f'%1.6) ``` which works fine (Output: `number is 1.600000000000000`), but when I take the decimal places to `16>=`, I start getting random numbers at the end. For example: ```py print('number is %.16f'%1.6) ``` Output: `number is 1.6000000000000001` and ```py print('number is %.20f'%1.6) ``` Output: `number is 1.60000000000000008882` Does this have to do with the way computers compute numbers? edit: Thanks for all the responses! I understand it now and it was fun to learn
2020/04/17
[ "https://Stackoverflow.com/questions/61279933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13159127/" ]
Work-around:

1. Open Hyper-V Manager under Windows Administrative Tools
2. Note that DockerDesktopVM is not running under Virtual Machines
3. Under the Actions pane, click Stop Service, then click Start Service
4. Restart Docker Desktop

It worked for me.
Make sure that VT-x virtualization is enabled in your BIOS.
12,944
19,616,205
I'm trying to run a macro via python but I'm not sure how to get it working... I've got the following code so far, but it's not working. ``` import win32com.client xl=win32com.client.Dispatch("Excel.Application") xl.Workbooks.Open(Filename="C:\test.xlsm",ReadOnly=1) xl.Application.Run("macrohere") xl.Workbooks(1).Close(SaveChanges=0) xl.Application.Quit() xl=0 ``` I get the following traceback: ``` Traceback (most recent call last): File "C:\test.py", line 4, in <module> xl.Application.Run("macrohere") File "<COMObject <unknown>>", line 14, in Run File "C:\Python27\lib\site-packages\win32com\client\dynamic.py", line 282, in _ApplyTypes_ result = self._oleobj_.InvokeTypes(*(dispid, LCID, wFlags, retType, argTypes) + args) com_error: (-2147352567, 'Exception occurred.', (0, u'Microsoft Excel', u"Cannot run the macro 'macrohere'. The macro may not be available in this workbook or all macros may be disabled.", u'xlmain11.chm', 0, -2146827284), None) ``` ### EDIT ``` import win32com.client xl=win32com.client.Dispatch("Excel.Application") xl.Workbooks.Open(Filename="C:\test.xlsm",ReadOnly=1) try: xl.Application.Run("test.xlsm!testmacro.testmacro") # It does run like this... but we get the following error: # Traceback (most recent call last): # File "C:\test.py", line 7, in <module> # xl.Workbooks(1).Close(SaveChanges=0) # File "C:\Python27\lib\site-packages\win32com\client\dynamic.py", line 192, in __call__ # return self._get_good_object_(self._oleobj_.Invoke(*allArgs),self._olerepr_.defaultDispatchName,None) # com_error: (-2147352567, 'Exception occurred.', (0, None, None, None, 0, -2147352565), None) except: # Except isn't catching the above error... :( xl.Workbooks(1).Close(SaveChanges=0) xl.Application.Quit() xl=0 ```
2013/10/27
[ "https://Stackoverflow.com/questions/19616205", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2487602/" ]
I made some modifications to SMNALLY's code so it can run in Python 3.5.2. This is my result: ```py
# Import the following libraries to make use of DispatchEx to run the macro
import os
import win32com.client as wincl


def runMacro():
    if os.path.exists("C:\\Users\\Dev\\Desktop\\Development\\completed_apps\\My_Macr_Generates_Data.xlsm"):
        # DispatchEx is required in the newest versions of Python.
        excel_macro = wincl.DispatchEx("Excel.application")
        excel_path = os.path.expanduser("C:\\Users\\Dev\\Desktop\\Development\\completed_apps\\My_Macr_Generates_Data.xlsm")
        workbook = excel_macro.Workbooks.Open(Filename=excel_path, ReadOnly=1)
        excel_macro.Application.Run("ThisWorkbook.Template2G")
        # Save the results in case you have generated data
        workbook.Save()
        excel_macro.Application.Quit()
        del excel_macro
```
I suspect you haven't authorized your Excel installation to run macros from an automated Excel instance. It is a security protection enabled by default at installation. To change this:

1. File > Options > Trust Center
2. Click on the Trust Center Settings... button
3. Macro Settings > check Enable all macros
12,947
73,111,056
I am fairly new at writing code and trying to teach myself python and pyspark based on searching the web for answers to my problems. I am trying to build a historical record set based on daily changes. I periodically have to bump the semantic version, but do not want to lose my already collected historical data. If the job can run incrementally then it performs the incremental transform like normal. Any and all help is appreciated. ``` SEMANTIC_VERSION = 1 # if job cannot run incrementally # joins current snapshot data with already collected historical data if cannot_not_run_incrementally: @transform( history=Output(historical_output), backup=Input(historical_output_backup), source=Input(order_input), ) def my_compute_function(source, history, backup, ctx): input_df = ( source.dataframe() .withColumn('record_date', F.current_date()) ) old_df = backup.dataframe() joined = old_df.unionByName(input_df) joined = joined.distinct() history.write_dataframe(joined) # if job can run incrementally perform incremental transform normally else: @incremental(snapshot_inputs=['source'], semantic_version=SEMANTIC_VERSION) @transform( history=Output(historical_output), backup=Output(historical_output_backup), source=Input(order_input), ) def my_compute_function(source, history, backup): input_df = ( source.dataframe() .withColumn('record_date', F.current_date()) ) history.write_dataframe(input_df.distinct() .subtract(history.dataframe('previous', schema=input_df.schema))) backup.set_mode("replace") backup.write_dataframe(history.dataframe()) ``` working code based on information from the selected answer and comments. ``` SEMANTIC_VERSION = 3 @incremental(snapshot_inputs=['source'], semantic_version=SEMANTIC_VERSION) @transform( history=Output(), backup=Output(), source=Input(), ) def compute(ctx, history, backup, source): # running incrementally if ctx.is_incremental: input_df = ( source.dataframe() .withColumn('record_date', F.current_date()) ) history.write_dataframe(input_df.subtract(history.dataframe('previous', schema=input_df.schema))) backup.set_mode("replace") backup.write_dataframe(history.dataframe().distinct()) # not running incrementally else: input_df = ( source.dataframe() .withColumn('record_date', F.current_date()) ) backup.set_mode('modify') # use replace if you want to start fresh backup.write_dataframe(input_df) history.set_mode('replace') history.write_dataframe(backup.dataframe().distinct()) ```
2022/07/25
[ "https://Stackoverflow.com/questions/73111056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19213719/" ]
You can use the transform's 'IncrementalTransformContext' to determine whether it is running incrementally. This can be seen in the code below. ```
@incremental()
@transform(
    x=Output(),
    y=Input(),
    z=Input(),
)
def compute(ctx, x, y, z):
    if ctx.is_incremental:
        pass  # some code
    else:
        pass  # other code
``` More information on `IncrementalTransformContext` can be found here on your environment ({URL}/workspace/documentation/product/transforms/python-transforms-api-incrementaltransformcontext) or here (<https://www.palantir.com/docs/foundry/transforms-python/transforms-python-api-classes/#incrementaltransformcontext>)
In an incremental transform, there is a boolean flag property called 'is\_incremental' in the [incremental transform context object](https://www.palantir.com/docs/foundry/transforms-python/incremental-reference/#incrementaltransformcontext). Therefore, I think you can write a single incremental transform definition and, based on the value of `is_incremental`, perform the operations you want. I would try something like this: ```
SEMANTIC_VERSION = 1


@incremental(snapshot_inputs=['source'], semantic_version=SEMANTIC_VERSION)
@transform(
    history=Output(historical_output),
    backup=Output(historical_output_backup),
    source=Input(order_input),
)
def my_compute_function(source, history, backup, ctx):
    input_df = (
        source.dataframe()
        .withColumn('record_date', F.current_date())
    )
    # if job cannot run incrementally
    # joins current snapshot data with already collected historical data
    if not ctx.is_incremental:
        old_df = backup.dataframe()
        joined = old_df.unionByName(input_df)
        joined = joined.distinct()
        history.write_dataframe(joined)
    else:
        # if job can run incrementally perform incremental transform normally
        history.write_dataframe(input_df.distinct()
                                .subtract(history.dataframe('previous', schema=input_df.schema)))
        backup.set_mode("replace")
        backup.write_dataframe(history.dataframe())
```
12,957
4,827,244
I'm trying to get an implementation of github flavored markdown working in python, with no luck... I don't have much in the way of regex skills. Here's the ruby code from [github](https://github.com/github/github-flavored-markdown/blob/gh-pages/code.rb#L17): ``` # in very clear cases, let newlines become <br /> tags text.gsub!(/(\A|^$\n)(^\w[^\n]*\n)(^\w[^\n]*$)+/m) do |x| x.gsub(/^(.+)$/, "\\1 ") end ``` And here's what I've come up with so far in python 2.5: ``` def newline_callback(matchobj): return re.sub(r'^(.+)$','\1 ',matchobj.group(0)) text = re.sub(r'(\A|^$\n)(^\w[^\n]*\n)(^\w[^\n]*$)+', newline_callback, text) ``` There just doesn't seem to be any effect at all :-/ If anyone has a fully working implementation of [github flavored markdown](http://github.github.com/github-flavored-markdown/) in python, other than [this one](http://gregbrown.co.nz/code/githib-flavoured-markdown-python-implementation/) (doesn't seem to work for newlines), I'd love to hear about it. I'm really most concerned about the newlines. These are the tests for the regex, from github's ruby code: ``` >>> gfm_pre_filter('apple\\npear\\norange\\n\\nruby\\npython\\nerlang') 'apple \\npear \\norange\\n\\nruby \\npython \\nerlang' >>> gfm_pre_filter('test \\n\\n\\n something') 'test \\n\\n\\n something' >>> gfm_pre_filter('# foo\\n# bar') '# foo\\n# bar' >>> gfm_pre_filter('* foo\\n* bar') '* foo\\n* bar' ```
2011/01/28
[ "https://Stackoverflow.com/questions/4827244", "https://Stackoverflow.com", "https://Stackoverflow.com/users/73831/" ]
That Ruby version has a **multiline modifier** in the regex, so you need to do the same in Python: ```
def newline_callback(matchobj):
    return re.sub(re.compile(r'^(.+)$', re.M), r'\1  ', matchobj.group(0))

text = re.sub(re.compile(r'(\A|^$\n)(^\w[^\n]*\n)(^\w[^\n]*$)+', re.M), newline_callback, text)
``` So that code will (like the Ruby version) add two spaces before each newline, except when there are two newlines in a row (a paragraph break). Are those test strings you gave correct? That file you linked has this, and it works with that fixed code: ```
"apple\npear\norange\n\nruby\npython\nerlang" -> "apple  \npear  \norange\n\nruby  \npython  \nerlang"
```
``` return re.sub(r'^(.+)$',r'\1 ',matchobj.group(0)) ^^^--------------------------- you forgot this. ```
12,958
3,115,448
this html is [here](https://mail.google.com/mail/?ui=2&ik=a0b1e46c9c&view=att&th=1296be43b8e3bbd9&attid=0.1&disp=inline&zw) : ``` <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"><html><head><META http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body> <div bgcolor="#48486c"> <table width="720" border="0" cellspacing="0" cellpadding="0" align="center" background="http://title.jpg" height="130"> <tr height="129"> <td width="719" height="129"></td> <td width="1" height="129"></td> </tr> <tr height="1"> <td width="720" height="1"></td> <td width="1" height="1"></td> </tr> </table> <table width="720" border="0" cellspacing="0" cellpadding="0" align="center" height="203"> <tr height="20"> <td width="719" height="20"></td> <td width="1" height="20"></td> </tr> <tr height="69"> <td width="719" height="69" valign="top" align="left"> <table width="719" border="1" cellspacing="2" cellpadding="0"> <tr> <td bgcolor="a5fdf8" width="390"><b>Stream Name</b></td> <td bgcolor="a5fdf8" width="61"><b>Status</b></td> <td bgcolor="a5fdf8" width="61"><b>Duration</b></td> <td bgcolor="a5fdf8" width="185"><b>Start</b></td> </tr> <tr bgcolor="white"> <td width="390">c:\streams\ours\Sony_AVCHD_<WBR>Test_Discs_60Hz_00001.m2ts</td> <td width="61"><font color="#D0D0D0">----</font></td> <td width="61">00:00:02</td> <td width="185">2010/06/15-15:06:17</td> </tr> </table> </td> <td width="1" height="69"></td> </tr> <tr height="113"> <td width="720" height="113" colspan="2" valign="top" align="left"> <table width="721" border="1" cellspacing="2" cellpadding="0"> <tr bgcolor="a5fdf8"> <td width="299"><b>Test Category</b></td> <td width="61"><b>Error</b></td> <td width="62"><b>Warning</b></td> <td width="275"><b>Details</b></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#099eac">All Tests (Sony_AVCHD_Test_Discs_60Hz_<WBR>00001.m2ts)</font></td> <td width="61"><font color="#ff0000">34787</font></td> <td width="61"><font color="#000000">0</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#800000"> ETSI TR-101-290 Tests</font></td> <td width="61"><font color="#800000">No Lic</font></td> <td width="61"><font color="#800000">No Lic</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#800000"> ISO/IEC Transport Stream Tests</font></td> <td width="61"><font color="#800000">No Lic</font></td> <td width="61"><font color="#800000">No Lic</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#800000"> System Data T-STD Tests</font></td> <td width="61"><font color="#800000">No Lic</font></td> <td width="61"><font color="#800000">No Lic</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#099eac"> Prog(1)</font></td> <td width="61"><font color="#ff0000">34787</font></td> <td width="61"><font color="#000000">0</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#099eac"> VES(0xe0)</font></td> <td width="61"><font color="#ff0000">34787</font></td> <td width="61"><font color="#000000">0</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#1010F0"> H.264/AVC Conformance</font></td> <td width="61"><font color="#ff0000">34718</font></td> <td width="61"><font color="#000000">0</font></td> <td width="275"> <a><font color="#ff0000">Sony_AVCHD_Test_Discs_60Hz_<WBR>00001.m2ts_Prog(1)_PID(0x1011)<WBR>_H264_Conf.txt</font></a><br> </td> </tr> <tr bgcolor="white"> <td 
width="299"><font color="#101010"> Sequence</font></td> <td width="61"><font color="#000000">0</font></td> <td width="61"><font color="#000000">0</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#101010"> Picture</font></td> <td width="61"><font color="#000000">0</font></td> <td width="61"><font color="#000000">0</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#101010"> Slice</font></td> <td width="61"><font color="#000000">0</font></td> <td width="61"><font color="#000000">0</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#101010"> Macroblock</font></td> <td width="61"><font color="#ff0000">34718</font></td> <td width="61"><font color="#000000">0</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#101010"> Block</font></td> <td width="61"><font color="#000000">0</font></td> <td width="61"><font color="#000000">0</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#1010F0"> HRD Tests</font></td> <td width="61"><font color="#ff0000">69</font></td> <td width="61"><font color="#000000">0</font></td> <td width="275"> <a><font color="#ff0000">Sony_AVCHD_Test_Discs_60Hz_<WBR>00001.m2ts_Prog(1)_PID(0x1011)<WBR>_H264_HRD.txt</font></a><br> </td> </tr> <tr bgcolor="white"> <td width="299"><font color="#101010"> HRD level</font></td> <td width="61"><font color="#ff0000">69</font></td> <td width="61"><font color="#000000">0</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#800000"> Video T-STD Tests</font></td> <td width="61"><font color="#800000">No Lic</font></td> <td width="61"><font color="#800000">No Lic</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#099eac"> AES(0xfd)</font></td> <td width="61"><font color="#000000">0</font></td> <td width="61"><font color="#000000">0</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#808080"> Audio Level Tests</font></td> <td width="61"><font color="#808080">Disabled</font></td> <td width="61"><font color="#808080">Disabled</font></td> <td width="275"></td> </tr> <tr bgcolor="white"> <td width="299"><font color="#800000"> Audio T-STD Tests</font></td> <td width="61"><font color="#800000">No Lic</font></td> <td width="61"><font color="#800000">No Lic</font></td> <td width="275"></td> </tr> </table> </td> </tr> <tr height="1"> <td width="719" height="1"></td> <td width="1" height="1"></td> </tr> </table> </div> </body></html> ``` has any python lib to do this ? thanks
2010/06/25
[ "https://Stackoverflow.com/questions/3115448", "https://Stackoverflow.com", "https://Stackoverflow.com/users/234322/" ]
[BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) gets you almost all the way there: ``` >>> import BeautifulSoup >>> f = open('a.html') >>> soup = BeautifulSoup.BeautifulSoup(f) >>> f.close() >>> g = open('a.xml', 'w') >>> print >> g, soup.prettify() >>> g.close() ``` This closes all tags properly. The only issue remaining is that the `doctype` remains `HTML` -- to change that into the doctype of your choice, you only need to change the first line, which is not hard, e.g., instead of printing the prettified text directly, ``` >>> lines = soup.prettify().splitlines() >>> lines[0] = ('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"' '"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">') >>> print >> g, '\n'.join(lines) ```
lxml works well: ``` from lxml import html, etree doc = html.fromstring(open('a.html').read()) out = open('a.xhtml', 'wb') out.write(etree.tostring(doc)) ```
12,959
6,987,413
I started using the protocol buffer library, but noticed that it was using huge amounts of memory. pympler.asizeof shows that a single one of my objects is about 76k! Basically, it contains a few strings, some numbers, and some enums, and some optional lists of same. If I were writing the same thing as a C-struct, I would expect it to be under a few hundred bytes, and indeed the ByteSize method returns 121 (the size of the serialized string). Is that you expect from the library? I had heard it was slow, but this is unusable and makes me more inclined to believe I'm misusing it. **Edit** Here is an example I constructed. This is a pb file similar, but simpler than what I've been using ``` package pb; message A { required double a = 1; } message B { required double b = 1; } message C { required double c = 1; optional string s = 2; } message D { required string d = 1; optional string e = 2; required A a = 3; optional B b = 4; repeated C c = 5; } ``` And here I am using it ``` >>> import pb_pb2 >>> a = pb_pb2.D() >>> a.d = "a" >>> a.e = "e" >>> a.a.a = 1 >>> a.b.b = 2 >>> c = a.c.add() >>> c.c = 5 >>> c.s = "s" >>> import pympler.asizeof >>> pympler.asizeof.asizeof(a) 21440 >>> a.ByteSize() 42 ``` I have version 2.2.0 of protobuf (a bit old at this point), and python 2.6.4.
2011/08/08
[ "https://Stackoverflow.com/questions/6987413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/189456/" ]
Object instances have a bigger memory footprint in python than in compiled languages. For example, the following code, which creates very simple classes mimicking your proto, displays 1440: ```
from pympler.asizeof import asizeof


class A:
    def __init__(self):
        self.a = 0.0


class B:
    def __init__(self):
        self.b = 0.0


class C:
    def __init__(self):
        self.c = 0.0
        self.s = ""


class D:
    def __init__(self):
        self.d = ""
        self.e = ""
        self.e_isset = 1
        self.a = A()
        self.b = B()
        self.b_isset = 1
        self.c = [C()]


d = D()
print asizeof(d)
``` I am not surprised that protobuf's generated classes take 20 times more memory, as they add a lot of boilerplate. The C++ version surely doesn't suffer from this.
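For comparison, the same measurement can be applied to the generated protobuf message from the question — a sketch that assumes the `pb_pb2` module generated from the question's `.proto` is importable:

```py
from pympler.asizeof import asizeof
import pb_pb2

msg = pb_pb2.D()
msg.d = "a"    # required string field
msg.a.a = 1.0  # required nested message field
print asizeof(msg)  # expect a figure far larger than msg.ByteSize()
```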
Edit: This isn't likely your actual issue here, but we've just been experiencing a 45 MB protobuf message taking > 4 GB of RAM when decoding. It appears to be this: <https://github.com/google/protobuf/issues/156> which was known about in protobuf 2.6, and a fix was only merged onto master on March 7 this year: <https://github.com/google/protobuf/commit/f6d8c833845b90f61b95234cd090ec6e70058d06>
12,962
69,495,394
input file.csv ``` ['NE,PORT,EVENT,TIME,VALUE', 'NODE,13,MAX,2021-08-30 09:15:00+01:00 DST,-10.9', 'NODE,13,MIN,2021-08-30 09:15:00+01:00 DST,-11.0', 'NODE,13,CUR,2021-08-30 09:15:00+01:00 DST,-10.9', 'NODE,13,MAX,2021-08-30 10:30:00+01:00 DST,-12.9', 'NODE,13,MIN,2021-08-30 10:30:00+01:00 DST,-10.0', 'NODE,13,CUR,2021-08-30 10:30:00+01:00 DST,-12.9'] ``` python code: ``` intext=open('file.csv', 'r') check=intext.readlines() for lista in check: lista_split=lista.split(",") lista_split.extend(['MAX','MIN','CUR']) lista_index=[0,1,2,3,4] lista_index.extend([5,6,7]) contents=list(lista_split[i] for i in lista_index) if contents[2]==('MAX'): contents[5] = contents[4]) elif contents[2]==('MIN'): contents[6] = contents[4]) elif contents[2]==('CUR'): contents[7] = contents[4]) contents.remove(contents[2]) contents.remove(contents[4]) print(contents) ``` step1 move EVENT as columns and the corresponding value, step2 clean columns (remove EVENT and VALUE), done! ``` ['NE', 'PORT', 'TIME', 'MAX', 'MIN', 'CUR'] ['NODE', '13', '2021-08-30 09:15:00+01:00 DST', '-10.9', 'MIN', 'CUR'] ['NODE', '13', '2021-08-30 09:15:00+01:00 DST', 'MAX', '-11.0', 'CUR'] ['NODE', '13', '2021-08-30 09:15:00+01:00 DST', 'MAX', 'MIN', '-10.9'] ['NODE', '13', '2021-08-30 10:30:00+01:00 DST', '-12.9', 'MIN', 'CUR'] ['NODE', '13', '2021-08-30 10:30:00+01:00 DST', 'MAX', '-10.0', 'CUR'] ['NODE', '13', '2021-08-30 10:30:00+01:00 DST', 'MAX', 'MIN', '-12.9'] ``` **target:** ``` ['NE', 'PORT', 'TIME', 'MAX', 'MIN', 'CUR'] ['NODE', '13', '2021-08-30 09:15:00+01:00 DST', '-10.9', '-11.0', '-10.9'] ['NODE', '13', '2021-08-30 10:30:00+01:00 DST', '-12.9', '-10.0', '-12.9'] ```
2021/10/08
[ "https://Stackoverflow.com/questions/69495394", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17106118/" ]
One way to do this is by creating a pivot table. ``` csv = ['NE,PORT,EVENT,TIME,VALUE', 'NODE,13,MAX,2021-08-30 09:15:00+01:00 DST,-10.9', 'NODE,13,MIN,2021-08-30 09:15:00+01:00 DST,-11.0', 'NODE,13,CUR,2021-08-30 09:15:00+01:00 DST,-10.9', 'NODE,13,MAX,2021-08-30 10:30:00+01:00 DST,-12.9', 'NODE,13,MIN,2021-08-30 10:30:00+01:00 DST,-10.0', 'NODE,13,CUR,2021-08-30 10:30:00+01:00 DST,-12.9'] csv = [x.split(",") for x in csv] df = pd.DataFrame(csv[1:], columns=csv[0]) df["VALUE"] = df["VALUE"].astype(float) df = df.pivot_table("VALUE", ["NE", "PORT", "TIME"], "EVENT") df = df.reset_index().rename_axis(None, axis=1) ``` [result](https://i.stack.imgur.com/yWv6V.png)
I'll correct just your first step. Your logic was good, but there was a lot of confusion due to too many lists: ```
for lista in check:
    lista = lista.split(",")
    lista.extend(['MAX','MIN','CUR'])
    if lista[2] == ('MAX'):
        lista[5] = lista[4]
    elif lista[2] == ('MIN'):
        lista[6] = lista[4]
    elif lista[2] == ('CUR'):
        lista[7] = lista[4]
    lista.remove(lista[2])
    lista.remove(lista[3])  # you removed an index before so it's not 4 anymore
    print(lista)
```
12,963
685,758
I have hardly ever noticed a python program that uses else in a for loop. I recently used it to perform an action based on the loop variable condition while exiting; as it is in the scope. What is the pythonic way to use an else in a for loop? Are there any notable use cases? And, yea. I dislike using break statement. I'd rather set the looping condition complex. Would I be able to get any benefit out of it, if I don't like to use break statement anyway. Worth noting that for loop has an else since the language inception, the first ever version.
2009/03/26
[ "https://Stackoverflow.com/questions/685758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/55562/" ]
Basically, it simplifies any loop that uses a boolean flag like this: ```
found = False                   # <-- initialize boolean
for divisor in range(2, n):
    if n % divisor == 0:
        found = True            # <-- update boolean
        break  # optional, but continuing would be a waste of time

if found:                       # <-- check boolean
    print n, "is composite"
else:
    print n, "is prime"
``` and allows you to skip the management of the flag: ```
for divisor in range(2, n):
    if n % divisor == 0:
        print n, "is composite"
        break
else:
    print n, "is prime"
``` Note that there is already a natural place for code to execute when you do find a divisor - right before the `break`. The only new feature here is a place for code to execute when you tried all divisors and did not find any. *This helps only in conjunction with `break`*. You still need booleans if you can't break (e.g. because you're looking for the last match, or have to track several conditions in parallel). Oh, and BTW, this works for while loops just as well. any/all ------- Nowadays, if the only purpose of the loop is a yes-or-no answer, you might be able to write it much shorter with the `any()`/`all()` functions with a generator or generator expression that yields booleans: ```
if any(n % divisor == 0 for divisor in range(2, n)):
    print n, "is composite"
else:
    print n, "is prime"
``` Note the elegance! The code is 1:1 what you want to say! [This is as efficient as a loop with a `break`, because the `any()` function is short-circuiting, only running the generator expression until it yields `True`. In fact it's usually even faster than a loop. Simpler Python code tends to have less overhead.] This is less workable if you have other side effects - for example if you want to find the divisor. You can still do it (ab)using the fact that non-0 values are true in Python: ```
divisor = next((d for d in range(2, n) if n % d == 0), 0)
if divisor:
    print n, "is divisible by", divisor
else:
    print n, "is prime"
``` but as you see this is getting shaky - wouldn't work if 0 was a possible divisor value...
Without using `break`, `else` blocks have no benefit for `for` and `while` statements. The following two examples are equivalent: ``` for x in range(10): pass else: print "else" for x in range(10): pass print "else" ``` The only reason for using `else` with `for` or `while` is to do something after the loop if it terminated normally, meaning without an explicit `break`. After a lot of thinking, I can finally come up with a case where this might be useful: ``` def commit_changes(directory): for file in directory: if file_is_modified(file): break else: # No changes return False # Something has been changed send_directory_to_server() return True ```
12,964