Columns: qid (int64), question (string), date (string), metadata (list), response_j (string), response_k (string), __index_level_0__ (int64)
70,241,576
This is a very weird situation. In VS Code, I edited my code in the root directory and saved it as test_distribution.py. I then ran this file in the VS Code terminal. Ever since, every time I edit and save this file, it gets run automatically. So I moved the file to a subfolder of the project, but it still gets run when I edit and save it. So I changed the file name to distribution_test.py, added logging, and saved the file again. It turns out it still runs whenever I edit and save it. How should I figure out why this is happening? What kind of info should I log in order to figure out this issue? As you can see, the 1300000_bake.xlsx is generated by this Python file. I even moved this Python file to a different folder and changed its name as I stated above. I am very curious how to find out which program is running this. Pics follow:

```
for store_id in store_ids:
    df = pd.DataFrame(columns=columns)
    for query_date in date_range:
        logging.info(f"to speed things up, use the regular kind_code_cate1 types")
        kind_cate1s = normal_cate1
        for kind_cate1 in kind_cate1s:
            for setting in [dev_setting, prod_setting]:
                eng = conn(setting)
                delivery_cycle = pd.read_sql(
                    f"select delivery_cycle from dim_mty_goods where kind_code_cate1={kind_cate1} and can_be_ordered=1",
                    eng,
                )
                delivery_cycle = np.unique(delivery_cycle["delivery_cycle"].values)
                delivery_cycle = min(delivery_cycle)  # only refrigerated has several values (1, 2, 3 -> take 1); the rest are single-valued
                start_time = time.time()
                logging.info(
                    f"starting calculation, store_id: {store_id}, date: {query_date}, kind_cate1: {kind_cate1}, dev env: {setting == dev_setting}"
                )
                logging.info(
                    f"bash command: {create_bash(query_date, kind_cate1, store_id, setting == dev_setting)}"
                )
                status = os.system(
                    create_bash(
                        query_date=query_date,
                        kind_cate1=kind_cate1,
                        store_id=store_id,
                        dev_setting=(setting == dev_setting),
                    )
                )
                elapsed = time.time() - start_time
                if status != 0:
                    logging.info(
                        f'calculation failed, store_id: {store_id}, date: {query_date}, kind code cate1: {kind_cate1}, env: {setting["db"]}'
                    )
                    break
                logging.info(f"time elapsed: {elapsed}")
                if elapsed < 60:
                    calculate_time = elapsed
                elif setting == dev_setting:
                    calculate_time = pd.DataFrame()
                    while calculate_time.empty:
                        calculate_time = pd.read_sql(
                            f'select cal_time from calculate_time where dt = "{str(query_date)}" and store_id={store_id} and kind_code_cate1={kind_cate1}',
                            eng,
                        )  # wait for the DB write
                        time.sleep(0.1)
                        logging.warning(
                            f"calculation time not found yet, store_id: {store_id}, date: {query_date}, kind_code_cate1: {kind_cate1}"
                        )
                    calculate_time = min(calculate_time["cal_time"].values)
                else:
                    calculate_time = elapsed
                logging.info(f"calculation time, {calculate_time}")
                arrival_sale_day = [
                    "sale_predict_qtty_t1",
                    "sale_predict_qtty_t2",
                    "sale_predict_qtty_t3",
                ][delivery_cycle]
                dev_prod = int(setting["db"] == "marsboard_dev")
                df_temp = pd.DataFrame()
                while df_temp.empty:
                    df_temp = pd.read_sql(
                        f'select store_id, dt, kind_code_cate1, {arrival_sale_day}, perdict_stock_qtty, predict_num, goods_id, goods_name from order_list where store_id={store_id} and kind_code_cate1={kind_cate1} and dt="{str(query_date)}"',
                        eng,
                    )
                    logging.warning(f"no rows found in order_list")
                    if pd.read_sql(
                        f'select store_id, dt, kind_code_cate1, {arrival_sale_day}, perdict_stock_qtty, predict_num, goods_id, goods_name from order_list where store_id={store_id} and kind_code_cate1={kind_cate1} and dt="{str(query_date)}"',
                        prod_eng,
                    ).empty:
                        logging.info(f"no data for this category in production either, {kind_cate1}")
                        break
                    time.sleep(0.1)
                df_temp["dev_prod"] = dev_prod
                df_temp["calculate_time"] = float(calculate_time)
                df_temp.columns = columns
                df = pd.concat([df_temp, df], axis=0, ignore_index=True)
                print(df)
    df.to_excel(f"{store_id}_bake.xlsx", index=False)
```

[![enter image description here](https://i.stack.imgur.com/K6vDs.png)](https://i.stack.imgur.com/K6vDs.png) [![enter image description here](https://i.stack.imgur.com/0lHOt.png)](https://i.stack.imgur.com/0lHOt.png) [![enter image description here](https://i.stack.imgur.com/xa4oJ.png)](https://i.stack.imgur.com/xa4oJ.png)
2021/12/06
[ "https://Stackoverflow.com/questions/70241576", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14232877/" ]
I avoid this issue by adding

```
if __name__ == '__main__':
    fun()
```

I guess this file gets imported or loaded somewhere, and that is what causes it to run. Anyway, this is not important.
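To illustrate why the guard helps (a minimal sketch; `fun` here just stands in for whatever work the module does): top-level code runs every time a module is imported, while code under the guard runs only when the file is executed directly.

```
# mymodule.py -- hypothetical example
def fun():
    print("doing the real work")

print("this line runs even on import")

if __name__ == '__main__':
    fun()  # runs only via `python mymodule.py`, not via `import mymodule`
```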
Perhaps you have mistakenly installed an extension such as 'Run on Save'. I think that's what is causing the problem.
11,261
4,170,532
I was not able to come up with a better title for this post, so if anybody finds it inappropriate, please go ahead and edit it. I am using Flask as my Python framework, and normally I render templates by doing something like the below:

```
@app.route('/home')
def userhome():
    data = ...  # go get user details from the database
    return render_template("home.html", userdata=data)
```

Now I have a template named home.html in which I iterate over the values of "userdata" like userdata.name, userdata.age etc., and these values take their appropriate places in the template. However, I am working on an application in which navigation is via Ajax, with no fallback if JavaScript is not available (basically the app does not work for people without JavaScript). The navigation menu has, say, a few tabs on the left (home, youroffers, yourlastreads). The right column is supposed to change dynamically based on what the user clicks. I am unable to understand how to handle templating here. Based on what the user clicks, I can send the required data from the db as JSON through an xhrGET or xhrPOST. Does the entire templating have to be handled at the server end, with the whole rendered template transferred via an Ajax call? I actually don't like that idea much. Would be great if someone could point me in the right direction here. Also, the page that is loaded via Ajax contains some scripts. Will these scripts work if loaded via Ajax?
2010/11/13
[ "https://Stackoverflow.com/questions/4170532", "https://Stackoverflow.com", "https://Stackoverflow.com/users/459514/" ]
I would choose server-side templating, because unless you find a JS library that handles the same templating language, your code isn't going to be DRY. In the `home.html` template, I'd do something like

```
<%extends base.html%>
<%include _user_details.html%>
...
<% footer and other stuff %>
```

and keep the actual markup in `_user_details.html`. This way, for an AJAX request you just render the `_user_details.html` partial only.
You have two options: template on the server, or template in the browser. To template on the server, you create an endpoint much like you already have, except the template only creates a portion of the page. Then you hit the URL with an Ajax call and insert the returned HTML somewhere into your page. To template in the browser, your endpoint creates a JSON response. Then a JavaScript templating library can take that JSON, create HTML from it, and insert it into the page. There are [lots of jQuery templating solutions](https://stackoverflow.com/questions/170168/jquery-templating-engines), for example.
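A minimal Flask sketch of both options, assuming a hypothetical `get_user_details()` helper and a `_user_details.html` partial (both names are illustrative, not from the question):

```
from flask import Flask, render_template, jsonify

app = Flask(__name__)

@app.route('/home/details')
def user_details_partial():
    # Option 1: server-side templating. Render only the fragment;
    # the client inserts the returned HTML into the page.
    data = get_user_details()  # hypothetical DB helper
    return render_template("_user_details.html", userdata=data)

@app.route('/api/home/details')
def user_details_json():
    # Option 2: browser-side templating. Return JSON; a JS template
    # library builds the HTML on the client.
    return jsonify(get_user_details())
```

With option 1 the client side can be as simple as `$('#right-column').load('/home/details')`.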
11,264
33,262,919
When writing docstrings in Python, I am wondering whether the docstring should only contain the exceptions I explicitly raise, or also the exceptions that are raised implicitly. Consider the function

```
def inv(a):
    if a == 0:
        raise ZeroDivisionError
    else:
        return 1/a
```

In a docstring, under the "Raises" keyword, I would definitely put ZeroDivisionError. However, depending on the input, I would also expect a TypeError. So would you put that in the docstring as well? Due to the EAFP principle (if I understand it correctly) I don't want to check for types here, correct? Any hint (also on the code sample) is welcome.
2015/10/21
[ "https://Stackoverflow.com/questions/33262919", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5152563/" ]
It depends what (or whom) you're writing the docstring for. For automatic conversion to API documentation, I like [Google-style docstrings](http://sphinxcontrib-napoleon.readthedocs.org/en/latest/example_google.html), which would look like:

```
def inv(a):
    """Return the inverse of the argument.

    Arguments:
      a (int): The number to invert.

    Returns:
      float: The inverse of the argument.

    Raises:
      TypeError: If 1 cannot be divided by the argument.
      ZeroDivisionError: If the argument is zero.

    """
    return 1 / a
```

Here I've included *all of the exceptions that the function is likely to raise*. Note that there's no need to explicitly `raise ZeroDivisionError` - that will happen automatically if you try to divide by zero.

---

However, if you aren't creating documentation from the docstring, I would probably just include the description line for such a simple function:

```
def inv(a):
    """Return the inverse of the argument."""
    return 1 / a
```

Anyone using it is likely to know that you can't invert e.g. strings.

---

> I don't want to check for types here, correct?

I wouldn't - if the user, after reading your documentation, decides to pass e.g. a string into the function they should *expect* to get the `TypeError`, and there's no point testing the argument type to then raise the same exception yourself that the code would have raised anyway (again, see also the `ZeroDivisionError`!) That is, unless e.g. `float` or `complex` numbers should be invalid input, in which case you will need to handle them manually.
No. The docstring should describe the input expected. If this were my function, I would include something like this in the docstring:

```
"""Return the inverse of an integer.

ZeroDivisionError is raised if zero is passed to this function."""
```

Therefore it is specified that the input should be an integer. Specifying that the function can raise a `TypeError` just overly complicates the docstring.
11,265
21,234,884
How can I set a suitable fps number in pygame for any kind of monitor running my game? I know I can set the fps using `pygame.time.Clock().tick(fps)`, but how can I choose a suitable value? Please post example Python code along with your answer.
2014/01/20
[ "https://Stackoverflow.com/questions/21234884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2837388/" ]
I'm not entirely sure that I understand the question being asked, but I think you will just have to experiment with different numbers and find out what works for you. I find that around 50-100 is a good range. If what you are trying to do is make game events only update a certain number of times per second while rendering happens as fast as the computer is able to handle it, this is a very complex process and probably not very easily done in pygame.
The trick I've figured out is not limiting the frame rate, but calculating based on time. `pygame.time.Clock.tick()` returns the elapsed time in milliseconds, which you can then pass to other parts of the program to calculate events or animation updates. For example, I currently use a pair of variables in my Player object to store the center point of the player's character, multiplied by 1000 so it can be updated as an int from `tick()`. I then update it each loop, after resolving which direction the player is moving, with the following code:

```
self.x += self.speed * time * xmove
self.y += self.speed * time * ymove
self.rect.centerx = self.x / 1000
self.rect.centery = self.y / 1000
```

Additionally, I use that same `time` variable in the Spawner class to determine when new enemies should be spawned on the map, and for the movement of the enemies and bullets in the game. This method has the bonus of keeping gameplay more or less the same across multiple frame rates.
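A self-contained sketch of this delta-time pattern (the values are illustrative, not taken from the answerer's game):

```
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

x = 320.0      # keep position as a float so sub-pixel movement accumulates
speed = 0.2    # pixels per millisecond, i.e. 200 px/second

running = True
while running:
    time_ms = clock.tick(60)  # cap at 60 fps; returns milliseconds since last call
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    x += speed * time_ms      # movement scales with real elapsed time
    screen.fill((0, 0, 0))
    pygame.draw.circle(screen, (255, 255, 255), (int(x) % 640, 240), 10)
    pygame.display.flip()

pygame.quit()
```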
11,266
58,449,314
When I activate my ipykernel_py3 environment and try to launch Jupyter Lab in the terminal, I get repeated error messages as follows:

```
Macintosh-8:~ yuenfannie$ source activate ipykernel_py3
(ipykernel_py3) Macintosh-8:~ yuenfannie$ jupyter lab
[I 12:38:19.969 LabApp] JupyterLab alpha preview extension loaded from /Users/yuenfannie/anaconda2/lib/python2.7/site-packages/jupyterlab
JupyterLab v0.27.0
Known labextensions:
[I 12:38:19.971 LabApp] Running the core application with no additional extensions or settings
[I 12:38:19.979 LabApp] Serving notebooks from local directory: /Users/yuenfannie
[I 12:38:19.980 LabApp] 0 active kernels
[I 12:38:19.980 LabApp] The Jupyter Notebook is running at: http://localhost:8888/?token=81a366305f2a328dfbbc9cfcf757a30e4977d3abab54cb0f
[I 12:38:19.980 LabApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 12:38:19.981 LabApp]
    Copy/paste this URL into your browser when you connect for the first time, to login with a token:
        http://localhost:8888/?token=81a366305f2a328dfbbc9cfcf757a30e4977d3abab54cb0f
[I 12:38:20.229 LabApp] Accepting one-time-token-authenticated connection from ::1
[I 12:38:22.881 LabApp] Kernel started: cf6886dd-5475-4e1e-972c-8e4614451f0e
[I 12:38:24.074 LabApp] Adapting to protocol v5.0 for kernel cf6886dd-5475-4e1e-972c-8e4614451f0e
[E 12:38:24.078 LabApp] Uncaught exception GET /api/kernels/cf6886dd-5475-4e1e-972c-8e4614451f0e/channels?session_id=5c81adbda104d35cec3acd907d6decc4&token=81a366305f2a328dfbbc9cfcf757a30e4977d3abab54cb0f (::1)
    HTTPServerRequest(protocol='http', host='localhost:8888', method='GET', uri='/api/kernels/cf6886dd-5475-4e1e-972c-8e4614451f0e/channels?session_id=5c81adbda104d35cec3acd907d6decc4&token=81a366305f2a328dfbbc9cfcf757a30e4977d3abab54cb0f', version='HTTP/1.1', remote_ip='::1')
    Traceback (most recent call last):
      File "/Users/yuenfannie/anaconda2/lib/python2.7/site-packages/tornado/websocket.py", line 546, in _run_callback
        result = callback(*args, **kwargs)
      File "/Users/yuenfannie/anaconda2/lib/python2.7/site-packages/notebook/services/kernels/handlers.py", line 262, in open
        super(ZMQChannelsHandler, self).open()
      File "/Users/yuenfannie/anaconda2/lib/python2.7/site-packages/notebook/base/zmqhandlers.py", line 176, in open
        self.send_ping, self.ping_interval, io_loop=loop,
    TypeError: __init__() got an unexpected keyword argument 'io_loop'
```

The opened JupyterLab Alpha Preview does not work at all. Unfortunately I have no clue why this happens. Any suggestions are welcome!
2019/10/18
[ "https://Stackoverflow.com/questions/58449314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4632960/" ]
This is the wrong way round and can never be true, because you're asking for `tt` to be at most 2 and at least 12 at the same time:

```
if 2 >= tt >= 12:
```

You probably meant:

```
if 2 <= tt <= 12:
```

And instead of `elif` with a condition, you can just use `else`.
That is:

```
tt = int(input("What times tables would you like: "))

if 2 <= tt <= 12:
    for x in range(1, 13):
        aw = tt * x
        print(tt, "x", x, " = ", aw)
else:
    print("Please enter a number between 2 and 12")
```
11,267
27,948,420
I am getting confused trying to find a way to **send a left-click in a web browser window at a specific point**. I am using Selenium `selenium-2.44.0` and Python 2.7. My ultimate goal is to be able to click in certain areas of the window, but for now I just want to make sure I am able to click at all. I thought that clicking on an area where a link is located is a good idea, because then I will be able to verify that the click occurred (i.e., I will be taken to another html page). All the prerequisites are met: I am able to start the browser and access and manipulate various web elements. From what I've found in the Help, one is supposed to use the [move\_by\_offset](http://selenium-python.readthedocs.org/en/latest/api.html#selenium.webdriver.common.action_chains.ActionChains.move_by_offset) method to move the mouse cursor in a certain direction. However, **when I run the code, the click doesn't occur (or it occurs, but the url link is not clicked and no new page opens)**. I cannot even verify whether the click occurred. When clicking the Logout link in a normal browser session, for instance, the logout operation occurs.

```
...
homeLink = driver.find_element_by_link_text("Home")
homeLink.click()  # clicking on the Home button; the mouse cursor should(?) stay here
print homeLink.size, homeLink.location

helpLink = driver.find_element_by_link_text("Help")
print helpLink.size, helpLink.location

action = webdriver.common.action_chains.ActionChains(driver)
action.move_by_offset(150, 0)  # move 150 pixels to the right to access the Help link
action.click()
action.perform()
```

Here is a screenshot of the area of the web page I am working with. ![enter image description here](https://i.stack.imgur.com/daJ4N.png) The size and location of the elements are printed as below:

```
{'width': 39, 'height': 16} {'y': 47, 'x': 341}
{'width': 30, 'height': 16} {'y': 47, 'x': 457}
```

The html code behind the page is below, if it's worth looking at:

```
<a href="/web/">Home</a> |
<a href="/web/Account/LogOff">Logout</a> |
<a href="#" onclick="HelpFile.open();">Help</a>
```

I know that I can access the link by finding the element in many ways, but I am trying to perform a click in a certain location and am using the link element only to verify that the click really occurred. **How do I perform the click?**
2015/01/14
[ "https://Stackoverflow.com/questions/27948420", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3346915/" ]
Assuming there is nothing else going on on your page that would interfere with the click, this should do it:

```
homeLink = driver.find_element_by_link_text("Home")
homeLink.click()  # clicking on the Home button; the mouse cursor should(?) stay here
print homeLink.size, homeLink.location

helpLink = driver.find_element_by_link_text("Help")
print helpLink.size, helpLink.location

action = webdriver.common.action_chains.ActionChains(driver)
action.move_to_element_with_offset(homeLink, 150, 0)  # move 150 pixels to the right to access the Help link
action.click()
action.perform()
```

You should use [`move_to_element_with_offset`](http://selenium-python.readthedocs.org/en/latest/api.html#selenium.webdriver.common.action_chains.ActionChains.move_to_element_with_offset) if you want the mouse position to be relative to an element. Otherwise, [`move_by_offset`](http://selenium-python.readthedocs.org/en/latest/api.html#selenium.webdriver.common.action_chains.ActionChains.move_by_offset) moves the mouse relative to the previous mouse position. When you use `click` or `move_to_element`, the mouse is placed in the center of the element. (The [Java documentation](https://selenium.googlecode.com/git/docs/api/java/org/openqa/selenium/interactions/Actions.html#click%28org.openqa.selenium.WebElement%29) about these is explicit.)
Hard to say for your exact situation, but I know a workaround for the question in the summary and in bold:

> send the left-click in a web browser window in a specific point

You just use execute\_script and do the clicking using JavaScript:

```
self.driver.execute_script('el = document.elementFromPoint(47, 457); el.click();')
```

It's convenient in a pinch, because you can find the coordinates of an element by opening the browser console and using querySelector (which works the same as Webdriver's By.CSS\_SELECTOR):

```
el = document.querySelector('div > h1');
var boundaries = el.getBoundingClientRect();
console.log(boundaries.top, boundaries.right, boundaries.bottom, boundaries.left);
```

That being said, it's a really bad idea to write your tests to click a specific point. But I have found that sometimes, even when el.click() or an [action chain](https://seleniumhq.github.io/selenium/docs/api/py/webdriver/selenium.webdriver.common.action_chains.html) isn't working, execute script will still work using a querySelector, which is about as good as what you would be doing in Selenium:

```
self.driver.execute_script('el = document.querySelector("div > h1"); el.click();')
```
11,268
18,701,464
I wanted to write a program to scrape a website from Python. Since there is no built-in possibility to do so, I decided to give the BeautifulSoup module a try. Unfortunately I encountered some problems using pip and ez_setup, since I use Windows 7 64-bit and Python 3.3. Is there a way to get the BeautifulSoup module onto my Python 3.3 installation on Windows 7 64-bit without ez_setup or easy_install, since I have too much trouble with those, or is there an alternative module which can be easily installed?
2013/09/09
[ "https://Stackoverflow.com/questions/18701464", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2733724/" ]
Just download it [here](http://www.crummy.com/software/BeautifulSoup/bs3/download//), uncompress the downloaded tarball (with an archiver such as 7z), and then add `BeautifulSoup.py` to your Python sys.path using `sys.path.append("/path/to/BeautifulSoup.py")`. Of course, you can also just put it under your current src dir as a normal Python module, or put it on Python's sys path. By the way, the tarball install (or build) way is:

```
cd BeautifulSoup
python setup.py install(or build)
```

If you are using **python3**, you can download `bs4` (look at the comment under this answer): just put the `bs4` directory (from the tarball source dir) on your Python's sys path and then

```
from bs4 import BeautifulSoup
```

Good luck~
I haven't tried it, but check out [pip](https://stackoverflow.com/questions/4750806/how-to-install-pip-on-windows) to install Python packages. It's supposed to be better: [Why use pip over easy_install](https://stackoverflow.com/questions/3220404/why-use-pip-over-easy-install). I have personally used BeautifulSoup and like it. I have heard [pyquery](https://pypi.python.org/pypi/pyquery) is good too, with a jQuery-like interface.
11,269
41,525,223
I'm just trying to run the simple hello world app from their tutorials page. I used to use Google App Engine frequently, but now I can't seem to get it going at all. I'm on Win 10 x64. I downloaded the latest Google Cloud SDK, and Python version 2.7.13.

```
ERROR 2017-01-07 15:25:21,219 wsgi.py:263]
Traceback (most recent call last):
  File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\runtime\wsgi.py", line 240, in Handle
    handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
  File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\runtime\wsgi.py", line 299, in _LoadHandler
    handler, path, err = LoadObject(self._handler)
  File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\runtime\wsgi.py", line 85, in LoadObject
    obj = __import__(path[0])
  File "C:\....\python-docs-samples\appengine\standard\flask\hello_world\main.py", line 18, in <module>
    from flask import Flask
  File "C:\....\python-docs-samples\appengine\standard\flask\hello_world\lib\flask\__init__.py", line 21, in <module>
    from .app import Flask, Request, Response
  File "C:\....\python-docs-samples\appengine\standard\flask\hello_world\lib\flask\app.py", line 27, in <module>
    from . import json, cli
  File "C:\....\python-docs-samples\appengine\standard\flask\hello_world\lib\flask\cli.py", line 17, in <module>
    import click
  File "C:\....\python-docs-samples\appengine\standard\flask\hello_world\lib\click\__init__.py", line 18, in <module>
INFO 2017-01-07 10:25:21,229 module.py:806] default: "GET / HTTP/1.1" 500 -
    from .core import Context, BaseCommand, Command, MultiCommand, Group, \
  File "C:\....\python-docs-samples\appengine\standard\flask\hello_world\lib\click\core.py", line 8, in <module>
    from .types import convert_type, IntRange, BOOL
  File "C:\....\python-docs-samples\appengine\standard\flask\hello_world\lib\click\types.py", line 4, in <module>
    from ._compat import open_stream, text_type, filename_to_ui, \
  File "C:\....\python-docs-samples\appengine\standard\flask\hello_world\lib\click\_compat.py", line 536, in <module>
    from ._winconsole import _get_windows_console_stream
  File "C:\....\python-docs-samples\appengine\standard\flask\hello_world\lib\click\_winconsole.py", line 16, in <module>
    import ctypes
  File "C:\Python27\lib\ctypes\__init__.py", line 7, in <module>
    from _ctypes import Union, Structure, Array
  File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\python\sandbox.py", line 964, in load_module
    raise ImportError('No module named %s' % fullname)
ImportError: No module named _ctypes
```
2017/01/07
[ "https://Stackoverflow.com/questions/41525223", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4769616/" ]
Google's sample Flask "Hello World" application requires `Flask 0.11.1`. This library is not vendored by Google, so developers are required to install it via `pip`:

> This sample shows how to use Flask with Google App Engine Standard.
>
> Before running or deploying this application, install the dependencies using pip:
>
> `pip install -t lib -r requirements.txt`

Flask's `setup.py` [instructs](https://github.com/pallets/flask/blob/0.11.1/setup.py) `pip` to install the [click](https://github.com/pallets/click/) library, at a release later than or equal to version 2.0:

```
install_requires=[
    'Werkzeug>=0.7',
    'Jinja2>=2.4',
    'itsdangerous>=0.21',
    'click>=2.0',
],
```

In November 2015 [a change was committed](https://github.com/pallets/click/commit/bcf8feceb41b3f75f3cd07eb851734fa96cd1991) to `click` to improve support for unicode in the Windows console. This change added the import of the `ctypes` library, which App Engine is choking on, because the App Engine sandbox does not allow importing `ctypes`. The workaround for this is to overwrite the installed `click` with an earlier version (5.1 [looks like the most recent candidate](https://github.com/pallets/click/blob/master/CHANGES)):

`pip install --target lib --upgrade click==5.1`
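To make the workaround stick across reinstalls (a sketch, assuming the sample's usual `requirements.txt` layout), you can pin `click` next to Flask so the vendored `lib` folder is always rebuilt consistently:

```
# requirements.txt -- pin click below the releases that import ctypes
Flask==0.11.1
click==5.1
```

Then reinstall the vendored dependencies with `pip install -t lib -r requirements.txt`.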
For me, the workaround mentioned in [Google's issue tracker](https://issuetracker.google.com/issues/38290292) worked:

* go to [sdk\_home]\google\appengine\tools\devappserver2\python\sandbox.py
* find the definition of _WHITE_LIST_C_MODULES = [xxx] and add the following two entries to the list: '_winreg', '_ctypes'
* try your app again.

The issue was marked fixed on 7/7/2017.
11,271
52,960,057
I have an assignment where we need to create functions to display text on the screen in Python/pygame. This part I understand. What I don't understand is that you're also supposed to create a function which draws a drop shadow. I know how to make the shadow; I just don't know how to make another function that does it, with the option to call it from a pre-existing text function. This is what I have so far:

```
import pygame
import sys

pygame.init()

screenSizeX = 1080
screenSizeY = 720
screenSize = (screenSizeX, screenSizeY)
screen = pygame.display.set_mode(screenSize, 0)
pygame.display.set_caption("Test Functions")

WHITE = (255, 255, 255)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)
RED = (255, 0, 0)
YELLOW = (255, 255, 0)
BLACK = (0, 0, 0)
MAGENTA = (139, 0, 139)

def horCenter(font, size, text, colour, y, shadow, pos):
    if shadow == True:
        fontTitle = pygame.font.SysFont(font, size)
        textTitle = fontTitle.render(text, True, colour)
        textWidth = textTitle.get_width()
        screen.blit(textTitle, (screenSizeX/2 - textWidth/2, y))

def verCenter(font, size, text, colour, x):
    fontTitle = pygame.font.SysFont(font, size)
    textTitle = fontTitle.render(text, True, colour)
    textHeight = textTitle.get_height()
    screen.blit(textTitle, (x, screenSizeY/2 - textHeight/2))

def cenCenter(font, size, text, colour):
    fontTitle = pygame.font.SysFont(font, size)
    textTitle = fontTitle.render(text, True, colour)
    textHeight = textTitle.get_height()
    textWidth = textTitle.get_width()
    screen.blit(textTitle, (screenSizeX/2 - textWidth/2, screenSizeY/2 - textHeight/2))

pygame.display.update()

go = True
while go:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            go = False
    screen.fill(WHITE)
    horCenter("Comic Sans MS", 40, "Text1", MAGENTA, 100)
    verCenter("Georgia", 10, "Tex2", GREEN, 500)
    cenCenter("Impact", 50, "Text3", RED)
    verCenter("Verdana", 60, "89274", BLACK, 50)
    pygame.display.update()

pygame.quit()
sys.exit()
```
2018/10/24
[ "https://Stackoverflow.com/questions/52960057", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10391389/" ]
A drop shadow can be rendered by drawing the text twice: first a grey version of the text at an offset, then the actual text at the original position.

```
def dropShadowText(screen, text, size, x, y, colour=(255,255,255), drop_colour=(128,128,128), font=None):
    # how much 'shadow distance' is best?
    dropshadow_offset = 1 + (size // 15)
    text_font = pygame.font.Font(font, size)
    # make the drop-shadow
    text_bitmap = text_font.render(text, True, drop_colour)
    screen.blit(text_bitmap, (x + dropshadow_offset, y + dropshadow_offset))
    # make the overlay text
    text_bitmap = text_font.render(text, True, colour)
    screen.blit(text_bitmap, (x, y))
```

So you can call it like:

```
dropShadowText(screen, "Hello World", 36, 50, 50)
```
I figured it out with help from Kingsley's response. Now you can choose whether to add a shadow, where it's positioned, and what colour it is. Maybe there is an easier way, but this worked for me.

```
import pygame
import sys

pygame.init()

screenSizeX = 1080
screenSizeY = 720
screenSize = (screenSizeX, screenSizeY)
screen = pygame.display.set_mode(screenSize, 0)
pygame.display.set_caption("Test Functions")

WHITE = (255, 255, 255)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)
RED = (255, 0, 0)
YELLOW = (255, 255, 0)
BLACK = (0, 0, 0)
MAGENTA = (139, 0, 139)

def horCenter(font, size, text, colour, yPos, shadow, shadowPlace, shadowColour):
    fontTitle = pygame.font.SysFont(font, size)
    textTitle = fontTitle.render(text, True, colour)
    textWidth = textTitle.get_width()
    xPos = screenSizeX/2 - textWidth/2
    if shadow == "True":
        dropShadow(font, size, text, colour, xPos, yPos, shadowPlace, shadowColour)
    else:
        screen.blit(textTitle, (xPos, yPos))

def verCenter(font, size, text, colour, xPos, shadow, shadowPlace, shadowColour):
    fontTitle = pygame.font.SysFont(font, size)
    textTitle = fontTitle.render(text, True, colour)
    textHeight = textTitle.get_height()
    yPos = screenSizeY/2 - textHeight/2
    if shadow == "True":
        dropShadow(font, size, text, colour, xPos, yPos, shadowPlace, shadowColour)
    else:
        screen.blit(textTitle, (xPos, yPos))

def cenCenter(font, size, text, colour, shadow, shadowPlace, shadowColour):
    fontTitle = pygame.font.SysFont(font, size)
    textTitle = fontTitle.render(text, True, colour)
    textHeight = textTitle.get_height()
    textWidth = textTitle.get_width()
    xPos = screenSizeX/2 - textWidth/2
    yPos = screenSizeY/2 - textHeight/2
    if shadow == "True":
        dropShadow(font, size, text, colour, xPos, yPos, shadowPlace, shadowColour)
    else:
        screen.blit(textTitle, (xPos, yPos))

def dropShadow(font, size, text, colour, xPos, yPos, shadowPlace, shadowColour):
    fontTitle = pygame.font.SysFont(font, size)
    textTitle = fontTitle.render(text, True, colour)
    shadowTitle = fontTitle.render(text, True, shadowColour)
    if shadowPlace == "bottomRight":
        newXPos = xPos + 2
        newYPos = yPos + 2
        screen.blit(shadowTitle, (newXPos, newYPos))
        screen.blit(textTitle, (xPos, yPos))
    elif shadowPlace == "bottomLeft":
        newXPos = xPos - 2
        newYPos = yPos + 2
        screen.blit(shadowTitle, (newXPos, newYPos))
        screen.blit(textTitle, (xPos, yPos))
    elif shadowPlace == "topRight":
        newXPos = xPos - 2
        newYPos = yPos - 2
        screen.blit(shadowTitle, (newXPos, newYPos))
        screen.blit(textTitle, (xPos, yPos))
    elif shadowPlace == "topLeft":
        newXPos = xPos + 2
        newYPos = yPos - 2
        screen.blit(shadowTitle, (newXPos, newYPos))
        screen.blit(textTitle, (xPos, yPos))

pygame.display.update()

go = True
while go:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            go = False
    screen.fill(WHITE)
    horCenter("Comic Sans MS", 40, "Thanos", MAGENTA, 100, "True", "topLeft", YELLOW)
    verCenter("Georgia", 10, "HULK SMASH", GREEN, 800, "True", "bottomRight", BLACK)
    cenCenter("Impact", 50, "MARVEL", RED, "True", "bottomRight", BLACK)
    verCenter("Verdana", 60, "Black Widow", BLACK, 50, "True", "topRight", RED)
    pygame.display.update()

pygame.quit()
sys.exit()
```
11,272
18,718,715
I've been looking into scraping, and I can't manage to scrape Twitter searches that date a long way back using Python code. I can do it with an add-on for Chrome, but that falls short since it will only let me obtain a very limited number of tweets. Can anybody point me in the right direction?
2013/09/10
[ "https://Stackoverflow.com/questions/18718715", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2761466/" ]
I found a related answer here: <https://stackoverflow.com/a/6082633/1449045>. You can get up to 1,500 tweets for a search, and 3,200 for a particular user's timeline (source: <https://dev.twitter.com/docs/things-every-developer-should-know>, see "There are pagination limits"). Here you can find a list of libraries that simplify use of the APIs, in different languages, including Python: <https://dev.twitter.com/docs/twitter-libraries>
You can use [snscrape](https://github.com/JustAnotherArchivist/snscrape). It doesn't require a Twitter developer account, and it can go back many years.
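A minimal usage sketch (the query string is illustrative, and attribute names such as `date` and `content` may differ between snscrape versions):

```
import snscrape.modules.twitter as sntwitter

# Search tweets from a date range far beyond the official API's limits.
query = "python since:2012-01-01 until:2012-12-31"
for i, tweet in enumerate(sntwitter.TwitterSearchScraper(query).get_items()):
    if i >= 100:  # stop after 100 tweets for this demo
        break
    print(tweet.date, tweet.content)
```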
11,275
26,928,817
I will start by saying that I have never created an app that required installation, and I have no idea how it works. But now I have to, and I am wondering how to proceed: use some kind of installation tool (InstallShield, WiX), or create a Python script and turn it into an .exe? What my installer needs to do is:

* create a folder and add it to the system path
* check a repo via http for the latest version of a file and download it into that folder

How can I approach this? Also, if I use an installation tool, can that setup file be used for updating?
2014/11/14
[ "https://Stackoverflow.com/questions/26928817", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2367912/" ]
I put this logic in my view's `form_valid`, like so:

```
def form_valid(self, form):
    self.object, created = Car.objects.get_or_create(
        **form.cleaned_data
    )
    return super(CarCreateView, self).form_valid(form)
```

To put that in context, I have a complaints case management system that registers new complaints in stages. When someone wants to complain, their details are stored in a `Complainant` model that creates a new complainant for someone's first complaint but doesn't for subsequent ones. They are then redirected to a `Case` model form along with their `id` value, which is automatically added as the `Case` object's foreign key to the `Complainant` object. That model form looks like this:

```
class ComplainantCreateView(PermissionRequiredMixin, CreateView):
    model = Complainant
    form_class = ComplainantForm
    login_url = "/login/"
    permission_required = "complaints.add_complainant"

    def form_valid(self, form):
        self.object, created = Complainant.objects.get_or_create(
            **form.cleaned_data
        )
        return HttpResponseRedirect(
            reverse(
                'select_case',
                args=[self.object.pk],
            )
        )
```
Do it in the form's `clean` method:

```
from django import forms
from django.forms import ModelForm
from myapp.models import Car

class CarForm(ModelForm):
    class Meta:
        model = Car

    def clean(self):
        if self._meta.model.objects.filter(**self.cleaned_data).count() > 0:
            raise forms.ValidationError('Car already exists')
        return self.cleaned_data
```
11,276
52,138,300
I'm trying to write simple things with the Tkinter module using Python 3.6 in Anaconda. This is my code:

```
from tkinter import *
root = Tk()
thelabel = label(root, text="this is our tk window")
thelabel.pack()
root.mainloop()
```

but I get the error:

```
TclError: can't invoke "label" command: application has been destroyed
ERROR: execution aborted
```

and I keep getting a blank Tkinter window whenever I run my code, and I don't understand what the problem is. Thanks! :)
2018/09/02
[ "https://Stackoverflow.com/questions/52138300", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9784945/" ]
One of the recommended ways to have multiple Python installations with differing libraries installed is to use [Virtualenv](https://virtualenv.pypa.io/en/stable/). This gives you the possibility to have a specific Python environment with its own set of dependencies for each project you work on. This works not only for the dependencies, but also for different versions of Python. On top of that, you can use [Pipenv](https://github.com/pypa/pipenv) to manage the different virtualenvs. In a `Pipfile` you can describe your required Python and its dependencies, which is used by `Pipenv` to manage a Python env specific to your project.
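For illustration, a minimal `Pipfile` that pins the interpreter version might look like this (the package entry is a placeholder):

```
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
tensorflow = "*"

[requires]
python_version = "3.6"
```

Running `pipenv install` with this file creates (or reuses) a virtualenv backed by a Python 3.6 interpreter.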
I found this to work after searching for a while. Here are the steps I followed to install an older Python version alongside my standard one (the whole sequence is collected into a single shell session below):

* Download the Python 3.6 tgz file from the official website (e.g. Python-3.6.6.tgz)
* Unpack it with `tar -xvzf Python-3.6.6.tgz`
* `cd Python-3.6.6`
* run `./configure`
* run `make altinstall` to install it (`install` vs `altinstall` explanation here: [Difference in details between "make install" and "make altinstall"](https://stackoverflow.com/questions/16018463/difference-in-details-between-make-install-and-make-altinstall))

You'll normally find your new Python install under `/usr/local/bin`. Now you can create a new virtualenv specifying the Python version to use:

* `virtualenv --python=python3.6 env3.6`
* Get into the virtualenv by running `source env3.6/bin/activate`.
* Install tensorflow with the classic `pip3 install tensorflow`
* Profit
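The same steps as a single shell session (a sketch; the download URL follows python.org's standard layout, so adjust the version as needed):

```
wget https://www.python.org/ftp/python/3.6.6/Python-3.6.6.tgz
tar -xvzf Python-3.6.6.tgz
cd Python-3.6.6
./configure
sudo make altinstall        # altinstall keeps the system `python` untouched
cd ..
virtualenv --python=python3.6 env3.6
source env3.6/bin/activate
pip3 install tensorflow
```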
11,277
27,751,072
I have an ordinary Python list that contains (multidimensional) NumPy arrays, all of the same shape and with the same number of values. Some of the arrays in the list are duplicates of earlier ones. I have the problem that I want to remove all the duplicates, but the fact that the data type is NumPy arrays complicates this a bit...

• I can't use set(), as NumPy arrays are not hashable.
• I can't check for duplicates during insertion, as the arrays are generated in batches by a function and added to the list with .extend().
• NumPy arrays aren't directly comparable without resorting to one of NumPy's own functions, so I can't just do something that uses "if x in list"...
• The contents of the list need to remain NumPy arrays at the end of the process; I could compare copies of the arrays converted to nested lists, but I can't convert the arrays to plain Python lists permanently.

Any suggestions on how I can remove duplicates efficiently here?
2015/01/03
[ "https://Stackoverflow.com/questions/27751072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1541913/" ]
Using the solutions here: [Most efficient property to hash for numpy array](https://stackoverflow.com/questions/16589791/most-efficient-property-to-hash-for-numpy-array), we see that hashing works best with `a.tostring()` if `a` is a numpy array. So:

```
import numpy as np

arraylist = [np.array([1,2,3,4]), np.array([1,2,3,4]), np.array([1,3,2,4])]

L = {array.tostring(): array for array in arraylist}

L.values()  # [array([1, 3, 2, 4]), array([1, 2, 3, 4])]
```
Here is one way using `tuple`:

```
>>> import numpy as np
>>> t = [np.asarray([1, 2, 3, 4]), np.asarray([1, 2, 3, 4]), np.asarray([1, 1, 3, 4])]
>>> map(np.asarray, set(map(tuple, t)))
[array([1, 1, 3, 4]), array([1, 2, 3, 4])]
```

If your arrays are multidimensional, then first flatten them to a 1-by-whatever array, then use the same idea, and reshape them at the end:

```
def to_tuple(arr):
    return tuple(arr.reshape((arr.size,)))

def from_tuple(tup, original_shape):
    return np.asarray(tup).reshape(original_shape)
```

Example:

```
In [64]: t = np.asarray([[[1,2,3],[4,5,6]], [[1,1,3],[4,5,6]], [[1,2,3],[4,5,6]]])

In [65]: map(lambda x: from_tuple(x, t[0].shape), set(map(to_tuple, t)))
Out[65]: [array([[1, 2, 3], [4, 5, 6]]), array([[1, 1, 3], [4, 5, 6]])]
```

Another option is to create a `pandas.DataFrame` out of your `ndarrays` (treating them as rows, by reshaping if needed) and using the pandas built-ins for uniquifying the rows:

```
In [34]: t
Out[34]: [array([1, 2, 3, 4]), array([1, 2, 3, 4]), array([1, 1, 3, 4])]

In [35]: pandas.DataFrame(t).drop_duplicates().values
Out[35]:
array([[1, 2, 3, 4],
       [1, 1, 3, 4]])
```

Overall, I think it's a bad idea to use `tostring()` as a quasi hash function, because you'll need more boilerplate code than in my approach just to protect against the possibility that some of the contents are mutated after they've been assigned their "hash" key in some `dict`.

If the reshaping and converting to `tuple` is too slow given the size of the data, my feeling is that this reveals a more fundamental problem: the application isn't designed well around the needs (like de-duping), and trying to cram them into some Python process running in memory is probably not the right way. At that point, I would stop to consider whether something like Cassandra, which can easily build database indices on top of large columns (or multidimensional arrays) of floating point (or other) data, isn't the saner approach.
11,278
36,278,220
I have a big third-party Python 2.7 application, a bunch of Python scripts, which is added to `PATH` and requires Python 2.7 instead of the Python 3.5 that I have by default:

```
$ python
Python 3.5.1 (default, Mar 3 2016, 09:29:07)
[GCC 5.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
$ python2
Python 2.7.11 (default, Mar 3 2016, 11:00:04)
[GCC 5.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
```

I run that application as `$ some_app arg1 arg2`. How can I make it use python2 as a temporary solution, preferably only in the current terminal window? I don't substitute a call of that application with "python"; I call it like `$ some_app arg1 arg2`. I don't want to change the version of Python permanently or modify the source code of the application.
2016/03/29
[ "https://Stackoverflow.com/questions/36278220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1708058/" ]
My favorite way: use virtualenv or pyenv
========================================

The most used tool for keeping multiple Python environments is [virtualenv](https://virtualenv.pypa.io/en/latest/). I like to use a sugary wrapper around it called [virtualenvwrapper](https://virtualenvwrapper.readthedocs.org/en/latest/). Usually I install it using:

```
sudo pip install virtualenvwrapper
```

Then I add a line like the following one to my shell profile:

```
source /usr/local/bin/virtualenvwrapper.sh
```

Now you can make a virtual environment for each version of Python. Gosh, you can make one for each Python project!

```
mkvirtualenv py3
mkvirtualenv py2 --python=python2
```

Then you can switch temporarily to python2 by typing:

```
workon py2
```

There is another tool called [pyenv](https://github.com/yyuu/pyenv) that lets you easily switch between multiple versions of Python. Quoting the project readme, "It's simple, unobtrusive, and follows the UNIX tradition of single-purpose tools that do one thing well. This project was forked from rbenv and ruby-build, and modified for Python". Looks like it is getting popular.

Both are elegant, easy to use and a perfect fit for this use case (switching to a particular Python version in a terminal window temporarily).

The Unix way: set `$PATH`
=========================

It is a best practice to write the [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)) for Python scripts like:

```
#!/usr/bin/env python
```

If the 3rd-party app you are running follows this convention, you can change the PATH. For example, suppose the default python interpreter is not the one you want:

```
$ env python --version
Python 3.5.1
```

Make a local copy of the python you want to make the default temporarily (you just have to do this one time):

```
$ mkdir ~/local
$ mkdir ~/local/python2
$ ln -s `which python2` ~/local/python2/python
```

Then every time you want to make python2 the default, do:

```
$ export PATH=~/local/python2/:$PATH
```

Now the default is the version you want:

```
$ env python --version
Python 2.7.10
```

The positive karma way: if the 3rd-party app has a hardcoded path in the shebang
================================================================================

Perhaps the app author did not follow best practices and hardcoded the Python path in the [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)). For example, suppose the first line of the program is:

```
#!/usr/bin/python
```

In this case, none of the previous tricks would work. Personally, I would change it to the canonical form and send a pull request to the author of the 3rd-party app.

The bossy way: change the default Python version system-wide
=============================================================

While this does not really answer your question - because it affects every window and every other user in the system - I'm mentioning it for the sake of completeness. For example, if you are using Ubuntu you can change the default python interpreter using:

```
$ update-alternatives --config python
```

Other Linux distributions probably have something like this.

Quick and dirty
===============

The [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)) just tells the shell to call the script using the given interpreter.
You can override the shebang by calling the app as `interpreter app`:

```
$ python2 `which app_name`
```

Or:

```
$ python2 /path/to/app_name
```

You can also just edit the program and change the [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)):

```
#!/usr/bin/python2
```

Note that this permanently changes the Python interpreter for that program; it is not temporary, nor restricted to a particular terminal. Worse, if the program is on the global PATH, all the other users of the system will be affected. If you want to restore the old behavior, you must undo your changes. That said, sometimes you just don't care about doing it right, you just want to make it work.
You can make use of a terminal alias. **The alias will not be temporary**, but it will not affect your system. You can add a short alias like **alias pyt="/usr/bin/python2.7"** to the .bashrc file inside your home directory.
11,281
42,128,830
I am trying a simple demo code of tensorflow from this [github link](https://github.com/llSourcell/tensorflow_demo). I'm currently using Python version 3.5.2:

```
Z:\downloads\tensorflow_demo-master\tensorflow_demo-master>py
Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
```

I ran into this error when I tried board.py in the command line. I have installed all the dependencies that are required for this to run.

```
def _read32(bytestream):
    dt = numpy.dtype(numpy.uint32).newbyteorder('>')
    return numpy.frombuffer(bytestream.read(4), dtype=dt)

def extract_images(filename):
    """Extract the images into a 4D uint8 numpy array [index, y, x, depth]."""
    print('Extracting', filename)
    with gzip.open(filename) as bytestream:
        magic = _read32(bytestream)
        if magic != 2051:
            raise ValueError(
                'Invalid magic number %d in MNIST image file: %s' %
                (magic, filename))
        num_images = _read32(bytestream)
        rows = _read32(bytestream)
        cols = _read32(bytestream)
        buf = bytestream.read(rows * cols * num_images)
        data = numpy.frombuffer(buf, dtype=numpy.uint8)
        data = data.reshape(num_images, rows, cols, 1)
        return data
```

```
Z:\downloads\tensorflow_demo-master\tensorflow_demo-master>py board.py
Extracting Z:/downloads/MNIST dataset\train-images-idx3-ubyte.gz
Traceback (most recent call last):
  File "board.py", line 3, in <module>
    mnist = input_data.read_data_sets(r'Z:/downloads/MNIST dataset', one_hot=True)
  File "Z:\downloads\tensorflow_demo-master\tensorflow_demo-master\input_data.py", line 150, in read_data_sets
    train_images = extract_images(local_file)
  File "Z:\downloads\tensorflow_demo-master\tensorflow_demo-master\input_data.py", line 40, in extract_images
    buf = bytestream.read(rows * cols * num_images)
  File "C:\Users\surak\AppData\Local\Programs\Python\Python35\lib\gzip.py", line 274, in read
    return self._buffer.read(size)
TypeError: only integer scalar arrays can be converted to a scalar index
```
2017/02/09
[ "https://Stackoverflow.com/questions/42128830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6833276/" ]
The code link you have provided uses a separate file named `input_data.py` to download data from MNIST, via the following two lines in `board.py`:

```
import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
```

Since MNIST data is so frequently used for demonstration purposes, Tensorflow provides a way to download it automatically. Replace the above two lines in `board.py` with the following two lines and the error will disappear:

```
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
```
This file is likely corrupt:

```
Z:/downloads/MNIST dataset\train-images-idx3-ubyte.gz
```

Let's analyze the error you posted.

This indicates that the code is currently working with the file in question:

```
Extracting Z:/downloads/MNIST dataset\train-images-idx3-ubyte.gz
```

`Traceback` indicates that a stack trace follows:

```
Traceback (most recent call last):
```

This indicates that you read your data sets from `'Z:/downloads/MNIST dataset'`:

```
File "board.py", line 3, in <module>
  mnist = input_data.read_data_sets(r'Z:/downloads/MNIST dataset', one_hot=True)
```

This indicates that the code is extracting images:

```
File "Z:\downloads\tensorflow_demo-master\tensorflow_demo-master\input_data.py", line 150, in read_data_sets
  train_images = extract_images(local_file)
```

This indicates that the code is expected to read `rows * cols * num_images` bytes:

```
File "Z:\downloads\tensorflow_demo-master\tensorflow_demo-master\input_data.py", line 40, in extract_images
  buf = bytestream.read(rows * cols * num_images)
```

This is the line that errors:

```
File "C:\Users\surak\AppData\Local\Programs\Python\Python35\lib\gzip.py", line 274, in read
  return self._buffer.read(size)
TypeError: only integer scalar arrays can be converted to a scalar index
```

I expect `size` is the problematic value; it was calculated on the previous line of the stack trace.

I can see at least two ways to proceed.

1. Delete the offending file and see if the problem goes away. This would allow you to verify that the file is somehow corrupt.
2. Use a debugger to step into the code and then inspect the values used to calculate the offending variable. Use the knowledge gained to proceed from there.
11,284
64,785,224
How can I remove keys from a Python dictionary of dictionaries based on some conditions? Example dictionary:

```
a = { 'k': 'abc', 'r': 20, 'c': { 'd': 'pppq', 'e': 22, 'g': 75 }, 'f': ''}
```

I want to remove all entries whose values are of type string. Any key can hold a dictionary as its value, and nested key-value pairs should also be handled.
2020/11/11
[ "https://Stackoverflow.com/questions/64785224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3056288/" ]
You could do:

```
a = { 'k': 'abc', 'r': 20, 'c': { 'd': 'pppq', 'e': 22, 'g': 75 }, 'f': ''}

def remove_string_values(d):
    result = {}
    for k, v in d.items():
        if not isinstance(v, str):
            result[k] = remove_string_values(v) if isinstance(v, dict) else v
    return result

res = remove_string_values(a)
print(res)
```

**Output**

```
{'r': 20, 'c': {'e': 22, 'g': 75}}
```
Something like this should work:

```
def remove_strings(a):
    no_strings = {}
    for k, v in a.items():
        if type(v) is str:
            continue
        elif type(v) is dict:
            no_strings[k] = remove_strings(v)
        else:
            no_strings[k] = v
    return no_strings
```

This function is recursive - it calls itself to process nested dictionaries. It should be possible to remove the keys in-place (without copying the dictionary), but the code would be a bit less readable that way.
11,287
12,006,267
So I can create a Django model like this:

```
from django.db import models

class Something(models.Model):
    title = models.TextField(max_length=200)
```

and I can work with it like this:

```
thing = Something()
# set title
thing.title = "First thing"
# get title
thing.title
```

All works as it should, but I'd like to understand HOW it works.

```
title = models.TextField(max_length=200)
```

In non-Django Python code, the line above defines a class variable `title` of type models.TextField, and I could also access it like this: `thing.__class__.title` ([link](http://getpython3.com/diveintopython3/iterators.html#a-plural-rule-iterator)).

But in Django, when I create an instance of Something, I suddenly have a title attribute where I can get/set text, and I cannot access it with `thing.__class__.title`. So clearly when doing thing.title I'm not accessing the class variable "title" but some generated attribute/property, or am I? I know that the fields end up in thing.\_meta.fields, but how? **What's going on, and how?**

1. Does Django create a property "title" behind the scenes?
2. What happened to the class variable "title"?
2012/08/17
[ "https://Stackoverflow.com/questions/12006267", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3272/" ]
I think it's hard to beat what the Django documentation has to [say on this](https://code.djangoproject.com/wiki/DevModelCreation):

> The Model class (see base.py) has a `__metaclass__` attribute that defines ModelBase (also in base.py) as the class to use for creating new classes. So ModelBase.`__new__` is called to create this new Example class. It is important to realise that we are creating the class object here, not an instance of it. In other words, Python is creating the thing that will eventually be bound to the Example name in our current namespace.

Basically a [metaclass](http://en.wikipedia.org/wiki/Metaclass#Python_example) defines how a class itself will be created. During creation, additional attributes/methods/anything can be bound to that class. The example this [stackoverflow answer](https://stackoverflow.com/questions/100003/what-is-a-metaclass-in-python) gives capitalizes all the attributes of a class:

```
# remember that `type` is actually a class like `str` and `int`
# so you can inherit from it
class UpperAttrMetaclass(type):
    # __new__ is the method called before __init__
    # it's the method that creates the object and returns it
    # while __init__ just initializes the object passed as parameter
    # you rarely use __new__, except when you want to control how the object
    # is created.
    # here the created object is the class, and we want to customize it
    # so we override __new__
    # you can do some stuff in __init__ too if you wish
    # some advanced use involves overriding __call__ as well, but we won't
    # see this
    def __new__(upperattr_metaclass, future_class_name,
                future_class_parents, future_class_attr):
        attrs = ((name, value) for name, value in future_class_attr.items()
                 if not name.startswith('__'))
        uppercase_attr = dict((name.upper(), value) for name, value in attrs)
        return type(future_class_name, future_class_parents, uppercase_attr)
```

In a similar way, Django's metaclass for Models can digest the attributes you've applied to the class and add various useful attributes for validation etc., including even methods and what-not.
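For intuition, here is a stripped-down sketch of the mechanism (an illustration only, not Django's actual implementation): a metaclass that collects class-level Field objects into a `_meta`-like registry and removes them from the class, so instances carry plain attributes instead.

```
class Field(object):
    def __init__(self, max_length=None):
        self.max_length = max_length

class ModelMeta(type):
    def __new__(mcs, name, bases, attrs):
        # pull Field instances out of the class namespace...
        fields = dict((k, v) for k, v in attrs.items() if isinstance(v, Field))
        for k in fields:
            del attrs[k]
        cls = super(ModelMeta, mcs).__new__(mcs, name, bases, attrs)
        cls._meta_fields = fields  # ...and stash them in a registry
        return cls

class Model(object):
    __metaclass__ = ModelMeta  # Python 2 syntax, matching the quoted docs

    def __init__(self):
        # every declared field becomes a plain instance attribute
        for k in self._meta_fields:
            setattr(self, k, None)

class Something(Model):
    title = Field(max_length=200)

thing = Something()
thing.title = "First thing"    # an ordinary instance attribute now
print(Something._meta_fields)  # {'title': <Field object ...>}
# Something.title no longer exists: the metaclass removed the class
# variable, mirroring what the question observed with thing.__class__.title
```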
Python is extremely powerful and permits the developer to use introspection. Django uses a lot of metaclasses, and it seems that models.Model uses them too. See django\db\models\base.py:

```
class Model(object):
    __metaclass__ = ModelBase
```

I think the metaclass just takes the class attributes, such as the Fields, and for each new instance of these Model subclasses creates the appropriate variables.

1. Yes, Django creates the instance variable for the property "title" automatically.
2. In the same way, the metaclass moves the fields into the meta class...
11,288
64,694,569
aws cdk returns a jsii error on an empty stack. The steps to reproduce are at the hello-world level, which makes me think that I have a version mismatch somewhere. I have re-installed the aws cli, cdk and nodejs. Any suggestions on what to look for?

Steps to reproduce:

```
mkdir myfolder
cdk init --language python
.env\Scripts\activate.ps1
python -m pip install -r requirements.txt
cdk synth
```

Returns an error AND an empty stack:

```
(.env) p$[myfolder]> cdk synth
d:\myfolder\.env\lib\site-packages\jsii\_embedded\jsii\jsii-runtime.js:13295
throw e;
^

Error: EOF: end of file, read
    at Object.readSync (fs.js:592:3)
    at SyncStdio.readLine (d:\myfolder\.env\lib\site-packages\jsii\_embedded\jsii\jsii-runtime.js:13278:33)
    at InputOutput.read (d:\myfolder\.env\lib\site-packages\jsii\_embedded\jsii\jsii-runtime.js:13203:34)
    at KernelHost.run (d:\myfolder\.env\lib\site-packages\jsii\_embedded\jsii\jsii-runtime.js:13021:32)
    at Immediate.<anonymous> (d:\myfolder\.env\lib\site-packages\jsii\_embedded\jsii\jsii-runtime.js:13029:37)
    at processImmediate (internal/timers.js:461:21) {
  errno: -4095,
  syscall: 'read',
  code: 'EOF'
}
Resources:
  CDKMetadata:
    Type: AWS::CDK::Metadata
    Properties:
      Modules: aws-cdk=1.69.0,@aws-cdk/cloud-assembly-schema=1.69.0,@aws-cdk/core=1.69.0,@aws-cdk/cx-api=1.69.0,@aws-cdk/region-info=1.69.0,jsii-runtime=Python/3.7.4
    Metadata:
      aws:cdk:path: myfolder/CDKMetadata/Default
    Condition: CDKMetadataAvailable
```

Environment:

```
- CLI Version: aws-cli/2.0.61 Python/3.7.7 Windows/10 exe/AMD64
- cdk Version: 1.69.0 (build 2b474b9)
- Node.js Version: v14.15.0
- OS: Windows 10
- Language (Version): python 3.7.4
```

I saw this error when I first started on cdk. But 'cdk synth' showed a stack, so I pressed on. I was even able to 'cdk deploy' simple stacks. Eventually, as the code became only slightly more complex, jsii errors prevented stack creation. Code created by me throws errors on my machine but does NOT error on other machines. Working cdk code from other devs will not synth or deploy stacks on my machine. So far, I have re-installed the aws cli, node.js and cdk. Any ideas where the jsii errors originate, or how to fix them?
2020/11/05
[ "https://Stackoverflow.com/questions/64694569", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14582438/" ]
For AWS-CDK on Windows, there is at least one bug in jsii documented by the AWS CDK group. Deep inside the jsii runtime (line 13278 to be exact), the aws-cdk group has a comment with a link to a nodejs bug report. I reported my problem to aws-cdk, since it seemed to be related. They reproduced the bug and created a bug report at nodejs. This bug report has links to the other bug reports. <https://github.com/aws/aws-cdk/issues/11314> But I still needed a workaround. After much trial and error, the following 2 workarounds should work for AWS-CDK on Windows. **Workaround 1:** replace the jsii 1.14.x distro in site-packages with the 1.12 or 1.13 distro. After swapping out 1.14 for 1.12 or 1.13, the errors stop. Getting a distro is a trick: you will have to get one from someone who has not upgraded or run pip, which is what I did. **Workaround 2:** move the dev environment off Windows and onto linux or mac
TL;DR: expanded workarounds. (A question was asked in the AWS-CDK bug report noted above.) **Workaround 1: replace the jsii 1.14.x distro** ***Distro folders:*** jsii is contained in 2 folders, jsii and jsii-1.14.1.dist-info. REPLACE both of these folders with the folders from an older install, 1.12 or 1.13. The distro folders can be found in one or both of the following locations: ***Distro location, virtual environment:*** the jsii distro will be found in the **site-packages** directory. Example: .env\lib\python3.8\site-packages where virtual environment name = .env and python version = 3.8 ***Distro location, non-virtual:*** this should be the location if you have not created a virtual environment. Note that, again, the python version is part of the path. C:\Program Files\Python37\Lib\site-packages Note: for some reason, they were not installed here. But that may change, so keep this in mind. C:\Users\yourusername\AppData\Roaming\Python\Python37\site-packages **Workaround 2: move to linux** In Windows, install WSL 2. Install Ubuntu or your favorite linux variety. Install Python. Use your favorite Windows IDE from Windows! I use Visual Studio Code; it requires installing an extension if I remember correctly. WARNING: I had problems when trying to run python, cdk, etc. when the files were on the Windows file system (ex: /mnt/d/project). Just copy the repo to the "pure" linux file system (ex: ~/project). I have git installed on the linux side and maintain my files as if it were a separate machine.
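If the older jsii runtime is still published on PyPI (an assumption worth verifying before relying on it), pinning the version with pip may be simpler than copying distro folders between machines:

```
pip uninstall jsii
pip install "jsii==1.13.0"
```

Either way, freeze the working version (e.g. a `jsii==1.13.0` line in requirements.txt) so a later `pip install -r requirements.txt` does not pull 1.14.x back in.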
11,289
11,661,053
I am running Windows 7 currently, and I remember when using Linux at the school computers I was able to type "gedit &" into the terminal for example to open up the gedit text editor. I was wondering whether there is a similar process to open IDLE, and for that matter a Python program/script by typing it into the "terminal-equivalent." I'm a complete newbie, so I may be off-base a bit...anyways, so there is this terminal-like program called python.exe, and it seems like it should be able to open Python-related software (like IDLE), and I was wondering 1) what python.exe is for, 2) whether it can be treated like a Linux terminal, and 3) how to do stuff in it. I've tried various commands and I get a syntax error for virtually everything. Much appreciated!
2012/07/26
[ "https://Stackoverflow.com/questions/11661053", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1553212/" ]
python.exe is the Python interpreter. All of your Python programs are executed with it. If you run it in the console, you will get an interactive prompt: ``` C:\> python Python 2.6.6 (r266:84297, Aug 24 2010, 18:46:32) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> ``` If you give it a Python program, it will run it: ``` C:\> python myprog.py ``` You can also ask it to run an importable module: ``` C:\> python -m modulename ``` You can run IDLE this way: ``` C:\> python -m idlelib.idle ```
python.exe **is** Python, the python interpreter specifically.
11,290
9,353,457
As I muddle my way through trying to learn how to code (in python) I've hit the same problem I hit frequently: how do you pass variables to and from functions properly? In this example, I want to create a new variable inside the function, but I don't know how to get it back properly (this method does not work):

```
a = "foo"

def func_1():
    b = "bar"
    print a,b
    return (b)

func_1()

print b
```

I am 'expecting' b to be available after the function call, as I have returned it... I appreciate this is me not understanding properly how to implement/manage variables. Thanks.
2012/02/19
[ "https://Stackoverflow.com/questions/9353457", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1191626/" ]
The `b` inside `func_1()` is local to the function and not available outside of the function. You need to *assign* the returned value of the function `func_1()` to a variable name in the main body. ``` b = func_1() print b ```
> > I am 'expecting' b to be available after the function call, as I have returned it > > > The fundamental misunderstanding: you aren't returning a variable, you're returning a value. Variables are labels (names) for values. Inside the function, you have a name `b` for a value `"bar"`. You use this name to refer to the value, so that you can return it. On the outside, to use this value, you must do something with it. The call to the function - `func_1()` - is a stand-in, in whatever expression contains it, for the value that gets returned. So a line of code like `func_1()`, by itself, does nothing with the result: you have an expression that evaluates to `"bar"`, but then nothing is done with `"bar"`. You could, for example, `print(func_1())` to display the string. You can assign the returned value to another variable: `c = func_1()`. Now `c` is a name for the `"bar"` value that was returned. You can use the function call as part of a larger expression: `c = func_1() + "baz"` - now `c` is a name for the string `"barbaz"`, created by joining together the string returned from the function with the directly-specified one. On the other side: to get information **into** the function, you should pass it as an argument. Arguments are the values that go inside the parentheses: when you write the function, you put parameter names (another kind of variable) inside the parentheses, and when you call the function, you put the arguments that you are calling the function with (expressions that provide the values). Now, inside the function, each time it is called, the parameters will refer to the argument values. Let us work a simple example. We will make a function that appends "bar" to an input string, and call it a few times.

```
def add_bar(to):
    return to + "bar"
```

Notice how we can return any expression. When the function is called, the expression is evaluated, joining the passed-in string `to` to the constant `"bar"`, and the resulting value is returned. There is no variable here that represents the entire return value, because again, we are returning a value, not a variable. Now we may call this. Again, the argument can be whatever expression - so a constant:

```
add_bar("foo")
```

or a variable that we have previously assigned a value:

```
my_string = "sand"
add_bar(my_string)
```

or something more complex, maybe even involving another call to the function:

```
add_bar(add_bar("bar ") + " ")
```

But again, none of these will actually do anything, except produce the string and then throw it away. So let's fix that:

```
print("I want a " + add_bar("chocolate") + ".")
```
11,293
60,445,837
I have an API endpoint where I am uploading data to using python. The endpoint accepts

```
putHeaders = {
 'Authorization': user,
 'Content-Type': 'application/octet-stream'
 }
```

My current code is doing this:

. Save a dictionary as a csv file

. Encode the csv to utf8

```
dataFile = open(fileData['name'], 'r').read().encode('utf-8')
```

. Upload the file to the api endpoint

```
fileUpload = requests.put(url, headers=putHeaders, data=(dataFile))
```

What I am trying to achieve is loading the data without saving. So far I tried converting my dictionary to bytes using data = json.dumps(payload).encode('utf-8') and loading it to the api endpoint. This works, but the output at the api endpoint is not correct. [![enter image description here](https://i.stack.imgur.com/ihIe3.png)](https://i.stack.imgur.com/ihIe3.png) Question: does anyone know how to upload csv type data without actually saving the file?
2020/02/28
[ "https://Stackoverflow.com/questions/60445837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10004110/" ]
Here's a [working example](https://stackblitz.com/edit/react-ts-w55wqg). You have to install the `@types/faker` package also for getting type definitions. ``` import React, { Component } from 'react'; import { render } from 'react-dom'; import Hello from './Hello'; import './style.css'; import faker from 'faker'; interface AppProps { } interface AppState { name: string; } class App extends Component<AppProps, AppState> { constructor(props) { super(props); this.state = { name: faker.name.firstName() }; } render() { return ( <div> <Hello name={this.state.name} /> </div> ); } } render(<App />, document.getElementById('root')); ``` This is `tsconfig.json` ``` { "compilerOptions": { "target": "es5", "lib": [ "dom", "dom.iterable", "esnext" ], "allowJs": true, "skipLibCheck": true, "esModuleInterop": true, "allowSyntheticDefaultImports": true, "strict": true, "forceConsistentCasingInFileNames": true, "module": "esnext", "moduleResolution": "node", "resolveJsonModule": true, "isolatedModules": true, "noEmit": true, "jsx": "react" }, "include": [ "src" ] } ```
I had the same issue and tried the above suggested solution without success. What worked for me was changing the following lines in `tsconfig.json`:

```
// "module": "esnext",
"module": "commonjs",
```

Or, pass it at the command-line: `ts-node -O '{"module": "commonjs"}' ./server/generate.ts > ./server/database.json` Explanation [1]: Current node.js stable releases do not support ES modules. Additionally, ts-node does not have the required hooks into node.js to support ES modules. You will need to set "module": "commonjs" in your tsconfig.json for your code to work. [1] <https://github.com/TypeStrong/ts-node#import-statements>
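Back on the original goal of uploading CSV-shaped data without writing a file: a minimal sketch using the standard library's `csv` and `io` modules. Here `url` and `putHeaders` are the names from the question, and `payload` is assumed to be a list of dicts (an assumption for illustration):

```
import csv
import io

import requests

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(payload[0]))
writer.writeheader()
writer.writerows(payload)

# encode the in-memory text to bytes, exactly as the file-based version did
fileUpload = requests.put(url, headers=putHeaders,
                          data=buf.getvalue().encode('utf-8'))
```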
11,303
2,754,753
I use pylons in my job, but I'm new to django. I'm making an rss filtering application, and so I'd like to have two backend processes that run on a schedule: one to crawl rss feeds for each user, and another to determine relevance of individual posts relative to users' past preferences. In pylons, I'd just write paster commands to update the db with that data. Is there an equivalent in django? EG is there a way to run the equivalent of `python manage.py shell` in a non-interactive mode?
2010/05/02
[ "https://Stackoverflow.com/questions/2754753", "https://Stackoverflow.com", "https://Stackoverflow.com/users/303931/" ]
Yes, this is actually how I run my cron backup scripts. You just need to load your virtualenv if you're using virtual environments and your project settings. I hope you can follow this, but after the line `# manage.py shell` you can write your code just as if you were in `manage.py shell` You can import your virtualenv like so: ``` import site site.addsitedir(VIRTUALENV_PATH + '/lib/python2.6/site-packages') ``` You can then add the django project to the path ``` import sys sys.path.append(DJANGO_ROOT) sys.path.append(PROJECT_PATH) ``` Next you load the django settings and chdir to the django project ``` import os from django.core.management import setup_environ from myproject import settings setup_environ(settings) os.chdir(PROJECT_PATH) ``` After this point your environment will be set just like if you started with `manage.py shell` You can then run anything just as if you were in the interactive shell. ``` from application.models import MyModel for element in MyModel: element.delete() ``` Here is my backup file in full. I've abstracted the process out into functions. This would be named `daily_backup` and be put into the `cron.daily` folder to be run daily. You can see how to set up the environment and modify the functionality as needed. ``` #!/usr/bin/env python import sys import os import site import logging from datetime import datetime PROJECT_NAME = 'myproject' DJANGO_ROOT = '/var/www/django' PROJECT_PATH = DJANGO_ROOT + '/' + PROJECT_NAME VIRTUALENV_PATH = '/var/www/envs/'+ PROJECT_NAME BACKUP_DIR = '/var/www/backups/%s/daily' % (PROJECT_NAME) TODAY = datetime.now().strftime('%Y%m%d-%H%M%S') FILE_NAME = PROJECT_NAME + '_' + TODAY site.addsitedir(VIRTUALENV_PATH + '/lib/python2.6/site-packages') sys.path.append(DJANGO_ROOT) sys.path.append(PROJECT_PATH) from django.core.management import setup_environ from myproject import settings setup_environ(settings) os.chdir(PROJECT_PATH) # manage.py shell from django.conf import settings logging.basicConfig(level=logging.WARN) def _setup(): if not os.path.exists(BACKUP_DIR): logging.debug('Creating backup directory ' + BACKUP_DIR) os.mkdir(BACKUP_DIR) os.mkdir(BACKUP_DIR + '/databases') else: logging.debug('Using backup directory ' + BACKUP_DIR) def _remove_old(): logging.debug('Cleaning out old backups') # keep past 7 days command = "find %s* -name '%s*' -mtime +7 -exec rm {} \\;" % (BACKUP_DIR, PROJECT_NAME) os.system(command) def _create_backup(): logging.debug('Backup database') if settings.DATABASE_ENGINE == 'mysql': command = 'mysqldump -u %s --password=%s %s > %s/databases/%s.sql' % (settings.DATABASE_USER, settings.DATABASE_PASSWORD, settings.DATABASE_NAME, BACKUP_DIR, FILE_NAME) else: command = '%s/bin/python %s/manage.py dumpdata --indent=4 > %s/databases/%s.json' % (VIRTUALENV_PATH, PROJECT_PATH, BACKUP_DIR, FILE_NAME) os.system(command) logging.debug('Backup project') command = 'tar -czf %s/%s.tgz -C %s %s/' % (BACKUP_DIR, FILE_NAME, DJANGO_ROOT, PROJECT_NAME) os.system(command) if __name__ == '__main__': _setup() _remove_old() _create_backup() ```
Sounds like you need some twod.wsgi in your life: <http://packages.python.org/twod.wsgi/>
11,304
5,482,546
So I have a list with a whole bunch of tuples

```
j = [('jHKT', 'Dlwp Dfbd Gwlgfwqs (1kkk)', 53.0),
 ('jHKT', 'jbdbjf Bwvbly (1kk1)', 35.0),
 ('jHKT', 'Tfstzfy (2006)', 9.0),
 ('jHKT', 'fjznfnt Dwjbzn (1kk1)', 25.0),
 ('jHKT', 'Vznbsq sfnkz (1k8k)', 4.0),
 ('jHKT', 'fxzt, Clwwny! (2005)', 8.0),
 ('jHKT', "Dwfs Thzs jfbn Wf'lf jbllzfd? (1kk1)", 12.0),
 ('jHKT', 'Chbzljbn wf thf Bwbld (1kk8)', 30.0),
 ('jHKT', 'Vblfdzctzwn (2006)', 8.0),
 ('jHKT', 'jwltbl Kwjbbt (1kk5)', 13.0)]
```

and I tried to sort it using the third element of the tuple as the key. Note that the list above is just a partial list... the actual list contains thousands of elements. Anyway, so I did:

```
j = sorted(j, key=lambda e : e[2])
```

But when I do that, it looks like it messes up the third element of the tuples and I highly doubt that it actually sorted... here's another partial list of the output

```
('jHKT', 'Frz yzng (2004)', 0.0)
('jHKT', 'kff thr Mvp (2003)', 0.0)
('jHKT', 'HzpHkpBvttlr.ckm: Hzp Hkp 4 Lzfr (2001)', 0.0)
('jHKT', 'z Wvlk thr Lznr (1970)', 0.0)
('jHKT', '1971: erzsknrrs kf svr (2007)', 0.0)
('jHKT', 'Wzld Rzdr, Thr (1960)', 0.0)
('jHKT', 'Dzshdkgz (2005)', 0.0)
('jHKT', 'Lzttlr Thzngs, Thr (2006)', 0.0)
('jHKT', 'Trrmznvl rrrkr (2002)', 0.0)
('jHKT', 'Hqngry Bvchrlkrs Clqb, Thr (1999)', 0.0)
('jHKT', 'Swrrt Lkvr, Bzttrr (1967)', 0.0)
('jHKT', 'Trn tk Chz tk (1990)', 0.0)
('jHKT', 'Bvr-Crl-knv (1987)', 0.0)
('jHKT', 'Rknny & Czndy zn vll kf qs (2006)', 0.0)
```

In this case, it ended up resetting all of the third elements of the tuples to 0... What did I do wrong?? I'm using python 3 **##############################EDIT####################################** Also, when I try to print the list of tuples, it returns this error:

```
print(j)
IOError: [Errno 22] Invalid argument
```

and the printing abruptly stops...:

```
('sadfasdf (1991)', 'xcvwert (1985)', 0.0), ('r3sdaf (1991)', 'jkzxkk (1993)', 0.0), ('werwww (1991)', 'Third WhTraceback (most recent call last):
```

and then the error appears **################EDIT###################** On the other hand, printing the list by iterating works just fine, so

```
for i in j:
    print(i)
```

works fine, whereas just print(j) returns that error
2011/03/30
[ "https://Stackoverflow.com/questions/5482546", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380714/" ]
I think your code works correctly and you are simply seeing the first part of the list, where the key really is 0.0. You just sorted the list in ascending order :-)
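If descending order was actually the intent, only the sort call needs to change, a one-line sketch:

```
j = sorted(j, key=lambda e: e[2], reverse=True)
```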
It's probably worth comparing the sorted and unsorted lists to see if the sort is actually changing the data. You could try something as simple as:

```
print(sum(e[2] for e in j))
j = sorted(j, key=lambda e: e[2])
print(sum(e[2] for e in j))
```
11,306
70,541,783
I'm a python user learning R. Frequently, I need to check if the columns of a dataframe contain NaN(s). In python, I can simply do

```
import pandas as pd
df = pd.DataFrame({'colA': [1, 2, None, 3], 'colB': ['A', 'B', 'C', 'D']})
df.isna().any()
```

giving me

```
colA     True
colB    False
dtype: bool
```

In R I'm struggling to find an easy solution. People refer to some apply-like methods but that seems overly complex for such a primitive task. The closest solution I've found is this:

```
library(tidyverse)
df = data.frame(colA = c(1, 2, NA, 3), colB = c('A', 'B', 'C', 'D'))
!complete.cases(t(df))
```

giving

```
[1]  TRUE FALSE
```

That's OK-ish, but I don't see the column names. If the dataframe has 50 columns I don't know which ones have NaNs. Is there a better R solution?
2021/12/31
[ "https://Stackoverflow.com/questions/70541783", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2743931/" ]
The best way to check whether columns have NAs is to loop over the columns with a function that checks `any(is.na(x))`.

```
lapply(df, function(x) any(is.na(x)))

$colA
[1] TRUE

$colB
[1] FALSE
```

I can see you load the tidyverse yet did not use it in your example. If we want to do this within the tidyverse, we can use purrr:

```
library(purrr)

df %>%
  map(~any(is.na(.x)))
```

Or with dplyr:

```
library(dplyr)

df %>%
  summarise(across(everything(), ~any(is.na(.x))))

   colA  colB
1  TRUE FALSE
```
The easiest way would be: ``` df = data.frame(colA = c(1, 2, NA, 3), colB = c('A', 'B', 'C', 'D')) is.na(df) ``` **Output:** ``` colA colB [1,] FALSE FALSE [2,] FALSE FALSE [3,] TRUE FALSE [4,] FALSE FALSE ``` **Update**, if you only want to see the rows containing NA: ``` > df[rowSums(is.na(df)) > 0,] colA colB 3 NA C ``` **Update2**, or to get only ColNames with information about NA (thanks to RSale for `anyNA`): ``` > lapply(df, anyNA) $colA [1] TRUE $colB [1] FALSE ```
11,309
67,126,748
I have a problem with my python (python3.8.5) project! I have two docker containers. One is used for the frontend (container2), using flask. There I have a website from which I can send requests (requests package) to the second container (backend). In the backend (container1) I have to create a zip file containing multiple files. That part is done. Now the problem is: how can I get this file to the frontend (container2), from where I need to download it via the website? So I need a solution to send an http request (now I have requests.get(URL to backend)) from the frontend to the backend. In the response I need the zip file, which I can then download from the website. I have already googled for many hours but I cannot find a good solution. I think it would be good if I don't have to store the zip file on both the frontend and the backend. If possible I would like to have it stored just on the backend, sent to the frontend and directly downloaded. I hope you understand my problem and can help me. Br
2021/04/16
[ "https://Stackoverflow.com/questions/67126748", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15661344/" ]
A probably incomplete solution, that does what you describe. Depending on other concerns (like file size) this might not be what you really want, but it works. As an example I have 2 flask servers: flask1 (backend) and flask2 (frontend): flask1: ``` from flask import Flask, send_file, request app = Flask(__name__) @app.route('/get_file') def get_image(): return send_file(request.args.get('name')) ``` flask2: ``` from flask import Flask, send_file, request import io import requests app = Flask(__name__) @app.route('/get_something') def get_image(): r = requests.get("http://localhost:5000/get_file?name="+request.args.get('name')) return send_file(io.BytesIO(r.content), mimetype=r.headers['Content-Type'], as_attachment=True, attachment_filename="file") ``` To test you can run them on different ports, like: ``` FLASK_APP=flask1.py flask run --port=5000 FLASK_APP=flask2.py flask run --port=8080 ``` Then if you go to the browser and access the frontend with an URL like: http://localhost:8080/get\_something?name=download.png the frontend will "pass" the request to the backend, then re-send what it receives, hence returning the file. Things that code above does not address: file naming and errors.
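If the files can get large, a variant of the same flask2 route can forward the body chunk by chunk instead of buffering it all in memory. A sketch under the same assumed names and ports:

```
from flask import Flask, Response, request
import requests

app = Flask(__name__)

@app.route('/get_something')
def get_image():
    # stream=True keeps requests from reading the whole body at once;
    # the generator from iter_content() is forwarded as a streamed response
    r = requests.get("http://localhost:5000/get_file",
                     params={"name": request.args.get('name')}, stream=True)
    return Response(r.iter_content(chunk_size=8192),
                    content_type=r.headers.get('Content-Type'))
```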
You could proxy the request from frontend. I assume you use something like react. So if you add a line like: ``` "proxy": "http://backend-container:8080" ``` to your package.json, frontend should proxy to the backend. Now you can send requests like http://front-end/api/hello and it will hit the backend. Not sure if it is the best solution but might work. examples: <https://create-react-app.dev/docs/proxying-api-requests-in-development/> <https://medium.com/bb-tutorials-and-thoughts/react-how-to-proxy-to-backend-server-5588a9e0347> <https://www.twilio.com/blog/react-app-with-node-js-server-proxy>
11,312
71,451,635
This is my first question here. The question is mostly about Python but a little bit about writing a crypto trading bot. I need to calculate some trading indicators, like Bollinger Bands or RSI, in my bot. Indicators are calculated from candlestick data, and candlestick data is updated once per time period. For example, if I use candlestick data with a 5-minute period I need to calculate indicators every 5 minutes (after the candlesticks update), and if I use candlestick data with a 1-hour period I need to calculate indicators every 1 hour. So, say that I have two classes. One is named "TradeControl" and it has an attribute named "candlestick_data". The second class is named "CandlestickUpdater". I want to manage the updates of "candlestick_data" in the "TradeControl" class from the "CandlestickUpdater" class. My problem is about parallel processes (or threading) in python. I don't know how to do it the proper way. Is there anybody here who could give me some advice or sample code? I am also wondering whether I should use mutex-like structures to guard against race conditions. What I am trying to do in code is shown below. Also, if you have a different suggestion, I would like to consider it.

```
class TradeControl():
    def __init__(self):
        self.candlestick_data = 0

    def update_candlestick(self):
        """Candlesticks data will be requested from exchange and updated in here"""
        pass

    def run_trading_bot(self):
        while True:
            """Main trading algorithm is running and using self.candlestick_data in here"""
            pass
        pass
    pass

class CandlestickUpdater():
    def __init__(self, controller: TradeControl):
        self.controller = controller

    def run_candlestick_updater(self):
        while True:
            """Candlesticks data updates will be checked in here"""
            if need_to_update:
                """TradeControl will be informed in here"""
                self.controller.update_candlestick()
                pass
            pass
        pass
    pass
```

Thanks for your replies,
2022/03/12
[ "https://Stackoverflow.com/questions/71451635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
It's not quite obvious from your description what `CandlestickUpdater` is for. It looks like its purpose is to notify `TradeControl` that the time has come to update `candlestick_data`, whereas the updating logic itself is in the `TradeControl.update_candlestick` method. So basically `CandlestickUpdater` is a timer. There are different python libs for scheduling events. [sched](https://docs.python.org/3/library/sched.html) for example. It seems that it can be used for scheduling the `TradeControl.update_candlestick` method with the required time period. If you still want to have the `CandlestickUpdater` class, I think it can actually work as it's written in your code. You can run these classes in threads this way:

```py
from threading import Thread

trade_control = TradeControl()
candlestick_updater = CandlestickUpdater(controller=trade_control)

Thread(target=trade_control.run_trading_bot).start()
Thread(target=candlestick_updater.run_candlestick_updater).start()
```

Speaking of race conditions: well, it can be the case, but it depends on your implementation of `update_candlestick` and `run_trading_bot`. It's better to create a `TradeControl.lock: threading.Lock` attribute and use it when updating `candlestick_data`: <https://docs.python.org/3/library/threading.html#lock-objects>. There is also another way of communicating between threads without direct method calls. Look at [blinker](https://pypi.org/project/blinker/). It's a nice lib for implementing event-driven scenarios. You just need to create a "signal" object (outside of the `TradeControl` and `CandlestickUpdater` classes) and connect the `TradeControl.update_candlestick` method to it. Then you just call the "signal.send" method in `CandlestickUpdater.run_candlestick_updater` and the connected functions will be executed. It can be a little bit tricky to connect an instance method to a signal, but there are some solutions for it. The easiest one is to make `CandlestickUpdater.run_candlestick_updater` static if that is possible for you.
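A minimal sketch tying the two suggestions together: `sched` for the periodic trigger and a `threading.Lock` guarding `candlestick_data`. The 300-second period stands in for 5-minute candles; the method names mirror the question and the body of `update_candlestick` is a placeholder:

```
import sched
import threading
import time


class TradeControl(object):
    def __init__(self):
        self.candlestick_data = 0
        self.lock = threading.Lock()  # guards candlestick_data across threads

    def update_candlestick(self):
        with self.lock:  # the writer takes the lock...
            self.candlestick_data += 1  # placeholder for the real exchange request

    def read_candlestick(self):
        with self.lock:  # ...and so does every reader
            return self.candlestick_data


def schedule_updates(scheduler, controller, period):
    controller.update_candlestick()
    # re-enter ourselves so the update repeats every `period` seconds
    scheduler.enter(period, 1, schedule_updates, (scheduler, controller, period))


controller = TradeControl()
s = sched.scheduler(time.time, time.sleep)
s.enter(0, 1, schedule_updates, (s, controller, 300))
threading.Thread(target=s.run, daemon=True).start()
```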
> > **Q :** *" ... should (I) use mutex-like structures to guard against conditions like race condition "... ?* > > > **A :** Given the facts of how the Python Interpreter works ( since ever & most probably forever, as Guido van Rossum has expressed himself ) with all its threads, distributing ***"a permission to work"*** for ~ 100 [ms] quanta of time via the central **G**-lobal **I**-nterpreter **L**-ock ( GIL-lock ), there are no interpreter-level race-conditions to meet in Python ( application-level read-modify-write sequences can still race, as can software-created live-locks or some indeed artificially generated mutually blocked obscurities ). In other words, all Python Interpreter threads are principally collision-free at the bytecode level, as the GIL-lock re-`[SERIAL]`-ises any and all threads to one-and-only-ONE-works-(if-GIL-lock-owner)-ALL-others-WAIT... > > **Q :** *" My problem is about parallel process (or threading) in python. "* > > > **A :** Threads are never even "just"-`[CONCURRENT]` in Python, so they never form a True-`[PARALLEL]` form of process-orchestration at all. Processes escape from the GIL-lock re-`[SERIAL]`-isation tyranny, yet at many add-on costs and barriers from a shift into a new paradigm of [distributed-processing](/questions/tagged/distributed-processing "show questions tagged 'distributed-processing'") If Process-based `[CONCURRENT]`-processing happens to get onto the scene : ------------------------------------------------------------------------- If going into process-based "just"-`[CONCURRENT]`-processing ( using `joblib.Parallel(...)delayed(...)`, `multiprocessing` or other modules ), there you start with as many independent ( each being inside still GIL-lock driven, but independently one from any other - an uncoordinated freedom of mutually independent GIL-locks' reservations ) processes, top-down copies ( all Win, some MacOS ) of the original Python Interpreter ( yes, due to RAM-footprint + RAM-I/O it is `[SPACE]`- and `[TIME]`-domain expensive ). Here you cannot enjoy calling a local-process Class-method or similar mechanics for propagating state-changes in-process, as your "receiver" may reside in the original Python Interpreter process, which has a principally separate address-space that your local-process cannot ( and shall never ) be allowed to touch, let alone change a single bit thereof. Yet we may use here tools for distributed-computing signalling and/or message-passing, like [zeromq](/questions/tagged/zeromq "show questions tagged 'zeromq'"), [pyzmq](/questions/tagged/pyzmq "show questions tagged 'pyzmq'"), [nanomsg](/questions/tagged/nanomsg "show questions tagged 'nanomsg'"), [pynng](/questions/tagged/pynng "show questions tagged 'pynng'") and the likes, that meet your needs, some of which have a vast Quant/Fintech installed base with many language bindings/wrappers ( we have used ZeroMQ since v2.11+ among a python AI/ML-Ensemble-of-Predictors, MQL4 for driving FX-Broker XTOs & an MQL4 Backtesting simulation-farm, a python non-blocking distributed logger, python remote CLI agents, scale-out C language tools for multiple systems' integration glue logic, a signal-provisioning service, subscriber-management, ... a macro-system health-monitor & alarm system ) The above sketched distributed-computing macro-system spans several platforms and has no problems processing the sub-millisecond cadence of the FX-market stream of `QUOTE`-events, handling tens of thousands of managed trade positions ( driving live FX-market XTO-modifications ) and maintaining all QuantFX-technical indicator re-calculations on the fly.
--- Building one's own event-responsive MVC-framework -OR- re-using a robust & stable & feature-rich one : --------------------------------------------------------------------------------------------------- Having said the above, you may re-use another concept, when living in the comfort of plenty of time, as when going into just H1-timeframe operations. Have a look at `Tkinter.mainloop()`-examples, with tools for both event-driven & timed-callbacks, that will permit you anything within the ceiling of your imagination, including a GUI and infinitely many modes of Man-Machine-Interaction and gadgets.
11,313
70,648,020
[![enter image description here](https://i.stack.imgur.com/dbDhf.png)](https://i.stack.imgur.com/dbDhf.png) I configured python 3.6 in JetBrains initially but uninstalled it today since it reached end-of-life. I installed version 3.10 but I keep getting the error that "Python 3.1 has reached end of date." Clicking on 'Configure Python Interpreter'>Project Settings>Modules>Dependencies>Python 3.10 doesn't solve the problem.[![enter image description here](https://i.stack.imgur.com/J6AWY.png)](https://i.stack.imgur.com/J6AWY.png) Using File->settings and searching for interpreter shows only this option [![enter image description here](https://i.stack.imgur.com/iCagV.png)](https://i.stack.imgur.com/iCagV.png). Used "Use SDK of module" and "specified Python Interpreter" options but the error persists. I can't find any other resolution on the JetBrains support page for this. Thank you for your help! * I tried removing all existing configuration SDK from File | Project Structure | Platform Settings | SDKs and adding Python 3.10 again. * A screenshot of the interpreter configured in an existing virtual environment is attached here. [![enter image description here](https://i.stack.imgur.com/3Gop3.png)](https://i.stack.imgur.com/3Gop3.png) * When I click ok, I get this error.[![enter image description here](https://i.stack.imgur.com/MThnm.png)](https://i.stack.imgur.com/MThnm.png) * I figured this may be since my venv was initially configured in the 3.6 version but I'm not able to configure a new venv with Python 3.10. (The Ok option is greyed out) [![enter image description here](https://i.stack.imgur.com/iJcKv.png)](https://i.stack.imgur.com/iJcKv.png) While trying to add venv manually using /venv or /venv2, I get the error- 'Specified path cannot be found' [![enter image description here](https://i.stack.imgur.com/rlTIz.png)](https://i.stack.imgur.com/rlTIz.png)
2022/01/10
[ "https://Stackoverflow.com/questions/70648020", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13706804/" ]
We can do this with only one function, `Array.find()`, by checking inside the callback whether the element's status equals `Remove invalid product`. See the example below.

```js
const responses = [{
    "productName": "Required"
  },
  null,
  {
    "status": "Remove invalid product"
  },
  {
    "status": "Remove invalid product"
  }
];

const match = responses.find(r => r?.status === 'Remove invalid product');
console.log(match ? match.status : 'No invalid product found');
```

**Hope this helped!**
You can do it using `Array.find()`. Try this code; it should help you!

```
function App() {
  const responses = [
    { "productName": "Required" },
    null,
    { "status": "Remove invalid product" },
    { "status": "Remove invalid product" }
  ];
  // find() returns the first matching element, or undefined if there is none
  const invalid = responses.find(x => x && x.status === "Remove invalid product");
  return (
    <div> checkResponses : {invalid ? invalid.status : "No invalid product found"}</div>
  );
};

export default App;
```
11,314
12,491,731
I am using tkMessageBox.showinfo ([info at tutorialspoint](http://www.tutorialspoint.com/python/tk_messagebox.htm)) to pop up warnings in my program. The problem happens only when the warning is called with a second TopLevel window (apart from the main one) on screen: in this case the warning remains hidden behind the second TL window. I tried to call it thus:

```
tkMessageBox.showinfo(title='Warning',message=s).lift()
```

but it doesn't work. Any ideas?
2012/09/19
[ "https://Stackoverflow.com/questions/12491731", "https://Stackoverflow.com", "https://Stackoverflow.com/users/538256/" ]
I think the message box is only ever guaranteed to be above its parent. If you create a second toplevel and you want a messagebox to be on top of that second window, make that second window the parent of the messagebox. ``` tl2 = tk.Toplevel(...) ... tkMessageBox.showinfo("Say Hello", "Hello World", parent=tl2) ```
I do not see the issue that you describe. The code I wrote below is just about the minimum needed to create a window which creates a second window. The second window creates an info box using the `showinfo` method. I wonder whether you have something besides this. (Note that I made the windows somewhat large in order to attempt cover up the info window.) ``` from Tkinter import Tk, Button, Toplevel import tkMessageBox top = Tk() def make_window(): t = Toplevel(top) t.title("I'm Window 2. Look at me too!") B2 = Button(t, text = "Click me", command = hello) B2.pack() t.geometry('500x500+50+50') def hello(): tkMessageBox.showinfo("Say Hello", "Hello World") B1 = Button(top, text = "New Window", command = make_window) B1.pack() top.title("I'm Window 1. Look at me!") top.geometry('500x500+100+100') top.mainloop() ``` This was tested on Windows 7 (64-bit) using Python 2.7 (32-bit). It produces something like this: ![enter image description here](https://i.stack.imgur.com/20apT.png)
11,317
57,672,921
For clarity, I was looking for a way to compile multiple regexes at once. For simplicity, let's say that every expression should be in the format `(.*) something (.*)`. There will be no more than 60 expressions to be tested. As seen [here](https://stackoverflow.com/questions/42136040/how-to-combine-multiple-regex-into-single-one-in-python), I finally wrote the following.

```py
import re

re1 = r'(.*) is not (.*)'
re2 = r'(.*) is the same size as (.*)'
re3 = r'(.*) is a word, not (.*)'
re4 = r'(.*) is world know, not (.*)'

sentences = ["foo2 is a word, not bar2"]

for sentence in sentences:
    match = re.compile("(%s|%s|%s|%s)" % (re1, re2, re3, re4)).search(sentence)
    if match is not None:
        print(match.group(1))
        print(match.group(2))
        print(match.group(3))
```

As the regexes are separated by a pipe, I thought matching would automatically stop once one of the rules had matched. Executing the code, I have

```
foo2 is a word, not bar2
None
None
```

But by inverting re3 and re1 in re.compile `match = re.compile("(%s|%s|%s|%s)" % (re3, re2, re1, re4)).search(sentence)`, I have

```
foo2 is a word, not bar2
foo2
bar2
```

As far as I can understand, the first rule is applied but not the others. Can someone please point me in the right direction on this case? Kind regards,
2019/08/27
[ "https://Stackoverflow.com/questions/57672921", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2243607/" ]
There are various issues with your example: 1. You are using a *capturing* group, so it gets the index `1` that you'd expect to reference the first group of the inner regexes. Use a non-capturing group `(?:%s|%s|%s|%s)` instead. 2. Group indexes increase even inside `|`. So`(?:(a)|(b)|(c))` you'd get: ``` >>> re.match(r'(?:(a)|(b)|(c))', 'a').groups() ('a', None, None) >>> re.match(r'(?:(a)|(b)|(c))', 'b').groups() (None, 'b', None) >>> re.match(r'(?:(a)|(b)|(c))', 'c').groups() (None, None, 'c') ``` It seems like you'd expect to only have one group 1 that returns either `a`, `b` or `c` depending on the branch... no, indexes are assigned in order from left to right without taking account the grammar of the regex. The [`regex`](https://pypi.org/project/regex/) module does what you want with numbering the groups. If you want to use the built-in module you'll have to live with the fact that numbering is not the same between different branches of the regex *if* you use named groups: ``` >>> import regex >>> regex.match(r'(?:(?P<x>a)|(?P<x>b)|(?P<x>c))', 'a').groups() ('a',) >>> regex.match(r'(?:(?P<x>a)|(?P<x>b)|(?P<x>c))', 'b').groups() ('b',) >>> regex.match(r'(?:(?P<x>a)|(?P<x>b)|(?P<x>c))', 'c').groups() ('c',) ``` (Trying to use that regex with `re` will give an error for duplicated groups).
Giacomo answered the question. However, I also suggest: 1) put the "compile" before the loop, 2) gather the non-empty groups in a list, 3) think about using `(.+)` instead of `(.*)` in re1, re2, etc.

```
rex = re.compile("%s|%s|%s|%s" % (re1, re2, re3, re4))
for sentence in sentences:
    match = rex.search(sentence)
    if match:
        l = [g for g in match.groups() if g is not None]
        print(l[0], l[1])
```
11,318
15,390,807
Is there an integer square root somewhere in python, or in standard libraries? I want it to be exact (i.e. return an integer), and bark if there's no solution. At the moment I rolled my own naive one: ``` def isqrt(n): i = int(math.sqrt(n) + 0.5) if i**2 == n: return i raise ValueError('input was not a perfect square') ``` But it's ugly and I don't really trust it for large integers. I could iterate through the squares and give up if I've exceeded the value, but I assume it would be kinda slow to do something like that. Also I guess I'd probably be reinventing the wheel, something like this must surely exist in python already...
2013/03/13
[ "https://Stackoverflow.com/questions/15390807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/674039/" ]
Sorry for the very late response; I just stumbled onto this page. In case anyone visits this page in the future, the python module gmpy2 is designed to work with very large inputs, and includes among other things an integer square root function. Example: ``` >>> import gmpy2 >>> gmpy2.isqrt((10**100+1)**2) mpz(10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001L) >>> gmpy2.isqrt((10**100+1)**2 - 1) mpz(10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000L) ``` Granted, everything will have the "mpz" tag, but mpz's are compatible with int's: ``` >>> gmpy2.mpz(3)*4 mpz(12) >>> int(gmpy2.mpz(12)) 12 ``` See [my other answer](https://stackoverflow.com/a/31224469/2554867) for a discussion of this method's performance relative to some other answers to this question. Download: <https://code.google.com/p/gmpy/>
One option would be to use the `decimal` module, and do it in sufficiently-precise floats: ``` import decimal def isqrt(n): nd = decimal.Decimal(n) with decimal.localcontext() as ctx: ctx.prec = n.bit_length() i = int(nd.sqrt()) if i**2 != n: raise ValueError('input was not a perfect square') return i ``` which I think should work: ``` >>> isqrt(1) 1 >>> isqrt(7**14) == 7**7 True >>> isqrt(11**1000) == 11**500 True >>> isqrt(11**1000+1) Traceback (most recent call last): File "<ipython-input-121-e80953fb4d8e>", line 1, in <module> isqrt(11**1000+1) File "<ipython-input-100-dd91f704e2bd>", line 10, in isqrt raise ValueError('input was not a perfect square') ValueError: input was not a perfect square ```
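Both answers above predate Python 3.8; since 3.8 the standard library ships `math.isqrt`, which works on arbitrarily large ints and makes the exact check trivial:

```
import math

def isqrt_exact(n):
    i = math.isqrt(n)  # floor of the true square root, exact for any size of int
    if i * i != n:
        raise ValueError('input was not a perfect square')
    return i

assert isqrt_exact(11**1000) == 11**500
```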
11,319
72,394,305
I used this command: `python manage.py runserver 0:8080` After I logged in to the system I can reach the REST API pages, but can't reach other pages. Although I send a request, nothing is logged in the command output.

```
Quit the server with CONTROL-C.
```
2022/05/26
[ "https://Stackoverflow.com/questions/72394305", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15506634/" ]
You *can* actually convert from a string holding a base-16 number to a numeric value in plain Sqlite, but it's kind of ugly, using a [recursive CTE](https://sqlite.org/lang_with.html#recursive_common_table_expressions):

```sql
WITH RECURSIVE hexnumbers(hexstr) AS (VALUES ('0x123'), ('0x4'), ('0x7f')),
parse_hex(num, hexstr, digit, remaining) AS
 (SELECT 0, upper(hexstr), 3, length(hexstr) - 2 FROM hexnumbers
  UNION ALL
  SELECT (num * 16) +
           CASE substr(hexstr, digit, 1)
             WHEN '0' THEN 0
             WHEN '1' THEN 1
             WHEN '2' THEN 2
             WHEN '3' THEN 3
             WHEN '4' THEN 4
             WHEN '5' THEN 5
             WHEN '6' THEN 6
             WHEN '7' THEN 7
             WHEN '8' THEN 8
             WHEN '9' THEN 9
             WHEN 'A' THEN 0xA
             WHEN 'B' THEN 0xB
             WHEN 'C' THEN 0xC
             WHEN 'D' THEN 0xD
             WHEN 'E' THEN 0xE
             WHEN 'F' THEN 0xF
           END,
         hexstr, digit + 1, remaining - 1
  FROM parse_hex
  WHERE remaining > 0)
SELECT hexstr, num FROM parse_hex WHERE remaining = 0;
```

gives

```none
hexstr  num
------  ---
0X4     4
0X7F    127
0X123   291
```

---

If you want to go the C function route, your implementation function should look something like:

```c
void hex_to_dec(sqlite3_context *context, int argc, sqlite3_value **argv) {
  int vtype = sqlite3_value_numeric_type(argv[0]);
  if (vtype == SQLITE_TEXT) {
    const char *str = (const char *)sqlite3_value_text(argv[0]);
    if (str && *str) {
      sqlite3_result_int64(context, strtol(str, NULL, 16));
    }
  } else if (vtype == SQLITE_INTEGER) {
    sqlite3_result_value(context, argv[0]);
  }
}
```

except maybe with more error checking. The one in @choroba's answer creates a blob, not an integer, which isn't what you want.
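If the database is driven from Python anyway, the same user-defined-function idea takes only a few lines with the standard sqlite3 module, a sketch:

```
import sqlite3

con = sqlite3.connect(":memory:")
# register a one-argument SQL function backed by Python's int() with base 16
con.create_function("hex_to_dec", 1,
                    lambda s: int(s, 16) if isinstance(s, str) else s)

print(con.execute("SELECT hex_to_dec('0x7f'), hex_to_dec('123')").fetchone())
# -> (127, 291)
```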
SQLite doesn't convert from hex to dec; you need to write such a function yourself. An example can be found in the [SQLite Forum](https://sqlite.org/forum/info/79dc039e21c6a1ea):

```
/*
** Function UNHEX(arg) -> blob
**
** Decodes the arg which must be an even number of hexadecimal characters into a blob and returns the blob
**
*/
#ifdef __cplusplus
extern "C" {
#endif

#ifndef SQLITE_PRIVATE
#define SQLITE_PRIVATE static
#endif

#include <stdlib.h>
#include <string.h>

#ifdef SQLITE_CORE
#include "sqlite3.h"
#else
#ifdef _HAVE_SQLITE_CONFIG_H
#include "config.h"
#endif
#include "sqlite3ext.h"
SQLITE_EXTENSION_INIT1
#endif

#ifndef _WIN32_WINNT
#define _WIN32_WINNT 0x0600
#endif

static void _unhexFunc(sqlite3_context *context, int argc, sqlite3_value **argv)
{
    long olength = sqlite3_value_bytes(argv[0]);
    long length;
    unsigned char* data = (unsigned char*)sqlite3_value_text(argv[0]);
    unsigned char* blob;
    unsigned char* stuff;
    unsigned char buffer[4] = {0};

    if ((olength % 2 != 0) || (olength < 2))
    {
        return;
    }

    blob = malloc(olength / 2);
    stuff = blob;
    length = olength;
    while (length > 0)
    {
        memcpy(buffer, data, 2);
        *stuff = (unsigned char)strtol((const char *)buffer, NULL, 16);
        stuff++;
        data += 2;
        length -= 2;
    }
    sqlite3_result_blob(context, blob, olength/2, SQLITE_TRANSIENT);
    free(blob);
}

#ifdef _WIN32
#ifndef SQLITE_CORE
__declspec(dllexport)
#endif
#endif
int sqlite3_sqlunhex_init(sqlite3 *db, char **pzErrMsg, const sqlite3_api_routines *pApi)
{
    SQLITE_EXTENSION_INIT2(pApi);
    return sqlite3_create_function(db, "UNHEX", 1, SQLITE_UTF8|SQLITE_DETERMINISTIC|SQLITE_INNOCUOUS, 0, _unhexFunc, 0, 0);
}

#ifdef __cplusplus
}
#endif
```

You can then call it on the value if it starts with `0x`:

```
SELECT CASE WHEN col1 LIKE '0x%'
            THEN UNHEX(col1)
            ELSE col1
       END
FROM foo;
```

A much better way is to fix this in the application that stores the values.
11,329
32,215,510
I am quite new to using Tkinter's Frame in Python. I discovered Frame from the following [post](https://stackoverflow.com/questions/32198849/set-the-correct-tkinter-widgets-position-in-python) When I run the GUI the check boxes are not on the same line. [![enter image description here](https://i.stack.imgur.com/jB3jB.png)](https://i.stack.imgur.com/jB3jB.png) For this reason I have the following three questions: 1) Is it possible to use "grid" in the frame in order to place the widgets where I wish? 2) Is it possible to place, for example, "Camera white balance" and "average white balance" in the same row? In grid that would be, for example, row=0, column=0 and row=0, column=1. 3) If I add a line below the button "Input raw image file" using the code:

```
self.sep = Frame(self, height=2, width=450, bd=1, relief=SUNKEN)
self.sep.grid(row=1, column=0, columnspan=4, padx=5, pady=5)
```

the run hangs forever without showing the GUI. My code is:

```
from __future__ import division
from Tkinter import *
import tkMessageBox
import tkFileDialog

class MainWindow(Frame):
    def __init__(self):
        Frame.__init__(self)
        self.master.title("FOO frame")
        self.master.minsize(350, 150)
        self.grid(sticky=W+N+S+E)

        top_frame = Frame(self)
        middle_frame = Frame(self)
        bottom_frame = Frame(self)
        top_frame.pack(side="top", fill="x")
        middle_frame.pack(side="top", fill="x")
        bottom_frame.pack(side="top", fill="x")

        self.open = Button(top_frame,text="Input raw image file", command=self.open, activeforeground="red")
        self.open.pack(side="left")

        self.CheckVar_camera_white_balance = IntVar()
        self.CheckVar_camera_white_balance = Checkbutton(middle_frame, text="Camera white balance", variable=self.CheckVar_camera_white_balance, onvalue=1, offvalue=0)
        self.CheckVar_camera_white_balance.pack(side="top", fill="x")

        self.CheckVar_average_whole_image_white_balance = IntVar()
        self.CheckVar_average_whole_image_white_balance = Checkbutton(middle_frame, text="Average white balance", variable=self.CheckVar_average_whole_image_white_balance, onvalue=1, offvalue=0)
        self.CheckVar_average_whole_image_white_balance.pack()

        self.CheckVar_correct_chromatic_aberration = IntVar()
        self.CheckVar_correct_chromatic_aberration = Checkbutton(middle_frame, text="Correct chromatic aberration", variable=self.CheckVar_correct_chromatic_aberration, onvalue=1, offvalue=0)
        self.CheckVar_correct_chromatic_aberration.pack()

        self.CheckVar_fix_dead_pixels = IntVar()
        self.CheckVar_fix_dead_pixels = Checkbutton(middle_frame, text="Fix dead pixels", variable=self.CheckVar_fix_dead_pixels, onvalue=1, offvalue=0)
        self.CheckVar_fix_dead_pixels.pack()

    # functions
    def open(self):
        self.filename_open = tkFileDialog.askopenfilenames(defaultextension='.*')

if __name__ == "__main__":
    d = MainWindow()
    d.mainloop()
```
2015/08/25
[ "https://Stackoverflow.com/questions/32215510", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1493192/" ]
Bryan Oakley gave you the answers to your questions. Here is some code so you can imagine what he is talking about:

```
from __future__ import division
from Tkinter import *
import tkFileDialog
import tkMessageBox
from ttk import Separator

class MainWindow(Frame):
    def __init__(self):
        Frame.__init__(self)
        self.master.title("FOO frame")
        self.master.minsize(350, 150)
        self.grid(sticky=W+N+S+E)

        top_frame = Frame(self)
        middle_frame = Frame(self)
        bottom_frame = Frame(self)
        top_frame.pack(side="top", fill="x")
        middle_frame.pack(side="top", fill="x")
        bottom_frame.pack(side="top", fill="x")

        self.open = Button(top_frame,text="Input raw image file", command=self.open, activeforeground="red")
        self.open.pack(side="left")

        self.sep = Separator(middle_frame, orient=HORIZONTAL)
        self.sep.grid(row=0, column=0, columnspan=2, sticky="WE", pady=5)

        self.CheckVar_camera_white_balance = IntVar()
        self.CheckVar_camera_white_balance = Checkbutton(middle_frame, text="Camera white balance", variable=self.CheckVar_camera_white_balance, onvalue=1, offvalue=0)
        self.CheckVar_camera_white_balance.grid(row=1, column=0, sticky="W")

        self.CheckVar_average_whole_image_white_balance = IntVar()
        self.CheckVar_average_whole_image_white_balance = Checkbutton(middle_frame, text="Average white balance", variable=self.CheckVar_average_whole_image_white_balance, onvalue=1, offvalue=0)
        self.CheckVar_average_whole_image_white_balance.grid(row=1, column=1, sticky="W")

        self.CheckVar_correct_chromatic_aberration = IntVar()
        self.CheckVar_correct_chromatic_aberration = Checkbutton(middle_frame, text="Correct chromatic aberration", variable=self.CheckVar_correct_chromatic_aberration, onvalue=1, offvalue=0)
        self.CheckVar_correct_chromatic_aberration.grid(row=2, column=0, columnspan=2, sticky="W")

        self.CheckVar_fix_dead_pixels = IntVar()
        self.CheckVar_fix_dead_pixels = Checkbutton(middle_frame, text="Fix dead pixels", variable=self.CheckVar_fix_dead_pixels, onvalue=1, offvalue=0)
        self.CheckVar_fix_dead_pixels.grid(row=3, column=0, columnspan=2, sticky="W")

    # functions
    def open(self):
        self.filename_open = tkFileDialog.askopenfilenames(defaultextension='.*')

if __name__ == "__main__":
    d = MainWindow()
    d.mainloop()
```

If you want a separator you can use the ttk `Separator` class; I added the separator to the `middle_frame` so you have more control over this widget. Now you can use `column` to set "Camera white balance" and "average white balance" side by side. The columnspan attribute defines the number of columns a widget should span, so if you have more than 2 columns, update that attribute. Finally, if you use `sticky` you can define on which side the widget should stay.
1. Yes, it is possible to use `grid` in any frame you wish 2. Yes, it is possible to place "Camera white balance" and "average white balance" in the same row. 3. The reason the code runs forever without displaying anything is because you are using both grid (for the separator) and pack (for the frames). They both have the same parent, so you can only use one or the other. To get things to align you should read the documentation to learn about all of the options available when calling `grid` and `pack`. These options allow you to center items, align them along an edge, control padding, and so on.
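For illustration, a tiny self-contained sketch of a few of those options (all values here are arbitrary):

```
from Tkinter import Tk, Button, Label

root = Tk()
# sticky pins/stretches the widget inside its grid cell: "we" = fill west-to-east
Button(root, text="Aligned").grid(row=0, column=0, sticky="we", padx=5, pady=5)
# columnspan lets one widget span several columns of the same grid
Label(root, text="Spans two columns").grid(row=1, column=0, columnspan=2, sticky="w")
root.mainloop()
```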
11,330
35,114,144
My Celery task raises a custom exception `NonTransientProcessingError`, which is then caught by `AsyncResult.get()`. Tasks.py: ``` class NonTransientProcessingError(Exception): pass @shared_task() def throw_exception(): raise NonTransientProcessingError('Error raised by POC model for test purposes') ``` In the Python console: ``` from my_app.tasks import * r = throw_exception.apply_async() try: r.get() except NonTransientProcessingError as e: print('caught NonTrans in type specific except clause') ``` But my custom exception is **`my_app.tasks.`**`NonTransientProcessingError`, whereas the exception raised by `AsyncResult.get()` is **`celery.backends.base.`**`NonTransientProcessingError`, so my `except` clause fails. ``` Traceback (most recent call last): File "<input>", line 4, in <module> File "/...venv/lib/python3.5/site-packages/celery/result.py", line 175, in get raise meta['result'] celery.backends.base.NonTransientProcessingError: Error raised by POC model for test purposes ``` If I catch the exception within the task, it works fine. It is only when the exception is raised to the `.get()` call that it is renamed. How can I raise a custom exception and catch it correctly? I have confirmed that the same happens when I define a `Task` class and raise the custom exception in its `on_failure` method. The following does work: ``` try: r.get() except Exception as e: if type(e).__name__ == 'NonTransientProcessingError': print('specific exception identified') else: print('caught generic but not identified') ``` Outputs: ``` specific exception identified ``` But this can't be the best way of doing this? Ideally I'd like to catch exception superclasses for categories of behaviour. I'm using Django 1.8.6, Python 3.5 and Celery 3.1.18, with a Redis 3.1.18, Python redis lib 2.10.3 backend.
2016/01/31
[ "https://Stackoverflow.com/questions/35114144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1308967/" ]
``` import celery from celery import shared_task class NonTransientProcessingError(Exception): pass class CeleryTask(celery.Task): def on_failure(self, exc, task_id, args, kwargs, einfo): if isinstance(exc, NonTransientProcessingError): """ deal with NonTransientProcessingError """ pass def run(self, *args, **kwargs): pass @shared_task(base=CeleryTask) def add(x, y): raise NonTransientProcessingError ``` Use a base Task with on\_failure callback to catch custom exception.
It might be a bit late, but I have a solution to solve this issue. Not sure if it is the best one, but it solved my problem at least. I had the same issue. I wanted to catch the exception produced in the celery task, but the result was of the class `celery.backends.base.CustomException`. The solution is in the following form: ```py import celery, time from celery.result import AsyncResult from celery.states import FAILURE def reraise_celery_exception(info): exec("raise {class_name}('{message}')".format(class_name=info.__class__.__name__, message=info.__str__())) class CustomException(Exception): pass @celery.task(name="test") def test(): time.sleep(10) raise CustomException("Exception is raised!") def run_test(): task = test.delay() return task.id def get_result(id): task = AsyncResult(id) if task.state == FAILURE: reraise_celery_exception(task.info) ``` In this case you prevent your program from raising `celery.backends.base.CustomException`, and force it to raise the right exception.
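A variant of `reraise_celery_exception` that avoids `exec`, assuming the custom exception classes are defined in a module you control (here the current module; that location is an assumption):

```
import sys

def reraise_celery_exception(info):
    # look the exception class up by name instead of exec'ing generated source,
    # falling back to plain Exception when the name is unknown
    cls = getattr(sys.modules[__name__], info.__class__.__name__, Exception)
    raise cls(str(info))
```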
11,331
20,839,308
I am using the python sklearn library for doing classification of data. Following is the code I have implemented. I just want to ask: is this the correct way of classifying? I mean, can the following code potentially remove all the biases? And is it 10-fold (k-fold) cross validation?

```
cv = cross_validation.ShuffleSplit(n_samples, n_iter=3, test_size=0.1, random_state=0)
knn = KNeighborsClassifier()
KNeighborsClassifier(algorithm='auto', leaf_size=1, metric='minkowski',
           n_neighbors=2, p=2, weights='uniform')
knn_score = cross_validation.cross_val_score(knn,x_data_arr, target_arr, cv=cv)
print "Accuracy =", knn_score.mean()
```

Thanks!!
2013/12/30
[ "https://Stackoverflow.com/questions/20839308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2632729/" ]
you can use: ``` public String getDeviceName() { String manufacturer = Build.MANUFACTURER; String model = Build.MODEL; if (model.startsWith(manufacturer)) { return capitalize(model); } else { return capitalize(manufacturer) + " " + model; } } private String capitalize(String s) { if (s == null || s.length() == 0) { return ""; } char first = s.charAt(0); if (Character.isUpperCase(first)) { return s; } else { return Character.toUpperCase(first) + s.substring(1); } } ``` see [this](http://developer.android.com/reference/android/os/Build.html#MODEL) for Build.MODEL. more info in *[Get Android Phone Model Programmatically](https://stackoverflow.com/questions/1995439/get-android-phone-model-programmatically)*.
In order to get android device name you have to add only a single line of code: ``` android.os.Build.MODEL; ``` Found here:[getting-android-device-name](http://developer.android.com/reference/android/os/Build.html)
11,332
34,074,840
I am doing a small python script to perform a wget call, however I am encountering an issue when I am replacing the string that contains the url/ip address that will be given to my "wget" string

```
import os
import sys

usr = sys.argv[1]
pswd = sys.argv[2]
ipAddr = sys.argv[3]

wget = "wget http://{IPaddress}"
wget.format(IPaddress=ipAddr)

print "The command wget is %s" %wget

os.system(wget)
```

If I run that script I get the snippet below, where I know that wget fails, because the variable ipAddr has not replaced the IPaddress pattern, so I guess that the issue has to do with the slashes in the url. My question is: why is that pattern not replaced?

```
python test.py 1 2 www.website.org
The command wget is wget http://{IPaddress}
--2015-12-03 20:26:11--  http://%7Bipaddress%7D/
Resolving {ipaddress} ({ipaddress})... failed: Name or service not known.
```
2015/12/03
[ "https://Stackoverflow.com/questions/34074840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1757475/" ]
You aren't assigning the result of the `format` call to anything - you're just throwing it away. Try this instead: ``` wget = "wget http://{IPaddress}" wget = wget.format(IPaddress=ipAddr) print "The command wget is %s" %wget os.system(wget) ``` Alternatively, this seems a bit cleaner: ``` wget = "wget http://{IPaddress}".format(IPaddress=ipAddr) print "The command wget is %s" %wget os.system(wget) ```
You need to actually format your `wget` string like this. ``` import os import sys usr = sys.argv[1] pswd = sys.argv[2] ipAddr = sys.argv[3] wget = "wget http://{IPaddress}".format(IPaddress=ipAddr) print "The command wget is %s" % wget os.system(wget) ``` `.format()` does not modify the string in place, it returns a copy of the modified string, so in your original script, the value of `wget.format(IPaddress=ipAddr)` is never actually assigned to any variable and the content of the `wget` variable remains unchanged
11,338
22,386,359
I am fairly new to Python and have no HTML experience. The question has been asked before and either not answered at all, or not answered in enough detail, for me to set the default font within IPython itself (rather than changing it in the browser). Specifically: what has to be put in the CSS file, and which CSS file should be used? I am on a Windows system.

For reference, these are the suggestions in answers to the linked SO questions below:

* [in #1](https://stackoverflow.com/questions/13347691/change-ipython-notebook-font-type?rq=1): an unnamed file in `/usr/lib/python2.6/.../css/`
* [in comment to #1](https://stackoverflow.com/questions/13347691/change-ipython-notebook-font-type?rq=1): change the monospace font in the browser - worked, but the font is italic
* [in #2](https://stackoverflow.com/questions/18706701/change-font-background-color-in-ipython-notebook): `custom.css` in the profile subdirectory `/static/custom/custom.css`

Related questions:

1. [Change ipython notebook font type](https://stackoverflow.com/questions/13347691/change-ipython-notebook-font-type?rq=1)
2. [Change font & background color in ipython notebook](https://stackoverflow.com/questions/18706701/change-font-background-color-in-ipython-notebook)
3. [Changing (back to default) font in ipython notebook](https://stackoverflow.com/questions/20870335/changing-back-to-default-font-in-ipython-notebook) (unanswered)

**Edit:** Changing the monospace font in my browser worked, as suggested in an answer comment of #1. However the font is italic, which is not what is intended.
2014/03/13
[ "https://Stackoverflow.com/questions/22386359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1318479/" ]
In a Jupyter Notebook cell you can simply use:

```
%%html
<style type='text/css'>
.CodeMirror {
    font-size: 17px;
}
</style>
```
I would also suggest that you explore the options offered by the [**jupyter themer**](https://github.com/transcranial/jupyter-themer). For more modest interface changes you may be satisfied with running the syntax:

```
jupyter-themer [-c COLOR, --color COLOR]
               [-l LAYOUT, --layout LAYOUT]
               [-t TYPOGRAPHY, --typography TYPOGRAPHY]
```

where the options offered by *themer* provide a less onerous way of making some changes to the look of Jupyter Notebook. Naturally, you may still prefer to edit the **`.css`** files if the changes you want to apply are elaborate.
11,339
19,254,178
Essentially I am writing something based on Python, and I would like to be able to get the result of a JavaScript function from within Python. Let's say `function.js` has a bunch of functions inside it. In my Python code I would like to be able to do something like the following:

```
val = some_js_function(param1, param2, ... paramn)
```

Here `some_js_function` would be a function from the `function.js` file. This would set the variable `val` in my Python code to the result of that JS function.

**How could I go about doing this? Or do I have to hand-code an FFI for JavaScript myself?**
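One hedged way to get this (the responses stored below went elsewhere) is to shell out to Node.js and pass values through JSON. This is only a sketch: it assumes Node is on PATH, that `function.js` exposes its functions via `module.exports`, and the wrapper name `call_js` is hypothetical:

```python
import json
import subprocess

def call_js(function_name, *params):
    # build a tiny Node one-liner that loads function.js, calls the
    # requested export, and prints the result as JSON on stdout
    script = (
        "const fns = require('./function.js');"
        "const out = fns[%s](...%s);"
        "console.log(JSON.stringify(out));"
    ) % (json.dumps(function_name), json.dumps(list(params)))
    raw = subprocess.check_output(["node", "-e", script])
    return json.loads(raw)

# val = call_js("some_js_function", 1, "two", 3.0)
```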
2013/10/08
[ "https://Stackoverflow.com/questions/19254178", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2127988/" ]
My best theory is that this is a possible consequence of dropping and re-creating a table with the same name (<https://issues.apache.org/jira/browse/CASSANDRA-5905>). We are targeting a fix for 2.1 (<https://issues.apache.org/jira/browse/CASSANDRA-5202>); in the meantime, prefer TRUNCATE over drop/recreate.
Although it's very late and you have probably solved this issue already: for others having a similar problem, try this

```
sudo bash -c 'rm -rf /var/lib/cassandra/data/system/*'
```

Remember, it's only for development purposes. **THIS COMMAND WILL DELETE ALL YOUR DATA.**
11,349
51,136,741
I'm trying to deploy a simple Python app to Google Container Engine:

I have created a cluster and then run `kubectl create -f deployment.yaml`. A deployment pod has been created on my cluster. After that I have created a service as: `kubectl create -f deployment.yaml`

Here are my YAML configurations:

**pod.yaml**:

```
apiVersion: v1
kind: Pod
metadata:
  name: test-app
spec:
  containers:
    - name: test-ctr
      image: arycloud/flask-svc
      ports:
        - containerPort: 5000
```

Here's my Dockerfile:

```
FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python ./app.py
```

**deployment.yaml:**

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: test-app
  name: test-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
      name: test-app
    spec:
      containers:
        - name: test-app
          image: arycloud/flask-svc
          resources:
            requests:
              cpu: "100m"
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
```

**service.yaml:**

```
apiVersion: v1
kind: Service
metadata:
  name: test-app
  labels:
    app: test-app
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
      nodePort: 32000
  selector:
    app: test-app
```

**Ingress**

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: frontend
              servicePort: 80
```

It creates a LoadBalancer and provides an external IP, but when I open the IP it returns a `Connection Refused` error.

What's going wrong? Help me, please!

Thank You,
Abdul
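One detail worth flagging from the manifests above before the stored responses: the Dockerfile EXPOSEs 5000 while the Deployment/Service target 8080, which by itself can produce exactly this Connection Refused symptom. A minimal sketch of a Flask `app.py` that would match the 8080 wiring (an assumption, not the asker's actual app):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from test-app"

if __name__ == "__main__":
    # bind to all interfaces on the port the Service's targetPort and the
    # Deployment's containerPort expect (8080 here); note the Dockerfile
    # above EXPOSEs 5000, so the two need to be reconciled either way
    app.run(host="0.0.0.0", port=8080)
```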
2018/07/02
[ "https://Stackoverflow.com/questions/51136741", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7644562/" ]
> The HashedPassword + Salt is stored in a column

That is probably the root problem. You don't need to provide or handle a Salt. See [this answer](https://stackoverflow.com/a/17758221/9695604). You should not need a `GetSalt()` method. You can't simply concatenate 2 Base64 strings; the decoder doesn't know how to handle that.
The Base64 format stores 6 bits per character. Since bytes are 8 bits, some padding is sometimes needed at the end: one or two `=` characters are appended, and `=` is not otherwise used. If you concatenate two Base64 strings, there may be padding at the join, and padding in the middle of a Base64 string is not valid. Instead, concatenate the byte arrays first, and then encode.
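For illustration, a minimal Python sketch of that byte-level approach (the 16-byte salt length and SHA-256 are assumptions for the example; a real password store would use a proper KDF):

```python
import base64
import hashlib
import os

# concatenate the raw bytes first, then Base64-encode exactly once
salt = os.urandom(16)
digest = hashlib.sha256(salt + b"hunter2").digest()
stored = base64.b64encode(salt + digest)        # one valid Base64 string

# verification: decode once, then split the byte string again
raw = base64.b64decode(stored)
recovered_salt, recovered_digest = raw[:16], raw[16:]
assert recovered_salt == salt and recovered_digest == digest
```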
11,350
52,648,383
I am trying to perform a multi-output regression using ElasticNet and Random Forests as follows:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import ElasticNet

X_train, X_test, y_train, y_test = train_test_split(X_features, y, test_size=0.30, random_state=0)
```

Elastic Net:

```python
l1_range = np.arange(0.1, 1.05, 0.1).tolist()

regr_Enet = ElasticNetCV(cv=5, copy_X=True, n_alphas=100, l1_ratio=l1_range,
                         selection='cyclic', normalize=False, verbose=2, n_jobs=1)

regr_multi_Enet = MultiOutputRegressor(regr_Enet)  # ElasticNetCV
regr_multi_Enet.fit(X_train, y_train)
```

Random Forest:

```python
max_depth = 20
number_of_trees = 100

regr_multi_RF = MultiOutputRegressor(
    RandomForestRegressor(n_estimators=number_of_trees, max_depth=max_depth,
                          random_state=0, n_jobs=1, verbose=1))
regr_multi_RF.fit(X_train, y_train)

y_multirf = regr_multi_RF.predict(X_test)
```

Everything is going well; however, I haven't found a way to obtain the coefficients (`coef_`) or the most important features (`feature_importances_`) of the model. When I write:

```python
regr_multi_Enet.coef_
regr_multi_RF.feature_importances_
```

it shows the following error:

```python
AttributeError: 'MultiOutputRegressor' object has no attribute 'feature_importances_'
AttributeError: 'MultiOutputRegressor' object has no attribute 'coef_'
```

I have read the documentation on MultiOutputRegressor but I cannot find a way to extract the coefficients. How can I retrieve them?
2018/10/04
[ "https://Stackoverflow.com/questions/52648383", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10456915/" ]
MultiOutputRegressor itself doesn't have these attributes - you need to access the underlying estimators first using the `estimators_` attribute (which, although not mentioned in the [docs](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputRegressor.html), does indeed exist - see the docs for [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html)). Here is a reproducible example:

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import ElasticNet

# dummy data
X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
W = np.array([[1, 1], [1, 1], [2, 2], [2, 2]])

regr_multi_RF = MultiOutputRegressor(RandomForestRegressor())
regr_multi_RF.fit(X, W)

# how many estimators?
len(regr_multi_RF.estimators_)
# 2

regr_multi_RF.estimators_[0].feature_importances_
# array([ 0.4, 0.6])

regr_multi_RF.estimators_[1].feature_importances_
# array([ 0.4, 0.4])

regr_Enet = ElasticNet()
regr_multi_Enet = MultiOutputRegressor(regr_Enet)
regr_multi_Enet.fit(X, W)

regr_multi_Enet.estimators_[0].coef_
# array([ 0.08333333, 0. ])

regr_multi_Enet.estimators_[1].coef_
# array([ 0.08333333, 0. ])
```
```
regr_multi_Enet.estimators_[0].coef_
```

To get the coefficients of the first estimator, etc.
11,351
26,292,909
This is my first post here! My goal is to duplicate the payload of a unidirectional TCP stream and send this payload to multiple endpoints concurrently. I have a working prototype written in Python; however, I am new to Python and to socket programming. Ideally the solution is capable of running in both Windows and \*nix environments.

This prototype works, but it creates a new outgoing TCP connection for every buffer (currently set to 4096 bytes). The main problem with this is that I will eventually run out of local ports to send from, and ideally I would like the data to pass from each single incoming TCP stream to one single TCP stream out (for each endpoint). The incoming data can vary from less than 1024 bytes to hundreds of megabytes. At the moment a new outgoing TCP stream is initiated for every 4096 bytes.

I am not sure if the problem is in my implementation of threading, or if I have missed something else really obvious. In my research I have found that select() could help, but I am not sure if it would be appropriate, because I may need to process some of the incoming data and respond to the sending client in certain cases in the future.

Here is the code I have so far (some of the code variations I have tried are commented out):

```
#!/usr/bin/python
# One-way TCP payload duplication
import sys
import threading
from socket import *

bufsize = 4096
host = ''

# Methods:

# handles sending the data to the endpoints
def send(endpoint, port, data):
    sendSocket = socket(AF_INET, SOCK_STREAM)
    #sendSocket.setblocking(1)
    sendSocket.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    #sendport = sendSocket.getsockname
    #print sendport
    try:
        sendSocket.connect((endpoint, port))
        sendSocket.sendall(data)
    except IOError as msg:
        print "Send Failed. Error Code: " + str(msg[0]) + ' Message: ' + msg[1]
        sys.exit()

# handles threading for sending data to endpoints
def forward(service, ENDPOINT_LIST, port, data):
    # for each endpoint in the endpoint list start a new send thread
    for endpoint in ENDPOINT_LIST:
        print "Forwarding data for %s from %s:%s to %s:%s" % (service, host, port, endpoint, port)
        #send(endpoint,port,data)
        ethread = threading.Thread(target=send, args=(endpoint, port, data))
        ethread.start()

# handles threading for incoming clients
def clientthread(conn, service, ENDPOINT_LIST, port):
    while True:
        # receive data from client
        data = conn.recv(bufsize)
        if not data:
            break
        cthread = threading.Thread(target=forward, args=(service, ENDPOINT_LIST, port, data))
        cthread.start()
    # no data? then close the connection
    conn.close()

# handles listening to sockets for incoming connections
def listen(service, ENDPOINT_LIST, port):
    # create the socket
    listenSocket = socket(AF_INET, SOCK_STREAM)
    # Allow reusing addresses - I think this is important to stop local ports
    # getting eaten up by never-ending tcp streams that don't close
    listenSocket.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    # try to bind the socket to host and port
    try:
        listenSocket.bind((host, port))
    # display an error message if you can't
    except IOError as msg:
        print "Bind Failed. Error Code: " + str(msg[0]) + ' Message: ' + msg[1]
        sys.exit()
    # start listening on the socket
    listenSocket.listen(10)
    print "Service %s on port %s is listening" % (service, port)
    while True:
        # wait to accept a connection
        conn, addr = listenSocket.accept()
        print 'Connected to ' + addr[0] + ':' + str(addr[1]) + ' on port ' + str(port)
        # start new thread for each connection
        lthread = threading.Thread(target=clientthread, args=(conn, service, ENDPOINT_LIST, port))
        lthread.start()
    # If no data close the connection
    listenSocket.close()

service = "Dumb-one-way-tcp-service-name1"
ENDPOINT_LIST = ["192.168.1.100", "192.168.1.200"]
port = 55551

listen(service, ENDPOINT_LIST, port)
```

I have looked into other libraries to try to achieve my goal, including:

* Twisted
* Asyncore
* Scapy

However, I found them quite complicated for my modest needs and programming skill level. If anyone has any suggestions on how I could refine the approach I have, or any other ways this goal could be achieved, please let me know!
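For illustration, a hedged sketch of the connection-reuse idea the question is after: open one long-lived outbound socket per endpoint up front and reuse it for every received buffer, rather than creating a fresh connection (and consuming a fresh local port) per 4096-byte chunk. The helper names `open_endpoints`/`forward_all` are hypothetical:

```python
import socket

def open_endpoints(endpoint_list, port):
    # one persistent connection per endpoint, opened once
    socks = []
    for endpoint in endpoint_list:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((endpoint, port))
        socks.append(s)
    return socks

def forward_all(socks, data):
    # reuse the already-open connections for every buffer
    for s in socks:
        s.sendall(data)

# socks = open_endpoints(["192.168.1.100", "192.168.1.200"], 55551)
# ... then inside the receive loop: forward_all(socks, data)
```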
2014/10/10
[ "https://Stackoverflow.com/questions/26292909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4128004/" ]
The problem is caused by missing or conflicting dependencies: 1. Add hadoop-auth to your classpath 2. If the problem still arises, remove hadoop-core from your classpath. It is conflicting with hadoop-auth. This will solve the problem.
Finally I found an answer for this.

1. Copy the `PlatformName` class from hadoop-auth and custom-compile it locally:

```
package org.apache.hadoop.util;

public class PlatformName {

    private static final String platformName =
        System.getProperty("os.name") + "-" +
        System.getProperty("os.arch") + "-" +
        System.getProperty("sun.arch.data.model");

    public static final String JAVA_VENDOR_NAME = System.getProperty("java.vendor");

    public static final boolean IBM_JAVA = JAVA_VENDOR_NAME.contains("IBM");

    public static String getPlatformName() {
        return platformName;
    }

    public static void main(String[] args) {
        System.out.println(platformName);
    }
}
```

2. Copy and paste the compiled class file into your hadoop-core. You should be up and running.

Thanks to all.
11,352
4,178,440
Is there any Java tutorial that resembles Mark Pilgrim's approach in [Dive Into Python](http://www.diveintopython.net/)?
2010/11/14
[ "https://Stackoverflow.com/questions/4178440", "https://Stackoverflow.com", "https://Stackoverflow.com/users/313127/" ]
What about the official Java Tutorial? I found it pretty helpful to get started with the language.
I haven't read Dive Into Python, but I do know that Bruce Eckel's *Thinking in Java* is an excellent book and well worth a look. Be warned though - it's monster-sized and not easy to carry around!
11,353
741,063
I have always had doubts when it comes to designing proper execution reporting. Say you have the following (deliberately simple) case, in Python:

```
def doStuff():
    doStep1()
    doStep2()
    doStep3()
```

Now suppose you want to give a report on the various steps, whether something went wrong, etc. Not really debugging: just informative reporting on the application's behavior. A first, easy solution is to put prints:

```
def doStuff():
    print "starting doing stuff"
    print "I am starting to do step 1"
    doStep1()
    print "I did step 1"
    print "I am starting to do step 2"
    doStep2()
    print "I did step 2"
    print "I am starting to do step 3"
    doStep3()
    print "I did step 3"
```

In general, this is quite bad. Suppose that this code is going to end up in a library. I would not expect my library to print stuff out; I would expect it to do its job silently. Still, sometimes I would like to provide information, not only in debug situations, but also to keep the user informed that something is actually in the process of being done. Print is also bad because you have no control over the handling of your messages: it just goes to stdout, and there's nothing you can do about it except redirection.

Another solution is to have a module for logging:

```
def doStuff():
    Logging.log("starting doing stuff")
    Logging.log("I am starting to do step 1")
    doStep1()
    Logging.log("I did step 1")
    Logging.log("I am starting to do step 2")
    doStep2()
    Logging.log("I did step 2")
    Logging.log("I am starting to do step 3")
    doStep3()
    Logging.log("I did step 3")
```

This has the advantage that you have a single, known place for your logging service, and you can tinker with this service as much as you want. You can silence it, or redirect it to a file, to stdout, or even to a network. The disadvantage is that you get very strong coupling with the Logging module: basically every part of your code depends on it, and you have calls to logging everywhere.

The third option is to have a report object with a clear interface, and to pass it around:

```
def doStuff(reporter=NullReporter()):
    reporter.log("starting doing stuff")
    reporter.log("I am starting to do step 1")
    doStep1()
    reporter.log("I did step 1")
    reporter.log("I am starting to do step 2")
    doStep2()
    reporter.log("I did step 2")
    reporter.log("I am starting to do step 3")
    doStep3()
    reporter.log("I did step 3")
```

Eventually, you can also pass the reporter object to doStepX() if those steps have more to say. Advantage: it removes the coupling with a module, but it introduces coupling with the instantiation of the NullReporter object. This can be solved by using None as the default and checking before calling log, which is clumsy, because in Python you have to write a conditional every time (in C you could define a macro):

```
def doStuff(reporter=None):
    if reporter is not None:
        reporter.log("starting doing stuff")
    # etc...
```

Edit: Another option is to work Qt-like and use an emit() signal strategy. As your code executes, it emits information with proper status codes, and anyone interested can subscribe to the signals and act on the info. Nice and clean, very decoupled, but it requires a little coding, as I don't think this can quickly be done with the Python batteries included.

Finally, you can raise exceptions with a meaningful error message, but this of course can only be used when you are exiting from an error condition; it does not work for occasional reporting.

Edit: I'd like to clarify that the situation is more general, and not limited just to a sequence of invoked steps. It could also involve control structures:

```
if disconnected:
    print "Trying to connect"
    connect()
else:
    print "obtaining list of files from remote host"
    getRemoteList()
```

The reporting could also live inside the actual routines, so you would have a "print" as the first statement of the connect() and getRemoteList() routines.

The questions therefore are:

* What do you think is the best design for some code (notably in the case of a library) to be at the same time silent when noise would be disruptive for the client, but verbose when useful?
* How do you handle a balanced intermix between logic code and reporting code?
* Intermixing of code and error checking has been solved with exceptions. What could be done to separate the "noise" of reporting from the code logic?

---

Edit: more thoughts for the mind. I think it's not only a matter of decoupling the logging code from the logic code; it's also a matter of decoupling the production of information from its consumption. Similar techniques already exist, in particular to handle UI events, but I don't really see the same patterns applied to the logging problem.

---

Edit: I accepted the answer from Marcelo because he points out the factual evidence that a compromise is the best solution in this case, and there's no silver bullet. However, all the others were interesting answers as well, and I was really pleased to upvote all of them. Thanks for the help!
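For illustration, a minimal sketch of the emit()/subscribe idea mentioned in the question (the `Reporter` class and consumer function are hypothetical; producers emit, and only consumers that opted in ever see the messages):

```python
class Reporter(object):
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def emit(self, message):
        # fan the message out to whoever registered interest
        for callback in self._subscribers:
            callback(message)

def console_consumer(message):
    print(message)

reporter = Reporter()
reporter.subscribe(console_consumer)   # opt in; by default nothing is printed
reporter.emit("starting doing stuff")
```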
2009/04/12
[ "https://Stackoverflow.com/questions/741063", "https://Stackoverflow.com", "https://Stackoverflow.com/users/78374/" ]
I found [this](http://www.ibm.com/developerworks/linux/library/l-pymeta.html) while searching for aspect-oriented programming for Python. I agree with the other posters that such concerns shouldn't be mixed with the core logic. In essence, the points where you want to put logging might not always be arbitrary; maybe concepts like "all the points before an error" could be identified by a pointcut. Other, truly arbitrary points can be captured using simple logging techniques.
I would use the standard [`logging`](http://docs.python.org/library/logging.html) module that's been part of the standard library since Python 2.3. That way, there is a good chance that people looking at your code will already know how the `logging` module works. And if they have to learn then at least it's well-documented, and their knowledge is transferable to other libraries that also use `logging`. Is there any feature that you want but can't find in the standard `logging` module?
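For the library case specifically, a sketch of the usual pattern with the standard module: log to a module-level logger and attach a `NullHandler` (available since Python 2.7), so the library stays silent unless the application configures logging itself:

```python
import logging

logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())  # no output unless the app opts in

def doStuff():
    logger.info("starting doing stuff")
    # ... steps go here; the application decides where these messages go
```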
11,356
44,274,464
I've got a program that reads strings with special characters (used in Spanish) from a file. I then use chdir to change to a directory whose name is that string. For example, a file called "names.txt" contains the following:

```
Tableta
Música
.
.
etc
```

That file is encoded in UTF-8, so I read it from Python as follows:

```
f = open("names.txt", "r", encoding="utf-8")
names = f.readlines()
f.close()
```

And it does read everything successfully:

```
print(names)
```

output:

```
['Tableta\n', 'Música\n', ...etc]
```

The problem arises when I want to change to the first directory (the first name, 'Tableta', without the newline character):

```
chdir(names[0][:-1])
```

I get the following error:

> FileNotFoundError: [WinError 2] The system cannot find the file specified: "\ufeffTableta"

And it only happens with the first name, which was very odd to me. With the other names it is able to change directories, whether they have special characters or not.

I supposed it had something to do with the encoding, because of that extra '\ufeff' character. So I changed the "names.txt" file to ANSI encoding and deleted all the special characters, so that I could read it with Python, and it worked. But the thing is that I need to have that file in UTF-8 encoding so that I can read the special characters. Is there any way I can fix this? Why is Python adding that '\ufeff' character to the string, and only to the first name?
2017/05/31
[ "https://Stackoverflow.com/questions/44274464", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7586576/" ]
Your file "names.txt" has a Byte Order Mark (BOM). To remove it, open the file with the following decoder:

```
f = open("names.txt", encoding="utf-8-sig")
```

As a side note, it is safer to strip a file name: `names[0].strip()` instead of `names[0][:-1]`.
The beginning of your file has the Unicode BOM. Skip the first character when reading your file, or open it with the `utf-8-sig` encoding.
11,366
66,801,663
I have several database tables that I use in Django. Now I want to design everything around the database, not in the Django ORM but in the relational style of my MySQL database. This includes multiple tables for different information. I have made a drawing of what I mean. I want the `createsuperuser` command to query once for the `user` table and once for the `residence` table. This could then look like this:

```
C:\Users\Project>python manage.py createsuperuser
username: input
firstname: input
lastname: input
password: *****
country: input
city: input
street: input
```

[![enter image description here](https://i.stack.imgur.com/iDyX8.png)](https://i.stack.imgur.com/iDyX8.png)

Edit
====

Is there maybe a way to change the command?
2021/03/25
[ "https://Stackoverflow.com/questions/66801663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14375016/" ]
<https://docs.djangoproject.com/en/3.1/howto/custom-management-commands/#module-django.core.management>

You can create a custom Django management command, invoked like `python manage.py createsuperuser2` (or a similar name), which will look something like:

```
class Command(BaseCommand):
    def handle(self, *args, **options):
        username = input("username")
        # ... collect the next params the same way
        user, created = User.objects.get_or_create(username=username, defaults={"blabla"})
        Residence.objects.create(user=user)  # plus the remaining params
```
Just set up `REQUIRED_FIELDS` inside your User class. If you don't have a custom user class, create `class User(AbstractUser)`, but don't forget to set `AUTH_USER_MODEL = "users.User"` in `settings.py`.

```
REQUIRED_FIELDS = ["country", "city", "street"]
```
11,367
1,738,494
I'm having a tough time getting logging on App Engine working. The statement

```
import logging
```

is flagged as an unrecognized import in my PyDev App Engine project. I suspected that this was just an Eclipse issue, so I tried calling

```
logging.debug("My debug statement")
```

which does not print anything out on the dev_appserver. I haven't seen anything at <http://code.google.com/appengine/docs/python/runtime.html#Logging> that is very helpful on the matter. Any advice is much appreciated.
2009/11/15
[ "https://Stackoverflow.com/questions/1738494", "https://Stackoverflow.com", "https://Stackoverflow.com/users/211584/" ]
The problem is most likely with the setup of the dev_appserver. Try running it with the -d flag to turn on debugging.
I have experience only with the Java App Engine, but there they set things up to only log WARN and ERROR. Try using one of those and see if you start getting output! I'm sure there are ways to loosen the filter, but I wouldn't know how to do it in Python.
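In Python specifically, loosening the filter is typically a one-liner on the root logger (a sketch; whether the dev_appserver respects it in every configuration is worth verifying):

```python
import logging

# lower the root logger's threshold so DEBUG/INFO records are emitted too
logging.getLogger().setLevel(logging.DEBUG)
logging.debug("My debug statement")
```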
11,368
50,546,740
I am using Python 3.6.4 and pandas 0.23.0. I have referenced the pandas 0.23.0 documentation for the constructor and append. It does not mention anything about non-existent values, and I didn't find any similar example. Consider the following code:

```
import pandas as pd

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
index_yrs = [2016, 2017, 2018]

r2016 = [26, 27, 25, 22, 20, 23, 22, 20, 20, 18, 18, 19]
r2017 = [20, 21, 18, 16, 15, 15, 15, 15, 13, 13, 14, 15]
r2018 = [16, 18, 18, 18, 17]

df = pd.DataFrame([r2016], columns=months, index=[index_yrs[0]])
df = df.append(pd.DataFrame([r2017], columns=months, index=[index_yrs[1]]))
```

Now how do I add r2018, which has data only up to the month of May?
2018/05/26
[ "https://Stackoverflow.com/questions/50546740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4446712/" ]
I agree with RafaelC that padding your 2018 list with NaNs for the missing values is the best way to do this. You can use `np.nan` from NumPy (which you will already have installed, since you have pandas) to generate the NaNs.

```
import pandas as pd
import numpy as np

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
index_yrs = [2016, 2017, 2018]
```

As a small change to your code, I've put the data for all three years into a `years` list, which we can pass as the `data` parameter for pd.DataFrame. This eliminates the need to append each row to the previous ones.

```
r2016 = [26, 27, 25, 22, 20, 23, 22, 20, 20, 18, 18, 19]
r2017 = [20, 21, 18, 16, 15, 15, 15, 15, 13, 13, 14, 15]
r2018 = [16, 18, 18, 18, 17]

years = [r2016] + [r2017] + [r2018]
```

This is what `years` looks like: `[[26, 27, 25, 22, 20, 23, 22, 20, 20, 18, 18, 19], [20, 21, 18, 16, 15, 15, 15, 15, 13, 13, 14, 15], [16, 18, 18, 18, 17]]`.

As for padding your year 2018 with NaNs, something like this might do the trick. We are just ensuring that if a year only has values for the first n months, the remaining months will be filled out with NaNs.

```
for year in years:
    if len(year) < 12:
        year.extend([np.nan] * (12 - len(year)))
```

Finally, we can create your dataframe using the one-liner below instead of appending row by row.

```
df = pd.DataFrame(years, columns=months, index=index_yrs).astype(float)
```

Output:

```
       Jan   Feb   Mar   Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Dec
2016  26.0  27.0  25.0  22.0  20.0  23.0  22.0  20.0  20.0  18.0  18.0  19.0
2017  20.0  21.0  18.0  16.0  15.0  15.0  15.0  15.0  13.0  13.0  14.0  15.0
2018  16.0  18.0  18.0  18.0  17.0   NaN   NaN   NaN   NaN   NaN   NaN   NaN
```

You may notice that I converted the dtype of the values in the dataframe to float using `.astype(float)`. I did this to make all of your columns the same dtype. If we don't call `.astype(float)` then Jan-May will be dtype `int` and Jun-Dec will be dtype `float64`.
You can add a row using `pd.DataFrame.loc` via a Series. So you only need to convert your array into a `pd.Series` object before adding the row:

```
df.loc[index_yrs[2]] = pd.Series(r2018, index=df.columns[:len(r2018)])

print(df)

       Jan   Feb   Mar   Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Dec
2016  26.0  27.0  25.0  22.0  20.0  23.0  22.0  20.0  20.0  18.0  18.0  19.0
2017  20.0  21.0  18.0  16.0  15.0  15.0  15.0  15.0  13.0  13.0  14.0  15.0
2018  16.0  18.0  18.0  18.0  17.0   NaN   NaN   NaN   NaN   NaN   NaN   NaN
```

However, I strongly recommend you form a list of lists (with padding) before a single append. This is because `list.append`, or construction via a list comprehension, is cheap relative to repeated `pd.DataFrame.append` or `pd.DataFrame.loc`. The above solution is recommended if you absolutely must add one row at a time.
11,371
67,407,000
I have the below `string` as input:

```
'name SP2, status Online, size 4764771 MB, free 2576353 MB, path /dev/sde, log 210 MB, port 5660, guid 7478a0141b7b9b0d005b30b0e60f3c4d, clusterUuid -8650609094877646407--116798096584060989, disks /dev/sde /dev/sdf /dev/sdg, dare 0'
```

I wrote a function which converts it to a `dictionary` using Python:

```
def str_2_json(string):
    str_arr = string.split(',')
    #str_arr{0} = name SP2
    #str_arr{1} = status Online
    json_data = {}
    for i in str_arr:
        #remove whitespaces
        stripped_str = " ".join(i.split())  # i.strip()
        subarray = stripped_str.split(' ')
        #subarray{0}=name
        #subarray{1}=SP2
        key = subarray[0]    #key: 'name'
        value = subarray[1]  #value: 'SP2'
        json_data[key] = value
        #{dict 0}='name': SP2'
        #{dict 1}='status': online'
    return json_data
```

The `return` value is then turned into `json` (via `jsonify`). Is there a simpler/more elegant way to do this?
2021/05/05
[ "https://Stackoverflow.com/questions/67407000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14976099/" ]
Your approach is good, except for a couple of weird things:

* You aren't creating a *JSON* anything, so to avoid any confusion I suggest you don't name your returned dictionary `json_data` or your function `str_2_json`. JSON, or **J**ava**S**cript **O**bject **N**otation, is just that - a standard of denoting an object as text. The objects themselves have nothing to do with JSON.
* You can use `i.strip()` instead of joining the split string (not sure why you did it this way, since you commented out `i.strip()`).
* Some of your values contain multiple spaces (e.g. `"size 4764771 MB"` or `"disks /dev/sde /dev/sdf /dev/sdg"`). With your code, you end up losing everything after the second space in such strings. To avoid this, do `stripped_str.split(' ', 1)`, which limits how many times the string is split.

Other than that, you could create the dictionary in one line using the `dict()` constructor and a generator expression:

```
def str_2_dict(string):
    data = dict(item.strip().split(' ', 1) for item in string.split(','))
    return data

print(str_2_dict('name SP2, status Online, size 4764771 MB, free 2576353 MB, path /dev/sde, log 210 MB, port 5660, guid 7478a0141b7b9b0d005b30b0e60f3c4d, clusterUuid -8650609094877646407--116798096584060989, disks /dev/sde /dev/sdf /dev/sdg, dare 0'))
```

Outputs:

```
{
    'name': 'SP2',
    'status': 'Online',
    'size': '4764771 MB',
    'free': '2576353 MB',
    'path': '/dev/sde',
    'log': '210 MB',
    'port': '5660',
    'guid': '7478a0141b7b9b0d005b30b0e60f3c4d',
    'clusterUuid': '-8650609094877646407--116798096584060989',
    'disks': '/dev/sde /dev/sdf /dev/sdg',
    'dare': '0'
}
```

This is probably the same (practically, in terms of efficiency/time) as writing out the full loop:

```
def str_2_dict(string):
    data = dict()
    for item in string.split(','):
        key, value = item.strip().split(' ', 1)
        data[key] = value
    return data
```
```
import json

json_data = json.loads(string)
```
11,372
55,120,078
Is there any way to add new records to a file with each field occupying a fixed size, using Python? As shown in the picture below, there are 8 columns in total, `[column number:column bytes] -> [1:20, 2:10, 3:10, 4:39, 6:2, 7:7, 8:7]`, each of a different size. For example, if the first column is 20 bytes, a value like "ABBSBABBSBT " can occupy 10, 5, or the entire 20 bytes depending on user input. In C, we can specify the byte size during variable initialization. How can one add a new record with proper fixed spacing for each column?

[![enter image description here](https://i.stack.imgur.com/2i9Ia.png)](https://i.stack.imgur.com/2i9Ia.png)

Thank you in advance!
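The two responses stored below address a different question, so for illustration here is a hedged Python sketch of fixed-width padding with `str.ljust`. The widths come from the question (column 5's width isn't given, so it is omitted); the file name and sample values are hypothetical:

```python
# widths per column, taken from the question's [column:bytes] mapping
widths = [20, 10, 10, 39, 2, 7, 7]

def make_record(fields):
    # pad short values with spaces, truncate anything over its width
    return "".join(value.ljust(width)[:width]
                   for value, width in zip(fields, widths))

# append one fixed-width record (hypothetical file name and values)
with open("records.txt", "a") as fh:
    fh.write(make_record(["ABBSBABBSBT", "a", "b", "c", "01", "x", "y"]) + "\n")
```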
2019/03/12
[ "https://Stackoverflow.com/questions/55120078", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11056039/" ]
When you use `ParseExact`, your string and format should match *exactly*. The proper format is `ddd, d MMM yyyy hh:mm:ss zzz` (or `HH`, depending on your hour format).

After you parse it, you need to use `ToString` to format it with the `yyyy-MM-dd'T'hh:mm:ss` format (or `HH`, depending on whether you want the [12-hour clock](https://en.wikipedia.org/wiki/12-hour_clock) or [24-hour clock](https://en.wikipedia.org/wiki/24-hour_clock) format).

I think I have to add a little more explanation about something that confuses a lot of people (especially programming beginners). A [`DateTime`](https://learn.microsoft.com/en-us/dotnet/api/system.datetime?view=netframework-4.7.2) instance does *not* have any format. It just has date and time values, which are basically a numeric value called [`Ticks`](https://learn.microsoft.com/en-us/dotnet/api/system.datetime.ticks?view=netframework-4.7.2). When you talk about the "format" concept, that refers to a *textual representation*, which is a `string`. Since you said "`Mon, 11 Mar 2019 09:13:16 +0100` to `2019-03-11T09:13:16`", I (and probably a lot of people) assume that you have a string such as `Mon, 11 Mar 2019 09:13:16 +0100` and you want to get `2019-03-11T09:13:16` as a string from it. For that, you need to parse your string to a DateTime first, and `ParseExact` is one option. Once you have parsed it to a `DateTime`, you get its *textual representation* with the `ToString` method. This method has a few overloads, and you should use the [`ToString(String, IFormatProvider)` overload](https://learn.microsoft.com/en-us/dotnet/api/system.datetime.tostring?view=netframework-4.7.2#System_DateTime_ToString_System_String_System_IFormatProvider_). With that, you specify your output format as the first parameter and your culture info as the second parameter, which *might* affect your result string because of the `:` and `/` format specifiers, since they can change based on the current or supplied culture.

Further reading: [Custom Date and Time Format Strings](https://learn.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings)
You need to specify the format that the input data has (the second parameter of `DateTime.ParseExact`). In your case, the data you provide has the format `ddd, d MMM yyyy hh:mm:ss zzz`. Also, in the last line, where you print the result, you have to format it. So this is how you have to do it:

```
string dataa = "Mon, 11 Mar 2019 09:13:16 +0100";
DateTime d = new DateTime();
d = DateTime.ParseExact(dataa, "ddd, d MMM yyyy hh:mm:ss zzz", System.Globalization.CultureInfo.InvariantCulture);
Console.WriteLine("data: " + d.ToString("yyyy-MM-dd'T'hh:mm:ss"));
```
11,377
61,396,228
How can I convert small float values such as *1.942890293094024e-15* or *2.8665157186802404e-07* to binary values in Python? I tried [GeeksforGeeks's solution](https://www.geeksforgeeks.org/python-program-to-convert-floating-to-binary/); however, it does not work for values this small (I get this error: **ValueError: invalid literal for int() with base 10: '1.942890293094024e-15'**). The code looks like this so far:

```
def float_bin(number, places=3):

    # split() separates the whole number and decimal
    # part and stores them in two separate variables
    whole, dec = str(number).split(".")

    # Convert both the whole number and decimal
    # part from string type to integer type
    whole = int(whole)
    dec = int(dec)

    # Convert the whole number part to its
    # respective binary form and remove the
    # "0b" from it.
    res = bin(whole).lstrip("0b") + "."

    # Iterate the number of times we want
    # the number of decimal places to be
    for x in range(places):

        # Multiply the decimal value by 2
        # and separate the whole number part
        # and decimal part
        whole, dec = str((decimal_converter(dec)) * 2).split(".")

        # Convert the decimal part
        # to integer again
        dec = int(dec)

        # Keep adding the integer parts
        # received to the result variable
        res += whole

    return res

# Function converts the value passed as
# parameter to its decimal representation
def decimal_converter(num):
    while num > 1:
        num /= 10
    return num
```
2020/04/23
[ "https://Stackoverflow.com/questions/61396228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13281273/" ]
After some consultation, I found a good [piece of code](https://trinket.io/python3/8934ea47a2) that solves my problem. I just had to apply some little tweaks on it (making it a function, removing automatic input requests etc.). Huge thanks to @kartoon!
One problem I see is with `str(number).split(".")`. I've added a simple hack before that line: `number = "{:.30f}".format(number)`, so that there is no `e` in the number. Though I'm not sure if the result is exact all the way to the last digit.
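For illustration, a quick check of that idea (the exact trailing digits depend on the underlying binary double, so treat the tail of the output as illustrative):

```python
value = 1.942890293094024e-15
text = "{:.30f}".format(value)   # fixed-point: no 'e' in the string
print(text)                      # roughly 0.000000000000001942890293094024
whole, dec = text.split(".")     # now the split in float_bin() works
```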
11,378
14,582,773
My problem is how best to release the memory needed by the response of an asynchronous URL fetch on App Engine. Here is what I basically do in Python:

```
rpcs = []

for event in event_list:
    url = 'http://someurl.com'
    rpc = urlfetch.create_rpc()
    rpc.callback = create_callback(rpc)
    urlfetch.make_fetch_call(rpc, url)
    rpcs.append(rpc)

for rpc in rpcs:
    rpc.wait()
```

In my test scenario it does that for 1500 requests, but I need an architecture that can handle even more within a short amount of time.

Then there is a callback function, which adds a task to a queue to process the results:

```
def event_callback(rpc):
    result = rpc.get_result()
    data = json.loads(result.content)
    taskqueue.add(queue_name='name', url='url', params={'data': data})
```

My problem is that I make so many concurrent RPC calls that the memory of my instance crashes: **"Exceeded soft private memory limit with 159.234 MB after servicing 975 requests total"**.

I already tried three things:

```
del result
del data
```

and

```
result = None
data = None
```

and I ran the garbage collector manually after the callback function:

```
gc.collect()
```

But nothing seems to release the memory directly after a callback function has added its task to the queue - and therefore the instance crashes. Is there any other way to do it?
2013/01/29
[ "https://Stackoverflow.com/questions/14582773", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1471612/" ]
That's the wrong approach. Put these URLs into a (put) queue instead, increase its rate to the desired value (default: 5/sec), and let each task handle one URL fetch (or a group thereof). Please note that there's a safety limit of 3000 url-fetch API calls per minute (and one URL fetch might use more than one API call).
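A hedged sketch of that fan-out, using the same `taskqueue.add` call the question already uses (the queue name, handler URL, and batch size here are assumptions):

```python
from google.appengine.api import taskqueue

def fan_out(urls, batch_size=10):
    # enqueue small batches instead of holding ~1500 in-flight RPCs
    # (and their response payloads) inside a single instance
    for i in range(0, len(urls), batch_size):
        taskqueue.add(queue_name='fetch-queue',            # hypothetical queue
                      url='/worker/fetch',                 # hypothetical handler
                      params={'urls': ','.join(urls[i:i + batch_size])})
```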
Use the task queue for the urlfetch work as well: fan out and avoid exhausting memory, register named tasks, and provide the event_list cursor to the next task. You might want to fetch+process in such a scenario instead of registering a new task for every processing step, especially if processing also includes datastore writes. I also find ndb makes these async solutions more elegant. Check out Brett Slatkin's talk on [scalable apps](http://www.google.com/events/io/2009/sessions/BuildingScalableComplexApps.html) and perhaps [pipelines](http://www.youtube.com/watch?v=zSDC_TU7rtc).
11,379
36,011,478
What's the most pythonic way of performing an arithmetic operation on every nth value in a list? For example, if I start with list1:

```
list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

I would like to add 1 to every second item, which would give:

```
list2 = [1, 3, 3, 5, 5, 7, 7, 9, 9, 11]
```

I've tried:

```
list1[::2] + 1
```

and also:

```
for x in list1:
    x = 2
    list2 = list1[::x] + 1
```
2016/03/15
[ "https://Stackoverflow.com/questions/36011478", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6052660/" ]
```
list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

for i in range(1, len(list1), 2):
    list1[i] += 1

print(list1)
```

Using `i % 2` seems not very efficient.
Try this:

```
list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

for i in range(1, len(list1), 2):
    list1[i] += 1
```
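For what it's worth, a comprehension variant of the same idea (a sketch; it builds a new list instead of mutating in place, and `i % 2` is 1 exactly on every second index):

```python
list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
list2 = [x + (i % 2) for i, x in enumerate(list1)]
print(list2)   # [1, 3, 3, 5, 5, 7, 7, 9, 9, 11]
```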
11,380
19,378,869
```
Error occurred during initialization of VM.
Could not reserve enough space for object heap.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
```

The bat file has the following command:

```
java -cp stanford-corenlp-3.2.0.jar;stanford-corenlp-3.2.0-models.jar;xom.jar;joda-time.jar;jollyday.jar -Xmx3g edu.stanford.nlp.pipeline.StanfordCoreNLP -props props.properties -filelist filelist.txt
```

It works from the cmd window with no errors!

I have the following Python code:

```
import os
import subprocess

os.chdir('C:/Users/Christos/Documents/stanford-corenlp-full-2013-06-20/')
p = subprocess.Popen(r'start cmd /c run_mouskos.bat', shell=True)
p.wait()
print 'done'
```

I have also tried various other ways of executing the bat file from Python, with no luck. How can I run it with no errors?
2013/10/15
[ "https://Stackoverflow.com/questions/19378869", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2120596/" ]
I've recently had this issue. I'm running a Django application which is served by uWSGI. I'm actually running the uWSGI processes with the as-limit argument set to 512MB. After digging around, I discovered that every process the application runs using subprocess will keep the same OS limits as the uWSGI processes. After increasing the value of as-limit to 1GB, I was not able to reproduce this issue. Maybe this helps you.
Try setting the heap size explicitly: see [Could not reserve enough space for object heap](https://stackoverflow.com/questions/4401396/could-not-reserve-enough-space-for-object-heap). This setting can also be affected by environment variables - you should print the variables in the batch file (I think the `set` command on Windows does this). This would allow you to see if the variables are the same in the two cases you've tried. Of course, you'll need to capture the output from the batch script (or make it otherwise visible) in order to perform the comparison.
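If the environments do differ, one hedged experiment is to constrain the spawned JVM's heap via the environment (a sketch; `_JAVA_OPTIONS` is read by HotSpot JVMs, though whether it should take precedence over the batch file's `-Xmx3g` is an assumption worth verifying):

```python
import os
import subprocess

# copy the current environment and request a smaller maximum heap
env = os.environ.copy()
env["_JAVA_OPTIONS"] = "-Xmx1g"   # assumption: 1 GB fits in available memory

p = subprocess.Popen(r'start cmd /c run_mouskos.bat', shell=True, env=env)
p.wait()
```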
11,390
71,586,428
I want to find a concise way to sample n consecutive elements with stride m from a numpy array. The simplest case is sampling 1 element with stride 2, which means getting every other element of a list and can be done like this:

```
>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> a[::2]
array([0, 2, 4, 6, 8])
```

However, what if I wanted to slice n consecutive elements with a stride of m, where n and m can be any integers? For example, if I wanted to slice 2 consecutive elements with a stride of 3, I would get something like this:

```
array([0, 1, 3, 4, 6, 7, 9])
```

Is there a pythonic and concise way of doing this? Thank you!
2022/03/23
[ "https://Stackoverflow.com/questions/71586428", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9079872/" ]
If `a` is long enough you could reshape, slice, and ravel:

```
a.reshape(-1,3)[:,:2].ravel()
```

But `a` has to be (9,) or (12,). And the result will still be a copy.

The suggested:

```
np.lib.stride_tricks.as_strided(a, (4,2), (8*3, 8)).ravel()[:-1]
```

is also a copy. The `as_strided` part is a view, but `ravel` will make a copy. And there is the ugliness of that extra element.

`sliding_window_view` was added as a safer version:

```
In [81]: np.lib.stride_tricks.sliding_window_view(a,(3))
Out[81]:
array([[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4],
       [3, 4, 5],
       [4, 5, 6],
       [5, 6, 7],
       [6, 7, 8],
       [7, 8, 9]])

In [82]: np.lib.stride_tricks.sliding_window_view(a,(3))[::3,:2]
Out[82]:
array([[0, 1],
       [3, 4],
       [6, 7]])
```

Again `ravel` will make a copy. This omits the "extra" `9`.

`np.resize` does a `reshape` with padding (repeating `a` as needed):

```
In [83]: np.resize(a, (4,3))
Out[83]:
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8],
       [9, 0, 1]])

In [84]: np.resize(a, (4,3))[:,:2]
Out[84]:
array([[0, 1],
       [3, 4],
       [6, 7],
       [9, 0]])
```
This code might be useful; I tested it on the example in the question (n=2, m=3):

```
import numpy as np

def get_slice(arr, n, m):
    b = np.array([])
    for i in range(0, len(arr), m):
        b = np.concatenate((b, arr[i:i + n]))
    return b

sliced_arr = get_slice(np.arange(10), n=2, m=3)
print(sliced_arr)
```

Output:

```
[0. 1. 3. 4. 6. 7. 9.]
```
11,391
63,348,785
I'm new and trying to follow this tutorial from 05:00:00: <https://www.youtube.com/watch?v=_uQrJ0TkZlc>. I follow along exactly; then at 05:05:04, when he runs the server, it works fine for him but not for me. These are exactly my steps:

After installing Django like this:

```
pip install django
```

I write this:

```
django-admin startproject pyshop .
```

So far so good, no errors. Then I try to run the server like this:

```
python manage.py runserver
```

But then I get a big error:

```
Exception ignored in thread started by: <function check_errors.<locals>.wrapper at 0x000001D36DA79AF0>
Traceback (most recent call last):
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\utils\autoreload.py", line 225, in wrapper
    fn(*args, **kwargs)
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\core\management\commands\runserver.py", line 109, in inner_run
    autoreload.raise_last_exception()
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\utils\autoreload.py", line 248, in raise_last_exception
    raise _exception[1]
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\core\management\__init__.py", line 337, in execute
    autoreload.check_errors(django.setup)()
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\utils\autoreload.py", line 225, in wrapper
    fn(*args, **kwargs)
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\apps\registry.py", line 112, in populate
    app_config.import_models()
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\apps\config.py", line 198, in import_models
    self.models_module = import_module(models_module_name)
  File "C:\Users\anaconda3\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\contrib\auth\models.py", line 2, in <module>
    from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\contrib\auth\base_user.py", line 47, in <module>
    class AbstractBaseUser(models.Model):
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\db\models\base.py", line 101, in __new__
    new_class.add_to_class('_meta', Options(meta, app_label))
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\db\models\base.py", line 304, in add_to_class
    value.contribute_to_class(cls, name)
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\db\models\options.py", line 203, in contribute_to_class
    self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\db\__init__.py", line 33, in __getattr__
    return getattr(connections[DEFAULT_DB_ALIAS], item)
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\db\utils.py", line 202, in __getitem__
    backend = load_backend(db['ENGINE'])
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\db\utils.py", line 110, in load_backend
    return import_module('%s.base' % backend_name)
  File "C:\Users\anaconda3\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "D:\User\OneDrive\Programing Code\Python\PyShop\venv\lib\site-packages\django\db\backends\sqlite3\base.py", line 10, in <module>
    from sqlite3 import dbapi2 as Database
  File "C:\Users\anaconda3\lib\sqlite3\__init__.py", line 23, in <module>
    from sqlite3.dbapi2 import *
  File "C:\Users\anaconda3\lib\sqlite3\dbapi2.py", line 27, in <module>
    from _sqlite3 import *
ImportError: DLL load failed while importing _sqlite3: The specified module could not be found.
```

Help :(
2020/08/10
[ "https://Stackoverflow.com/questions/63348785", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14068193/" ]
The 2nd approach doesn't require you to have recording rules for every possible interval over which you'd like an average rate, saving resources.
The `avg_over_time(rate(metric_total[5m])[$__interval:])` query calculates an average of average rates. This isn't a good metric, since [an average of averages doesn't equal the overall average](https://math.stackexchange.com/questions/95909/why-is-an-average-of-an-average-usually-incorrect). So the better approach is to calculate `rate(metric_total[$__interval])` - it returns the real average per-second increase rate for `metric_total` over the given lookbehind window `$__interval`.
11,392
22,204,801
I need to run a simple C program several times, each time with a different input string (let's say "A" repeated with increasing length, until I get "TRUE" as output), e.g.:

```
./program A       # output FALSE
./program AA      # output FALSE
./program AAA     # output FALSE
./program AAAA    # output FALSE
./program AAAAA   # output FALSE
./program AAAAAA  # output FALSE
./program AAAAAAA # output TRUE
```

In C I would simply use a **while** loop. I know Python has a **while** loop as well, so the Python program would be (in pseudocode):

```
strlen = 0
while TRUE
    strlen++
    <run ./C program "A"*strlen>
    if (<program_output> = TRUE)
        break
```

Given that I can make the .py script executable by writing

```
#! /usr/bin/env python
```

and

```
chmod +x file.py
```

what should I do to make this work? Thanks in advance.
2014/03/05
[ "https://Stackoverflow.com/questions/22204801", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1219368/" ]
You could try something like this (see the [docs](http://docs.python.org/2/library/subprocess.html#subprocess.call)):

```
import subprocess

args = ""
while True:
    args += "A"
    # check_output captures the program's stdout; subprocess.call would
    # only return the exit status, not the printed TRUE/FALSE
    result = subprocess.check_output(["./program", args]).strip()
    if result == 'TRUE':
        break
```

The `subprocess` module is preferred to the `os.popen` command, since the latter has been "deprecated since version 2.6." See the [os documentation](http://docs.python.org/2/library/os.html#os.popen).
**file.py**

```
import os

count = 10
input_str = "A"
input_args = ""
for i in range(0, count):
    input_args += input_str
    os.popen("./program " + input_args)
```

Running **file.py** would execute **./program** 10 times with increasing `A` input.
11,393
48,783,650
I have a Python list l. The first few elements of the list look like below:

```
[751883787]
[751026090]
[752575831]
[751031278]
[751032392]
[751027358]
[751052118]
```

I want to convert this list to a pandas.core.series.Series with 2 leading zeros. My final outcome will look like:

```
00751883787
00751026090
00752575831
00751031278
00751032392
00751027358
00751052118
```

I'm working with Python 3.x in a Windows environment. Can you suggest how to do this? Also, my list contains around 2,000,000 elements.
2018/02/14
[ "https://Stackoverflow.com/questions/48783650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9300211/" ]
You can try:

```
list = [121, 123, 125, 145]
series = '00' + pd.Series(list).astype(str)
print(series)
```

output:

```
0    00121
1    00123
2    00125
3    00145
dtype: object
```
This is one way.

```
from itertools import chain; concat = chain.from_iterable
import pandas as pd

lst = [[751883787], [751026090], [752575831], [751031278]]

pd.DataFrame({'a': pd.Series([str(i).zfill(11) for i in concat(lst)])})

             a
0  00751883787
1  00751026090
2  00752575831
3  00751031278
```

Some benchmarking, relevant since your dataframe is large:

```
from itertools import chain; concat = chain.from_iterable
import pandas as pd

lst = [[751883787], [751026090], [752575831], [751031278],
       [751032392], [751027358], [751052118]] * 300000

%timeit pd.DataFrame(lst, columns=['a'])['a'].astype(str).str.zfill(11)
# 1 loop, best of 3: 7.88 s per loop

%timeit pd.DataFrame({'a': pd.Series([str(i).zfill(11) for i in concat(lst)])})
# 1 loop, best of 3: 2.06 s per loop
```
11,398
42,345,745
I am using python-social-auth, but when I run makemigrations and migrate, the `social_auth_*` tables are not created. My settings.py looks like this:

```
INSTALLED_APPS += (
    'social.apps.django_app.default',
)

AUTHENTICATION_BACKENDS += (
    'social.backends.facebook.FacebookOAuth2',
    'social.backends.google.GoogleOAuth2',
    'social.backends.twitter.TwitterOAuth',
)

SOCIAL_AUTH_USER_MODEL = AUTH_USER_MODEL  # Remove this if there are problems with auth

SOCIAL_AUTH_PIPELINE = (
    'social.pipeline.social_auth.social_details',
    'social.pipeline.social_auth.social_uid',
    'social.pipeline.social_auth.auth_allowed',
    'social.pipeline.social_auth.social_user',
    'social.pipeline.user.get_username',
    'social.pipeline.social_auth.associate_by_email',  # <--- enable this one to match users per email address
    'social.pipeline.user.create_user',
    'social.pipeline.social_auth.associate_user',
    'social.pipeline.social_auth.load_extra_data',
    'social.pipeline.user.user_details',
)

from sharadar.soc_auth_config import *
```

The same setup works on another machine without any flaw. On this machine I receive:

```
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, easy_thumbnails, guardian, main, myauth, sessions, social_auth
Running migrations:
  Applying myauth.0002_auto_20170220_1408... OK
```

social_auth is included here. But on a new computer I always receive:

```
Exception Value: relation "social_auth_usersocialauth" does not exist
LINE 1: ...er"."bio", "myauth_shruser"."email_verified" FROM "social_au...
```

When using Google auth in my running Django app, social_auth is not included when I run migrate:

```
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, easy_thumbnails, guardian, myauth, sessions
Running migrations:
  No migrations to apply.
```

Any help is appreciated.

Kind regards,
Michael
2017/02/20
[ "https://Stackoverflow.com/questions/42345745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7007547/" ]
I had to migrate to social-auth-core as described in this document: [Migrating from python-social-auth to split social](https://github.com/omab/python-social-auth/blob/master/MIGRATING_TO_SOCIAL.md). Then everything worked fine. But after these problems I am thinking about changing to allauth.

Thanks for any help.
Strange... when entering the admin interface I receive the exception:

```
No installed app with label 'social_django'.
```

But later in the error report I have:

```
INSTALLED_APPS
['django.contrib.admin',
 'django.contrib.auth',
 .....
 'myauth.apps.MyauthConfig',
 'main.apps.MainConfig',
 'social.apps.django_app.default']
```

A pip3 search for python-social-auth gives:

```
python-social-auth (0.3.6)
  INSTALLED: 0.3.6 (latest)
```

I don't know what is going on here.

Regards,
Michael
11,403
40,461,824
I have an Adafruit Feather Huzzah ESP8266 and want to load a Lua script onto it. The script is from [this Adafruit tutorial](https://learn.adafruit.com/manually-bridging-mqtt-mosquitto-to-adafruit-io/programming-the-esp8266), and I changed only the WiFi and MQTT connection settings. I followed the instructions at <https://github.com/4refr0nt/luatool#run> and used the following command:

```
python ./luatool.py --port /dev/tty.SLAB_USBtoUART --src LightSensor-master/init.lua --dest init.lua --verbose
```

I get the following error:

```
Upload starting
Stage 1. Deleting old file from flash memory
->file.open("init.lua", "w")Traceback (most recent call last):
  File "./luatool.py", line 272, in <module>
    transport.writeln("file.open(\"" + args.dest + "\", \"w\")\r")
  File "./luatool.py", line 111, in writeln
    self.performcheck(data)
  File "./luatool.py", line 61, in performcheck
    raise Exception('No proper answer from MCU')
Exception: No proper answer from MCU
```

What is the error here, and what am I doing wrong? I tried flashing the NodeMCU dev version to the Feather; this didn't change the problem. I also read some advice to stabilize the power supply and added a battery to the Feather, also without success.
2016/11/07
[ "https://Stackoverflow.com/questions/40461824", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2427707/" ]
Adding a delay of 0.6 seconds to the `luatool.py` call solved the problem for me:

```
python ./luatool.py --delay 0.6 --port /dev/tty.SLAB_USBtoUART --src LightSensor-master/init.lua --dest init.lua --verbose
```

I found this solution because I read some advice that the Python script might try to talk to the Feather faster than the Feather can answer.
I had the same problem. I detached the cable, attached it again, and ran the command

```
sudo python esp8266/luatool.py --delay 0.6 --port /dev/ttyUSB0 --src init.lua --dest init.lua --restart --verbose
```

The first time it failed, but executing the same command a second time worked for me.
11,404
49,037,742
I've noticed that installing Pandas and Numpy (its dependency) in a Docker container using the base OS Alpine vs. CentOS or Debian takes much longer. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies to install Pandas and Numpy, why does the setup.py build take around 70x longer than the Debian install? Is there any way to speed up the install using Alpine as the base image, or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy?

**Dockerfile.debian**

```
FROM python:3.6.4-slim-jessie

RUN pip install pandas
```

**Build Debian image with Pandas & Numpy:**

```
[PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . --no-cache
Sending build context to Docker daemon  3.072kB
Step 1/2 : FROM python:3.6.4-slim-jessie
 ---> 43431c5410f3
Step 2/2 : RUN pip install pandas
 ---> Running in 2e4c030f8051
Collecting pandas
  Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting numpy>=1.9.0 (from pandas)
  Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
Collecting pytz>=2011k (from pandas)
  Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting python-dateutil>=2 (from pandas)
  Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
  Downloading six-1.11.0-py2.py3-none-any.whl
Installing collected packages: numpy, pytz, six, python-dateutil, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 2e4c030f8051
 ---> a71e1c314897
Successfully built a71e1c314897
Successfully tagged debian-pandas:latest
docker build -t debian-pandas -f Dockerfile.debian . --no-cache  0.07s user 0.06s system 0% cpu 13.605 total
```

**Dockerfile.alpine**

```
FROM python:3.6.4-alpine3.7

RUN apk --update add --no-cache g++
RUN pip install pandas
```

**Build Alpine image with Pandas & Numpy:**

```
[PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache
Sending build context to Docker daemon  16.9kB
Step 1/3 : FROM python:3.6.4-alpine3.7
 ---> 4b00a94b6f26
Step 2/3 : RUN apk --update add --no-cache g++
 ---> Running in 4b0c32551e3f
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3)
(2/17) Installing libgcc (6.4.0-r5)
(3/17) Installing libstdc++ (6.4.0-r5)
(4/17) Installing binutils-libs (2.28-r3)
(5/17) Installing binutils (2.28-r3)
(6/17) Installing gmp (6.1.2-r1)
(7/17) Installing isl (0.18-r0)
(8/17) Installing libgomp (6.4.0-r5)
(9/17) Installing libatomic (6.4.0-r5)
(10/17) Installing pkgconf (1.3.10-r0)
(11/17) Installing mpfr3 (3.1.5-r1)
(12/17) Installing mpc1 (1.0.3-r1)
(13/17) Installing gcc (6.4.0-r5)
(14/17) Installing musl-dev (1.1.18-r3)
(15/17) Installing libc-dev (0.7.1-r0)
(16/17) Installing g++ (6.4.0-r5)
(17/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3)
Executing busybox-1.27.2-r7.trigger
OK: 184 MiB in 50 packages
Removing intermediate container 4b0c32551e3f
 ---> be26c3bf4e42
Step 3/3 : RUN pip install pandas
 ---> Running in 36f6024e5e2d
Collecting pandas
  Downloading pandas-0.22.0.tar.gz (11.3MB)
Collecting python-dateutil>=2 (from pandas)
  Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting pytz>=2011k (from pandas)
  Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting numpy>=1.9.0 (from pandas)
  Downloading numpy-1.14.1.zip (4.9MB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
  Downloading six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: pandas, numpy
  Running setup.py bdist_wheel for pandas: started
  Running setup.py bdist_wheel for pandas: still running...
  Running setup.py bdist_wheel for pandas: still running...
  Running setup.py bdist_wheel for pandas: still running...
  Running setup.py bdist_wheel for pandas: still running...
  Running setup.py bdist_wheel for pandas: still running...
  Running setup.py bdist_wheel for pandas: still running...
  Running setup.py bdist_wheel for pandas: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/e8/ed/46/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92
  Running setup.py bdist_wheel for numpy: started
  Running setup.py bdist_wheel for numpy: still running...
  Running setup.py bdist_wheel for numpy: still running...
  Running setup.py bdist_wheel for numpy: still running...
  Running setup.py bdist_wheel for numpy: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/9d/cd/e1/4d418b16ea662e512349ef193ed9d9ff473af715110798c984
Successfully built pandas numpy
Installing collected packages: six, python-dateutil, pytz, numpy, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 36f6024e5e2d
 ---> a93c59e6a106
Successfully built a93c59e6a106
Successfully tagged alpine-pandas:latest
docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache  0.54s user 0.33s system 0% cpu 16:08.47 total
```
2018/02/28
[ "https://Stackoverflow.com/questions/49037742", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3089468/" ]
Debian-based images install these packages in the prebuilt `.whl` (wheel) format:

```
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
```

The wheel format was developed as a quicker and more reliable method of installing Python software than rebuilding from source code every time. Wheel files only have to be moved to the correct location on the target system to be installed, whereas a source distribution requires a build step before installation.

The `manylinux1` wheels for `pandas` and `numpy` are built against glibc and are not supported on the Alpine platform, which uses musl libc instead. That's why, when we install them with pip during the build, they are always compiled from source on Alpine:

```
Downloading pandas-0.22.0.tar.gz (11.3MB)
Downloading numpy-1.14.1.zip (4.9MB)
```

and we can see the following inside the container during the image build:

```
/ # ps aux
PID   USER     TIME   COMMAND
    1 root       0:00 /bin/sh -c pip install pandas
    7 root       0:04 {pip} /usr/local/bin/python /usr/local/bin/pip install pandas
   21 root       0:07 /usr/local/bin/python -c import setuptools, tokenize;__file__='/tmp/pip-build-en29h0ak/pandas/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n
  496 root       0:00 sh
  660 root       0:00 /bin/sh -c gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -Ibuild/src.linux-x86_64-3.6/numpy/core/src/pri
  661 root       0:00 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -Ibuild/src.linux-x86_64-3.6/numpy/core/src/private -Inump
  662 root       0:00 /usr/libexec/gcc/x86_64-alpine-linux-musl/6.4.0/cc1 -quiet -I build/src.linux-x86_64-3.6/numpy/core/src/private -I numpy/core/include -I build/src.linux-x86_64-3.6/numpy/core/includ
  663 root       0:00 ps aux
```

If we modify the `Dockerfile` a little:

```
FROM python:3.6.4-alpine3.7

RUN apk add --no-cache g++ wget
RUN wget https://pypi.python.org/packages/da/c6/0936bc5814b429fddb5d6252566fe73a3e40372e6ceaf87de3dec1326f28/pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
RUN pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
```

we get the following error:

```
Step 4/4 : RUN pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
 ---> Running in 0faea63e2bda
pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl is not a supported wheel on this platform.
The command '/bin/sh -c pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl' returned a non-zero code: 1
```

Unfortunately, the only way to install `pandas` on an Alpine image is to wait until the build finishes.

Of course, if you want to use the Alpine image with `pandas` in CI, for example, the best way to do so is to compile it once, push it to a registry, and use it as a base image for your needs.

**EDIT:**

If you want to use the Alpine image with `pandas` you can pull my [nickgryg/alpine-pandas](https://hub.docker.com/r/nickgryg/alpine-pandas/) docker image. It is a Python image with pre-compiled `pandas` on the Alpine platform. It should save you time.
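For the compile-once-and-reuse approach described above, a minimal sketch (the `myregistry` name is hypothetical):

```
# Build once and push; later Dockerfiles can start FROM this image.
#   docker build -t myregistry/alpine-pandas .
#   docker push myregistry/alpine-pandas
FROM python:3.6.4-alpine3.7

RUN apk add --no-cache g++ \
 && pip install pandas
```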
In this case Alpine may not be the best solution; change Alpine for slim. Change

```
FROM python:3.8.3-alpine
```

to

```
FROM python:3.8.3-slim
```

In my case it was resolved with this small change.
11,405
72,095,609
Given a 2D list and a 1D list, I have to compute their dot product, but I have to calculate it without using `.dot`. For example, I want to turn these lists

```
matrix_A = [[0, 1, 2, 3],
            [4, 5, 6, 7],
            [8, 9, 10, 11],
            [12, 13, 14, 15],
            [16, 17, 18, 19],
            [20, 21, 22, 23],
            [24, 25, 26, 27],
            [28, 29, 30, 31]]

vector_x = [0, 1, 2, 3]
```

into this output

```
result_list = [ 14 38 62 86 110 134 158 182]
```

How can I do it using only lists (no `NumPy` arrays and no `.dot`) in Python?
2022/05/03
[ "https://Stackoverflow.com/questions/72095609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19021583/" ]
You could use a list comprehension with nested loops.

```py
matrix_A = [[0, 1, 2, 3],
            [4, 5, 6, 7],
            [8, 9, 10, 11],
            [12, 13, 14, 15],
            [16, 17, 18, 19],
            [20, 21, 22, 23],
            [24, 25, 26, 27],
            [28, 29, 30, 31]]

vector_x = [0, 1, 2, 3]

result_list = [sum(a*b for a, b in zip(row, vector_x)) for row in matrix_A]

print(result_list)
```

Output:

```
[14, 38, 62, 86, 110, 134, 158, 182]
```

Edit: Removed the square brackets in the list comprehension following @fshabashev's comment.
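For readers less used to comprehensions, an equivalent minimal sketch with explicit loops computes the same per-row sums:

```py
result_list = []
for row in matrix_A:
    total = 0
    for a, b in zip(row, vector_x):
        total += a * b  # accumulate the products of paired elements
    result_list.append(total)

print(result_list)  # [14, 38, 62, 86, 110, 134, 158, 182]
```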
If you do not mind using numpy, this is a solution:

```
import numpy as np

matrix_A = [[0, 1, 2, 3],
            [4, 5, 6, 7],
            [8, 9, 10, 11],
            [12, 13, 14, 15],
            [16, 17, 18, 19],
            [20, 21, 22, 23],
            [24, 25, 26, 27],
            [28, 29, 30, 31]]

vector_x = [0, 1, 2, 3]

res = np.sum(np.array(matrix_A) * np.array(vector_x), axis=1)
print(res)
```
11,415
41,249,099
I am able to create a DirContext using the credentials provided. So it seems that I am connecting to the LDAP server and verifying credentials, but later on we do a .search on the context that we get from these credentials. Here it is failing. I have included my Spring Security configuration in addition to code that shows how I verified the credentials are working and the code which seems to be failing.

spring-security configuration

```
<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/security"
    xmlns:beans="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:p="http://www.springframework.org/schema/p"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
    http://www.springframework.org/schema/security
    http://www.springframework.org/schema/security/spring-security-3.1.xsd">

    <http pattern="/ui/login" security="none"></http>
    <http pattern="/styles" security="none"/>

    <http use-expressions="true">
        <intercept-url pattern="/views/*" access="isAuthenticated()" />
        <intercept-url pattern="/database/upload" access="isAuthenticated()" />
        <intercept-url pattern="/database/save" access="isAuthenticated()" />
        <intercept-url pattern="/database/list" access="isAuthenticated()" />
        <intercept-url pattern="/database/delete" access="isAuthenticated()" />
        <intercept-url pattern="/project/*" access="isAuthenticated()" />
        <intercept-url pattern="/file/*" access="isAuthenticated()" />
        <intercept-url pattern="/amazon/*" access="isAuthenticated()" />
        <intercept-url pattern="/python/*" access="isAuthenticated()" />
        <intercept-url pattern="/r/*" access="isAuthenticated()" />
        <intercept-url pattern="/project/*" access="isAuthenticated()" />
        <intercept-url pattern="/image/*" access="isAuthenticated()" />
        <intercept-url pattern="/shell/*" access="isAuthenticated()" />
        <intercept-url pattern="/register" access="hasRole('ROLE_ADMIN')" />
        <intercept-url pattern="/user/save" access="hasRole('ROLE_ADMIN')" />
        <intercept-url pattern="/user/userAdministrator" access="hasRole('ROLE_ADMIN')" />
        <intercept-url pattern="/user/list" access="isAuthenticated()" />
        <intercept-url pattern="/user/archive" access="isAuthenticated()" />
        <form-login login-page="/login" default-target-url="/views/main" authentication-failure-url="/loginfailed"/>
        <logout logout-success-url="/login" />
    </http>

    <beans:bean id="ldapAuthProvider" class="org.springframework.security.ldap.authentication.ad.ActiveDirectoryLdapAuthenticationProvider">
        <beans:constructor-arg value="simplead.blazingdb.com" />
        <beans:constructor-arg value="ldap://simplead.blazingdb.com/" />
    </beans:bean>

    <authentication-manager alias="authenticationManager" erase-credentials="true">
        <authentication-provider ref="ldapAuthProvider">
        </authentication-provider>
    </authentication-manager>
</beans:beans>
```

from ActiveDirectoryLdapAuthenticationProvider

```
@Override
protected DirContextOperations doAuthentication(UsernamePasswordAuthenticationToken auth) {
    String username = auth.getName();
    String password = (String) auth.getCredentials();

    DirContext ctx = bindAsUser(username, password);

    try {
        return searchForUser(ctx, username);
    } catch (NamingException e) {
        logger.error("Failed to locate directory entry for authenticated user: " + username, e);
        throw badCredentials();
    } finally {
        LdapUtils.closeContext(ctx);
    }
}
```

This returns just fine so long as I pass in the correct credentials, and fails if I send the wrong credentials, so I know that we are making it this far. The issue comes inside of SpringSecurityLdapTemplate:

```
public static DirContextOperations searchForSingleEntryInternal(DirContext ctx, SearchControls searchControls,
        String base, String filter, Object[] params) throws NamingException {
    final DistinguishedName ctxBaseDn = new DistinguishedName(ctx.getNameInNamespace());
    final DistinguishedName searchBaseDn = new DistinguishedName(base);
    final NamingEnumeration<SearchResult> resultsEnum = ctx.search(searchBaseDn, filter, params, searchControls);

    if (logger.isDebugEnabled()) {
        logger.debug("Searching for entry under DN '" + ctxBaseDn
                + "', base = '" + searchBaseDn + "', filter = '" + filter + "'");
    }

    Set<DirContextOperations> results = new HashSet<DirContextOperations>();
    try {
        while (resultsEnum.hasMore()) {
            SearchResult searchResult = resultsEnum.next();

            // Work out the DN of the matched entry
            DistinguishedName dn = new DistinguishedName(new CompositeName(searchResult.getName()));

            if (base.length() > 0) {
                dn.prepend(searchBaseDn);
            }

            if (logger.isDebugEnabled()) {
                logger.debug("Found DN: " + dn);
            }

            results.add(new DirContextAdapter(searchResult.getAttributes(), dn, ctxBaseDn));
        }
    } catch (PartialResultException e) {
        LdapUtils.closeEnumeration(resultsEnum);
        logger.info("Ignoring PartialResultException");
    }

    if (results.size() == 0) {
        throw new IncorrectResultSizeDataAccessException(1, 0);
    }

    if (results.size() > 1) {
        throw new IncorrectResultSizeDataAccessException(1, results.size());
    }

    return results.iterator().next();
}
```

Specifically, I think the following line is where I am seeing issues. We get a result of size 0 when it is expecting 1, so it throws an error and the whole thing fails.

```
final NamingEnumeration<SearchResult> resultsEnum = ctx.search(searchBaseDn, filter, params, searchControls);
```

Whenever we try to do resultsEnum.hasMore() we catch a PartialResultException. I am trying to figure out why this is the case. I am using Amazon Simple Directory Service (the one that is backed by Samba, not the MSFT version). I am very new to LDAP and Active Directory, so if my question is poorly formed please let me know what information I need to add.
2016/12/20
[ "https://Stackoverflow.com/questions/41249099", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1431499/" ]
In the method searchForUser, the method SpringSecurityLdapTemplate.searchForSingleEntryInternal is called and passed an array of objects. The first object of the array is username@domain. The second one is the username itself.

So, when you search ActiveDirectory for (&(objectClass=user)(sAMAccountName={0})), you are passing username@domain as the value for the parameter {0} of the search.

You just have to pass a search filter like this: (&(objectClass=user)(sAMAccountName={1}))

**EDIT:** I'm assuming that you have passed the searchFilter to the ActiveDirectoryLdapAuthenticationProvider object. If you haven't, you have to.
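A minimal sketch of wiring that filter into the provider bean from the question, assuming Spring Security >= 3.2.6 (where the `setSearchFilter` setter exists); note that the `&` must be XML-escaped:

```
<beans:bean id="ldapAuthProvider"
    class="org.springframework.security.ldap.authentication.ad.ActiveDirectoryLdapAuthenticationProvider">
    <beans:constructor-arg value="simplead.blazingdb.com" />
    <beans:constructor-arg value="ldap://simplead.blazingdb.com/" />
    <beans:property name="searchFilter" value="(&amp;(objectClass=user)(sAMAccountName={1}))" />
</beans:bean>
```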
The issue was pretty straightforward once I used Apache Directory Studio to try and run the ldap queries coming out of Spring Security Active Directory defaults. They assume you have an attribute called userPrincipalName which is a combination of the sAMAccountName and the domain. In the end I had to set the searchFilter to use sAMAccountName and I had to make my own version of ActiveDirectoryLdapAuthenticationProvider which only looked for users inside of the domain being used but was only comparing sAMAccountName. I only changed searchForUser but since this was a final class I did have to just copy it over. I HATE having to do this but I need to keep moving and these are not configurable options in Spring Security 3.2.9. package org.springframework.security.ldap.authentication.ad; ``` import org.springframework.dao.IncorrectResultSizeDataAccessException; import org.springframework.ldap.core.DirContextOperations; import org.springframework.ldap.core.DistinguishedName; import org.springframework.ldap.core.support.DefaultDirObjectFactory; import org.springframework.ldap.support.LdapUtils; import org.springframework.security.authentication.AccountExpiredException; import org.springframework.security.authentication.BadCredentialsException; import org.springframework.security.authentication.CredentialsExpiredException; import org.springframework.security.authentication.DisabledException; import org.springframework.security.authentication.LockedException; import org.springframework.security.authentication.UsernamePasswordAuthenticationToken; import org.springframework.security.core.GrantedAuthority; import org.springframework.security.core.authority.AuthorityUtils; import org.springframework.security.core.authority.SimpleGrantedAuthority; import org.springframework.security.core.userdetails.UsernameNotFoundException; import org.springframework.security.ldap.SpringSecurityLdapTemplate; import org.springframework.security.ldap.authentication.AbstractLdapAuthenticationProvider; import org.springframework.security.ldap.authentication.ad.ActiveDirectoryAuthenticationException; import org.springframework.util.Assert; import org.springframework.util.StringUtils; import javax.naming.AuthenticationException; import javax.naming.Context; import javax.naming.NamingException; import javax.naming.OperationNotSupportedException; import javax.naming.directory.DirContext; import javax.naming.directory.SearchControls; import javax.naming.ldap.InitialLdapContext; import java.util.*; import java.util.regex.Matcher; import java.util.regex.Pattern; public final class BlazingActiveDirectory extends AbstractLdapAuthenticationProvider { private static final Pattern SUB_ERROR_CODE = Pattern.compile(".*data\\s([0-9a-f]{3,4}).*"); // Error codes private static final int USERNAME_NOT_FOUND = 0x525; private static final int INVALID_PASSWORD = 0x52e; private static final int NOT_PERMITTED = 0x530; private static final int PASSWORD_EXPIRED = 0x532; private static final int ACCOUNT_DISABLED = 0x533; private static final int ACCOUNT_EXPIRED = 0x701; private static final int PASSWORD_NEEDS_RESET = 0x773; private static final int ACCOUNT_LOCKED = 0x775; private final String domain; private final String rootDn; private final String url; private boolean convertSubErrorCodesToExceptions; private String searchFilter = "(&(objectClass=user)(userPrincipalName={0}))"; // Only used to allow tests to substitute a mock LdapContext ContextFactory contextFactory = new ContextFactory(); /** * @param domain the domain name (may be null or 
empty) * @param url an LDAP url (or multiple URLs) */ public BlazingActiveDirectory(String domain, String url) { Assert.isTrue(StringUtils.hasText(url), "Url cannot be empty"); this.domain = StringUtils.hasText(domain) ? domain.toLowerCase() : null; this.url = url; rootDn = this.domain == null ? null : rootDnFromDomain(this.domain); } @Override protected DirContextOperations doAuthentication(UsernamePasswordAuthenticationToken auth) { String username = auth.getName(); String password = (String) auth.getCredentials(); DirContext ctx = bindAsUser(username, password); try { return searchForUser(ctx, username); } catch (NamingException e) { logger.error("Failed to locate directory entry for authenticated user: " + username, e); throw badCredentials(e); } finally { LdapUtils.closeContext(ctx); } } /** * Creates the user authority list from the values of the {@code memberOf} attribute obtained from the user's * Active Directory entry. */ @Override protected Collection<? extends GrantedAuthority> loadUserAuthorities(DirContextOperations userData, String username, String password) { String[] groups = userData.getStringAttributes("memberOf"); if (groups == null) { logger.debug("No values for 'memberOf' attribute."); return AuthorityUtils.NO_AUTHORITIES; } if (logger.isDebugEnabled()) { logger.debug("'memberOf' attribute values: " + Arrays.asList(groups)); } ArrayList<GrantedAuthority> authorities = new ArrayList<GrantedAuthority>(groups.length); for (String group : groups) { authorities.add(new SimpleGrantedAuthority(new DistinguishedName(group).removeLast().getValue())); } return authorities; } private DirContext bindAsUser(String username, String password) { // TODO. add DNS lookup based on domain final String bindUrl = url; Hashtable<String, String> env = new Hashtable<String, String>(); env.put(Context.SECURITY_AUTHENTICATION, "simple"); String bindPrincipal = createBindPrincipal(username); env.put(Context.SECURITY_PRINCIPAL, bindPrincipal); env.put(Context.PROVIDER_URL, bindUrl); env.put(Context.SECURITY_CREDENTIALS, password); env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory"); env.put(Context.OBJECT_FACTORIES, DefaultDirObjectFactory.class.getName()); try { return contextFactory.createContext(env); } catch (NamingException e) { if ((e instanceof AuthenticationException) || (e instanceof OperationNotSupportedException)) { handleBindException(bindPrincipal, e); throw badCredentials(e); } else { throw LdapUtils.convertLdapException(e); } } } private void handleBindException(String bindPrincipal, NamingException exception) { if (logger.isDebugEnabled()) { logger.debug("Authentication for " + bindPrincipal + " failed:" + exception); } int subErrorCode = parseSubErrorCode(exception.getMessage()); if (subErrorCode <= 0) { logger.debug("Failed to locate AD-specific sub-error code in message"); return; } logger.info("Active Directory authentication failed: " + subCodeToLogMessage(subErrorCode)); if (convertSubErrorCodesToExceptions) { raiseExceptionForErrorCode(subErrorCode, exception); } } private int parseSubErrorCode(String message) { Matcher m = SUB_ERROR_CODE.matcher(message); if (m.matches()) { return Integer.parseInt(m.group(1), 16); } return -1; } private void raiseExceptionForErrorCode(int code, NamingException exception) { String hexString = Integer.toHexString(code); Throwable cause = new ActiveDirectoryAuthenticationException(hexString, exception.getMessage(), exception); switch (code) { case PASSWORD_EXPIRED: throw new 
CredentialsExpiredException(messages.getMessage("LdapAuthenticationProvider.credentialsExpired", "User credentials have expired"), cause); case ACCOUNT_DISABLED: throw new DisabledException(messages.getMessage("LdapAuthenticationProvider.disabled", "User is disabled"), cause); case ACCOUNT_EXPIRED: throw new AccountExpiredException(messages.getMessage("LdapAuthenticationProvider.expired", "User account has expired"), cause); case ACCOUNT_LOCKED: throw new LockedException(messages.getMessage("LdapAuthenticationProvider.locked", "User account is locked"), cause); default: throw badCredentials(cause); } } private String subCodeToLogMessage(int code) { switch (code) { case USERNAME_NOT_FOUND: return "User was not found in directory"; case INVALID_PASSWORD: return "Supplied password was invalid"; case NOT_PERMITTED: return "User not permitted to logon at this time"; case PASSWORD_EXPIRED: return "Password has expired"; case ACCOUNT_DISABLED: return "Account is disabled"; case ACCOUNT_EXPIRED: return "Account expired"; case PASSWORD_NEEDS_RESET: return "User must reset password"; case ACCOUNT_LOCKED: return "Account locked"; } return "Unknown (error code " + Integer.toHexString(code) +")"; } private BadCredentialsException badCredentials() { return new BadCredentialsException(messages.getMessage( "LdapAuthenticationProvider.badCredentials", "Bad credentials")); } private BadCredentialsException badCredentials(Throwable cause) { return (BadCredentialsException) badCredentials().initCause(cause); } private DirContextOperations searchForUser(DirContext context, String username) throws NamingException { SearchControls searchControls = new SearchControls(); searchControls.setSearchScope(SearchControls.SUBTREE_SCOPE); String bindPrincipal = createBindPrincipal(username); String searchRoot = rootDn != null ? 
rootDn : searchRootFromPrincipal(bindPrincipal); try { String verifyName = username; if(username.indexOf("@") != -1){ verifyName = username.substring(0,username.indexOf("@")); } return SpringSecurityLdapTemplate.searchForSingleEntryInternal(context, searchControls, searchRoot, searchFilter, new Object[]{verifyName}); } catch (IncorrectResultSizeDataAccessException incorrectResults) { // Search should never return multiple results if properly configured - just rethrow if (incorrectResults.getActualSize() != 0) { throw incorrectResults; } // If we found no results, then the username/password did not match UsernameNotFoundException userNameNotFoundException = new UsernameNotFoundException("User " + username + " not found in directory.", incorrectResults); throw badCredentials(userNameNotFoundException); } } private String searchRootFromPrincipal(String bindPrincipal) { int atChar = bindPrincipal.lastIndexOf('@'); if (atChar < 0) { logger.debug("User principal '" + bindPrincipal + "' does not contain the domain, and no domain has been configured"); throw badCredentials(); } return rootDnFromDomain(bindPrincipal.substring(atChar + 1, bindPrincipal.length())); } private String rootDnFromDomain(String domain) { String[] tokens = StringUtils.tokenizeToStringArray(domain, "."); StringBuilder root = new StringBuilder(); for (String token : tokens) { if (root.length() > 0) { root.append(','); } root.append("dc=").append(token); } return root.toString(); } String createBindPrincipal(String username) { if (domain == null || username.toLowerCase().endsWith(domain)) { return username; } return username + "@" + domain; } /** * By default, a failed authentication (LDAP error 49) will result in a {@code BadCredentialsException}. * <p> * If this property is set to {@code true}, the exception message from a failed bind attempt will be parsed * for the AD-specific error code and a {@link CredentialsExpiredException}, {@link DisabledException}, * {@link AccountExpiredException} or {@link LockedException} will be thrown for the corresponding codes. All * other codes will result in the default {@code BadCredentialsException}. * * @param convertSubErrorCodesToExceptions {@code true} to raise an exception based on the AD error code. */ public void setConvertSubErrorCodesToExceptions(boolean convertSubErrorCodesToExceptions) { this.convertSubErrorCodesToExceptions = convertSubErrorCodesToExceptions; } /** * The LDAP filter string to search for the user being authenticated. * Occurrences of {0} are replaced with the {@code username@domain}. * <p> * Defaults to: {@code (&(objectClass=user)(userPrincipalName={0}))} * </p> * * @param searchFilter the filter string * * @since 3.2.6 */ public void setSearchFilter(String searchFilter) { Assert.hasText(searchFilter,"searchFilter must have text"); this.searchFilter = searchFilter; } static class ContextFactory { DirContext createContext(Hashtable<?,?> env) throws NamingException { return new InitialLdapContext(env, null); } } } ```
11,416
49,641,899
I am not familiar with how to export lists to a CSV in Python. Here is the code for one list:

```
import csv

X = ([1,2,3],[7,8,9])
Y = ([4,5,6],[3,4,5])

for x in range(0,2,1):
    csvfile = "C:/Temp/aaa.csv"
    with open(csvfile, "w") as output:
        writer = csv.writer(output, lineterminator='\n')
        for val in x[0]:
            writer.writerow([val])
```

And I want the result:

[![enter image description here](https://i.stack.imgur.com/kqiIN.png)](https://i.stack.imgur.com/kqiIN.png)

How do I modify the code? (The main problem is how to change the column.)
2018/04/04
[ "https://Stackoverflow.com/questions/49641899", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6703592/" ]
To output multiple columns you can use [`zip()`](https://docs.python.org/3/library/functions.html#zip) like:

### Code:

```
import csv

x0 = [1, 2, 3]
y0 = [4, 5, 6]
x2 = [7, 8, 9]
y2 = [3, 4, 5]

csvfile = "aaa.csv"
with open(csvfile, "w") as output:
    writer = csv.writer(output, lineterminator='\n')
    writer.writerow(['x=0', None, None, 'x=2'])
    writer.writerow(['x', 'y', None, 'x', 'y'])
    for val in zip(x0, y0, [None] * len(x0), x2, y2):
        writer.writerow(val)
```

### Results:

```
x=0,,,x=2
x,y,,x,y
1,4,,7,3
2,5,,8,4
3,6,,9,5
```
You could try:

```
import csv

with open('file.csv') as fin, open('out.csv', 'w') as fout:
    reader = csv.reader(fin)
    writer = csv.writer(fout, lineterminator='\n')
    # write the first two fields of each input row
    for r in reader:
        writer.writerow([r[0], r[1]])
```

If you need further help, leave a comment.
11,417
52,149,479
I am preprocessing a time-series dataset, changing its shape from 2 dimensions (datapoints, features) into 3 dimensions (datapoints, time\_window, features).

In this setting, the time window (sometimes also called look-back) indicates the number of previous time steps/datapoints that are used as input variables to predict the next time period. In other words, the time window is how much past data the machine learning algorithm takes into consideration for a single prediction in the future.

The issue with this approach (or at least with my implementation) is that it is quite inefficient in terms of memory usage, since it introduces data redundancy across the windows, causing the input data to become very heavy.

This is the function that I have been using so far to reshape the input data into a 3-dimensional structure.

```
from sys import getsizeof
import numpy as np

def time_framer(data_to_frame, window_size=1):
    """It transforms a 2d dataset into 3d based on a specific size;
    original function can be found at:
    https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/
    """
    n_datapoints = data_to_frame.shape[0] - window_size
    framed_data = np.empty(
        shape=(n_datapoints, window_size, data_to_frame.shape[1],)).astype(np.float32)

    for index in range(n_datapoints):
        framed_data[index] = data_to_frame[index:(index + window_size)]
    print(framed_data.shape)

    # it prints the size of the output in MB
    print(framed_data.nbytes / 10 ** 6)
    print(getsizeof(framed_data) / 10 ** 6)

    # quick and dirty quality test to check if the data has been correctly reshaped
    test1 = list(set(framed_data[0][1] == framed_data[1][0]))
    if test1[0] and len(test1) == 1:
        print('Data is correctly framed')

    return framed_data
```

I have been advised to use [numpy's stride tricks](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.lib.stride_tricks.as_strided.html#numpy.lib.stride_tricks.as_strided) to overcome this problem and reduce the size of the reshaped data. Unfortunately, any resource I have found so far on this subject focuses on implementing the trick on a 2-dimensional array, just as in this [excellent tutorial](https://www.youtube.com/watch?v=XijN-pjOz-A). I have been struggling with my use case, which involves a 3-dimensional output. Here is the best I came up with; however, it neither succeeds in reducing the size of framed\_data, nor does it frame the data correctly, as it does not pass the quality test.

I am quite sure that my error is in the *strides* parameter, which I have not fully understood. The *new\_strides* are the only values I managed to successfully feed to *as\_strided*.

```
from numpy.lib.stride_tricks import as_strided

def strides_trick_time_framer(data_to_frame, window_size=1):
    new_strides = (data_to_frame.strides[0],
                   data_to_frame.strides[0]*data_to_frame.shape[1],
                   data_to_frame.strides[0]*window_size)
    n_datapoints = data_to_frame.shape[0] - window_size
    print('striding.....')
    framed_data = as_strided(data_to_frame,
                             shape=(n_datapoints,  # .flatten() here did not change the outcome
                                    window_size,
                                    data_to_frame.shape[1]),
                             strides=new_strides).astype(np.float32)

    # it prints the size of the output in MB
    print(framed_data.nbytes / 10 ** 6)
    print(getsizeof(framed_data) / 10 ** 6)

    # quick and dirty test to check if the data has been correctly reshaped
    test1 = list(set(framed_data[0][1] == framed_data[1][0]))
    if test1[0] and len(test1) == 1:
        print('Data is correctly framed')

    return framed_data
```

Any help would be highly appreciated!
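For reference, a correct windowed view needs the window axis to reuse the row stride. Here is a minimal sketch (not from the original post) assuming a C-contiguous 2D array, and without the `.astype` call, which would copy the data and defeat the memory savings:

```
import numpy as np
from numpy.lib.stride_tricks import as_strided

def strided_time_framer(data, window_size):
    n_windows = data.shape[0] - window_size
    return as_strided(data,
                      shape=(n_windows, window_size, data.shape[1]),
                      strides=(data.strides[0],    # next window starts one row later
                               data.strides[0],    # next step inside a window is also one row
                               data.strides[1]))   # next feature is one item over

data = np.arange(20, dtype=np.float32).reshape(10, 2)
framed = strided_time_framer(data, 3)
assert (framed[0][1] == framed[1][0]).all()  # the quality test from above passes
```

The result is a view sharing memory with the input, so it should be treated as read-only; newer NumPy versions also provide `numpy.lib.stride_tricks.sliding_window_view` for the same purpose.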
2018/09/03
[ "https://Stackoverflow.com/questions/52149479", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3482860/" ]
`os.system` is superseded by `subprocess`, which will handle the quoting nicely for you. Since you have a pipe, you would normally have to create 2 `subprocess` objects, but here you just want to feed standard input, so:

```
import subprocess

p = subprocess.Popen(["/usr/bin/cmd", "-parameters"], stdin=subprocess.PIPE)
p.communicate('{{"value": {:0.0f}}}\n'.format(value).encode())  # we need to provide bytes
rc = p.wait()
```

The quoting issue is gone, since you're not using a shell command to provide the argument, but pure Python.

To test this I changed the command to `more` so I can run it on Windows (which also proves that this is portable):

```
import subprocess

value = 12.0
p = subprocess.Popen(["more.com"], stdin=subprocess.PIPE)
p.communicate('{{"value": {:0.0f}}}\n'.format(value).encode())
rc = p.wait()
```

That prints: `{"value": 12}`
If you want to echo quotes, you need to escape them. For example:

```
echo "value"
```

> value

```
echo "\"value\""
```

> "value"

So your Python code should look like

```
os.system('echo {{\\"value\\": {0:0.0f}}} | /usr/bin/cmd -parameters'.format(value))
```

Note that you should use double backslashes `\\` because Python would otherwise treat a single backslash as an escape character.
11,419
72,068,358
Why doesn't Python raise an error when I try to do this? Instead it prints nothing.

```
empty = []
for i in empty:
    for y in i:
        print(y)
```

Does Python stop iterating at the first level, or does it just iterate over it and print **None**?

I found this when trying to create an infinite loop, but it stops when the list becomes empty.
2022/04/30
[ "https://Stackoverflow.com/questions/72068358", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18132490/" ]
There is nothing to iterate over in the `empty` list, so the `for` loop body won't run even a single time. However, if you want an error in this case, you can raise an exception like below:

```
empty = []

if len(empty) == 0:
    raise Exception("List is empty")

for i in empty:
    for y in i:
        print(y)
```
Your list variable `empty` is initialized as an empty list, so the loop body is never entered: there is nothing to iterate over.
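As a quick illustration (a minimal sketch, not from either answer), the `else` clause of a `for` loop shows that the body runs zero times here:

```
empty = []
for i in empty:
    print("never reached")
else:
    # runs because the loop finished without a break, after zero iterations
    print("loop body ran 0 times")
```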
11,420
60,233,935
I have been trying to get my first trial with templating with Ansible to work and I am stopped by the following exception. As far as I can see, I have maintained the indentation well and have also validated the YAML file. I don't know where to go from here; help please! Below is the YAML file, followed by the exception I saw after running the playbook.

```
---
- name: run these tasks on the host
  hosts:
    testhost:
    testhost1: "172.16.201.163"
  vars:
    ansible_port: 22

  tasks:
  - name: Templating
    template:
      dest: /etc/my_test.conf
      owner: root
      src: my_test.j2
    become: true
```

The output from the run:

```
ERROR! Unexpected Exception, this is probably a bug: unhashable type: 'AnsibleMapping'
the full traceback was:

Traceback (most recent call last):
  File "/usr/local/bin/ansible-playbook", line 118, in <module>
    exit_code = cli.run()
  File "/usr/local/Cellar/ansible/2.7.9/libexec/lib/python3.7/site-packages/ansible/cli/playbook.py", line 122, in run
    results = pbex.run()
  File "/usr/local/Cellar/ansible/2.7.9/libexec/lib/python3.7/site-packages/ansible/executor/playbook_executor.py", line 106, in run
    all_vars = self._variable_manager.get_vars(play=play)
  File "/usr/local/Cellar/ansible/2.7.9/libexec/lib/python3.7/site-packages/ansible/vars/manager.py", line 185, in get_vars
    include_delegate_to=include_delegate_to,
  File "/usr/local/Cellar/ansible/2.7.9/libexec/lib/python3.7/site-packages/ansible/vars/manager.py", line 470, in _get_magic_variables
    variables['ansible_play_hosts_all'] = [x.name for x in self._inventory.get_hosts(pattern=pattern, ignore_restrictions=True)]
  File "/usr/local/Cellar/ansible/2.7.9/libexec/lib/python3.7/site-packages/ansible/inventory/manager.py", line 358, in get_hosts
    if pattern_hash not in self._hosts_patterns_cache:
TypeError: unhashable type: 'AnsibleMapping'
```
2020/02/14
[ "https://Stackoverflow.com/questions/60233935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4682497/" ]
There are at minimum two things wrong with the playbook you posted:

1. `hosts:` is a `dict`, but should not be
2. `testhost:` has a `null` value

[Reading the fine manual](https://docs.ansible.com/ansible/2.9/reference_appendices/playbooks_keywords.html#term-hosts) shows that `hosts:` should be a string, or `list[str]`, but may not be a `dict`. Perhaps what you are trying to accomplish is best achieved via an inventory file, or a dynamic inventory plugin/script. A corrected sketch follows below.
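A minimal sketch of a corrected play head, assuming the intent was to target a single machine defined in the inventory (the inventory group name is hypothetical):

```yaml
# inventory.ini (hypothetical):
# [targets]
# testhost1 ansible_host=172.16.201.163 ansible_port=22

- name: run these tasks on the host
  hosts: testhost1    # a pattern string, not a dict
  tasks:
    - name: Templating
      template:
        dest: /etc/my_test.conf
        owner: root
        src: my_test.j2
      become: true
```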
This error can also happen if you wind up with the wrong structure in your playbook. For example:

```yaml
tags:  # oops
  - role: foo/myrole
```

instead of

```yaml
roles:
  - role: foo/myrole
```
11,422
19,306,963
In a file like:

```
jaslkfdj,asldkfj,,,
slakj,aklsjf,,,
lsak,sajf,,,
```

how can you split it up so there is just a key/value pair of the two words? I tried to split using commas, but the only way I know how to make key/value pairs is when there is only one comma in a line. Because of the 3 extra commas at the end of each line, Python gives the error "ValueError: too many values to unpack (expected 2)". This is what I have:

```
newdict = {}
wd = open('file.csv', 'r')
for line in wd:
    key, val = line.split(',')
    newdict[key] = val
print(newdict)
```
2013/10/10
[ "https://Stackoverflow.com/questions/19306963", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2844776/" ]
It seems more likely that what you tried is this:

```
>>> line = 'jaslkfdj,asldkfj,,,'
>>> key, value = line.split(',')
ValueError: too many values to unpack (expected 2)
```

There are two ways around this. First, you can split, and then just take the first two values:

```
>>> line = 'jaslkfdj,asldkfj,,,'
>>> parts = line.split(',')
>>> key, value = parts[:2]
```

Or you can use a `maxsplit` argument:

```
>>> line = 'jaslkfdj,asldkfj,,,'
>>> key, value = line.split(',', 1)
```

This second one will leave extra commas on the end of `value`, but that's easy to fix:

```
>>> value = value.rstrip(',')
```
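Applied to the dict-building loop from the question, a minimal sketch combining `maxsplit` and `rstrip`:

```
newdict = {}
with open('file.csv', 'r') as wd:
    for line in wd:
        key, val = line.split(',', 1)     # split on the first comma only
        newdict[key] = val.rstrip(',\n')  # drop the trailing commas and newline

print(newdict)
```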
Try slicing for the first two values:

```
"a,b,,,,,".split(",")[:2]
```

Nice summary of slice notation in [this answer](https://stackoverflow.com/a/509295/169121).
11,423
46,554,928
I don't care if I achieve this through vim, sed, awk, Python, etc. I tried all of them but could not get it done.

For an input like this:

```
top    f1 f2 f3
   sub1    f1 f2 f3
   sub2    f1 f2 f3
      sub21 f1 f2 f3
   sub3   f1 f2 f3
```

I want:

```
top    f1 f2 f3
...sub1    f1 f2 f3
...sub2    f1 f2 f3
......sub21 f1 f2 f3
...sub3   f1 f2 f3
```

Then I want to just load this up in Excel (delimited by whitespace) and still be able to see the hierarchy in the first column! I tried many things, but I end up losing the hierarchy information.
2017/10/03
[ "https://Stackoverflow.com/questions/46554928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2977601/" ]
With this as the input:

```
$ cat file
top    f1 f2 f3
   sub1    f1 f2 f3
   sub2    f1 f2 f3
      sub21 f1 f2 f3
   sub3   f1 f2 f3
```

Try:

```
$ sed -E ':a; s/^( *) ([^ ])/\1.\2/; ta' file
top    f1 f2 f3
...sub1    f1 f2 f3
...sub2    f1 f2 f3
......sub21 f1 f2 f3
...sub3   f1 f2 f3
```

### How it works:

* `:a` This creates a label `a`.
* `s/^( *) ([^ ])/\1.\2/` If the line begins with spaces, this replaces the last space in the leading spaces with a period. In more detail, `^( *)` matches all leading blanks except the last and stores them in group 1. The regex ` ([^ ])` (a blank followed by `([^ ])`) matches a blank followed by a nonblank and stores the nonblank in group 2. `\1.\2` replaces the matched text with group 1, followed by a period, followed by group 2.
* `ta` If the substitute command resulted in a substitution, then branch back to label `a` and try over again.

### Compatibility:

1. The above was tested on modern GNU sed. For BSD/OSX sed, one might or might not need to use:

   ```
   sed -E -e :a -e 's/^( *) ([^ ])/\1.\2/' -e ta file
   ```

   On ancient GNU sed, one needs to use `-r` in place of `-E`:

   ```
   sed -r ':a; s/^( *) ([^ ])/\1.\2/; ta' file
   ```

2. The above assumed that the spaces were blanks. If they are tabs, then you will have to decide what your tabstop is and make substitutions accordingly.
There are two different ways to do this in vim.

1. With a regex:

   ```
   :%s/^\s\+/\=repeat('.', len(submatch(0)))
   ```

   This is fairly straightforward, but a little verbose. It uses the eval register (`\=`) to generate a string of `'.'`s the same length as the number of spaces at the beginning of each line.

2. With a norm command:

   ```
   :%norm ^hviwr.
   ```

   This is a much more conveniently short command, although it's a little harder to understand. It visually selects the spaces at the beginning of a line, and replaces the whole selection with dots. If there is no leading space, the command will fail on `^h` because the cursor attempts to move out of bounds.

   To see how this works, try typing `^hviwr.` on a line that has leading spaces to see it happen.
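Since the question also allows Python, an equivalent minimal sketch that reads from stdin and replaces each leading space with a dot:

```
import re
import sys

for line in sys.stdin:
    # turn the run of leading spaces into the same number of dots
    sys.stdout.write(re.sub(r'^ +', lambda m: '.' * len(m.group()), line))
```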
11,426
73,700,589
I have an integer array.

```
Dim a As Variant
a = Array(1,2,3,4,1,2,3,4,5)

Dim index As Integer
index = Application.Match(4, a, 0) ' 3
```

index is 3 here; it returns the index of the first occurrence of 4, but I want the index of the last occurrence of 4. In Python there is rindex, which returns the reverse index. I am new to VBA; is there any similar API available in VBA?

Note: I am using Microsoft Office Professional 2019

Thanks in advance.
2022/09/13
[ "https://Stackoverflow.com/questions/73700589", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7006773/" ]
[`XMATCH()`](https://support.microsoft.com/en-us/office/xmatch-function-d966da31-7a6b-4a13-a1c6-5a33ed6a0312) is available in O365 and Excel 2021. Try:

```
Sub ReverseMatch()
    Dim a As Variant
    Dim index As Integer

    a = Array(1, 2, 3, 4, 1, 2, 3, 4, 5)
    index = Application.XMatch(4, a, 0, -1) ' -1 indicates search last-to-first.
    Debug.Print index
End Sub
```

You can also try the sub below.

```
Sub ReverseMatch()
    Dim a As Variant
    Dim index As Integer, i As Integer

    a = Array(1, 2, 3, 4, 1, 2, 3, 4, 5)

    For i = UBound(a) To LBound(a) Step -1
        If a(i) = 4 Then
            Debug.Print i + 1
            Exit For
        End If
    Next i
End Sub
```
Try the next way, please:

```
Sub lastOccurrenceMatch()
   Dim a As Variant, index As Integer
   a = Array(1, 2, 3, 4, 1, 2, 3, 4, 5)
   index = Application.Match(CStr(4), Split(StrReverse(Join(a, "|")), "|"), 0) ' 2
   Debug.Print index, UBound(a) + 1 - index + 1
End Sub
```

Or a version not raising an error in case of no match:

```
Sub lastOccurrenceMatchBetter()
   Dim a As Variant, index As Variant
   a = Array(1, 2, 3, 4, 1, 2, 3, 4, 5)
   index = Application.Match(CStr(4), Split(StrReverse(Join(a, "|")), "|"), 0) ' 2
   If Not IsError(index) Then
        Debug.Print index, UBound(a) + 1 - index + 1
   Else
        Debug.Print "No match at all..."
   End If
End Sub
```

**Edited**: The next version reverses the array as it is (without the join/split sequence), just using `Index`, for the sake of playing with arrays:

```
Sub testMatchReverseArray()
  Dim a As Variant, index As Integer, cnt As Long, col As String, arrRev()

  a = Array(1, 2, 3, 4, 1, 12, 3, 4, 15)
  cnt = UBound(a) + 1
  col = Split(Cells(1, cnt).Address, "$")(1)
  arrRev = Evaluate(cnt & "+1-column(A:" & col & ")") 'build the reversed columns array
  a = Application.Index(Application.Transpose(a), arrRev, 0)
  Debug.Print Join(a, "|") 'just to visually see the reversed array...

  index = Application.Match(4, a, 0)
  Debug.Print index, UBound(a) + 1 - index + 1
End Sub
```

Second **Edit**: The next solution matches the initial array against one loaded with only the element to be searched, then reverses the result (using `StrReverse`) and searches for the matching position ("1"):

```
Sub MatchLastOccurrence()
  Dim a(), arr(), srcVar, arrMtch(), index As Integer

  a = Array(1, 2, 3, 4, 1, 12, 3, 4, 15)
  srcVar = 4 'the array element to identify its last position
  arr = Array(srcVar)
  arrMtch = Application.IfError(Application.Match(a, arr, 0), 0)
  Debug.Print Join(arrMtch, "|") '1 for matching positions, zero for not matching ones

  index = Application.Match("1", Split(StrReverse(Join(arrMtch, "|")), "|"), 0) ' 2
  Debug.Print index, UBound(a) + 1 - index + 1
End Sub
```
11,435
30,236,277
I am an enthusiastic learner of opencv and wrote some code for video streaming with opencv. I want to learn the use of cv2.createTrackbar() to add some interactive functionality. I tried this function, but it's not working for me.

For streaming and resizing the frame I use this code:

```
import cv2
import sys
import scipy.misc
import scipy

cap = cv2.VideoCapture(sys.argv[1])
new_size = 0.7  # value in range (0,1) can be used for resizing the frame size

while(1):
    ret, frame = cap.read()
    frame = scipy.misc.imresize(frame, new_size)
    cv2.imshow("t", frame)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
```

Then I transformed the above code like this to add the trackbar functionality to resize the frame:

```
import cv2
import sys
import scipy.misc
import scipy

def nothing(x):
    pass

cv2.createTrackbar('t', 'frame', 0, 1, nothing)
cap = cv2.VideoCapture(sys.argv[1])

while(1):
    ret, frame = cap.read()
    j = cv2.getTrackbarPos('t', 'frame')
    frame = scipy.misc.imresize(frame, j)
    cv2.imshow("t", frame)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
```

but this code is not working and ended up with the error given below:

```
me@ubuntu:~/Desktop/final_video_soft$ python GUI_STREAM.py p.3gp
Traceback (most recent call last):
  File "GUI_STREAM.py", line 20, in <module>
    frame = scipy.misc.imresize(frame, j)
  File "/usr/lib/python2.7/dist-packages/scipy/misc/pilutil.py", line 365, in imresize
    imnew = im.resize(size, resample=func[interp])
  File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 1305, in resize
    im = self.im.resize(size, resample)
TypeError: must be 2-item sequence, not float
```
2015/05/14
[ "https://Stackoverflow.com/questions/30236277", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4706745/" ]
Your code will surely fail. There are too many issues indicating you haven't read the documentation. Even the first `new_size` version will **fail for sure**.

`cap = cv2.VideoCapture(sys.argv[1])` is wrong, because it requires an `int` instead of a `str` for a camera index. You have to do `cap = cv2.VideoCapture(int(sys.argv[1]))`.

Another obvious error is the conflicting window name you give in the following code:

```
cv2.createTrackbar('t', 'frame', 0, 1, nothing)
cv2.imshow("t", frame)
```

`imshow` used the **window name** 't', but 't' is actually the **trackbar name**.

Moreover, if you have read the documentation you will know that `createTrackbar` only accepts `int` for `value` and `count`. Thus you either have `j = 0` or `j = 1` in your code. The `value` is the initial value of the trackbar; in your case it is always 0, which will raise an error in imshow.

The `getTrackbarPos` call should be in an event-triggered callback, not in the main loop. If you do it like you posted, it might still run, but it will not respond to every sliding event. However, it does not cause visible trouble, since the video capture loop is quite fast.

After fixing all those errors, it ends up like this:

```
scale = 700
max_scale = 1000

def modified():
    scale = 500
    _scale = float(scale)/max_scale
    cv2.namedWindow('frame', cv2.WINDOW_AUTOSIZE)
    cv2.createTrackbar('t', 'frame', scale, max_scale, nothing)
    cap = cv2.VideoCapture(int(sys.argv[1]))
    while(1):
        ret, frame = cap.read()
        if not ret:
            break
        scale = cv2.getTrackbarPos('t', 'frame')
        if scale > 1:
            _scale = float(scale)/max_scale
            print "scale = ", _scale
        frame = scipy.misc.imresize(frame, _scale)
        cv2.imshow("frame", frame)
        k = cv2.waitKey(30) & 0xff
        if k == 27:
            break
```
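A minimal sketch of the callback-driven variant mentioned above, where the position is read inside the trackbar callback instead of being polled every frame (the window and trackbar names are illustrative):

```
import cv2

def on_trackbar(pos):
    # called by the GUI whenever the slider moves; pos is the new position
    print("scale =", pos / float(max_scale))

max_scale = 1000
cv2.namedWindow('frame', cv2.WINDOW_AUTOSIZE)
cv2.createTrackbar('t', 'frame', 700, max_scale, on_trackbar)
```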
Sorry, I am familiar with C++; here is the C++ code, hope it helps. The code mentioned below adds a contrast adjustment to the live video stream from the camera using the createTrackbar function.

```
#include "opencv2\highgui.hpp"
#include "opencv2\core.hpp"
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char **argv)
{
    string owin = "Live Feed Original";
    string mwin = "Modified Live stream";
    int trackval = 50;

    Mat oframe;
    Mat inframe;
    VideoCapture video(0);
    if (!video.isOpened())
    {
        cout << "The Camera cannot be accessed" << endl;
        return -1;
    }

    namedWindow(owin);
    moveWindow(owin, 0, 0);
    namedWindow(mwin);
    createTrackbar("Contrast", mwin, &trackval, 100);

    while (1)
    {
        video >> inframe;
        imshow(owin, inframe);
        inframe.convertTo(oframe, -1, trackval / 50.0);
        imshow(mwin, oframe);
        if (waitKey(33) == 27)
            break;
    }
}
```
11,438
46,248,019
I am using plotly in a Jupyter notebook (Python v3.6) and trying to get the example code for Mapbox to work (see: <https://plot.ly/python/scattermapbox/>). When I execute the cell, I don't get any error, but I see no output either, just a blank output cell. When I mouse over the output cell area, I can see there's a frame there, and I even see the plotly icon tray at the top right, but there doesn't seem to be anything else. When I hit the 'edit chart' button at the bottom right, I get taken to the plotly page with a blank canvas. As far as I can tell, my Mapbox API token is valid.

My code looks identical to the example linked above:

```
data = Data([
    Scattermapbox(
        lat=['45.5017'],
        lon=['-73.5673'],
        mode='markers',
        marker=Marker(
            size=14
        ),
        text=['Montreal'],
    )
])

layout = Layout(
    autosize=True,
    hovermode='closest',
    mapbox=dict(
        accesstoken=mapbox_token,
        bearing=0,
        center=dict(
            lat=45,
            lon=-73
        ),
        pitch=0,
        zoom=1,
        style='light'
    ),
)

fig = dict(data=data, layout=layout)
py.iplot(fig)
```

What am I doing wrong? Thx!
2017/09/15
[ "https://Stackoverflow.com/questions/46248019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1404267/" ]
D'oh! I was using the wrong Mapbox token. I should have used the public token, but instead used a private one. The only error was a JS one in the browser console. Thanks user1561393!

```
jupyter labextension install plotlywidget
```
What is your `mapbox_token`? My guess is you haven't signed up for Mapbox to get an API token (which allows you to download their excellent tile maps for mapbox-gl). The map's not going to show without this token.
11,439
29,974,933
I am using the Python multiprocessing Process class. Why can't I start or restart a process after it exits and I do a join? The process is gone, but in the instantiated class, \_popen is not set to None after the process dies and I do a join. If I try to start it again, it tells me I can't start a process twice.
2015/04/30
[ "https://Stackoverflow.com/questions/29974933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/686334/" ]
From the Python multiprocessing documentation:

> start()
>
> Start the process's activity.
>
> This must be called at most once per process object. It arranges for the object's run() method to be invoked in a separate process.

A Process object can be run only once. If you need to re-run the same routine (the target parameter), you must instantiate a new Process object. This is due to the fact that each Process object encapsulates a unique OS process.

Tip: do not play with the internals of Python Thread and Process objects, you might get seriously hurt.

The \_popen attribute you're seeing is a sentinel the Process object uses to understand when the child OS process is gone. It's literally a pipe, and it's used to block the join() call until the process terminates.
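A minimal sketch of the re-instantiation pattern (the worker function is illustrative):

```
from multiprocessing import Process

def worker():
    print("working")

if __name__ == '__main__':
    p = Process(target=worker)
    p.start()
    p.join()

    # a Process object cannot be restarted; build a fresh one instead
    p = Process(target=worker)
    p.start()
    p.join()
```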
From your question we can only guess what you're talking about. Are you using `subprocess`? How do you start your process, by invoking `call()` or `Popen()`?

In case I guessed right: just keep the `args` list from your `subprocess` call and restart your process by issuing that command again. The instance of your subprocess (e.g. a `Popen` instance) represents a running process (with **one** PID) rather than an abstract application which can be run multiple times. E.g.:

```
args = ['python', 'myscript.py']
p = Popen(args)
p.wait()
p = Popen(args)
...
```
11,440
36,996,629
I am doing `pip install setuptools --upgrade` but getting the error below:

```
Installing collected packages: setuptools
  Found existing installation: setuptools 1.1.6
    Uninstalling setuptools-1.1.6:
Exception:
Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/pip-8.1.1-py2.7.egg/pip/basecommand.py", line 209, in main
    status = self.run(options, args)
  File "/Library/Python/2.7/site-packages/pip-8.1.1-py2.7.egg/pip/commands/install.py", line 317, in run
    prefix=options.prefix_path,
  File "/Library/Python/2.7/site-packages/pip-8.1.1-py2.7.egg/pip/req/req_set.py", line 726, in install
    requirement.uninstall(auto_confirm=True)
  File "/Library/Python/2.7/site-packages/pip-8.1.1-py2.7.egg/pip/req/req_install.py", line 746, in uninstall
    paths_to_remove.remove(auto_confirm)
  File "/Library/Python/2.7/site-packages/pip-8.1.1-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove
    renames(path, new_path)
  File "/Library/Python/2.7/site-packages/pip-8.1.1-py2.7.egg/pip/utils/__init__.py", line 267, in renames
    shutil.move(old, new)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 299, in move
    copytree(src, real_dst, symlinks=True)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 208, in copytree
    raise Error, errors
Error: [('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', "[Errno 1] Operation not permitted: '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py'"),
('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc'"),
('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', "[Errno 1] Operation not permitted: '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py'"),
('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc'"),
('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', "[Errno 1] Operation not permitted: '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib'")]
```

What am I missing? I also tried `sudo pip install`, but that didn't help.
2016/05/03
[ "https://Stackoverflow.com/questions/36996629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1450312/" ]
Try to upgrade manually:

```
pip uninstall setuptools
pip install setuptools
```

If it doesn't work, try installing into your user site-packages instead: `pip install --user --upgrade setuptools`

As you can see, the operation didn't have the appropriate privileges:

`[Errno 1] Operation not permitted: '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc'`
I ran into a similar problem but with a different error, and a different resolution. (My search for a solution led me here, so I'm posting my details here in case it helps.)

TL;DR: if upgrading `setuptools` in a Python virtual environment appears to work, but reports `OSError: [Errno 2] No such file or directory`, try deactivating and reactivating the virtual environment before continuing, e.g.:

```
source myenv/bin/activate
pip install --upgrade setuptools
deactivate
source myenv/bin/activate
:
```

Long version
------------

I'm in the process of upgrading the Python version and libraries for a long-running project. I use a Python virtual environment for development and testing. The host system is MacOS 10.11.5 (El Capitan).

I've discovered that `pip` needs to be updated after the virtual environment is created (apparently due to some recent `pypa` TLS changes as of 2018-04), so my initial setup looks like this (having installed the latest version of the Python 2.7 series using the downloadable MacOS installer):

```
virtualenv myenv -p python2.7
source myenv/bin/activate
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
```

So far, so good :)

My problem comes when I try to run:

```
pip install --upgrade setuptools
```

The installation appears to work OK, but then I get an error message, thus:

```
Collecting setuptools
  Using cached setuptools-39.0.1-py2.py3-none-any.whl
Installing collected packages: setuptools
  Found existing installation: setuptools 0.6rc11
    Uninstalling setuptools-0.6rc11:
      Successfully uninstalled setuptools-0.6rc11
Successfully installed setuptools-39.0.1
Traceback (most recent call last):
  File "/Users/graham/workspace/github/gklyne/annalist/anenv/bin/pip", line 11, in <module>
    sys.exit(main())
  File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/__init__.py", line 248, in main
    return command.main(cmd_args)
  File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/basecommand.py", line 252, in main
    pip_version_check(session)
  File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/utils/outdated.py", line 102, in pip_version_check
    installed_version = get_installed_version("pip")
  File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/utils/__init__.py", line 838, in get_installed_version
    working_set = pkg_resources.WorkingSet()
  File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py", line 644, in __init__
    self.add_entry(entry)
  File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py", line 700, in add_entry
    for dist in find_distributions(entry, True):
  File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py", line 1949, in find_eggs_in_zip
    if metadata.has_metadata('PKG-INFO'):
  File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py", line 1463, in has_metadata
    return self.egg_info and self._has(self._fn(self.egg_info, name))
  File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py", line 1823, in _has
    return zip_path in self.zipinfo or zip_path in self._index()
  File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py", line 1703, in zipinfo
    return self._zip_manifests.load(self.loader.archive)
  File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py", line 1643, in load
    mtime = os.stat(path).st_mtime
OSError: [Errno 2] No such file or directory: '/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg'
```

Note that the installation appears to complete successfully, followed by the `OSError` exception, which appears to be an attempt to access the old setuptools.

Despite the error message, `pip` seems to work just fine for installing new packages, but my local `setup.py` fails to find its dependencies; e.g.:

```
$ python setup.py install
running install
:
(lots of build messages)
:
Installed /Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/oauth2client-1.2-py2.7.egg
Processing dependencies for oauth2client==1.2
Searching for httplib2>=0.8
Reading https://pypi.python.org/simple/httplib2/
Couldn't find index page for 'httplib2' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.python.org/simple/
No local packages or working download links found for httplib2>=0.8
error: Could not find suitable distribution for Requirement.parse('httplib2>=0.8')
```

But if I use pip to install the same dependency ('httplib2>=0.8'), it works fine, and I can re-run `setup.py` without any problems.

At this point I'm guessing that the difference between running `setup.py` and `pip` is that the virtual environment is somehow hanging onto some old `setuptools` files, but `pip` comes with its own copy. So after upgrading setuptools and getting the `OSError: [Errno 2] No such file or directory` message, I deactivate and reactivate the virtual environment, thus:

```
deactivate
source myenv/bin/activate
```

and, *voilà*, `setup.py` seems to work fine!
11,441
12,683,745
Normally, one shuts down Apache Tomcat by running its `shutdown.sh` script (or batch file). In some cases, such as when Tomcat's web container is hosting a web app that does some crazy things with multi-threading, running `shutdown.sh` gracefully shuts down *some* parts of Tomcat (as I can see more available memory returning to the system), but the Tomcat process keeps running.

I'm trying to write a simple Python script that:

1. Calls `shutdown.sh`
2. Runs `ps -aef | grep tomcat` to find any process with Tomcat referenced
3. If applicable, kills the process with `kill -9 <PID>`

Here's what I've got so far (as a prototype - I'm brand new to Python BTW):

```
#!/usr/bin/python

# Imports
import sys
import subprocess

# Load from imported module.
if __init__ == "__main__":
    main()

# Main entry point.
def main():
    # Shutdown Tomcat
    shutdownCmd = "sh ${TOMCAT_HOME}/bin/shutdown.sh"
    subprocess.call([shutdownCmd], shell=true)

    # Check for PID
    grepCmd = "ps -aef | grep tomcat"
    grepResults = subprocess.call([grepCmd], shell=true)

    if(grepResult.length > 1):
        # Get PID and kill it.
        pid = ???
        killPidCmd = "kill -9 $pid"
        subprocess.call([killPidCmd], shell=true)

    # Exit.
    sys.exit()
```

I'm struggling with the middle part - with obtaining the `grep` results, checking to see if their size is greater than 1 (since `grep` always returns a reference to itself, at least 1 result will always be returned, methinks), and then parsing that returned PID and passing it into the `killPidCmd`. Thanks in advance!
2012/10/02
[ "https://Stackoverflow.com/questions/12683745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/892029/" ]
You need to replace `grepResults = subprocess.call([grepCmd], shell=True)` with `grepResults = subprocess.check_output([grepCmd], shell=True)` if you want to save the output of the command in `grepResults`. Then you can use `split()` to convert that to a list, and the second element of the list will be the pid: `pid = int(grepResults.split()[1])`

That will only kill the first process, however. It doesn't kill all the processes if more than one is running. In order to do that you would have to write:

```
# parse line by line; taking the second whitespace-separated field of each
# line is more reliable than stepping through one flat list of fields
grepResults = subprocess.check_output([grepCmd], shell=True).splitlines()
for line in grepResults:
    pid = line.split()[1]
    killPidCmd = "kill -9 " + pid
    subprocess.call([killPidCmd], shell=True)
```
You can add "c" to `ps` so that only the command, and not the arguments, is printed. This stops grep from matching itself. I'm not sure if Tomcat shows up as a java application though, so this may not work.

PS: Got this from googling "grep includes self"; the first hit had that solution.

EDIT: My bad! OK, something like this then?

```
p = subprocess.Popen(["ps caux | grep tomcat"], shell=True, stdout=subprocess.PIPE)
out, err = p.communicate()
out.split()[1]  # <-- check out the contents of this variable, it'll have your pid!
```

Basically "out" will have the program output as a string that you can read/manipulate.
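If parsing `ps` output columns feels fragile, here is a minimal sketch of the same shutdown-then-kill flow using `pgrep` instead (this assumes `pgrep` is available, as on most Linux systems; the Tomcat path comes from the question):

```
import os
import signal
import subprocess

# ask Tomcat to shut down gracefully first
subprocess.call(["sh", os.path.expandvars("${TOMCAT_HOME}/bin/shutdown.sh")])

# pgrep -f matches against the full command line, so it finds java processes
# started from Tomcat's scripts; it exits non-zero when nothing matches
try:
    out = subprocess.check_output(["pgrep", "-f", "tomcat"]).decode()
except subprocess.CalledProcessError:
    out = ""

for pid in out.split():
    os.kill(int(pid), signal.SIGKILL)  # equivalent to kill -9 <PID>
```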
11,442
22,869,920
I am trying to insert raw JSON strings into a sqlite database using the sqlite3 module in python. When I do the following:

```
rows = [["a", "<json value>"]....["n", "<json_value>"]]
cursor.executemany("""INSERT OR IGNORE INTO FEATURES(UID, JSON) VALUES(?, ?)""", rows)
```

I get the following error:

> sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 2, and there are 48 supplied.

How can I insert the raw json into a table? I assume it's the commas in the json string. How can I get around this?
2014/04/04
[ "https://Stackoverflow.com/questions/22869920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1087908/" ]
Your input is interpreted as a list of characters (that's where the '48 supplied' is coming from - 48 is the length of the `<json value>` string). You will be able to pass your input in as a string if you wrap it in square brackets, like so:

```
["<json value>"]
```

The whole line would then look like:

```
rows = [["a", ["<json value>"]]....["n", ["<json value>"]]]
```
It's kind of a long shot... but perhaps you could quote the JSON values to ensure the parsing works as desired:

```
cursor.executemany("""INSERT OR IGNORE INTO FEATURES(UID, JSON) VALUES(?, '?')""", rows)
```

EDIT: Alternatively... this might force the json into a string for the insertion?

```
rows = [(uid, '"{}"'.format(json_val)) for uid, json_val in rows]
cursor.executemany("""INSERT OR IGNORE INTO FEATURES(UID, JSON) VALUES(?, ?)""", rows)
```
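For what it's worth, a raw JSON string binds fine as a single parameter as long as each row is a two-element sequence; here is a minimal self-contained sketch (table name from the question, sample values made up):

```
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE FEATURES (UID TEXT PRIMARY KEY, JSON TEXT)")

# each row must contain exactly as many items as there are placeholders
rows = [
    ("a", json.dumps({"x": 1})),
    ("b", json.dumps({"y": [2, 3]})),
]
conn.executemany("INSERT OR IGNORE INTO FEATURES(UID, JSON) VALUES(?, ?)", rows)

print(conn.execute("SELECT * FROM FEATURES").fetchall())
```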
11,445
34,227,066
Using python I can easily increase the current process's niceness:

```
>>> import os
>>> import psutil
>>> # Use os to increase by 3
>>> os.nice(3)
3
>>> # Use psutil to set to 10
>>> psutil.Process(os.getpid()).nice(10)
>>> psutil.Process(os.getpid()).nice()
10
```

However, decreasing a process's niceness does not seem to be allowed:

```
>>> os.nice(-1)
OSError: [Errno 1] Operation not permitted
>>> psutil.Process(os.getpid()).nice(5)
psutil.AccessDenied: psutil.AccessDenied (pid=14955)
```

What is the correct way to do this? And is the ratchet mechanism a bug or a feature?
2015/12/11
[ "https://Stackoverflow.com/questions/34227066", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1448640/" ]
Linux, by default, doesn't allow unprivileged users to decrease the nice value (i.e. increase the priority) of their processes, so that one user doesn't create a high-priority process to starve out other users. Python is simply forwarding the error the OS gives you as an exception. The root user can increase the priority of processes, but running as root has other consequences.
This is not a restriction by Python or the `os.nice` interface. It is described in `man 2 nice` that only the superuser may decrease the niceness of a process:

> nice() adds inc to the nice value for the calling process. (A higher nice value means a low priority.) Only the superuser may specify a negative increment, or priority increase. The range for nice values is described in getpriority(2).
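On Linux the ceiling is actually tunable per process via the `RLIMIT_NICE` resource limit, which Python exposes since 3.4; a small sketch to inspect it (Linux only, values depend on your system):

```
import os
import resource

# a soft limit of N allows unprivileged nice values down to 20 - N
soft, hard = resource.getrlimit(resource.RLIMIT_NICE)
print("RLIMIT_NICE soft/hard:", soft, hard)
print("lowest reachable nice value:", 20 - soft)

try:
    os.nice(-1)
except OSError as e:
    print("decrease refused:", e)
```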
11,447
41,455,463
For a list of daily maximum temperature values from 5 to 27 degrees celsius, I want to calculate the corresponding maximum ozone concentration from the following pandas DataFrame:

[![enter image description here](https://i.stack.imgur.com/iGW9y.png)](https://i.stack.imgur.com/iGW9y.png)

I can do this with the following code, changing the 5 to 6, 7, etc.:

```
df_c = df_b[df_b['Tmax']==5]
df_c.O3max.max()
```

Then I have to copy and paste the output values into an excel spreadsheet. I'm sure there must be a much more pythonic way of doing this, such as a list comprehension. Ideally I would like to generate a list of values from the column O3max. Please give me some suggestions.
2017/01/04
[ "https://Stackoverflow.com/questions/41455463", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7159945/" ]
It means you are using Python 2.x, where dividing two integers performs integer (truncated) division by default. You have a couple of options:

Option 1: import the division behaviour from the `__future__` module:

```
from __future__ import division
```

Option 2: change either of the operands to a float, e.g. by adding `.0` to a literal or by multiplying by `1.0`:

```
print 'The diameter of {planet} is {measure:.2f}'.format(planet="Earth", measure=10/3.0)
```
You could also just do:

```
print 'The diameter of {} is {}'.format("Earth", 10/3)
```
11,452
13,639,464
I would like a javascript function that mimics the python `.format()` function, which works like:

```
.format(*args, **kwargs)
```

A previous question gives a possible (but not complete) solution for `.format(*args)`:

[JavaScript equivalent to printf/string.format](https://stackoverflow.com/questions/610406/javascript-equivalent-to-printf-string-format)

I would like to be able to do:

```
"hello {} and {}".format("you", "bob") ==> hello you and bob
"hello {0} and {1}".format("you", "bob") ==> hello you and bob
"hello {0} and {1} and {a}".format("you", "bob", a="mary") ==> hello you and bob and mary
"hello {0} and {1} and {a} and {2}".format("you", "bob", "jill", a="mary") ==> hello you and bob and mary and jill
```

I realize that's a tall order, but maybe somewhere out there is a complete (or at least partial) solution that includes keyword arguments as well. Oh, and I hear AJAX and JQuery possibly have methods for this, but I would like to be able to do it without all that overhead. In particular, I would like to be able to use it with a script for a google doc. Thanks
2012/11/30
[ "https://Stackoverflow.com/questions/13639464", "https://Stackoverflow.com", "https://Stackoverflow.com/users/188963/" ]
UPDATE: If you're using ES6, template strings work very similarly to `String.format`: <https://developers.google.com/web/updates/2015/01/ES6-Template-Strings>

If not, the below works for all the cases above, with a very similar syntax to python's `String.format` method. Test cases below.

```js
String.prototype.format = function() {
  var args = arguments;
  this.unkeyed_index = 0;
  return this.replace(/\{(\w*)\}/g, function(match, key) {
    if (key === '') {
      key = this.unkeyed_index;
      this.unkeyed_index++;
    }
    if (key == +key) {
      // check presence with typeof; comparing the value itself against the
      // string 'undefined' would misfire on real arguments
      return typeof args[key] !== 'undefined' ? args[key] : match;
    } else {
      for (var i = 0; i < args.length; i++) {
        if (typeof args[i] === 'object' && typeof args[i][key] !== 'undefined') {
          return args[i][key];
        }
      }
      return match;
    }
  }.bind(this));
};

// Run some tests
$('#tests')
  .append("hello {} and {}<br />".format("you", "bob"))
  .append("hello {0} and {1}<br />".format("you", "bob"))
  .append("hello {0} and {1} and {a}<br />".format("you", "bob", {a: "mary"}))
  .append("hello {0} and {1} and {a} and {2}<br />".format("you", "bob", "jill", {a: "mary"}));
```

```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="tests"></div>
```
This should work similarly to python's `format`, but with an object with named keys (the keys could be numbers as well):

```
String.prototype.format = function(params) {
    return this.replace(/\{(\w+)\}/g, function(a, b) {
        return params[b];
    });
};

console.log("hello {a} and {b}.".format({ a: 'foo', b: 'baz' }));
//^= "hello foo and baz."
```
11,454
7,800,213
I'm trying to read data from Google Fusion Tables API into Python using the *csv* library. It seems like querying the API returns CSV data, but when I try and use it with *csv.reader*, it seems to mangle the data and split it up on every character rather than just on the commas and newlines. Am I missing a step? Here's a sample I made to illustrate, using a public table:

```
#!/usr/bin/python

import csv
import urllib2, urllib

request_url = 'https://www.google.com/fusiontables/api/query'
query = 'SELECT * FROM 1140242 LIMIT 10'

url = "%s?%s" % (request_url, urllib.urlencode({'sql': query}))
serv_req = urllib2.Request(url=url)
serv_resp = urllib2.urlopen(serv_req)

reader = csv.reader(serv_resp.read())
for row in reader:
    print row  # prints out each character of each cell and the column headings
```

Ultimately I'd be using the *csv.DictReader* class, but the base *reader* shows the issue as well
2011/10/17
[ "https://Stackoverflow.com/questions/7800213", "https://Stackoverflow.com", "https://Stackoverflow.com/users/201103/" ]
`csv.reader()` takes in an iterable of lines, such as a file-like object, not a single string. When you pass it `serv_resp.read()`, it iterates over the string one character at a time, which is why every character comes back as its own field. Change

```
reader = csv.reader(serv_resp.read())
```

to

```
reader = csv.reader(serv_resp)
```

Alternatively, you could do:

```
reader = csv.DictReader(serv_resp)
```
It's not the CSV module that's causing the problem. Take a look at the output from `serv_resp.read()`. Try using `serv_resp.readlines()` instead.
11,455
68,043,856
I wrote python code to check how many characters need to be deleted from two strings for them to become anagrams of each other. This is the problem statement:

"Given two strings, `a` and `b`, that may or may not be of the same length, determine the minimum number of character deletions required to make `a` and `b` anagrams. Any characters can be deleted from either of the strings."

```
def makeAnagram(a, b):
    # Write your code here
    ac = 0      # to count the no of occurrences of a character in a
    bc = 0      # to count the no of occurrences of a character in b
    p = False   # used to store whether an element is in the other string
    c = 0       # count of characters to be deleted to make these two strings anagrams
    t = []      # list of previously checked characters
    for x in a:
        if x in t == True:
            continue
        ac = a.count(x)
        t.insert(0, x)
        for y in b:
            p = x in b
        if p == True:
            bc = b.count(x)
            if bc != ac:
                d = ac - bc
                c = c + abs(d)
        elif p == False:
            c = c + 1
    return (c)
```
2021/06/19
[ "https://Stackoverflow.com/questions/68043856", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9262680/" ]
You can use [`collections.Counter`](https://docs.python.org/3/library/collections.html#collections.Counter) for this:

```
from collections import Counter

def makeAnagram(a, b):
    return sum((Counter(a) - Counter(b) | Counter(b) - Counter(a)).values())
```

`Counter(x)` (where x is a string) returns a dictionary that maps characters to how many times they appear in the string.

`Counter(a) - Counter(b)` gives you a dictionary that maps characters which are overabundant in `a` to how many more times they appear in `a` than in `b`.

`Counter(b) - Counter(a)` is like the above, but for characters which are overabundant in `b`.

The `|` merges the two resulting counters. We then take the values of this, and sum them to get the total number of characters which are overrepresented in either string. This is equivalent to the minimum number of characters that need to be deleted to form an anagram.

---

As for why your code doesn't work, I can't pin down any one problem with it. To obtain the code below, all I did was some simplification (e.g. removing unnecessary variables, looping over a and b together, removing `== True` and `== False`, replacing `t` with a `set`, giving variables descriptive names, etc.), and the code began working. Here is that simplified working code:

```
def makeAnagram(a, b):
    c = 0  # count of characters to be deleted to make these two strings anagrams
    seen = set()  # set of previously checked characters
    for character in a + b:
        if character not in seen:
            seen.add(character)
            c += abs(a.count(character) - b.count(character))
    return c
```

I recommend you make it a point to learn how to write simple/short code. It may not seem important compared to actually tackling the algorithms and getting results. It may seem like cleanup or styling work. But it pays off enormously. Bugs are harder to introduce in simple code, and easier to spot. Oftentimes simple code will be more performant than equivalent complex code too, either because the programmer was able to more easily see ways to improve it, or because the more performant approach just arose naturally from the cleaner code.
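To make the Counter arithmetic concrete, here is a small worked run (strings chosen arbitrarily):

```
from collections import Counter

a, b = "bcadeh", "hea"
print(Counter(a) - Counter(b))  # Counter({'b': 1, 'c': 1, 'd': 1})
print(Counter(b) - Counter(a))  # Counter() -- nothing is overabundant in b
# the union of the two differences counts every surplus character exactly once
print(sum(((Counter(a) - Counter(b)) | (Counter(b) - Counter(a))).values()))  # 3
```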
Assuming there are only lowercase letters, the idea is to build character-count arrays for both strings, storing the frequency of each character. Then iterate over the two count arrays; the difference in frequency of any character, `abs(count1[i] - count2[i])`, is the number of occurrences of that character to be removed from one string or the other.

```py
CHARS = 26

# function to calculate the minimum number of characters
# to be removed to make two strings anagrams
def remAnagram(str1, str2):
    count1 = [0] * CHARS
    count2 = [0] * CHARS

    i = 0
    while i < len(str1):
        count1[ord(str1[i]) - ord('a')] += 1
        i += 1

    i = 0
    while i < len(str2):
        count2[ord(str2[i]) - ord('a')] += 1
        i += 1

    # traverse the count arrays to find the number
    # of characters to be removed
    result = 0
    for i in range(26):
        result += abs(count1[i] - count2[i])
    return result
```

Here the time complexity is O(n + m), where n and m are the lengths of the two strings. The space complexity is O(1), as we only use arrays of size 26.

This can be further optimised by using just a single array for the counts: for string s1 we increment the counter, and for string s2 we decrement it.

```py
def makeAnagram(a, b):
    buffer = [0] * 26
    for char in a:
        buffer[ord(char) - ord('a')] += 1
    for char in b:
        buffer[ord(char) - ord('a')] -= 1
    return sum(map(abs, buffer))


if __name__ == "__main__":
    str1 = "bcadeh"
    str2 = "hea"
    print(makeAnagram(str1, str2))
```

Output: 3
11,456
49,053,579
Say I created a django project called `django_site`. I created two sub-projects: `site1` and `polls`. You see that I have two `index.html` in two sub-projects directories. However, now, if I open on web browser `localhost:8000/site1` or `localhost:8000/polls`, they all point to the `index.html` of `polls`. How can I configure so when I open `localhost:8000/site1` it will use the `index.html` of `site1`? My `settings.py` in `django_site` directory: ``` .. TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(BASE_DIR, 'templates'),], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] .. ``` My directory structure: ``` E:. | db.sqlite3 | manage.py | tree.txt | tree1.txt | +---site1 | | admin.py | | apps.py | | models.py | | tests.py | | urls.py | | views.py | | __init__.py | | | +---migrations | | __init__.py | | | +---templates | | \---site1 | | \---templates | | index.html | | | \---__pycache__ | models.cpython-36.pyc | urls.cpython-36.pyc | views.cpython-36.pyc | __init__.cpython-36.pyc | +---django_site | | settings.py | | urls.py | | wsgi.py | | __init__.py | | | \---__pycache__ | settings.cpython-36.pyc | urls.cpython-36.pyc | wsgi.cpython-36.pyc | __init__.cpython-36.pyc | \---polls | admin.py | apps.py | models.py | tests.py | urls.py | views.py | __init__.py | +---migrations | | 0001_initial.py | | 0002_auto_20180214_0906.py | | __init__.py | | | \---__pycache__ | 0001_initial.cpython-36.pyc | 0002_auto_20180214_0906.cpython-36.pyc | __init__.cpython-36.pyc | +---static | jquery-3.3.1.min.js | +---templates | \---polls | \---templates | index.html | \---__pycache__ admin.cpython-36.pyc apps.cpython-36.pyc models.cpython-36.pyc urls.cpython-36.pyc views.cpython-36.pyc __init__.cpython-36.pyc ``` My `urls.py` in `django_site` ``` from django.urls import include, path from django.contrib import admin urlpatterns = [ path('polls/', include('polls.urls')), path('site1/', include('site1.urls')), path('admin/', admin.site.urls), ] ``` My `urls.py` in `site1` or `polls` (they are the same): ``` from django.urls import path from . import views urlpatterns = [ path('', views.index, name='index'), ] ```
2018/03/01
[ "https://Stackoverflow.com/questions/49053579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/546678/" ]
You have to configure the urls like this. In your `django_site` directory you have a `urls.py` file; it has to pull in the urls from all your django apps:

```
from django.urls import include, path
from django.contrib import admin

urlpatterns = [
    path('polls/', include('polls.urls')),
    path('site1/', include('site1.urls')),
]
```

Then, inside each django app, you should have:

```
from django.urls import include, path
from polls import views

urlpatterns = [
    path('', views.your_function, name="index"),
]
```

and the same thing for `site1`:

```
from django.urls import include, path
from site1 import views

urlpatterns = [
    path('', views.your_function, name="index"),
]
```

`views.your_function` has to return your index template, `index.html`. Your function could be:

```
def MyFunction(request):
    return render(request, 'index.html')
```

In this way, the url pattern becomes: `path('', views.MyFunction, name="index"),`
I solved the problem as follows. In `views.py` in `site1` or `polls`:

```
def index(request):
    return render(request, 'polls/index.html', context)
```

And inside `django_site` I created a folder `templates` containing two sub-folders, `site1` and `polls`. In each sub-folder I put the corresponding app's `index.html`.
11,457
30,013,356
I have a list of about 10,000 email addresses with incomplete email ids, due to data unreliability, and would like to know how I can complete them using python. Sample emails:

```
xyz@gmail.co
xyz@gmail.
xyz@gma
xyz@g
```

I've tried using the `validate_email` package to filter out bad emails, and I've tried various regex patterns, but I end up with `xyz@gmail.com.co`, similar to search and replace using sublime text. I think there is a better way to do this than regex and would like to know it.
2015/05/03
[ "https://Stackoverflow.com/questions/30013356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/394449/" ]
A strategy to consider is to build a "trie" data structure for the domains that you have, such as `gma` and `gmail.co`. Then, where a domain is a prefix of another domain, you can consider going down the longer branch of the trie if there is a unique such branch. In your example this would mean ultimately replacing `gma` with `gmail.co`. There is an answer concerning [how to create a trie in Python](https://stackoverflow.com/questions/11015320/how-to-create-a-trie-in-python).
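As an illustration of the idea, here is a minimal sketch that completes a truncated domain only when it is the prefix of exactly one known domain (the whitelist is an assumption; for a handful of domains no real trie library is needed):

```
KNOWN_DOMAINS = ["gmail.com", "yahoo.com", "hotmail.com"]  # assumed whitelist

def complete_email(addr):
    local, _, domain = addr.partition("@")
    matches = [d for d in KNOWN_DOMAINS if d.startswith(domain)]
    if len(matches) == 1:  # a unique branch: safe to complete
        return "{}@{}".format(local, matches[0])
    return addr  # ambiguous or unknown: leave unchanged

for e in ["xyz@gmail.co", "xyz@gmail.", "xyz@gma", "xyz@g"]:
    print("{} -> {}".format(e, complete_email(e)))
```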
```
def email_check():
    # read the incomplete addresses and write completed ones to the output file
    with open('/home/cam/Desktop/email.dat') as f, \
         open('/home/cam/Desktop/out.dat', 'w') as fo:
        for line in f:
            line = line.strip()
            at_pos = line.find('@')
            if at_pos == -1 or at_pos + 1 >= len(line):
                fo.write(line + '\n')  # nothing after '@' to complete
                continue
            if line[at_pos + 1] == 'g':
                line = line[:at_pos + 1] + 'gmail.com'
            elif line[at_pos + 1] == 'y':
                line = line[:at_pos + 1] + 'yahoomail.com'
            elif line[at_pos + 1] == 'h':
                line = line[:at_pos + 1] + 'hotmail.com'
            fo.write(line + '\n')

email_check()
```
11,458
53,252,181
I'm a newby with Spark and trying to complete a Spark tutorial: [link to tutorial](https://www.youtube.com/watch?v=3CPI2D_QD44&index=4&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv) After installing it on local machine (Win10 64, Python 3, Spark 2.4.0) and setting all env variables (HADOOP\_HOME, SPARK\_HOME etc) I'm trying to run a simple Spark job via WordCount.py file: ``` from pyspark import SparkContext, SparkConf if __name__ == "__main__": conf = SparkConf().setAppName("word count").setMaster("local[2]") sc = SparkContext(conf = conf) lines = sc.textFile("C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/in/word_count.text") words = lines.flatMap(lambda line: line.split(" ")) wordCounts = words.countByValue() for word, count in wordCounts.items(): print("{} : {}".format(word, count)) ``` After running it from terminal: ``` spark-submit WordCount.py ``` I get below error. I checked (by commenting out line by line) that it crashes at ``` wordCounts = words.countByValue() ``` Any idea what should I check to make it work? ``` Traceback (most recent call last): File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 25, in <module> ModuleNotFoundError: No module named 'resource' 18/11/10 23:16:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0) org.apache.spark.SparkException: Python worker failed to connect back. at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170) at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97) at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117) at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108) at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:121) at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.net.SocketTimeoutException: Accept timed out at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method) at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source) at java.net.AbstractPlainSocketImpl.accept(Unknown Source) at java.net.PlainSocketImpl.accept(Unknown Source) at java.net.ServerSocket.implAccept(Unknown Source) at java.net.ServerSocket.accept(Unknown Source) at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164) ... 
14 more 18/11/10 23:16:58 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job Traceback (most recent call last): File "C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/rdd/WordCount.py", line 19, in <module> wordCounts = words.countByValue() File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 1261, in countByValue File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 844, in reduce File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 816, in collect File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__ File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back. at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170) at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97) at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117) at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108) at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:121) at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.net.SocketTimeoutException: Accept timed out at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method) at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source) at java.net.AbstractPlainSocketImpl.accept(Unknown Source) at java.net.PlainSocketImpl.accept(Unknown Source) at java.net.ServerSocket.implAccept(Unknown Source) at java.net.ServerSocket.accept(Unknown Source) at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164) ... 
14 more Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126) at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) at org.apache.spark.rdd.RDD.withScope(RDD.scala:363) at org.apache.spark.rdd.RDD.collect(RDD.scala:944) at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166) at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Unknown Source) Caused by: org.apache.spark.SparkException: Python worker failed to connect back. 
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170) at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97) at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117) at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108) at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:121) at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ... 1 more Caused by: java.net.SocketTimeoutException: Accept timed out at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method) at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source) at java.net.AbstractPlainSocketImpl.accept(Unknown Source) at java.net.PlainSocketImpl.accept(Unknown Source) at java.net.ServerSocket.implAccept(Unknown Source) at java.net.ServerSocket.accept(Unknown Source) at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164) ... 14 more ``` As suggested by theplatypus - checked if the 'resource' module can be imported directly from terminal - apparently not: ``` >>> import resource Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'resource' ``` In terms of installation resources - I followed instructions from [this tutorial](https://www.youtube.com/watch?v=iarn1KHeouc&index=3&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv): 1. downloaded spark-2.4.0-bin-hadoop2.7.tgz from [Apache Spark website](https://spark.apache.org/downloads.html) 2. un-zipped it to my C-drive 3. already had Python\_3 installed (Anaconda distribution) as well as Java 4. created local 'C:\hadoop\bin' folder to store winutils.exe 5. created 'C:\tmp\hive' folder and gave Spark access to it 6. added environment variables (SPARK\_HOME, HADOOP\_HOME etc) Is there any extra resource I should install?
2018/11/11
[ "https://Stackoverflow.com/questions/53252181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10637265/" ]
The heart of the problem is the connection between pyspark and Python; it can be solved by redefining the environment variables. I just changed the value of `PYSPARK_DRIVER_PYTHON` from `ipython` to `jupyter` and `PYSPARK_PYTHON` from `python3` to `python`. Now I'm using Jupyter Notebook, Python 3.7, Java JDK 11.0.6, and Spark 2.4.2.
I had the same issue. I had set all the environment variables correctly but still wasn't able to resolve it.

In my case, adding

```
import findspark
findspark.init()
```

before even creating the SparkSession helped.

I was using `Visual Studio Code` on `Windows 10`, the Spark version was `3.2.0`, and the Python version was `3.9`.

Note: Initially check that the paths for `HADOOP_HOME`, `SPARK_HOME`, and `PYSPARK_PYTHON` have been set correctly.
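A related sketch applies the same fixes from inside the script itself, so the workers use the exact interpreter that runs the driver (this assumes `findspark` is installed; names mirror the question):

```
import os
import sys

# make the Spark workers use the same interpreter as the driver;
# a mismatch is a common cause of "Python worker failed to connect back"
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable

import findspark
findspark.init()  # adds pyspark to sys.path based on SPARK_HOME

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("word count").setMaster("local[2]")
sc = SparkContext(conf=conf)
```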
11,459
25,484,269
I am not an experienced programmer, and I have a problem with my code. I think it's a logical mistake of mine, but I couldn't find an answer at <http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/whilestatements.html>. What I want is to check whether the serial device is locked. The difference between the "locked" and "not locked" conditions is that while it isn't locked, the line which contains the letters `GPGGA` also contains four consecutive commas `,,,,`. So I want my code to start once there is no `,,,,`, but I guess my loop is wrong. Any suggestions will be appreciated. Thanks in advance.

```
import serial
import time
import subprocess

file = open("/home/pi/allofthedatacollected.csv", "w")  # "w" will be "a" later
file.write('\n')

while True:
    ser = serial.Serial("/dev/ttyUSB0", 4800, timeout=1)
    checking = ser.readline();
    if checking.find(",,,,"):
        print "not locked yet"
        True
    else:
        False
        print "locked and loaded"
```

. . .
2014/08/25
[ "https://Stackoverflow.com/questions/25484269", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3918035/" ]
Use `break` to exit a loop. Note also that `str.find()` returns an index (or -1 when the substring is absent), so testing its result for truthiness doesn't do what you want; use the `in` operator instead:

```
while True:
    ser = serial.Serial("/dev/ttyUSB0", 4800, timeout=1)
    checking = ser.readline();
    if ",,,," in checking:
        print "not locked yet"
    else:
        print "locked and loaded"
        break
```

The `True` and `False` lines didn't do anything in your code; they are just referencing the built-in boolean values without assigning them anywhere.
You can use a variable as the condition for your `while` loop instead of just `while True`. That way you can change the condition. So instead of having this code:

```
while True:
    ...
    if ...:
        True
    else:
        False
```

... try this:

```
keepGoing = True
while keepGoing:
    ser = serial.Serial("/dev/ttyUSB0", 4800, timeout=1)
    checking = ser.readline();
    if ",,,," in checking:
        print "not locked yet"
        keepGoing = True
    else:
        keepGoing = False
        print "locked and loaded"
```

EDIT: Or, as another answerer suggests, you can just `break` out of the loop :)
11,469
31,018,497
I'm a newbie to python. I was trying to display a time duration. What I did was:

```
startTime = datetime.datetime.now().replace(microsecond=0)
... <some more code> ...
endTime = datetime.datetime.now().replace(microsecond=0)

durationTime = endTime - startTime
print("The duration is " + str(durationTime))
```

The output is => The duration is 0:01:28

Can I know how to remove the hours from the result? I want to display => The duration is 01:28

Thanks in advance!
2015/06/24
[ "https://Stackoverflow.com/questions/31018497", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4528322/" ]
You can split your timedelta as follows:

```
>>> hours, remainder = divmod(durationTime.total_seconds(), 3600)
>>> minutes, seconds = divmod(remainder, 60)
>>> print '%02d:%02d' % (minutes, seconds)
```

This uses python's builtin divmod to convert the number of seconds in your timedelta to hours, and the remainder is then used to calculate the minutes and seconds. You can then explicitly print only the units of time you want; the `%02d` format also zero-pads them, giving `01:28`.
You can do this by converting `durationTime`, which is a `datetime.timedelta` object, to a `datetime.time` object and then using `strftime`. For durations under an hour (each argument to `datetime.time` must be in range, so the seconds have to be split into minutes and seconds first):

```
print datetime.time(0, *divmod(durationTime.seconds, 60)).strftime("%M:%S")
```

Another way would be to manipulate the string:

```
print ':'.join(str(durationTime).split(':')[1:])
```
11,470
42,157,406
I'm a beginner with mongodb and python, and I'm trying to write python code that deletes the documents of multiple collections older than 30 days, based on a date field which is of NumberLong type; the collections also have to be exported to CSV before deleting. As a first step, I tried the simple code below to print the matching records, using new Date(). It works in the Mongo shell, but in python it fails with a syntax error. Please help.

Sample data:

```
{ "_id" : ObjectId("589d6eb390cc70b775892ae1"), "trgtm" : NumberLong("1486280499661") }
{ "_id" : ObjectId("589d602d2fa2fa6687bc7293"), "trgtm" : NumberLong("1486276781059") }
{ "_id" : ObjectId("589d701f90cc70b775892ae2"), "trgtm" : NumberLong("1486194463192") }
{ "_id" : ObjectId("589d702390cc70b775892ae3"), "trgtm" : NumberLong("1486108067444") }
```

Code:

```
import pymongo
from pymongo import MongoClient

conn = MongoClient('localhost', 27017)
db = conn.mydb
col = db.test

query = { "date": { "$lt": new Date(new Date()).getTime() - 30 * 24 * 60 * 60 * 1000 } }
cursor = col.find(query)
slice = cursor[0:100]
for doc in slice:
    print doc
```
2017/02/10
[ "https://Stackoverflow.com/questions/42157406", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7544956/" ]
I use datetime for dates in python. This is an example:

```
import datetime

date = datetime.date(2017, 2, 10)
otherDate = datetime.date(1999, 1, 1)

date < otherDate  # False
```
You have to use [datetime.datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime) like this: `query = {"date": {"$lt": datetime.datetime(2021, 6, 14, 0, 0, 0, 0)}}` To use the current time you can do: `query = {"date": {"$lt": datetime.now(timezone.utc)}}` Source: <https://pymongo.readthedocs.io/en/stable/examples/datetimes.html>
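Note that in the sample data from the question the timestamp (`trgtm`) is stored as a `NumberLong` of epoch *milliseconds*, not as a BSON date, so the 30-day cutoff can also be computed as a plain integer; a minimal sketch using the names from the question:

```
import time

from pymongo import MongoClient

conn = MongoClient("localhost", 27017)
col = conn.mydb.test

# 30 days ago, expressed in epoch milliseconds to match the NumberLong field
cutoff_ms = int((time.time() - 30 * 24 * 60 * 60) * 1000)

for doc in col.find({"trgtm": {"$lt": cutoff_ms}}).limit(100):
    print(doc)
```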
11,471
41,403,465
My flask app is outputting 'no content' for the `for` block and I don't know why. I tested my query in `app.py`; here is `app.py`:

```
# mysql config
app.config['MYSQL_DATABASE_USER'] = 'user'
app.config['MYSQL_DATABASE_PASSWORD'] = 'mypass'
app.config['MYSQL_DATABASE_DB'] = 'mydbname'
app.config['MYSQL_DATABASE_HOST'] = 'xxx.xxx.xxx.xxx'
mysql = MySQL()
mysql.init_app(app)

c = mysql.connect().cursor()
blogposts = c.execute("SELECT post_title, post_name FROM mydbname.wp_posts WHERE post_status='publish' ORDER BY RAND() LIMIT 3")

@app.route('/', methods=('GET', 'POST'))
def email():
    form = EmailForm()
    if request.method == 'POST':
        if form.validate() == False:
            return 'Please fill in all fields <p><a href="/">Try Again</a></p>'
        else:
            msg = Message("Message from your visitor", sender='contact@site.com', recipients=['contact@site.com'])
            msg.body = """ From: %s """ % (form.email.data)
            mail.send(msg)
            #return "Successfully sent message!"
            return render_template('email_submit_thankyou.html')
    elif request.method == 'GET':
        return render_template('index.html', blogposts)
```

This is what my `app.py` looks like. Below is my `index.html` in `templates/`:

```
<ul>
  {% for blogpost in blogposts %}
  <li><a href="{{blogpost[1]}}">{{blogpost[0]}}</a></li>
  {% else %}
  <li>no content...</li>
  {% endfor %}
  <div class="clearL"> </div>
</ul>
```

I checked the query, and it returns the desired output, like:

```
ID  post_title     post_name
1   a title here   a-title-here
2   a title here2  a-title-here2
3   a title here3  a-title-here3
```

If I try to restart the dev server by running `export FLASK_APP=app.py` then `flask run`, I get an error of:

```
Error: The file/path provided (app) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
```

I've also tried running via `export FLASK_APP=app.py` then `python -m flask run`; this also gives the same error. Thoughts on how to resolve?
2016/12/30
[ "https://Stackoverflow.com/questions/41403465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/700070/" ]
You haven't got anything in your template called `blogposts`. You need to use keyword arguments to pass the data: ``` return render_template('index.html', blogposts=blogposts) ``` Also note you should really do that query inside the function, otherwise it will only ever execute on process start and you'll always have the same three posts.
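Putting both points together, the GET branch could look something like this (a sketch reusing the names from the question; `fetchall()` is needed because `execute()` on a DB-API cursor does not itself return the rows):

```
@app.route('/', methods=('GET', 'POST'))
def email():
    form = EmailForm()
    if request.method == 'POST':
        ...  # unchanged POST handling
    elif request.method == 'GET':
        # query per request, so new posts show up without a restart
        c = mysql.connect().cursor()
        c.execute("SELECT post_title, post_name FROM mydbname.wp_posts "
                  "WHERE post_status='publish' ORDER BY RAND() LIMIT 3")
        blogposts = c.fetchall()
        return render_template('index.html', blogposts=blogposts)
```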
I had the same problem. I solved it by changing the terminal's directory to the folder containing `app.py` and then running:

```
export FLASK_APP=app.py
```

Then you should run:

```
python -m flask run
```
11,473
35,538,814
I have a bunch of `.java` files in a directory and I want to compile all of them to `.class` files via python code. As you know, the `Javac` command line tool is the tool that I must use and it require the name of `.java` files to be equal with the class name. Unfortunately for my `.java` files, it isn't. I mean they have different random names that are not equal with their class names. So I need to extract the name of classes from the contents of `.java` files. It was simple if the line of class definition was specified, but it isn't. The `.java` file may contains some comments in the top that may contain *class* or *package* words too. The question is how I can extract package and class name of each file? For example this is contents of one of them: ```java //This is a sample package that its class name is HelloWorldApplet. in this package we blah blah blah and this class blah blah blah. package helloWorldPackage; //This is another comment that may or may not have the word "package" and "class" inside. import javacard.framework.APDU; import javacard.framework.Applet; import javacard.framework.ISO7816; import javacard.framework.ISOException; import javacard.framework.Util; /* this is also a multi line comment. blah blah blah package, blah blah blah package ... */ public class HelloWorldApplet extends Applet { private static final byte[] helloWorld = {(byte)'H',(byte)'e',(byte)'l',(byte)'l',(byte)'o',(byte)' ',(byte)'W',(byte)'o',(byte)'r',(byte)'l',(byte)'d',}; private static final byte HW_CLA = (byte)0x80; private static final byte HW_INS = (byte)0x00; public static void install(byte[] bArray, short bOffset, byte bLength) { new HelloWorldApplet().register(bArray, (short) (bOffset + 1), bArray[bOffset]); } public void process(APDU apdu) { if (selectingApplet()) { return; } byte[] buffer = apdu.getBuffer(); byte CLA = (byte) (buffer[ISO7816.OFFSET_CLA] & 0xFF); byte INS = (byte) (buffer[ISO7816.OFFSET_INS] & 0xFF); if (CLA != HW_CLA) { ISOException.throwIt(ISO7816.SW_CLA_NOT_SUPPORTED); } switch ( INS ) { case HW_INS: getHelloWorld( apdu ); break; default: ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED); } } private void getHelloWorld( APDU apdu) { byte[] buffer = apdu.getBuffer(); short length = (short) helloWorld.length; Util.arrayCopyNonAtomic(helloWorld, (short)0, buffer, (short)0, (short) length); apdu.setOutgoingAndSend((short)0, length); } } ``` How can I extract package name (i.e. `helloWorldPackage`) and class name(i.e `HelloWorldApplet`) of each file? Note that, the `.java` files may have different classes inside, but I need the name of that class that extends `Applet` only. **Update:** I tried the followings, but they didn't worked (Python 2.7.10): ``` import re prgFile = open(r"yourFile\New Text Document.txt","r") contents = prgFile.read() x = re.match(r"(?<=class)\b.*\b(?=extends Applet)",contents) print x x = re.match(r"^(public)+",contents) print x x = re.match(r"^package ([^;\n]+)",contents) print x x = re.match(r"(?<=^public class )\b.*\b(?= extends Applet)",contents) print x ``` Output: ``` >>> ================================ RESTART ================================ >>> None None None None >>> ```
2016/02/21
[ "https://Stackoverflow.com/questions/35538814", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3580433/" ]
In many cases a simple regex will work. If you want to be 100% certain, I suggest using a full-blown Java parser like [javalang](https://github.com/c2nes/javalang) to parse each file, then walking the AST to pull out the class name. Something like:

```
import glob
import javalang

# look at all .java files in the working directory
for fname in glob.glob("*.java"):
    # load the sourcecode
    with open(fname) as inf:
        sourcecode = inf.read()
    try:
        # parse it to an Abstract Syntax Tree
        tree = javalang.parse.parse(sourcecode)
        # get package name
        pkg = tree.package.name
        # look at all class declarations
        for path, node in tree.filter(javalang.tree.ClassDeclaration):
            # if the class extends Applet (guarding against classes
            # that have no extends clause at all)
            if node.extends is not None and node.extends.name == 'Applet':
                # print the class name
                print("{}: package {}, main class is {}".format(fname, pkg, node.name))
    except javalang.parser.JavaSyntaxError as je:
        # report any files which don't parse properly
        print("Error parsing {}: {}".format(fname, je))
```

which gives

```
sample.java: package helloWorldPackage, main class is HelloWorldApplet
```
This regex works for me: `(?<=^public class )\b.*\b(?= extends Applet)`. To use it correctly, compile it with the `MULTILINE` flag (so that `^` matches at the start of every line) and search with `re.search` rather than `re.match`; `re.match` only matches at the very beginning of the string, which is why the attempts in the question all printed `None`:

```
pattern = re.compile(ur'(?<=^public class )\b.*\b(?= extends Applet)', re.MULTILINE)
print pattern.search(contents).group()
```
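For reference, a rough self-contained sketch that pulls out both the package and the class name this way (the file path is illustrative; this handles the sample file in the question, but a real parser as in the other answer is more robust against unusual formatting):

```
import re

with open("HelloWorld.java") as f:  # illustrative path
    contents = f.read()

# MULTILINE makes ^ match at the start of each line; the commented lines in
# the sample file never begin with "package" or "public class", so they
# cannot match these anchored patterns
pkg = re.search(r"^package\s+([\w.]+)\s*;", contents, re.MULTILINE)
cls = re.search(r"^public\s+class\s+(\w+)\s+extends\s+Applet", contents, re.MULTILINE)

print(pkg.group(1))  # helloWorldPackage
print(cls.group(1))  # HelloWorldApplet
```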
11,474
42,501,900
I just wanted to ask you all what `fitfunc` and `errfunc`, passed to `scipy.optimize.leastsq`, mean intuitively. I am not really used to python, but I would like to understand this. Here is the code that I am trying to understand:

```
def optimize_parameters2(p0, mz):
    fitfunc = lambda p, p0, mz: calculate_sp2(p, p0, mz)
    errfunc = lambda p, p0, mz: exp - fitfunc(p, p0, mz)
    return scipy.optimize.leastsq(errfunc, p0, args=(p0, mz))
```

Can someone please explain what this code is saying, narratively, word by word? Sorry for being so specific, but I really do have trouble understanding what it's saying.
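For context, here is the standard pattern that snippet follows, as a self-contained sketch with a made-up linear model (`calculate_sp2` and `exp` in the snippet are presumably the model function and the experimental data, respectively):

```
import numpy as np
from scipy.optimize import leastsq

# fake "experimental" data: y = 2x + 1 plus noise
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + np.random.normal(scale=0.5, size=x.size)

fitfunc = lambda p, x: p[0] * x + p[1]       # the model: predicts y from parameters p
errfunc = lambda p, x, y: y - fitfunc(p, x)  # the residuals that leastsq minimizes

p0 = [1.0, 0.0]  # initial guess for the parameters
p_best, ier = leastsq(errfunc, p0, args=(x, y))
print(p_best)  # should be close to [2.0, 1.0]
```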
2017/02/28
[ "https://Stackoverflow.com/questions/42501900", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6646128/" ]
Instead of the `change` event, trigger the modal on the `click` event of the `select option`, like:

```
$('select option').on('click', function(){
    $("#modelId").modal('show');
});
```
You can also use the modal function:

```
$('select option').on('click', function(){
    $('#modelId').modal('show');
});
```
11,476
58,370,832
I want to include an XML file in another XML file and parse it with python. I am trying to achieve this through XInclude. There is a file1.xml which looks like:

```
<?xml version="1.0"?>
<root>
  <document xmlns:xi="http://www.w3.org/2001/XInclude">
    <xi:include href="file2.xml" parse="xml" />
  </document>
  <test>some text</test>
</root>
```

and a file2.xml which looks like:

```
<para>This is a paragraph.</para>
```

Now in my python code I tried to access it like:

```
from xml.etree import ElementTree, ElementInclude

tree = ElementTree.parse("file1.xml")
root = tree.getroot()
for child in root.getchildren():
    print child.tag
```

It prints the tag of all child elements of root:

```
document
test
```

But when I try to print the child objects directly, like

```
print root.document
print root.test
```

it says root doesn't have children named test or document. Then how am I supposed to access the content in file2.xml?

I know that I can access the XML elements from python with a schema, like:

```
schema = etree.XMLSchema(objectify.fromstring(configSchema))
xmlParser = objectify.makeparser(schema=schema)
cfg = objectify.fromstring(xmlContents, xmlParser)
print cfg.elementName  # access element
```

But since here one XML file is included in another, I am confused about how to write the schema. How can I solve this?
2019/10/14
[ "https://Stackoverflow.com/questions/58370832", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3186922/" ]
Not sure why you want to use XInclude, but including an XML file in another one is a basic mechanism of SGML and XML, and can be achieved without XInclude as simply as:

```
<!DOCTYPE root [
  <!ENTITY externaldoc SYSTEM "file2.xml">
]>
<root>
  <document>
    &externaldoc;
  </document>
  <test>some text</test>
</root>
```
You need to make xml.etree actually include the files referenced with xi:include. I have added the key line to your original example:

```
from xml.etree import ElementTree, ElementInclude

tree = ElementTree.parse("file1.xml")
root = tree.getroot()

# here you make the parser actually include every referenced file
ElementInclude.include(root)

# and now you are good to go
for child in root.getchildren():
    print child.tag
```

For a detailed reference about includes in python, see the includes section in the official Python documentation: <https://docs.python.org/3/library/xml.etree.elementtree.html>
11,479
48,848,829
I'm coming from Python, so I'm trying to figure out basic things in Js. I have the following:

```
{
  name: 'Jobs ',
  path: '/plat/jobs',
  meta: {
    label: 'jobs',
    link: 'jobs/Basic.vue'
  },
```

What I want is to create this block for each element in a list using a for loop (including the brackets). In python this would be something like:

```
for i in items:
    {
        name: i.name,
        path: i.path,
        meta: {
            label: i.label,
            link: i.link
        }
    }
```

How do I do this in js? What even is the object type here? Is it just a javascript dictionary?

```
children: [
  let new_items = items.map(i => ({
    name: i.name,
    path: i.path,
    meta: {
      label: i.label,
      link: i.link
    }
  }));
  console.log(new_items);
  component: lazyLoading('android/Basic')
  }
]
}
```

I don't think this will work, because I need each dictionary listed under children.
2018/02/18
[ "https://Stackoverflow.com/questions/48848829", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2680978/" ]
Included is an example that operates on an array of objects, returning a new array of objects. The term "object" is overloaded in JavaScript, but in this case it is synonymous with what other languages call a hash, map, or dictionary (depending on the application).

```js
let items = [{
  name: 'foo',
  path: 'foopath',
  label: 'foolabel',
  link: 'foolink'
}, {
  name: 'bar',
  path: 'barpath',
  label: 'barlabel',
  link: 'barlink'
}];

let new_items = items.map(i => ({
  name: i.name,
  path: i.path,
  meta: {
    label: i.label,
    link: i.link
  }
}));

console.log(new_items);
```

That said, a new array is not necessary, since it's possible to edit the objects in place:

```js
let items = [{
  name: 'foo',
  path: 'foopath',
  label: 'foolabel',
  link: 'foolink'
}, {
  name: 'bar',
  path: 'barpath',
  label: 'barlabel',
  link: 'barlink'
}];

items.forEach(i => {
  i.meta = {label: i.label, link: i.link};
  delete i.label;
  delete i.link;
});

console.log(items);
```
I would use the array's map function:

```
items.map(i => {
  return {
    name: i.name,
    path: i.path,
    meta: {
      label: i.label,
      link: i.link
    }
  }
})
```

This will return a new list of items as specified. NOTE: this can be simplified further with an implicit return:

```
items.map(i => ({
  name: i.name,
  path: i.path,
  meta: {
    label: i.label,
    link: i.link
  }
}))
```
11,481
45,705,876
Say I am working with OpenGL in python. Often times you make a call such as

```
glutDisplayFunc(display)
```

where display is a function that you have written. What if I have a class

```
class foo:
    #self.x=5
    def display(self, maybe some other variables):
        #run some code
        print("Hooray!, X is:", self.x)
```

and I want to pass that display function and have it print "Hooray!, X is: 5". Would I pass it as

```
h = foo()
glutDisplayFunc(h.display)
```

Could this work?
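For what it's worth, the pattern in the question can be checked without OpenGL at all: a bound method such as `h.display` carries its instance with it, so it can be handed to any callback registrar. A minimal sketch; `registerDisplayFunc` below is a hypothetical stand-in for `glutDisplayFunc`, not a real GLUT call:

```python
class Foo:
    def __init__(self):
        self.x = 5

    def display(self):
        print("Hooray!, X is:", self.x)

# hypothetical stand-in for glutDisplayFunc: store a callback, invoke it later
_callback = None

def registerDisplayFunc(func):
    global _callback
    _callback = func

h = Foo()
registerDisplayFunc(h.display)  # the bound method remembers h
_callback()                     # prints: Hooray!, X is: 5
```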
2017/08/16
[ "https://Stackoverflow.com/questions/45705876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4327053/" ]
You can call a method of another component from a different component, but it will not update the value of the calling component without some tweaking, such as `Event Emitters` (if they have a parent-child relationship), `Shared Services`, or the `ngrx redux pattern`. Calling a different component's method looks like this:

Component1

```
test(){
  console.log("Test");
}
```

Component 2

```
working(){
  let component = new Component1();
  component.test();
}
```

Now, to update the value in component 2, you might have to use any of the above.

For Event Emitters follow this [link](https://rahulrsingh09.github.io/AngularConcepts/inout)

For Shared Services follow this [link](https://rahulrsingh09.github.io/AngularConcepts/faq)

For ngrx follow this [link](https://rahulrsingh09.github.io/AngularConcepts/ngrx)
You **`cannot`** do that directly. There are two possible ways you could achieve this:

1. use an **[`angular service`](https://angular.io/tutorial/toh-pt4)** to pass the data between the two components
2. use **[`Event Emitters`](https://angular.io/api/core/EventEmitter)** to pass the value among the components
11,482
5,369,546
I tried to set up a local Python install directory in the following way, but it did not work: it fails with "error: bad install directory or PYTHONPATH". [What's the proper way to install pip, virtualenv, and distribute for Python?](https://stackoverflow.com/questions/4324558/whats-the-proper-way-to-install-pip-virtualenv-and-distribute-for-python/4325047#4325047)

Make a directory

```
$ mkdir -p ~/.python
```

Add to .bashrc

```
#Use local python
export PATH=$HOME/.python/bin:$PATH
export PYTHONPATH=$HOME/.python
```

Create a file ~/.pydistutils.cfg

```
[install]
prefix=~/.python
```

Get the install script

```
$ cd ~/src
$ curl -O http://python-distribute.org/distribute_setup.py
```

Execute it, and the error:

```
$ python ./distribute_setup.py
Extracting in /tmp/tmpsT2kdA
Now working in /tmp/tmpsT2kdA/distribute-0.6.15
Installing Distribute
Before install bootstrap.
Scanning installed packages
No setuptools distribution foundrunning install
Checking .pth file support in /home/sane/.python/lib/python2.6/site-packages/
/usr/bin/python -E -c pass
TEST FAILED: /home/sane/.python/lib/python2.6/site-packages/ does NOT support .pth files
error: bad install directory or PYTHONPATH

You are attempting to install a package to a directory that is not
on PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:

    /home/sane/.python/lib/python2.6/site-packages/

and your PYTHONPATH environment variable currently contains:

    '/home/sane/.python'

Here are some of your options for correcting the problem:

* You can choose a different installation directory, i.e., one that is
  on PYTHONPATH or supports .pth files

* You can add the installation directory to the PYTHONPATH environment
  variable. (It must then also be on PYTHONPATH whenever you run Python
  and want to use the package(s) you are installing.)

* You can set up the installation directory to support ".pth" files by
  using one of the approaches described here:
  http://packages.python.org/distribute/easy_install.html#custom-installation-locations

Please make the appropriate changes for your system and try again.
Something went wrong during the installation.
See the error message above.
```

My environment ('sane' is my unix user name):

```
$ python -V
Python 2.6.4
$ which python
/usr/bin/python
$ uname -a
Linux localhost.localdomain 2.6.34.8-68.fc13.x86_64 #1 SMP Thu Feb 17 15:03:58 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
```
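The error text itself points at the likely fix: PYTHONPATH contains `/home/sane/.python`, but the directory distribute actually installs into is `/home/sane/.python/lib/python2.6/site-packages`, and that exact directory is what must be importable. A minimal sketch of the check, with the paths assumed from the question:

```python
import os
import sys

# the directory the installer complained about; adjust prefix/version to taste
site_pkgs = os.path.expanduser("~/.python/lib/python2.6/site-packages")

# if this prints False, point PYTHONPATH at site_pkgs (not at ~/.python), e.g.
#   export PYTHONPATH=$HOME/.python/lib/python2.6/site-packages
print(site_pkgs in sys.path)
```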
2011/03/20
[ "https://Stackoverflow.com/questions/5369546", "https://Stackoverflow.com", "https://Stackoverflow.com/users/104080/" ]
For completeness, here's another way of causing this:

```
@if(condition) {
    <input type="hidden" value="@value">
}
```

The problem is that the unclosed element makes it not obvious enough that the content is an HTML block (but we aren't *always* doing XHTML, right?). In this scenario, you can use:

```
@if(condition) {
    @:<input type="hidden" value="@value">
}
```

or

```
@if(condition) {
    <text><input type="hidden" value="@value"></text>
}
```
I've run into this issue with Razor. I'm not sure if it's a bug in the parser or what, but the way I've solved it is to break up:

```
@using(Html.BeginForm()) {
    <h1>Example</h1>

    @foreach (var post in Model.Posts) {
        Html.RenderPartial("ShowPostPartial", post);
    }
}
```

into:

```
@{ Html.BeginForm(); }

<h1>Example</h1>

@foreach (var post in Model.Posts) {
    Html.RenderPartial("ShowPostPartial", post);
}

@{ Html.EndForm(); }
```
11,483
40,619,916
I am supposed to check whether a word or sentence is a palindrome using code. I was able to check words, but I'm having trouble checking sentences. Here's my code; it's short, but I'm not sure how else to extend it to check for sentence palindromes. I'm sort of a beginner at Python, and I've already looked at other people's code, but it is too complicated for me to **really** understand. Here's what I wrote:

```
def is_palindrome(s):
    if s[::1] == s[::-1]:
        return True
    else:
        return False
```

Here is an example of a sentence palindrome: "Red Roses run no risk, sir, on nurses order." (if you ignore spaces and special characters)
2016/11/15
[ "https://Stackoverflow.com/questions/40619916", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7087165/" ]
```
import string

def is_palindrome(s):
    # keep only the lowercase letters, dropping spaces and punctuation
    whitelist = set(string.ascii_lowercase)
    s = s.lower()
    s = ''.join([char for char in s if char in whitelist])
    # a palindrome reads the same forwards and backwards
    return s == s[::-1]
```
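A quick usage check with the sentence from the question (Python 3 print shown):

```python
print(is_palindrome("Red Roses run no risk, sir, on nurses order."))  # True
print(is_palindrome("This is not a palindrome."))                     # False
```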
Without a for loop:

```
word_palindrome = str(input("Please enter a word: "))
# lowercase the input and strip spaces (note: punctuation is left untouched)
new_word = word_palindrome.lower().replace(" ", "")
if new_word[::1] == new_word[::-1]:
    print("OK")
else:
    print("NOT")
```
11,489
39,834,949
I'm aware this is normally done with twistd, but I'm wanting to use iPython to test out code 'live' on twisted code. [How to start twisted's reactor from ipython](https://stackoverflow.com/questions/4673375/how-to-start-twisteds-reactor-from-ipython) asked basically the same thing but the first solution no longer works with current ipython/twisted, while the second is also unusable (thread raises multiple errors). <https://gist.github.com/kived/8721434> has something called TPython which purports to do this, but running that seems to work except clients never connect to the server (while running the same clients works in the python shell). Do I *have* to use Conch Manhole, or is there a way to get iPython to play nice (probably with \_threadedselect). For reference, I'm asking using ipython 5.0.0, python 2.7.12, twisted 16.4.1
2016/10/03
[ "https://Stackoverflow.com/questions/39834949", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4443898/" ]
Async code in general can be troublesome to run in a live interpreter. It's best just to run an async script in the background and do your iPython stuff in a separate interpreter. You can intercommunicate using files or TCP. If this went over your head, that's because it's not always simple and it might be best to avoid the hassle if possible. However, you'll be happy to know there is an awesome project called `crochet` for using Twisted in non-async applications. It truly is one of my favorite modules and I'm shocked that it's not more widely used (you can change that ;D though). The `crochet` module has a `run_in_reactor` decorator that runs a Twisted reactor in a separate thread managed by `crochet` itself.

Here is a quick class example that executes requests to a Star Wars RESTful API, then stores the JSON response in a list.

```
from __future__ import print_function
import json

from twisted.internet import defer, task
from twisted.web.client import getPage
from crochet import run_in_reactor, setup as setup_crochet

setup_crochet()

class StarWarsPeople(object):
    people_id = [_id for _id in range(1, 89)]
    people = []

    @run_in_reactor
    def requestPeople(self):
        """
        Request Star Wars JSON data from the SWAPI site.
        This occurs in a Twisted reactor in a separate thread.
        """
        for _id in self.people_id:
            url = 'http://swapi.co/api/people/{0}'.format(_id).encode('utf-8')
            d = getPage(url)
            d.addCallback(self.appendJSON)

    def appendJSON(self, response):
        """
        A callback which will take the response from the getPage() request,
        convert it to JSON, then append it to self.people, which can be
        accessed outside of the crochet thread.
        """
        response_json = json.loads(response.decode('utf-8'))
        #print(response_json)    # uncomment if you want to see output
        self.people.append(response_json)
```

Save this in a file (example: `swapi.py`), open iPython, import the newly created module, then run a quick test like so:

```
from swapi import StarWarsPeople
testing = StarWarsPeople()
testing.requestPeople()

from time import sleep
for x in range(5):
    print(len(testing.people))
    sleep(2)
```

As you can see it runs in the background and stuff can still occur in the main thread. You can continue using the iPython interpreter as you usually do. You can even have a manhole running in the background for some cool hacking too!

References
==========

* <https://crochet.readthedocs.io/en/1.5.0/introduction.html#crochet-use-twisted-anywhere>
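Along the same lines, `crochet` also ships a `wait_for` decorator for when you want the call to *block* until the Deferred fires, which is often the more convenient shape inside an interactive session. A small sketch, assuming crochet is installed:

```python
from crochet import setup, wait_for
setup()  # starts the Twisted reactor in a background thread

from twisted.internet import task

@wait_for(timeout=10.0)
def delayed_value(value, delay=1.0):
    """Fire a Deferred after `delay` seconds; wait_for blocks until then."""
    from twisted.internet import reactor
    return task.deferLater(reactor, delay, lambda: value)

print(delayed_value(42))  # blocks about a second, then prints 42
```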
While this doesn't answer the question I thought I had, it does answer (sort of) the question I posted. Embedding ipython works in the sense that you get access to business objects with the reactor running.

```
from twisted.internet import reactor
from twisted.internet.endpoints import serverFromString

from myfactory import MyFactory

class MyClass(object):
    def __init__(self, **kwargs):
        super(MyClass, self).__init__(**kwargs)
        server = serverFromString(reactor, 'tcp:12345')
        server.listen(MyFactory(self))

def interact():
    import IPython
    IPython.embed()

reactor.callInThread(interact)

if __name__ == "__main__":
    myclass = MyClass()
    reactor.run()
```

Call the above with `python myclass.py` or similar.
11,499
24,053,152
I want to convert a date like Jun 28 into a datetime format like 2014-06-28. I tried the following code (and many variations of it), which gives me the correct output in ipython, but I am unable to save the record in the database: it throws the error "value has an invalid date format. It must be in YYYY-MM-DD format." Can anyone help me fix this issue? Here is the code snippet:

```
m = "Jun"
d = 28
y = datetime.datetime.now().year
m = strptime(m,'%b').tm_mon
if m > datetime.datetime.now().month:
    y = y-1
new_date = str(d)+" "+str(m)+" "+str(y)
new_date = datetime.datetime.strptime(new_date, '%b %d %Y').date()
```

my models.py is:

```
class Profile(models.Model):
    Name = models.CharField(max_length = 256, null = True, blank = True)
    Location = models.CharField(max_length = 256, null = True, blank = True)
    Degree = models.CharField(max_length = 256, null = True, blank = True)
    Updated_on = models.DateField(null = True, blank = True)
```

The code that saves to the model is:

```
def save_record(self):
    try:
        record = Profile(Name = indeed.name,
                         Location = loc,
                         Degree = degree,
                         Updated_on = new_date,
                         )
        record.save()
        print "Record added"
    except Exception as err:
        print "Record not added ", err
        pass
```

Thanks in advance
2014/06/05
[ "https://Stackoverflow.com/questions/24053152", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3446107/" ]
Once you have a `date` object, you can use the `strftime()` function to [format](https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior) it into a string. Let's say `new_date` is your date object from your question. Then you can do: ``` new_date.strftime('%Y-%m-%d') ``` Btw, you can do the same with a `datetime` object too. **EDIT:** Double check whether your `Updated_on` field uses `DateField` or `DateTimeField`. That will affect whether you use a `datetime.date()` object or `datetime.datetime()` object, respectively.
I tried on the console:

```
>>import datetime
>>datetime.datetime.strptime("Jun-08-2013", '%b-%d-%Y').date()
datetime.date(2013, 6, 8)
```

There are several errors in the code (the month is converted to a number, but then parsed with `%b`, which expects a month name), so the solution should be:

```
m = "Jun"
d = 28
if datetime.datetime.strptime(m, '%b').month > datetime.datetime.now().month:
    # from dateutil.relativedelta import relativedelta
    y = (datetime.datetime.now() - relativedelta(years=1)).year
else:
    y = datetime.datetime.now().year
new_date = str(m)+"-"+str(d)+"-"+str(y)
new_date = datetime.datetime.strptime(new_date, '%b-%d-%Y').date()
```

`new_date` is a date object, so it should be saved to `models.DateField()` without any problem (including format issues).
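Putting the two answers together, the whole round trip is short; a minimal sketch assuming the "Jun 28" input from the question:

```python
import datetime

# parse "Jun 28" plus a year into a date object ...
d = datetime.datetime.strptime("Jun 28 2014", "%b %d %Y").date()
print(d)                       # 2014-06-28  (date objects print in ISO format)

# ... and format it back out explicitly if a string is needed
print(d.strftime("%Y-%m-%d"))  # 2014-06-28
```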
11,500
64,956,344
I have created a python program that works fine, but I want to make it neater. I ask the user for numbers: number1, number2, etc. up to 5. I want to do this with a for loop, like this:

```
for i in range(0,5):
    number(i) = int(input("what is the number"))
```

I realise that this code doesn't actually work, but that is kind of what I want to do.
2020/11/22
[ "https://Stackoverflow.com/questions/64956344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14640673/" ]
What your code attempts cannot be done per se, but it can be done using lists, like this:

```
number_list = []
for i in range(5):
    number = int(input("What is the number?"))
    number_list.append(number)

# Now you can access the list using indexes
for j in range(len(number_list)):
    print(number_list[j])
```
If you also want to keep the variable names, you can use a python `dictionary` like this:

```
number = {}
for i in range(0,5):
    number[f'number{i}'] = int(input("what is the number "))
print(number)
```

output:

```
{'number0': 5, 'number1': 4, 'number2': 5, 'number3': 6, 'number4': 3}
```
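If plain positional access is enough, the loop-and-append pattern from the other answer also collapses into a single list comprehension:

```python
numbers = [int(input("what is the number? ")) for _ in range(5)]
print(numbers)  # e.g. [5, 4, 5, 6, 3]
```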
11,501
849,058
Consider this python program:

```
import sys

lc = 0
for line in open(sys.argv[1]):
    lc = lc + 1
print lc, sys.argv[1]
```

Running it on my 6GB text file, it completes in ~2 minutes. Question: **is it possible to go faster?** Note that the same time is required by:

```
wc -l myfile.txt
```

so I suspect the answer to my question is just a plain "no". Note also that my real program is doing something more interesting than just counting the lines, so please give a generic answer, **not** line-counting tricks (like keeping a line count in the file's metadata). PS: I tagged this question "linux" because I'm interested only in linux-specific answers. Feel free to give OS-agnostic, or even other-OS, answers if you have them. See also the [follow-up question](https://stackoverflow.com/questions/860893/is-python-automagically-parallelizing-io-and-cpu-or-memory-bound-sections)
2009/05/11
[ "https://Stackoverflow.com/questions/849058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/25891/" ]
As others have said: "no". Almost all of your time is spent waiting for IO. If this is something that you need to do more than once, *and* you have a machine with tons of RAM, you could keep the file in memory. If your machine has 16GB of RAM, you'll have 8GB available at /dev/shm to play with. Another option: if you have multiple machines, this problem is trivial to parallelize. Split the file among the machines, have each of them count its newlines, and add the results.
Note that Python I/O is implemented in C, so there is not much luck speeding it up further.
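Even granting the "no", there is some constant-factor headroom once the file sits in the page cache: reading large binary chunks avoids creating a Python string object per line. A sketch of the idea (the 1 MiB buffer size is an arbitrary choice; Python 3 syntax shown, drop the `b` prefixes for the Python 2 of the question's era):

```python
def count_lines(path, bufsize=1 << 20):
    """Count newlines by scanning fixed-size binary chunks."""
    lines = 0
    with open(path, 'rb') as f:
        while True:
            buf = f.read(bufsize)
            if not buf:
                break
            lines += buf.count(b'\n')
    return lines
```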
11,504
58,837,883
I was automating software installation using python's pyautogui module. I cropped some images from the installation screens, e.g. for clicking next and accepting the terms and conditions. Using image search I am able to locate the images on the screen and click in the right areas. This works fine on my system. However, the script does not work on other systems, as the image search is unsuccessful, maybe because the images were cropped on my system and are being searched for on another system. The resolutions of the systems are the same, but the screen sizes are different (15 inch, 17 inch). My question is: is the locateOnScreen function compatible across different machines? How can I resolve this problem, given that I need to deploy this automation across multiple systems in the company? The code is pasted below:

```
import os
import time
import pyautogui
from pywinauto.application import Application

fsv = Application(backend="win32").start("sandra_24.61.exe")

while(1):
    s = pyautogui.locateOnScreen("C:\\WV_Project\\testcaseAutomation\\images\\ok.png")
    if (s==None):
        print("wait for 1 sec for ok button to come")
        time.sleep(1)
    else:
        pyautogui.click(s.left,s.top)
        print("Ok clicked")
        break

while(1):
    s = pyautogui.locateOnScreen("C:\\WV_Project\\testcaseAutomation\\images\\acceptRadio.png")
    if (s==None):
        print("wait for 1 sec for accept radio button to come")
        time.sleep(1)
    else:
        x=s.left
        y=s.top
        pyautogui.click(s.left,s.top)
        print("accept clicked")
        break

time.sleep(2)
x = x+366
y = y+78
pyautogui.click(x,y)
print("next clicked")
time.sleep(2)
pyautogui.click(x,y)
time.sleep(2)
print("next clicked")
time.sleep(2)
pyautogui.click(x,y)
print("next clicked")
time.sleep(2)
pyautogui.click(x,y)
time.sleep(2)
print("next clicked")
pyautogui.click(x,y)
time.sleep(2)
print("next clicked")
pyautogui.click(x,y)
print("install clicked")
time.sleep(50)

while(1):
    time.sleep(2)
    try:
        x,y = pyautogui.locateCenterOnScreen("C:\\WV_Project\\testcaseAutomation\\images\\finish.png")
        pyautogui.click(x,y)
        break
    except:
        print("Exception occurred")

print("Sandra is successfully installed.")
```
2019/11/13
[ "https://Stackoverflow.com/questions/58837883", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2853557/" ]
As far as I can tell, the problem is image resolution. In my company I also have a robot that automates some complex tasks. All the monitors here are the same, but I was still facing trouble matching images: a cropped image from one PC was not working on another. So what I do now is use the "Snipping Tool" to take the screenshots on every PC. This solves the problem easily, but it takes time; if you are not using more than 10 or 20 different PCs, this solution may help. If the problem persists, you may try reducing the **CONFIDENCE** level like below:

```
x,y = pyautogui.locateCenterOnScreen("C:\\WV_Project\\testcaseAutomation\\images\\finish.png", grayscale=True, confidence=.5)
```

Try different confidence levels. You will also need OpenCV for using `confidence`; install it from the command prompt with "pip install opencv-python".
Resolution matters here: if you capture the images at `1366x768` and run the script at 1920x1080, the match will never work; you need to add a new image for that resolution. There are a few things you can try:

1. replace the image with one captured at the target resolution
2. use the x,y position of the match, e.g.:

```
xy = pyautogui.locateCenterOnScreen('your directory', grayscale=True, confidence=.5)
pyautogui.click(xy)
```

3. capture the icon itself: right-click, click on inspect, go to the application's sources, find that image in the image folder, and save it; **that image will then work on all screens**
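Both answers boil down to "poll with a tolerance", which can be wrapped in one small helper. A sketch, assuming `opencv-python` is installed (the `confidence` keyword needs it), a pyautogui version where `locateOnScreen` returns None on no match (as the question's own code relies on), and `wait_and_click` being a name invented here:

```python
import time
import pyautogui

def wait_and_click(image_path, timeout=30, confidence=0.8):
    """Poll the screen for image_path; click its center once it appears.

    Returns True on success, False if the timeout expires first.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        box = pyautogui.locateOnScreen(image_path, grayscale=True,
                                       confidence=confidence)
        if box is not None:
            pyautogui.click(pyautogui.center(box))
            return True
        time.sleep(1)
    return False

# usage, with a path from the question:
# wait_and_click("C:\\WV_Project\\testcaseAutomation\\images\\ok.png")
```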
11,514
41,734,584
I have some background in machine learning and python, but I am just learning TensorFlow. I am going through the [tutorial on deep convolutional neural nets](https://www.tensorflow.org/tutorials/deep_cnn/) to teach myself how to use it for image classification. Along the way there is an exercise which I am having trouble completing. **EXERCISE:** The model architecture in inference() differs slightly from the CIFAR-10 model specified in cuda-convnet. In particular, the top layers of Alex's original model are locally connected and not fully connected. Try editing the architecture to exactly reproduce the locally connected architecture in the top layer. The exercise refers to the inference() function in the [cifar10.py model](https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10.py). The 2nd-to-last layer (called local4) has shape=[384, 192], and the top layer has shape=[192, NUM_CLASSES], where NUM_CLASSES=10 of course. I think the code that we are asked to edit is the code defining the top layer:

```
with tf.variable_scope('softmax_linear') as scope:
    weights = _variable_with_weight_decay('weights', [192, NUM_CLASSES],
                                          stddev=1/192.0, wd=0.0)
    biases = _variable_on_cpu('biases', [NUM_CLASSES],
                              tf.constant_initializer(0.0))
    softmax_linear = tf.add(tf.matmul(local4, weights), biases, name=scope.name)
    _activation_summary(softmax_linear)
```

But I don't see any code that determines the pattern of connections between layers, so I don't know how we can change the model from fully connected to locally connected. Does somebody know how to do this?
2017/01/19
[ "https://Stackoverflow.com/questions/41734584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6862133/" ]
I'm also working on this exercise. I'll try and explain my approach properly, rather than just give the solution. It's worth looking back at the mathematics of a fully connected layer (<https://www.tensorflow.org/get_started/mnist/beginners>). So the linear algebra for a fully connected layer is: *y = W \* x + b* where *x* is the n dimensional input vector, *b* is an *n* dimensional vector of biases, and *W* is an *n*-by-*n* matrix of weights. The *i* th element of *y* is the sum of the *i* th row of W multiplied element-wise with *x*. So....if you only want *y[i]* connected to *x[i-1]*, *x[i]*, and *x[i+1]*, you simply set all values in the *i* th row of *W* to zero, apart from the *(i-1)* th, *i* th and *(i+1)* th column of that row. Therefore to create a locally connected layer, you simply enforce *W* to be a banded matrix (<https://en.wikipedia.org/wiki/Band_matrix>), where the size of the band is equal to the size of the locally connected neighbourhoods you want. Tensorflow has a function for setting a matrix to be banded (`tf.batch_matrix_band_part(input, num_lower, num_upper, name=None)`). This seems to me to be the simplest mathematical solution to the exercise.
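This banding idea is easy to write down; a TF 1.x-style sketch (the op was called `tf.batch_matrix_band_part` in very old releases, `tf.matrix_band_part` in TF 1.x, and lives at `tf.linalg.band_part` today), with the layer width borrowed from the tutorial's local4 layer:

```python
import tensorflow as tf

n = 192   # layer width, as in the tutorial's local4 layer
k = 1     # band half-width: unit i only sees inputs i-k .. i+k

x = tf.placeholder(tf.float32, [None, n])
W = tf.get_variable('W', [n, n])
b = tf.get_variable('b', [n], initializer=tf.zeros_initializer())

# zero out every weight outside the diagonal band; the masked entries
# also receive zero gradient, so the layer stays locally connected
W_banded = tf.matrix_band_part(W, k, k)
y = tf.nn.relu(tf.matmul(x, W_banded) + b)
```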
I'll try to answer your question, although I'm not 100% sure I got it right. Looking at the cuda-convnet [architecture](https://github.com/akrizhevsky/cuda-convnet2/blob/master/layers/layers-cifar10-11pct.cfg) we can see that the TensorFlow and cuda-convnet implementations start to differ after the second pooling layer. The TensorFlow implementation uses two fully connected layers and a softmax classifier; cuda-convnet implements two locally connected layers, one fully connected layer, and a softmax classifier. The code snippet you included refers only to the softmax classifier and is in fact shared between the two implementations. To reproduce the cuda-convnet implementation in TensorFlow, we have to replace the existing fully connected layers with two locally connected layers and a fully connected one. Since TensorFlow doesn't ship locally connected layers as part of the SDK, we have to implement them using the existing tools. Here is my attempt at the first locally connected layer:

```
with tf.variable_scope('local3') as scope:
    shape = pool2.get_shape()
    h = shape[1].value
    w = shape[2].value

    sz_local = 3  # kernel size
    sz_patch = (sz_local**2)*shape[3].value
    n_channels = 64

    # Extract 3x3 tensor patches
    patches = tf.extract_image_patches(pool2, [1,sz_local,sz_local,1],
                                       [1,1,1,1], [1,1,1,1], 'SAME')
    weights = _variable_with_weight_decay('weights',
                                          shape=[1,h,w,sz_patch, n_channels],
                                          stddev=5e-2, wd=0.0)
    biases = _variable_on_cpu('biases', [h,w,n_channels],
                              tf.constant_initializer(0.1))

    # "Filter" each patch with its own kernel
    mul = tf.multiply(tf.expand_dims(patches, axis=-1), weights)
    ssum = tf.reduce_sum(mul, axis=3)
    pre_activation = tf.add(ssum, biases)
    local3 = tf.nn.relu(pre_activation, name=scope.name)
```
11,515
2,860,106
Would it be possible to create a class interface in python, with various implementations of the interface? Example: I want to create a class for POP3 access (and all its methods etc.). If I go with a commercial component, I want to wrap it to adhere to a contract. In the future, if I want to use another component or code my own, I want to be able to swap things out and not have things very tightly coupled. Possible? I'm new to python.
2010/05/18
[ "https://Stackoverflow.com/questions/2860106", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39677/" ]
Of course. There is no need to create a base class or an interface in this case either, as everything is dynamic.
Yes, this is possible. There are typically no impediments to doing so: just keep a stable API and change how you implement it.
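To make the contract explicit anyway, the `abc` module gives you an interface-like base class whose subclasses must implement the declared methods (modern Python 3 shown). A minimal sketch; the class and method names are invented for illustration, not taken from any real POP3 component:

```python
import abc

class MailboxClient(abc.ABC):
    """The contract every POP3 backend must honour."""

    @abc.abstractmethod
    def connect(self, host, port):
        """Open a connection to the server."""

    @abc.abstractmethod
    def fetch_messages(self):
        """Return a list of raw messages."""

class CommercialPop3Client(MailboxClient):
    """Wrapper around some bought-in component; swap it out freely later."""

    def connect(self, host, port):
        print("connecting to %s:%d" % (host, port))

    def fetch_messages(self):
        return []

# a subclass missing one of the abstract methods raises TypeError on
# instantiation, so contract violations surface immediately
client = CommercialPop3Client()
client.connect("pop.example.com", 110)
```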
11,516