56,526,857
I am developing a software in which I have an OpenGL window. I am creating the GUI of the software using `PyQt5` and for the opengGL Window I am using `QOpengGLWidget` and for object selection, I am using [**Stencil** **Buffer**](https://en.wikibooks.org/wiki/OpenGL_Programming/Object_selection) and reading **STENCIL\_INDEX** as the following function on `mousePressEvent(self, event)`: `id = glReadPixels(event.x(), self.height - event.y() - 1, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT)` but this is not working when I am using with Qt5 GUI and showing following error: ``` Traceback (most recent call last): File "/home/akv26/Desktop/internProject/Main/opengl.py", line 92, in mousePressEvent val = glReadPixels(event.x(), self.height - event.y() - 1, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT) File "/home/akv26/.local/lib/python3.7/site-packages/OpenGL/GL/images.py", line 371, in glReadPixels imageData File "/home/akv26/.local/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 402, in __call__ return self( *args, **named ) File "/home/akv26/.local/lib/python3.7/site-packages/OpenGL/error.py", line 232, in glCheckError baseOperation = baseOperation, OpenGL.error.GLError: GLError( err = 1282, description = b'invalid operation', baseOperation = glReadPixels, cArguments = ( 183, 228, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, array([[0.]], dtype=float32), ) ) Aborted (core dumped) ``` and here is the code for my custom `QOpenGLWidget`: ``` import sys from PyQt5.QtWidgets import QOpenGLWidget from PyQt5.QtCore import QSize, Qt from OpenGL.GL import * from OpenGL.GLUT import * from OpenGL.GLU import * from math import * class MyGLWidget(QOpenGLWidget): def initializeGL(self): glutInit() glClearColor(1.0, 0.5, 0.2, 0.5) glEnable(GL_DEPTH_TEST) glEnable(GL_LIGHTING) glEnable(GL_COLOR_MATERIAL) glEnable(GL_LIGHT0) glEnable(GL_LIGHT1) glEnable(GL_NORMALIZE) glShadeModel(GL_SMOOTH) def resizeGL(self, w, h): self.height=h if h==0: h=1 ratio = w * 1.0 / h # Use the Projection Matrix 
glMatrixMode(GL_PROJECTION) # Reset Matrix glLoadIdentity() # Set the viewport to be the entire window glViewport(0, 0, w, h) # Set the correct perspective. gluPerspective(45.0, ratio, 0.1, 100.0) # Get Back to the Modelview glMatrixMode(GL_MODELVIEW) def paintGL(self): glClearStencil(0) glClear(GL_STENCIL_BUFFER_BIT) glEnable(GL_STENCIL_TEST) glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE) ambientColor = [0.2,0.2,0.2,1.0] glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientColor) lightColor0 = [0.5,0.5,0.5,1.0] lightPos0 = [0, 0, -10.0, 1.0] glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor0) glLightfv(GL_LIGHT0, GL_POSITION, lightPos0) lightColor1 = [0.5, 0.2, 0.2, 1.0] lightPos1 = [-1.0, 0.5, 0.5, 0.0] glLightfv(GL_LIGHT1, GL_DIFFUSE, lightColor1) glLightfv(GL_LIGHT1, GL_POSITION, lightPos1) glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) glLoadIdentity() gluLookAt(-5, 5, -5, 0, 0, 0, 0.0, 1.0, 0.0) self.drawSome() def drawSome(self): glStencilFunc(GL_ALWAYS, 1, -1) glBegin(GL_QUADS) glColor3f(0.0, 0.5, 0.0) glNormal3f(0,0,-1) glVertex3f(-1,-1,0) glVertex3f(-1,1,0) glVertex3f(1,1,0) glVertex3f(1,-1,0) glEnd() glTranslatef(0,0,0) glColor3f(0.5,0,0) glStencilFunc(GL_ALWAYS, 2, -1) glutSolidCube(1.0) def mousePressEvent(self, event): id = glReadPixels(event.x(), self.height - event.y() - 1, 1, 1, GL_STENCIL_INDEX, GL_UNSIGNED_INT) print(id) ``` and the same is working properly when I am using this with GLUT window as following: ``` from OpenGL.GL import * from OpenGL.GLUT import * from OpenGL.GLU import * from math import * import numpy as np window_width=0 window_height=0 def changeSize(w, h): global window_height, window_width window_width=w window_height=h # Prevent a divide by zero, when window is too short # (you cant make a window of zero width). if h==0: h=1 ratio = w * 1.0 / h # Use the Projection Matrix glMatrixMode(GL_PROJECTION) # Reset Matrix glLoadIdentity() # Set the viewport to be the entire window glViewport(0, 0, w, h) # Set the correct perspective. 
gluPerspective(45.0, ratio, 0.1, 100.0) # Get Back to the Modelview glMatrixMode(GL_MODELVIEW) def drawSome(): glStencilFunc(GL_ALWAYS, 1, -1) glBegin(GL_QUADS) glColor3f(0.0, 0.5, 0.0) glNormal3f(0,0,-1) glVertex3f(-1,-1,0) glVertex3f(-1,1,0) glVertex3f(1,1,0) glVertex3f(1,-1,0) glEnd() glTranslatef(0,0,0) glColor3f(0.5,0,0) glStencilFunc(GL_ALWAYS, 2, -1) glutSolidCube(1.0) def renderScene(): glClearStencil(0) glClear(GL_STENCIL_BUFFER_BIT) glEnable(GL_STENCIL_TEST) glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE) ambientColor = [0.2,0.2,0.2,1.0] glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientColor) lightColor0 = [0.5,0.5,0.5,1.0] #Color lightPos0 = [0, 0, -10.0, 1.0] glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor0) glLightfv(GL_LIGHT0, GL_POSITION, lightPos0) lightColor1 = [0.5, 0.2, 0.2, 1.0] lightPos1 = [-1.0, 0.5, 0.5, 0.0] glLightfv(GL_LIGHT1, GL_DIFFUSE, lightColor1) glLightfv(GL_LIGHT1, GL_POSITION, lightPos1) # Clear Color and Depth Buffers glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) # Reset transformations glLoadIdentity() # Set the camera # gluLookAt(x, 1.0, z, x+lx, 1.0, z+lz, 0.0, 1.0, 0.0) gluLookAt(-5, 5, -5, 0, 0, 0, 0.0, 1.0, 0.0) drawSome() glutSwapBuffers() def mouse(button, state, x, y): global window_height id = glReadPixels(x, window_height - y - 1, 1, 1, GL_STENCIL_INDEX, GL_UNSIGNED_INT) print(id[0][0]) if __name__ == '__main__': glutInit() glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA) # glutInitWindowPosition(100,100) glutInitWindowSize(320,320) glutCreateWindow("Lighthouse3D - GLUT Tutorial") glClearColor(1.0, 0.5, 0.2, 0.5) # register callbacks glutDisplayFunc(renderScene) glutReshapeFunc(changeSize) glutMouseFunc(mouse) # glutTimerFunc(0, Timer, 0) # here are the new entries glutIgnoreKeyRepeat(1) # OpenGL init glEnable(GL_DEPTH_TEST) glEnable(GL_LIGHTING) glEnable(GL_COLOR_MATERIAL) glEnable(GL_LIGHT0) glEnable(GL_LIGHT1) glEnable(GL_NORMALIZE) glShadeModel(GL_SMOOTH) glutMainLoop() ``` **How to resolve the problem with QT5 GUI 
?**
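A hedged note on the likely cause, based on the `GL_INVALID_OPERATION` raised by `glReadPixels`: by default a `QOpenGLWidget` framebuffer is created without a stencil buffer, and the widget's GL context is not current inside `mousePressEvent`. A minimal sketch of the usual two-part fix (this assumes PyQt5's `QSurfaceFormat` API and is not the asker's code):

```python
import sys
from PyQt5.QtGui import QSurfaceFormat
from PyQt5.QtWidgets import QApplication

# 1) Request depth + stencil planes *before* any QOpenGLWidget is created.
fmt = QSurfaceFormat()
fmt.setDepthBufferSize(24)
fmt.setStencilBufferSize(8)          # 8-bit stencil buffer
QSurfaceFormat.setDefaultFormat(fmt)

app = QApplication(sys.argv)
# ... create and show the window containing MyGLWidget here ...

# 2) Inside MyGLWidget.mousePressEvent, make the context current first:
#    def mousePressEvent(self, event):
#        self.makeCurrent()
#        stencil_id = glReadPixels(event.x(), self.height - event.y() - 1,
#                                  1, 1, GL_STENCIL_INDEX, GL_UNSIGNED_INT)
```

GLUT windows created with a stencil-capable display mode do not hit this, which would explain why the same read works in the GLUT version.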
2019/06/10
[ "https://Stackoverflow.com/questions/56526857", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
`strip()` does not respect count or order when it removes characters from the end of your string. The argument you passed it, `"-Vaccines"`, contains an "a", so it will remove the "a" from "Rivera". It does not matter that it already removed an "a" from "Vaccines" and it does not matter that it doesn't come between a V and a c. Consider another example: ``` >>> "abcXqrqqqrrrqrqrqrqrqqrr".strip("qr") 'abcX' ``` Many `q`s and `r`s are removed here, even though the argument to `strip` contains only one of each. In general, `strip` is not suitable for removing a static number of characters from the end of a string. One possible alternative is to use regex, which can match a literal character sequence that appears at the end of a string: ``` >>> import re >>> ac = "Pearl Rivera-Vaccines" >>> re.sub("-Vaccines$", "", ac) 'Pearl Rivera' ``` --- In his answer, Tom Karzes observes that this approach doesn't readily work on strings that contain characters that have special meanings in a regex. For instance, ``` >>> import re >>> s = "foo^bar" >>> re.sub("^bar$", "", s) 'foo^bar' ``` `^` has a special meaning in regex, so the pattern `"^bar$"` fails to match the end of the string `s`. If the string you want to match contains special characters, you should escape it, either manually or with an `re.escape` call. ``` >>> import re >>> s = "foo^bar" >>> re.sub(r"\^bar$", "", s) 'foo' >>> re.sub(re.escape("^bar") + "$", "", s) 'foo' ```
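On Python 3.9 and later there is also `str.removesuffix`, which removes the literal suffix exactly once (and only if it is present), with no regex escaping needed:

```python
# str.removesuffix (Python 3.9+) treats the argument as a literal suffix,
# unlike strip(), which treats it as a character set.
ac = "Pearl Rivera-Vaccines"
print(ac.removesuffix("-Vaccines"))        # -> Pearl Rivera
print("foo^bar".removesuffix("^bar"))      # -> foo
print("Rivera".removesuffix("-Vaccines"))  # no suffix: unchanged -> Rivera
```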
The problem is that the argument to `strip` isn't used the way you think it is. The argument isn't treated as a sequence of characters, but rather as a set of characters. Any character in the argument string is removed. For example: ``` "abaca".strip("ac") ``` produces: ``` 'b' ``` since all leading and trailing instances of `"a"` and `"c"` have been removed. If you just want to remove a suffix from a string, you can do something like: ``` ac = "Pearl Rivera-Vaccines" s = "-Vaccines" b = ac if b.endswith(s): b = b[:-len(s)] ``` This will result in `b` having the value: ``` 'Pearl Rivera' ``` Note that this will be faster than using the `re` module. It will also be more flexible, since it will work with any non-empty string (whereas creating a regular expression will require escaping certain characters).
9,397
64,211,267
I want to install the package MetaTrader5 on my Linux [Fedora] machine, but this package is supported only on Windows. **And my question is:** Is it possible to install Windows Python packages on Linux, and after installation import them in my Python file? **My Solution** 1. install `wine` (learn more about [wine](https://www.winehq.org/)) 2. download [python for windows](https://www.python.org/downloads/windows/), then install it with `wine <path to .exe>` 3. `wine pip install MetaTrader5` or another Windows-only library 4. (example) create a virtual environment with `wine venv <name project>` and select it 5. then import the library in your file, install package [3] in your virtual environment, and you can use it. 6. run with `wine python <file.py>`
2020/10/05
[ "https://Stackoverflow.com/questions/64211267", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13033299/" ]
You can install `wine` to deal with the Windows installer files on your Fedora. As per the doc, you can run the following commands one by one to install wine: `sudo dnf clean all && sudo dnf update` `dnf config-manager --add-repo https://dl.winehq.org/wine-builds/fedora/32/winehq.repo` `sudo dnf install winehq-stable` After that, to confirm the installation, run `wine --version`; this should execute without any error. Lastly, download the `mt5setup.exe` and open it with `wine`. Also, have a look at this answer: <https://stackoverflow.com/a/44031808/10918344> Hope you find it helpful.
Yes, it is possible to use Wine. It's simpler to do with **CrossOver**, which is more stable. MetaTrader is compiled for Wine. I've used this setup; MetaTrader and MetaEditor work correctly.
9,398
41,883,254
I am trying to process a form in Django/Python using the following code. --- home.html: ```html <form action="{% url 'home:submit' %}" method='post'> ``` --- views.py: ```py def submit(request): a = request.POST(['initial']) return render(request, 'home/home.html', { 'error_message': "returned" }) ``` --- urls.py: ```py from django.conf.urls import url from . import views urlpatterns = [ url(r'^submit/$', views.submit, name='submit') ] ``` When I try to run it in a browser I get the error: `NoReverseMatch at /home/ u'home' is not a registered namespace` and another error message indicating a problem with the form.
2017/01/26
[ "https://Stackoverflow.com/questions/41883254", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7169431/" ]
In your app, open `urls.py` first. Then check that `app_name` is declared at the top; if it is not, declare it. For example, my app name is `userinfo`, which is declared in `urls.py`: ``` app_name = "userinfo" urlpatterns = [ url(r'home/', views.home, name='home'), url(r'register/', views.registration, name='register') ] ```
Maybe someone will find this suggestion helpful. Go to your application's `urls.py` and type this before the urlpatterns: ``` app_name = 'Your app name' ```
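Putting the two answers above together for this question's setup, a sketch of the app-level `urls.py` might look like the following (the `home` app name is taken from the question's `{% url 'home:submit' %}` tag; the rest matches the question's code):

```python
# home/urls.py -- sketch; app_name is what registers the 'home:' namespace
from django.conf.urls import url
from . import views

app_name = 'home'

urlpatterns = [
    url(r'^submit/$', views.submit, name='submit'),
]
```

The project-level `urls.py` would then `include('home.urls')` so the namespaced reverse lookup can resolve.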
9,399
59,918,219
``` AttributeError at /addpatient_to_db 'QuerySet' object has no attribute 'wardno' ``` Request Method: POST Request URL: <http://127.0.0.1:8000/addpatient_to_db> Django Version: 2.2.5 Exception Type: `AttributeError` Exception Value: `'QuerySet' object has no attribute 'wardno'` Exception Location: `C:\Users\Saurabh Patil\Desktop\SanjeevniHospital\admininterface\views.py in addpatient_to_db, line 114` Python Executable: `C:\Users\Saurabh Patil\AppData\Local\Programs\Python\Python37\python.exe` Python Version: 3.7.4 Python Path: ``` ['C:\\Users\\Saurabh Patil\\Desktop\\SanjeevniHospital', 'C:\\Users\\Saurabh ' 'Patil\\AppData\\Local\\Programs\\Python\\Python37\\python37.zip', 'C:\\Users\\Saurabh Patil\\AppData\\Local\\Programs\\Python\\Python37\\DLLs', 'C:\\Users\\Saurabh Patil\\AppData\\Local\\Programs\\Python\\Python37\\lib', 'C:\\Users\\Saurabh Patil\\AppData\\Local\\Programs\\Python\\Python37', 'C:\\Users\\Saurabh Patil\\AppData\\Roaming\\Python\\Python37\\site-packages', 'C:\\Users\\Saurabh ' 'Patil\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages'] ``` Server time: Sun, 26 Jan 2020 12:07:40 +0000 Here is my code ***views.py*** ``` def addpatient_to_db(request): if request.method == 'POST': name = request.POST['name'] age = request.POST['age'] sex = request.POST['sex'] address = request.POST['address'] contno = request.POST['contno'] wardno = request.POST['wardno'] bedno = request.POST['bedno'] doa = request.POST['doa'] docass = request.POST['docass'] pii = request.POST['pii'] alldata = patientdetails.objects.all() if alldata.wardno == wardno and alldata.bedno == bedno: # <---- This is the issuing line return render(request, "addpatient.html") else: addp = patientdetails(name=name, age=age, sex=sex, address=address, mobno=contno, wardno=wardno, bedno=bedno, dateofallot=doa, docass=docass, illness_issue=pii) addp.save() return redirect('addpatient') else: return render(request, 'addpatient.html') ``` ***models.py*** ``` from django.db import 
models # Create your models here. class patientdetails(models.Model): name = models.CharField(max_length=50) age = models.CharField(max_length=3) sex = models.CharField(max_length=10) address = models.CharField(max_length=100) mobno = models.CharField(max_length=10) wardno = models.CharField(max_length=3) bedno = models.CharField(max_length=3) dateofallot = models.CharField(max_length=10) docass = models.CharField(max_length=50) illness_issue = models.CharField(max_length=50) ```
2020/01/26
[ "https://Stackoverflow.com/questions/59918219", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11056524/" ]
**TLDR:** **The solution is that you should either provide `engine` and `user_id` in the params, or remove the `presence: true` validation and add `optional: true` (for the user association) to the model.** **Explanation:** If your model says that it should validate the presence of `engine`, then you cannot leave out the engine param (in the form). When you post the form without an engine, what happens is that your model does not save it, and since you have handled that case, it moves on. As the record belongs to a user, the same goes for the user ID, although you could make that optional too by adding `optional: true` to the association in the model and allowing it in the schema (because a car can "be" without a user IRL, but that depends on your use case here). Going one step further, to understand the issue exactly yourself, use pry or byebug to inspect the params and what happens at runtime. An easier and faster way to surface the error is to put a bang on the save call so it raises, like: `if @car.save!`. One more thing: copy the car params and try the create yourself manually in the rails console with the bang method. It will give you the cause. These techniques will help you diagnose model save/create issues in Rails. Happy Coding :)
From your log file ``` Started POST "/cars" for ::1 at 2020-01-26 14:45:06 +0000 ``` Shows that the create action is being called on the cars controller The parameters being passed in are ``` {"authenticity_token"=>"Oom+xdVDc0PqSwLbLIEP0R8H6U38+v9ISVql4Fr/0WSxZGSrxzTHccsgghd1U30OugcUBAA1R4BtsB0YigAUtA==", "car"=>{"vehicle_type"=>"Sports", "car_type"=>"Private", "seat"=>"5", "colour_type"=>"Black", "transmission_type"=>"Automatic"}, "commit"=>"Create car"} ``` The first method called on your create action is authenticate\_user ``` before_action :authenticate_user!, except: [:show] ``` You can see that this happens ``` User Load (0.4ms) SELECT "users".* FROM "users" WHERE "users"."id" = ? ORDER BY "users"."id" ASC LIMIT ? [["id", 1], ["LIMIT", 1]] ``` According to your log the next thing that happens is ``` Rendering cars/new.html.erb within layouts/application ``` Which means that the else clause was hit `render :new, notice: "Something went wrong..."` ``` @car = current_user.cars.build(car_params) if @car.save redirect_to listing_car_path(@car), notice: "Saved..." else render :new, notice: "Something went wrong..." end ``` Therefore the car did not save so validation must have failed. Your new car form should have an errors loop to display all the errors. If it did have this then your user (You) would know what went wrong Something like this ``` <% if car.errors.any? %> <div id="error_explanation"> <h2><%= pluralize(car.errors.count, "error") %> prohibited this car from being saved:</h2> <ul> <% car.errors.full_messages.each do |message| %> <li><%= message %></li> <% end %> </ul> </div> <% end %> ``` Should do the trick but it's standard in all rails generated forms so it should be there anyway
9,409
62,377,906
I want to run a batch file in a Conda environment, not in the **base** env, but in another virtual environment (here **pylayers**). I copied the `activate.bat` script from `F:\Anaconda3\Scripts` to `F:\Anaconda3\envs\pylayers\Scripts`. And my batch script (`installer_win.bat`) is: ``` call F:\Anaconda3\envs\pylayers\Scripts\activate.bat conda install numpy --yes conda install mayavi --yes conda install shapely --yes conda install libgdal --yes conda install gdal --yes conda install h5py --yes conda install seaborn --yes conda install PIL --yes conda install basemap --yes conda install wxpython --yes conda install netCDF4 --yes pip install protobuf pip install tqdm pip install descartes pip install bitstring pip install geocoder pip install triangle pip install osmapi pip install pyshp pip install Image pip install pathos pip install SimPy==2.2 pip install simplekml pip install smopy python setup.py install ``` When I execute the `installer_win.bat` file, it shows the following behavior: ``` Output: #stops after executing very first line in the batch file. (base) C:\Users\mkdth>activate pylayers (pylayers) C:\Users\mkdth>cd /d F:\Pycharm\Projects\pylayers-master (pylayers) F:\Pycharm\Projects\pylayers-master>installer_win.bat (pylayers) F:\Pycharm\Projects\pylayers-master>call F:\Anaconda3\envs\pylayers\Scripts\activate.bat The system cannot find the path specified. 
(pylayers) F:\Pycharm\Projects\pylayers-master>conda install numpy --yes Collecting package metadata (current_repodata.json): done Solving environment: done ## Package Plan ## environment location: F:\Anaconda3\envs\pylayers added / updated specs: - numpy The following packages will be downloaded: package | build ---------------------------|----------------- openssl-1.1.1g | he774522_0 4.8 MB ------------------------------------------------------------ Total: 4.8 MB The following NEW packages will be INSTALLED: numpy-base pkgs/main/win-64::numpy-base-1.18.1-py36hc3f5095_1 The following packages will be SUPERSEDED by a higher-priority channel: ca-certificates conda-forge::ca-certificates-2020.4.5~ --> pkgs/main::ca-certificates-2020.1.1-0 certifi conda-forge::certifi-2020.4.5.2-py36h~ --> pkgs/main::certifi-2020.4.5.1-py36_0 numpy conda-forge::numpy-1.18.5-py36h4d86e3~ --> pkgs/main::numpy-1.18.1-py36h93ca92e_0 openssl conda-forge --> pkgs/main Downloading and Extracting Packages openssl-1.1.1g | 4.8 MB | ############################################################################ | 100% Preparing transaction: done Verifying transaction: done Executing transaction: done (pylayers) F:\Pycharm\Projects\pylayers-master> ``` I also tried to modify the batch file to activate **pylayers** environment from the **base** env and run the `conda` or `pip` commands one by one, but it installs to **base** environment only. 
See the bat script Attempt 1 --------- `Installer_win.bat` file: ``` call F:\Anaconda3\Scripts\activate.bat activate pylayers pause conda install numpy --yes conda install mayavi --yes conda install shapely --yes conda install libgdal --yes conda install gdal --yes conda install h5py --yes conda install seaborn --yes conda install PIL --yes conda install basemap --yes conda install wxpython --yes conda install netCDF4 --yes pip install protobuf pip install tqdm pip install descartes pip install bitstring pip install geocoder pip install triangle pip install osmapi pip install pyshp pip install Image pip install pathos pip install SimPy==2.2 pip install simplekml pip install smopy python setup.py install ``` Output: activates pylayers env and stops ``` (pylayers) F:\Pycharm\Projects\pylayers-master>installer_win.bat (pylayers) F:\Pycharm\Projects\pylayers-master>call F:\Anaconda3\Scripts\activate.bat (base) F:\Pycharm\Projects\pylayers-master>activate pylayers (pylayers) F:\Pycharm\Projects\pylayers-master> ``` Attempt2: --------- `Installer_win.bat` file: ``` call F:\Anaconda3\Scripts\activate.bat activate pylayers conda install numpy --yes conda install mayavi --yes conda install shapely --yes conda install libgdal --yes conda install gdal --yes conda install h5py --yes conda install seaborn --yes conda install PIL --yes conda install basemap --yes conda install wxpython --yes conda install netCDF4 --yes pip install protobuf pip install tqdm pip install descartes pip install bitstring pip install geocoder pip install triangle pip install osmapi pip install pyshp pip install Image pip install pathos pip install SimPy==2.2 pip install simplekml pip install smopy python setup.py install ``` Output: #activates base env and stops ``` (pylayers) F:\Pycharm\Projects\pylayers-master>installer_win.bat (pylayers) F:\Pycharm\Projects\pylayers-master>call F:\Anaconda3\Scripts\activate.bat (base) F:\Pycharm\Projects\pylayers-master>activate pylayers ``` Attempt 3 --------- 
`Installer_win.bat` file: ``` call F:\Anaconda3\Scripts\activate.bat conda install numpy --yes conda install mayavi --yes conda install shapely --yes conda install libgdal --yes conda install gdal --yes conda install h5py --yes conda install seaborn --yes conda install PIL --yes conda install basemap --yes conda install wxpython --yes conda install netCDF4 --yes pip install protobuf pip install tqdm pip install descartes pip install bitstring pip install geocoder pip install triangle pip install osmapi pip install pyshp pip install Image pip install pathos pip install SimPy==2.2 pip install simplekml pip install smopy python setup.py install ``` Output: #starts installing in base env ``` (pylayers) F:\Pycharm\Projects\pylayers-master>installer_win.bat (pylayers) F:\Pycharm\Projects\pylayers-master>call F:\Anaconda3\Scripts\activate.bat (base) F:\Pycharm\Projects\pylayers-master>activate pylayers (pylayers) F:\Pycharm\Projects\pylayers-master>installer_win.bat (pylayers) F:\Pycharm\Projects\pylayers-master>call F:\Anaconda3\Scripts\activate.bat (base) F:\Pycharm\Projects\pylayers-master>conda install numpy --yes Collecting package metadata (current_repodata.json): failed CondaError: KeyboardInterrupt ^CTerminate batch job (Y/N)? ^C ``` Can someone help me to run this batch file under a Conda virtual env, please? Any suggestions would be very much appreciated.
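A note on why the attempts above stall: in a batch file, invoking another batch file (and `activate pylayers` resolves to `activate.bat`) without `call` transfers control to it and never returns to the caller, so the installer stops right after activation. The stock `activate.bat` also accepts the environment name as an argument, so there is no need to copy it into the env's `Scripts` folder. A sketch under those assumptions (paths taken from the question):

```bat
rem installer_win.bat -- sketch; assumes the Anaconda path from the question
call F:\Anaconda3\Scripts\activate.bat pylayers
conda install numpy --yes
rem ... the remaining conda/pip commands from the question ...
python setup.py install
```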
2020/06/14
[ "https://Stackoverflow.com/questions/62377906", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11051182/" ]
There are two ways to go. I think the first is the cleaner way to go. Option 1: YAML Definition ========================= If the entire procedure is only for installations, it can be condensed into a single [YAML environment definition](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#create-env-file-manually) and executed in one go. This includes the local package installation, which for the current example I'll call `foo`. **foo\_install.yaml** ``` name: foo_install channels: - defaults dependencies: - numpy - mayavi - shapely - libgdal - gdal - h5py - seaborn - PIL - basemap - wxpython - netCDF4 - pip - pip: - protobuf - tqdm - descartes - bitstring - geocoder - triangle - osmapi - pyshp - Image - pathos - SimPy==2.2 - simplekml - smopy - -e ./../foo # this assumes running from inside the `foo` folder ``` To install this into an arbitrary env (e.g., `pylayers`) as an *addition* to the env, one would use ``` conda env update -n pylayers -f foo_install.yaml ``` where this is run from within the `foo` folder. See [the Advanced Pip Example](https://github.com/conda/conda/blob/4.8.3/tests/conda_env/support/advanced-pip/environment.yml) for a showcase of other options (basically, everything `pip install` accepts can be included in a YAML file). Option 2: `conda run` ===================== More generically, one can run arbitrary scripts inside a specific environment by using `conda run`. This avoids having to include activation commands inside the scripts themselves. In this case, you could run your script with ``` conda run -n pylayers installer_win.bat ``` I would only resort to this if you need to also configure some environment variables or something similar. In that case, I would still make the YAML, and reduce all the installation to the single line: **installer\_win.bat** ``` conda env update -f foo_install.yaml ``` and include further actions after that.
To run a bat file from a DOS prompt inside a new (non-base) conda env, you can try something like this: prompt> cmd "/c activate ds\_tensorflow && myfile.bat && deactivate" Contents of myfile.bat, to show you are in the non-base env: ``` echo hello python -c "import sys; print(sys.version)" ``` You can replace myfile.bat in the `prompt>` line above with your bat file. This also works without the bat file; just run each command within the /c activate .... deactivate wrapper: ``` cmd "/c activate ds_tensorflow && python -c "import sys; print(sys.version)" && deactivate" ```
9,410
66,577,552
I would like to suppress some sensitive information displayed in the logs. Code: ``` pipeline { parameters { string(defaultValue: '', description: '', name: 'pki_client_cacert_password', trim: true) string(defaultValue: '', description: '', name: 'db_url', trim: true) } stages { stage('DeployToDev') { steps { script{ env.artifacts = sh( returnStdout: true, script: "/var/lib/jenkins/python_jobs/venv/bin/python3 /var/lib/jenkins/python_jobs/encrypter_creds.py --db_url=${env.db_url} --pki_client_cacert_password='${env.pki_client_cacert_password}'" ) } } } } } ``` Output: ``` + /var/lib/jenkins/python_jobs/venv/bin/python3 /var/lib/jenkins/python_jobs/encrypter_creds.py --db_url=<hide this from jenkins logs> '--pki_client_cacert_password=<hide this from jenkins logs>' ```
2021/03/11
[ "https://Stackoverflow.com/questions/66577552", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1234419/" ]
The first option is to set it as an environment variable: ``` withEnv(["MYSECRET=${params.pki_client_cacert_password}", "MYURL=${env.db_url}"]) { env.artifacts = sh( returnStdout: true, script: '.. python3 .. encrypter_creds.py --db_url=$MYURL ' + ' --pki_client_cacert_password=$MYSECRET' ) ``` Note the single quotes around the command to prevent any Groovy string interpolation. The second option is to save something like `pki_client_cacert_password` (which hopefully doesn't change much) into the Jenkins Credentials Store and use it with `withCredentials`: ``` withCredentials([usernamePassword( credentialsId: 'MY_PKI_CLIENT_CREDENTIALS', passwordVariable: 'PKI_PASSWORD', usernameVariable: 'DB_URL')]) { env.artifacts = sh( returnStdout: true, script: '.. python3 .. encrypter_creds.py --db_url=$DB_URL' + ' --pki_client_cacert_password=$PKI_PASSWORD' } ``` You can also roll your own third option, e.g. by writing the info you need into a file and modifying your script to read the parameters from that file.
I added @MaratC's suggestion, but that alone did not help much, so I ended up adding `set +x` and `set -x` as well; this question turned out to be related to [Echo off in Jenkins Console Output](https://stackoverflow.com/questions/26797219/echo-off-in-jenkins-console-output), and the combination worked as expected ``` script{ withEnv(["ENV_PKI_CLIENT_CACERT_PASSWORD=${params.pki_client_cacert_password}", "ENV_DB_URL=${params.db_url}"]) { env.artifacts = sh( returnStdout: true, script: """ set +x /var/lib/jenkins/python_jobs/venv/bin/python3 /var/lib/jenkins/python_jobs/encrypter_creds.py --db_url=${ENV_DB_URL} --pki_client_cacert_password='${ENV_PKI_CLIENT_CACERT_PASSWORD}' set -x """ ) } } ``` Jenkins output ``` + set +x [Pipeline] ..... [Pipeline] ..... ```
9,411
54,811,697
I am learning Python, coming from C++. From what I am reading, there only appear to be two forms of for loop in Python. I can either iterate over a range, or over the elements of a collection. I suppose the former really is the latter...so maybe one form. Is there any other form? I am used to looping until a condition is false. There doesn't seem to be a way to evaluate a condition in a for loop in Python. Is that correct? Since there is no (init;condition;post), how would you do something like iterate over every other string in a collection of strings, in Python? C++ would look like: ``` string mystuff[] = {"poop", "dooky", "doodoo"}; for(size_t index = 0; index < 3; index += 2) cout << mystuff[index]; ```
2019/02/21
[ "https://Stackoverflow.com/questions/54811697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5191073/" ]
A for loop is the same thing as a while loop, just in a slightly different syntax. If you have a for loop that looks like ``` for(init; condition; post) //do something ``` that is equivalent to the while loop ``` init; while(condition) //do something post; ``` Because Python has while loops, if you wanted to iterate over every other string, you would change ``` string mystuff[] = {"poop", "dooky", "doodoo"}; for(size_t index = 0; index < 3; index += 2) cout << mystuff[index]; ``` to ``` mystuff = [ "poop", "dooky", "doodoo" ] index = 0 while index < 3: print(mystuff[index]) index += 2 ```
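In practice, though, Python's `for` with `range` (or an extended slice) covers the count-controlled case directly, without a manual counter:

```python
mystuff = ["poop", "dooky", "doodoo"]

# range(start, stop, step) mirrors the C++ (init; condition; post) header
for index in range(0, len(mystuff), 2):
    print(mystuff[index])

# or skip the index entirely with an extended slice
for item in mystuff[::2]:
    print(item)
```

Both loops visit `"poop"` and `"doodoo"`, skipping every other element.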
The three most common kinds of loops: 1. `Count controlled` - you don't have this kind of loop in Python 2. `Condition controlled` - the while loop in Python 3. `Collection controlled` - the for loop in Python
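Quick sketches of all three kinds in Python (the count-controlled case from point 1 is usually emulated with `range()`, since Python has no C-style counting header):

```python
# 1) "count controlled" via range()
squares = []
for i in range(3):
    squares.append(i * i)

# 2) condition controlled
n = 0
while n < 3:
    n += 1

# 3) collection controlled
letters = []
for ch in "abc":
    letters.append(ch)

print(squares, n, letters)   # -> [0, 1, 4] 3 ['a', 'b', 'c']
```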
9,412
6,289,699
> > **Possible Duplicate:** > > [How do I get the name of a function or method from within a Python function or method?](https://stackoverflow.com/questions/245304/how-do-i-get-the-name-of-a-function-or-method-from-within-a-python-function-or-me) > > > The function below returns its own name. It works, but I still have to specify the 'test.' in front of the `__name__`. Is there any general way to refer to the `__name__`? ``` def test(): print test.__name__ ```
2011/06/09
[ "https://Stackoverflow.com/questions/6289699", "https://Stackoverflow.com", "https://Stackoverflow.com/users/240041/" ]
If I understand correctly, the [inspect](http://docs.python.org/library/inspect.html) module is what you are looking for: ``` import inspect def test(): print inspect.stack()[0][3] ```
There is no way to access the function name from within a Python function without inspecting the call stack. ``` In [1]: import inspect In [2]: def test(): ...: print inspect.stack()[0][3] ...: In [3]: test() test ``` So, what you want to use is `inspect.stack()[0][3]` or, if you want to move it into a separate function, `inspect.stack()[1][3]` (from [How do I get the name of a function or method from within a Python function or method?](https://stackoverflow.com/questions/245304/how-do-i-get-the-name-of-a-function-or-method-from-within-a-python-function-or-me))
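A Python 3 sketch of the `inspect`-based lookup from the answers above (index 3 of each stack entry is the `function` field of the frame record):

```python
import inspect

def test():
    # stack()[0] is the current frame; field 3 is the function name
    return inspect.stack()[0][3]

def outer():
    # from a helper one level down you would use stack()[1][3] instead
    return inspect.stack()[0][3]

print(test())   # -> test
print(outer())  # -> outer
```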
9,422
614,458
How do I look for the following pattern using regular expressions in Python, for the two cases? I am looking for str2 after the "=" sign * Case 1: `str1=str2` * Case 2: `str1 = str2` Please note there can be a **space or none** on either side of the "=" sign. Mine is like this, but it only works for one of the cases! ``` m=re.search('(?<=str\s\=\s)\w+','str = str2') ``` returns str2 Help! Gath
2009/03/05
[ "https://Stackoverflow.com/questions/614458", "https://Stackoverflow.com", "https://Stackoverflow.com/users/74170/" ]
If you indeed have only such simple strings to parse, you don't need a regular expression. You can just partition on `=` and strip (or even `lstrip`) the last element of the resulting tuple: ``` >>> case = 'str = str2' >>> case.partition('=')[2].lstrip() 'str2' ``` It'll be much faster than regexps. And just to show how fast, I've made a simple test: ``` >>> timeit.timeit("'str1 = str2 '.partition('=')[2].strip()") 0.49051564213846177 >>> timeit.timeit("'str1 = str2 '.split('=')[1].strip()") 0.97673281637025866 >>> timeit.timeit('import re') 0.65663786250422618 ```
``` re.search(r'=\s*(.*)', 'str = str2').group(1) ``` or if you just want a single word: ``` re.search(r'=\s*(\w+)', 'str = str2').group(1) ``` Extended to specific initial string: ``` re.search(r'\bstr\s*=\s*(\w+)', 'str=str2').group(1) ``` `\b` = word boundary, so won't match `"somestr=foo"` It would be quicker to go trough all options once, instead of searching for single options one at the time: ``` option_str = "a=b, c=d, g=h" options = dict(re.findall(r'(\w+)\s*=\s*(\w+)', option_str)) options['c'] # -> 'd' ```
9,423
37,974,645
I recently started learning ruby, coming from a background in python. This is one of my first programs above a few lines in length, and I have made a mistake somewhere in the syntax that I cannot catch, that is causing the program to fail with the error "unexpected end-of-output, expected keyword\_end" Here is the code, many thanks for any help! It is a program that converts numbers from digit form into english. ``` def toEnglish number if number < 0 return 'Please try again, negative numbers are not allowed' end if number == 0 return 'Zero' end if number > 0 numEnglish = '' left = number.to_s.size ones = ['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine'] tens = ['ten', 'twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety'] teens = ['eleven', 'twelve', 'thirteen', 'fourteen', 'fifteen', 'sixteen', 'seventeen', 'eighteen', 'nineteen'] while left.to_i > 0 firstDigit = number.to_s[0].to_i secondDigit = number.to_s[1].to_i sum = (firstDigit.to_i * 10) + secondDigit.to_i if secondDigit != 0 if sum < 20 and sum > 10 numEnglish << teens[sum.to_i - 11] + 'hundred' elsif (sum % 10) == 0 numEnglish << tens[(sum.to_i / 10)].to_s + 'hundred' else numEnglish << tens[(firstDigit.to_i / 10) - 1] + ones[secondDigit - 1] end end numEnglish << ones[firstDigit.to_i - 1] numberRev = number.to_s.reverse number = (numberRev.to_i / 100).to_s.reverse end if left == 0 return numEnglish end end end puts toEnglish(54) puts toEnglish(447) ``` At the moment, the code runs without errors, but I am left with a blinking cursor like the one I would get from the `gets` method.
2016/06/22
[ "https://Stackoverflow.com/questions/37974645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5506705/" ]
One missing `end` after line 35, i.e. at ``` number = (numberRev / 100).reverse # HERE! there should be an `end` if left == 0 return numEnglish ``` In addition, at line 18 and 19 you are asking `number[0]` for the first non-zero digit of `Integer`, while ~~[the class](http://ruby-doc.org/core/Integer.html) does not have a member `#[]`~~ `#[]` in both [Bignum](http://ruby-doc.org/core-2.3.7/Bignum.html#method-i-5B-5D) and [Fixnum](http://ruby-doc.org/core-2.3.7/Fixnum.html#method-i-5B-5D) do not do what you mean. Try something else, for example, `number.to_s[0].to_i`. Note: it is widely accepted to indent with two spaces, not eight.
Some advice: * use two spaces. It's more common and easier to read and spot such errors. * use an editor like Sublime Text or a IDE like RubyMine. ST has several add-ons that facilitate debugging. In this case all I had to do is run a "Beautify-Ruby" on the code and it gave me this output. It looks like you are missing an `end` somewhere. I'm not sure where because there are other errors also, such as line 20 and 34. But, even after fixing these, your code runs forever. The code is surely not Ruby-like, there are far better contructs to be used. For instance, the loop where you can't exit, a `while` is seldom used, rather use an `each` or something similar: ``` def toEnglish number if number < 0 return 'Please try again, negative numbers are not allowed' end if number == 0 return 'Zero' end if number > 0 numEnglish = '' left = number.to_s.size ones = ['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine'] tens = ['ten', 'twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety'] teens = ['eleven', 'twelve', 'thirteen', 'fourteen', 'fifteen', 'sixteen', 'seventeen', 'eighteen', 'nineteen'] while left.to_i > 0 firstDigit = number[0] secondDigit = number[1] sum = (firstDigit.to_i * 10) + secondDigit+to_i if secondDigit != 0 if sum < 20 and sum > 10 numEnglish << teens[sum.to_i - 11] + 'hundred' elsif (sum % 10) == 0 numEnglish << tens[(sum.to_i / 10)].to_s + 'hundred' else numEnglish << tens[(firstDigit.to_i / 10) - 1] + ones[secondDigit - 1] end end numEnglish << ones[firstDigit.to_i - 1] numberRev = number.reverse number = (numberRev / 100).reverse if left == 0 return numEnglish end end end puts toEnglish(54) puts toEnglish(447) ``` I understand this exercise is to learn Ruby because there are already gems that do this conversion for you. See also "[Using Ruby convert numbers to words?](https://stackoverflow.com/questions/19445003/using-ruby-convert-numbers-to-words)".
9,433
70,630,205
I don't know how to get the total price of all the products that the customer bought from my system. I can get the price of each item, but I can't get the total of all the items that have been chosen. I'm using Python. Below is my code: ``` #below is the price of products shopping_cart = {} option = 1 while option!= 0: option = int(input("What type of milk you want to purchased: ")) if option == 1: print("coke") qnty = int(input("Enter the quantity: ")) total = qnty*8.49 elif option == 2: print("dutch lady") qnty = int(input("Enter the quantity: ")) total = qnty*8.50 elif option == 3: print("fanta") total = qnty*3.00 elif option == 4: print("Yoghurt") qnty = int(input("Enter the quantity: ")) total = qnty*16.50 elif option == 5: print("pepsi") qnty = int(input("Enter the quantity: ")) total = qnty*18.50 elif option == 6: print("beer") qnty = int(input("Enter the quantity: ")) total = qnty*20.70 else: ("Not found") def totalcart (pr,qnty): #is my code correct here? totalcart = (pr * qny) return total print("The total price in your cart is ", (total)) break print("Come Again") ```
2022/01/08
[ "https://Stackoverflow.com/questions/70630205", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Just putting `("Not found")` as a statement doesn't print anything. You need to add your item total into `total_cost` immediately after the print statement. There's no point defining a function, since you don't call the function. So, replace everything after that last "else:" with this: ``` else: total = 0 print("Not found") total_cost += total print("The total price in your cart is RM ", total_cost) print("Please Come Again") ```
You can add `total_cost += total` for every choice, So its added to the total cost when somebody choose the item and if he add another it also adds to the total cost. ```py elif option == 6: print("Foaming Milk") qnty = int(input("Enter the quantity: ")) total = qnty*12.70 print("The price is: " + str(total)) total_cost += total ```
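For completeness, the accumulation both answers describe can be sketched without `input()` so the arithmetic is easy to check; the `PRICES` table and the `cart_total` helper are made-up names mirroring the question's options:

```python
# Hypothetical price table keyed by menu option (values from the question).
PRICES = {1: 8.49, 2: 8.50, 3: 3.00, 4: 16.50, 5: 18.50, 6: 20.70}

def cart_total(orders):
    """Sum price * quantity over (option, quantity) pairs."""
    total_cost = 0.0  # must be initialised before the loop starts
    for option, qnty in orders:
        price = PRICES.get(option)
        if price is None:
            print("Not found")  # unknown option contributes nothing
            continue
        total_cost += price * qnty
    return total_cost
```

For example, two cokes plus one beer: `cart_total([(1, 2), (6, 1)])` gives `37.68`.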
9,434
65,797,818
I'm trying to understand regex and I have an error message. I don't understand this message because I don't use a list. If I just use print(number) it doesn't remove the [ ] from the result. The goal is to obtain only the number in the sentence as the result. I use Python 3.6. Below is my code: ``` import re user_sentence = input(">>> : ") while user_sentence != "quit": if re.findall("[0-9]+", user_sentence): number = re.findall("[0-9]+", user_sentence) print(number.group()) user_sentence = input(">>> : ") else: print("no") user_sentence = input(">>> : ") ```
2021/01/19
[ "https://Stackoverflow.com/questions/65797818", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15039238/" ]
`group()` is a method on the match object returned by `re.match`/`re.search`, not on the list returned by a `findall`. If you're looking to print each item in the list returned by a `findall`, you can use `join` to join them on a space, or loop over the list. Note that the list may include tuples though (not in your case, but in general, when the pattern contains groups). Example of joining them on a space: ```py import re user_sentence = input(">>> : ") while user_sentence != "quit": if re.findall("[0-9]+", user_sentence): number = re.findall("[0-9]+", user_sentence) print(" ".join(number)) user_sentence = input(">>> : ") else: print("no") user_sentence = input(">>> : ") ``` Or by looping over and printing: ```py for n in number: print(n) ```
`findall` returns a list by default, and a list does not have any such method/attribute. If you try running `print(number)`, you see a list as the output. But to concatenate the list elements into one number, you can add these lines after you have evaluated `number`: ``` number = re.findall("[0-9]+", user_sentence) #add these lines result = "" # initialise before accumulating for num in number: result = result + str(num) print(int(result)) ```
9,436
29,802,931
I have a csv file of customer ids (`CRM_id`). I need to get their primary keys (an autoincrement int) from the customers table of the database. (I can't be assured of the integrity of the `CRM_id`s so I chose not to make that the primary key). So: ``` customers = [] with open("CRM_ids.csv", 'r', newline='') as csvfile: customerfile = csv.DictReader(csvfile, delimiter = ',', quotechar='"', skipinitialspace=True) #only one "CRM_id" field per row customers = [c for c in customerfile] ``` So far so good? I think this is the most pythonesque way of doing that (but happy to hear otherwise). Now comes the ugly code. It works, but I hate appending to the list because that has to copy and reallocate memory for each loop, right? Is there a better way (pre-allocate + enumerate to keep track of the index comes to mind, but maybe there's an even quicker/better way by being clever with the SQL so as not to do several thousand separate queries...)? ``` cnx = mysql.connector.connect(user='me', password=sys.argv[1], host="localhost", database="mydb") cursor = cnx.cursor() select_customer = ("SELECT id FROM customers WHERE CRM_id = %(CRM_id)s LIMIT 1;") c_ids = [] for row in customers: cursor.execute(select_customer, row) #note fetchone() returns a tuple, but the SELECTed set #only has a single column so we need to get this column with the [0] c_ids.extend(cursor.fetchall()) c_ids = [c[0] for c in c_ids] ``` Edit: Purpose is to get the primary keys in a list so I can use these to allocate some other data from other CSV files in linked tables (the customer id primary key is a foreign key to these other tables, and the allocation algorithm changes, so it's best to have the flexibility to do the allocation in python rather than hard coding SQL queries). I know this sounds a little backwards, but the "client" only works with spreadsheets rather than an ERP/PLM, so I have to build the "relations" for this small app myself.
2015/04/22
[ "https://Stackoverflow.com/questions/29802931", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3790954/" ]
What about changing your query to get what you want? ``` crm_ids = ",".join("'%s'" % c['CRM_id'] for c in customers) select_customer = "SELECT DISTINCT id FROM customers WHERE CRM_id IN (%s);" % crm_ids ``` MySQL should be fine with even a multi-megabyte query, according to [the manual](http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html); if it gets to be a *really* long list, you can always break it up - two or three queries is guaranteed much faster than a few thousand.
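A safer variant of the string-built `IN (...)` query above is to generate one placeholder per id and let the driver do the quoting. MySQL Connector uses `%s` placeholders; the sketch below uses `sqlite3` (with `?`) purely so it is self-contained, and the table/column names mirror the question:

```python
import sqlite3

def ids_for_crm(conn, crm_ids):
    # One placeholder per value: the driver handles quoting/escaping,
    # so the query is correct even if a CRM_id contains quotes.
    placeholders = ",".join("?" * len(crm_ids))
    sql = "SELECT DISTINCT id, CRM_id FROM customers WHERE CRM_id IN (%s)" % placeholders
    return {crm: pk for pk, crm in conn.execute(sql, crm_ids)}

# Tiny in-memory fixture to exercise the helper.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, CRM_id TEXT)")
conn.executemany("INSERT INTO customers (CRM_id) VALUES (?)",
                 [("a",), ("b",), ("c",)])
```

One round trip returns the whole CRM_id-to-primary-key mapping, which also fits the questioner's "allocate later in Python" plan.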
how about storing your csv in a dict instead of a list: ``` customers = [c for c in customerfile] ``` becomes: ``` customers = {c['CRM_id']:c for c in customerfile} ``` then select the entire xref: ``` result = cursor.execute('select id, CRM_id from customers') ``` and add the new rowid as a new entry in the dict: ``` for row in result: customers[row[1]]['newid']=row[0] ```
9,438
55,351,039
There is an excel file logging a set of data. Its columns are as below, where each column is separated by a comma. ``` SampleData year,date,month,location,time,count 2019,20,Jan,Japan,22:33,1 2019,31,Jan,Japan,19:21,1 2019,1,Jan,Japan,8:00,1 2019,4,Jan,Japan,4:28,2 2019,13,Feb,Japan,6:19,1 ``` From this data, I would like to create a Python pandas DataFrame, which looks like below. ``` DataFrame u_datetime,location,count 1547991180,Japan,1 1548930060,Japan,1 1546297200,Japan,1 1546543680,Japan,2 1550006340,Japan,1 ``` One of the DataFrame methods can be useful for this operation, but it does not accept a one-digit date. ``` pandas.to_datetime( DataFrame["year"].astype(str) + DataFrame["month"].astype(str) + DataFrame["date"].astype(str) + DataFrame["time"].astype(str), format="%Y%b%d%-H%M" ) ``` Could anybody give me a hand? Thank you.
2019/03/26
[ "https://Stackoverflow.com/questions/55351039", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11258673/" ]
try this ``` from datetime import datetime data['datetime'] = data[['year','date','month','time']].apply(lambda x: datetime.strptime(str(x['year'])+'-'+str(x['date'])+'-'+str(x['month'])+' '+str(x['time']), "%Y-%d-%b %H:%M").timestamp(), axis=1) data[['datetime','location','count']] ``` **Output**

```
       datetime location  count
0  1548003780.0    Japan      1
1  1548942660.0    Japan      1
2  1546309800.0    Japan      1
3  1546556280.0    Japan      2
4  1550018940.0    Japan      1
```
You are close, need `%Y%b%d%H:%M` format and then convert to unix time by cast to `int64` with integer division by `10**9`: ``` s = (DataFrame["year"].astype(str)+ DataFrame["month"].astype(str)+ DataFrame["date"].astype(str)+ DataFrame["time"].astype(str)) DataFrame['u_datetime'] = pd.to_datetime(s, format="%Y%b%d%H:%M").astype(np.int64) // 10**9 DataFrame = DataFrame[['u_datetime','location','count']] print (DataFrame) u_datetime location count 0 1548023580 Japan 1 1 1548962460 Japan 1 2 1546329600 Japan 1 3 1546576080 Japan 2 4 1550038740 Japan 1 ```
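The format string both answers rely on can be verified with the standard library alone. Assuming UTC for the epoch conversion, this reproduces the first row of the accepted answer's output (a C/English locale is assumed so that `%b` parses "Jan"):

```python
from datetime import datetime, timezone

# First sample row: year=2019, date=20, month=Jan, time=22:33
s = "2019" + "Jan" + "20" + "22:33"
dt = datetime.strptime(s, "%Y%b%d%H:%M").replace(tzinfo=timezone.utc)
unix = int(dt.timestamp())
print(unix)  # 1548023580, the first u_datetime in the accepted answer
```

Note `%H` (not `%-H`) works here because `strptime` happily parses one- or two-digit hours; the question's original `%-H` is a formatting directive, not a parsing one.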
9,439
5,195,295
I have some lines of code that read a csv file in a certain format. What I'd like now (and couldn't figure out a solution for) is for the file name to stop being static and become actual user input when calling the Python program. Now I have a static file name in the code, e.g.: ``` reader = csv.reader(open("file.csv", "rb")) ``` I'd like this to become information provided by the user. Is this done using `raw_input`, or is there a way to do it by having the user write something along the lines of: ``` python testfile.py --input:file.csv ``` Appreciate your help in advance!
2011/03/04
[ "https://Stackoverflow.com/questions/5195295", "https://Stackoverflow.com", "https://Stackoverflow.com/users/644926/" ]
The simplest way is to write your script like this: ``` import sys reader = csv.reader(open(sys.argv[1], "rb")) ``` and then run it like this: ``` python testfile.py file.csv ``` You should put in some error checking, eg: ``` if len(sys.argv) < 2: print "Usage..." sys.exit(1) ``` For more power, use the built-in `optparse` / `argparse` libraries.
You can use [optparse](http://docs.python.org/library/optparse.html) ([argparse](http://docs.python.org/library/argparse.html#module-argparse) after 2.7) or [getopt](http://docs.python.org/library/getopt.html) to parse command line parameters. Only use getopt if you're already familiar with argument parsing in C. Otherwise argparse is the easiest to use.
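An `argparse` version of the same script might look like this (the argument name is illustrative; `parse_args` is given an explicit list here only to make the sketch self-contained, in a real script it reads `sys.argv[1:]`):

```python
import argparse

def parse_args(argv=None):
    # argv=None makes argparse fall back to sys.argv[1:] when run for real.
    parser = argparse.ArgumentParser(description="Read a CSV file.")
    parser.add_argument("input", help="path to the CSV file")
    return parser.parse_args(argv)

args = parse_args(["file.csv"])  # e.g. python testfile.py file.csv
```

A bonus over raw `sys.argv`: if the argument is missing, argparse prints a usage message and exits, so no manual `len(sys.argv)` check is needed.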
9,441
11,724,779
I am trying to install a local version of ScrumDo for testing. Then I come to the point in my installation where I have to run: > > source bin/activate > > pip install -r requirements.txt > > > I get the error: > > Downloading/unpacking django-storages > > > > > > > Cannot fetch index base URL http://b.pypi.python.org/simple/ > > > > Could not find any downloads that satisfy the requirement django-storages > > > > > > > > > No distributions at all found for django-storages > > Storing complete log in ./pip-log.txt > > > > I googled a bit and searched here and on Stack Overflow and found that I should add a --proxy= option and/or unset my http_proxy environment variable. Yet my install does not have a proxy and the environment var is not set. I tried running > > pip install -r requirements.txt --proxy= > > > Yet the error remains the same. I also created a /root/.pip/pip.conf containing: > > [global] > > index-url = http://b.pypi.python.org/simple > > > and checked if the server actually was online and if the package django-storages existed; both were true. A last thing I tried, since the [install doc of ScrumDo](https://github.com/ScrumDoLLC/ScrumDo/wiki/Set-up) says so: > > pip install -U Django==1.1.4 > > > yet again no success... the error always remains the same, anyone got any ideas?
my pip-error.log shows the following (the URL works in firefox on a different machine in the same network that also uses no proxy, and I can ping it from the same machine): ``` /var/www/ScrumDo/pinax-env/bin/pip run on Mon Jul 30 10:24:08 2012 proxy): Downloading/unpacking Django==1.1.4 proxy): Getting page http://b.pypi.python.org/simple/Django proxy): Could not fetch URL http://b.pypi.python.org/simple/Django: HTTP Error 404: Not Found proxy): Will skip URL http://b.pypi.python.org/simple/Django when looking for download links for Django==1.1.4 proxy): Getting page http://b.pypi.python.org/simple/ proxy): Could not fetch URL http://b.pypi.python.org/simple/: HTTP Error 404: Not Found proxy): Will skip URL http://b.pypi.python.org/simple/ when looking for download links for Django==1.1.4 proxy): Cannot fetch index base URL http://b.pypi.python.org/simple/ proxy): URLs to search for versions for Django==1.1.4: proxy): * http://b.pypi.python.org/simple/Django/1.1.4 proxy): * http://b.pypi.python.org/simple/Django/ proxy): Getting page http://b.pypi.python.org/simple/Django/1.1.4 proxy): Getting page http://b.pypi.python.org/simple/Django/ proxy): Could not fetch URL http://b.pypi.python.org/simple/Django/1.1.4: HTTP Error 404: Not Found proxy): Will skip URL http://b.pypi.python.org/simple/Django/1.1.4 when looking for download links for Django==1.1.4 proxy): Could not fetch URL http://b.pypi.python.org/simple/Django/: HTTP Error 404: Not Found proxy): Will skip URL http://b.pypi.python.org/simple/Django/ when looking for download links for Django==1.1.4 proxy): Could not find any downloads that satisfy the requirement Django==1.1.4 No distributions at all found for Django==1.1.4 proxy): Exception information: proxy): Traceback (most recent call last): File "/var/www/ScrumDo/pinax-env/lib/python2.6/site-packages/pip-0.6.1-py2.6.egg/pip.py", line 482, in main proxy): self.run(options, args) proxy): File 
"/var/www/ScrumDo/pinax-env/lib/python2.6/site-packages/pip-0.6.1-py2.6.egg/pip.py", line 675, in run proxy): requirement_set.install_files(finder, force_root_egg_info=self.bundle) proxy): File "/var/www/ScrumDo/pinax-env/lib/python2.6/site-packages/pip-0.6.1-py2.6.egg/pip.py", line 2422, in install_files proxy): url = finder.find_requirement(req_to_install, upgrade=self.upgrade) proxy): proxy): File "/var/www/ScrumDo/pinax-env/lib/python2.6/site-packages/pip-0.6.1-py2.6.egg/pip.py", line 1485, in find_requirement proxy): proxy): raise DistributionNotFound('No distributions at all found for %s' % req) proxy): proxy): DistributionNotFound: No distributions at all found for Django==1.1.4 ```
2012/07/30
[ "https://Stackoverflow.com/questions/11724779", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1411141/" ]
You can try installing django-storages on its own.. try this? ``` sudo pip install https://bitbucket.org/david/django-storages/get/def732408163.zip ```
Try giving the proxy settings in the command as such ``` pip --proxy=http://user:password@Proxy:PortNumber install -r requirements.txt ``` or try ``` export http_proxy=http://user:password@Proxy:PortNumber ```
9,449
53,162,325
I use google composer. I have a dag that uses the `panda.read_csv()` function to read a `.csv.gz` file. The DAG keeps trying without showing any errors. Here is the airflow log: ``` *** Reading remote log from gs://us-central1-data-airflo-dxxxxx-bucket/logs/youtubetv_gcpbucket_to_bq_daily_v2_csv/file_transfer_gcp_to_bq/2018-11-04T20:00:00/1.log. [2018-11-05 21:03:58,123] {cli.py:374} INFO - Running on host airflow-worker-77846bb966-vgrbz [2018-11-05 21:03:58,239] {models.py:1196} INFO - Dependencies all met for <TaskInstance: youtubetv_gcpbucket_to_bq_daily_v2_csv.file_transfer_gcp_to_bq 2018-11-04 20:00:00 [queued]> [2018-11-05 21:03:58,297] {models.py:1196} INFO - Dependencies all met for <TaskInstance: youtubetv_gcpbucket_to_bq_daily_v2_csv.file_transfer_gcp_to_bq 2018-11-04 20:00:00 [queued]> [2018-11-05 21:03:58,298] {models.py:1406} INFO - ---------------------------------------------------------------------- --------- Starting attempt 1 of ---------------------------------------------------------------------- --------- [2018-11-05 21:03:58,337] {models.py:1427} INFO - Executing <Task(BranchPythonOperator): file_transfer_gcp_to_bq> on 2018-11-04 20:00:00 [2018-11-05 21:03:58,338] {base_task_runner.py:115} INFO - Running: ['bash', '-c', u'airflow run youtubetv_gcpbucket_to_bq_daily_v2_csv file_transfer_gcp_to_bq 2018-11-04T20:00:00 --job_id 15096 --raw -sd DAGS_FOLDER/dags/testdags/youtubetv_gcp_to_bq_v2.py'] ``` python code in DAG: ``` from datetime import datetime,timedelta from airflow import DAG from airflow import models import os import io,logging, sys import pandas as pd from io import BytesIO, StringIO from airflow.operators.dummy_operator import DummyOperator from airflow.operators.subdag_operator import SubDagOperator from airflow.operators.python_operator import BranchPythonOperator from airflow.operators.bash_operator import BashOperator #GCP from google.cloud import storage import google.cloud from google.cloud import bigquery from google.oauth2 
import service_account from airflow.operators.slack_operator import SlackAPIPostOperator from airflow.models import Connection from airflow.utils.db import provide_session from airflow.utils.trigger_rule import TriggerRule def readCSV(checked_date,file_name, **kwargs): subDir=checked_date.replace('-','/') fileobj = get_byte_fileobj(BQ_PROJECT_NAME, YOUTUBETV_BUCKET, subDir+"/"+file_name) df_chunks = pd.read_csv(fileobj, compression='gzip',memory_map=True, chunksize=1000000) # return TextFileReader print ("done reaCSV") return df_chunks ``` DAG: ``` file_transfer_gcp_to_bq = BranchPythonOperator( task_id='file_transfer_gcp_to_bq', provide_context=True, python_callable=readCSV, op_kwargs={'checked_date': '2018-11-03', 'file_name':'daily_events_xxxxx_partner_report.csv.gz'} ) ``` The DAG is successfully run on my local airflow version. ``` def readCSV(checked_date,file_name, **kwargs): subDir=checked_date.replace('-','/') fileobj = get_byte_fileobj(BQ_PROJECT_NAME, YOUTUBETV_BUCKET, subDir+"/"+file_name) df = pd.read_csv(fileobj, compression='gzip',memory_map=True) return df ``` tested get\_byte\_fileobj and it works as a stand alone function.
2018/11/05
[ "https://Stackoverflow.com/questions/53162325", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2183883/" ]
Based on this discussion in the [airflow google composer group](https://groups.google.com/forum/#!topic/cloud-composer-discuss/alnKzMjEj8Q), it is a known issue. One of the reasons can be overloading the composer's resources (in my case, memory).
I had a [similar issue](https://groups.google.com/forum/#!topic/cloud-composer-discuss/lKsy7X1fy00) recently. In my case it was because the Kubernetes worker was overloaded. You can watch the worker performance on the Kubernetes dashboard to see whether your case is a cluster-overloading issue. If so, you can try setting the Airflow configuration value `celeryd_concurrency` lower to reduce the parallelism in a worker and see whether the cluster load goes down. [![enter image description here](https://i.stack.imgur.com/KGoOk.png)](https://i.stack.imgur.com/KGoOk.png)
9,454
57,602,726
I'm wondering why we can have Tensorflow [run in a multi-thread fashion](https://www.tensorflow.org/guide/performance/overview#optimizing_for_cpu) while python can only execute one thread at a time due to [GIL](https://wiki.python.org/moin/GlobalInterpreterLock)?
2019/08/22
[ "https://Stackoverflow.com/questions/57602726", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7850499/" ]
The GIL's restriction is slightly more subtle: only one thread at a time can be *executing Python bytecode*. Extensions using Python's C API (like `tensorflow`) can release the GIL if they don't need it. I/O operations like using files or sockets also tend to release the GIL because they generally involve lots of waiting. So threads executing extensions or waiting for I/O can run while another thread is executing Python bytecode.
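This is easy to observe with blocking calls that release the GIL: four threads that each sleep for half a second finish in roughly the time of one sleep, not four. A small sketch:

```python
import threading
import time

def wait():
    time.sleep(0.5)  # blocking C-level call; the GIL is released while waiting

start = time.perf_counter()
threads = [threading.Thread(target=wait) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
print(round(elapsed, 1))
```

With a CPU-bound pure-Python `wait` (say, a tight loop), the elapsed time would instead approach the serial total, which is exactly the GIL restriction the answer describes.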
Most of the `tensorflow` core is written in `C++`, and the Python APIs are just wrappers around it. While the `C++` code is running, the regular Python restrictions do not apply.
9,455
4,631,377
I am very new to PyDev and Python, though I have used Eclipse for Java plenty. I am trying to work through some of the Dive Into Python examples and this feels like an extremely trivial problem that's just becoming exceedingly annoying. I am using Ubuntu Linux 10.04. I want to be able to use the file odbchelper.py, which is located in the directory `/Desktop/Python_Tutorials/diveintopython/py` Here is my example.py file that I'm working on in my PyDev/Eclipse project: ``` import sys sys.path.append("~/Desktop/Python_Tutorials/diveintopython/py") ``` This works fine, but then I want the next line of my code to be: ``` import odbchelper ``` and this causes an unresolved import error every time. I have added `__init__.py` files to just about every directory possible and it doesn't help anything. I've tried adding `__init__.py` files one at a time to the various levels of directories between the project location and the odbchelper.py file, and I've also tried adding the `__init__.py` files to all of the directories in between simultaneously. Neither works. All I want to do is have a project somewhere in some other directory, say `/Desktop/MyStuff/Project`, in which I have example.py ... and then from example.py I want to import odbchelper.py from `/Desktop/Python_Tutorials/diveintopython/py/` Every message board response I can find just says to use the `sys.path.append()` function to add this directory to my path, and then import it ... but that is precisely what I am doing in my code and it's not working. I have also tried the `Ctrl`-`1` trick to suppress the error message, but the program is still not functioning correctly. I get an error, `ImportError: No module named odbchelper`. So it's clearly not getting the path added, or there is some problem that all of my many permutations of adding `__init__.py` files has missed. It's very frustrating that something this simple... calling things from some file that exists somewhere else on my machine...
requires this much effort.
2011/01/07
[ "https://Stackoverflow.com/questions/4631377", "https://Stackoverflow.com", "https://Stackoverflow.com/users/567620/" ]
I am using Eclipse Kepler 4.3 and PyDev 3.9.2 on my Ubuntu 14.04, and I encountered the same problem. I tried and spent hours with most of the above options, but in vain. Then I tried the following, which worked great: * Select **Project** -> Right-click -> **PyDev** -> **Remove PyDev Project Config** * File -> **Restart** And I was using Python 2.7 as the interpreter, although I don't think that matters.
The following, in my opinion, will solve the problem: 1. Add `__init__.py` to your "~/Desktop/Python_Tutorials/diveintopython/py" folder 2. Go to Window --> Preferences --> PyDev --> Interpreters --> Python Interpreter and remove your Python Interpreter setting (the reason being that PyDev is unable to auto-refresh updates made to the system PythonPath) 3. Add the Interpreter back in with the same details as before (this will refresh your Python Interpreter setting with the updates made to your PythonPath) 4. Finally, since your "~/Desktop/Python_Tutorials/diveintopython/py" folder is not a standard PythonPath, you will need to add it in. There are two ways to do it: a. As per what David German suggested; however, this is only applicable to the particular project you are in. b. Add "~/Desktop/Python_Tutorials/diveintopython/py" as a new PythonPath under Window --> Preferences --> PyDev --> Interpreters --> Python Interpreter --> Libraries subtab --> NewFolder Hope it helps.
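One more pitfall with the `sys.path.append` route in the question, worth checking before blaming PyDev: Python does not expand `~` in path strings, so appending `"~/Desktop/..."` adds a directory that never matches anything on disk. A small sketch of the fix (the path is the question's, purely illustrative):

```python
import os
import sys

# "~" is only a shell convention; expand it explicitly before appending.
path = os.path.expanduser("~/Desktop/Python_Tutorials/diveintopython/py")
if path not in sys.path:
    sys.path.append(path)
```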
9,456
62,504,019
I am calling the Giphy API using another [wrapper API](https://github.com/Giphy/giphy-python-client) which returns a **list of dictionaries**. I am having hard times to serialize the data to return it to AJAX. The data is returned as `InlineResponse200` with three [properties](https://github.com/Giphy/giphy-python-client/blob/master/docs/InlineResponse200.md). ([docu](https://github.com/Giphy/giphy-python-client/blob/master/docs/Gif.md)) The problem is that my view is not able to return the JSON properly: ``` # Traceback [2020-06-23 14:58:54,086] log: ERROR - Internal Server Error: /get_gifs/ Traceback (most recent call last): File "C:\Users\Jonas\AppData\Local\Programs\Python\Python38-32\lib\site-packages\django\core\handlers\exception.py", line 34, in inner response = get_response(request) File "C:\Users\Jonas\AppData\Local\Programs\Python\Python38-32\lib\site-packages\django\core\handlers\base.py", line 115, in _get_response response = self.process_exception_by_middleware(e, request) File "C:\Users\Jonas\AppData\Local\Programs\Python\Python38-32\lib\site-packages\django\core\handlers\base.py", line 113, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "C:\Users\Jonas\Desktop\finsphere\finsphere\blog\views.py", line 234, in get_gifs return JsonResponse(api_response.data[0]) File "C:\Users\Jonas\AppData\Local\Programs\Python\Python38-32\lib\site-packages\django\http\response.py", line 554, in __init__ raise TypeError( TypeError: In order to allow non-dict objects to be serialized set the safe parameter to False. [23/Jun/2020 14:58:54] "POST /get_gifs/ HTTP/1.1" 500 17874 ``` If I add `safe=False` it returns `TypeError: Object of type Gif is not JSON serializable` I don't get this since `api_response.data[0]` is a cristal clear dictionary. **Desired outcome:** Get the Giphy object logged in the success function of Ajax. 
**AJAX** ``` (function($) { $('#btnSearch').on('click', function(e) { var query = $('#search').val(); console.log(query); e.preventDefault(); $.ajax({ type: 'post', async: true, url: '/get_gifs/', data: { 'query': query, 'csrfmiddlewaretoken': window.CSRF_TOKEN // from blog.html }, success: function(response) { }, error: function(xhr, status, error) { // shit happens friends! } }); }); }(jQuery)); ``` (Inserted my original -free- API key for reproduction) **Views.py** ``` def get_gifs(request): # create an instance of the API class api_instance = giphy_client.DefaultApi() # API Key api_key = 'NGSKWrBqtIq1rFU1Ka11D879Y1u4Igia' # Search term q = request.POST.get('query') print(q) # Query parameters limit = 2 offset = 0 rating = 'g' lang = 'en' fmt = 'json' try: # Search Endpoint api_response = api_instance.gifs_search_get(api_key, q, limit=limit, offset=offset, rating=rating, lang=lang, fmt=fmt) pprint(api_response) except ApiException as e: print("Exception when calling DefaultApi->gifs_search_get: %s\n" % e) return JsonResponse(api_response.data[0]) ``` **API fetched object (pprint api\_response)** ``` {'data': [{'bitly_gif_url': 'https://gph.is/g/EJWjdvN', 'bitly_url': 'https://gph.is/g/EJWjdvN', 'content_url': '', 'create_datetime': None, 'embed_url': 'https://giphy.com/embed/J0JGg6doLfmV0yZmIB', 'featured_tags': None, 'id': 'J0JGg6doLfmV0yZmIB', 'images': {'downsized': {'height': '250', 'size': '350582', 'url': 'https://media3.giphy.com/media/J0JGg6doLfmV0yZmIB/giphy.gif?cid=ecefd82565bc1664c2b17e3e4b60d88c736d0c6b5a39d682&rid=giphy.gif', 'width': '478'}, 'downsized_large': {'height': '250', 'size': '350582', 'url': 'https://media3.giphy.com/media/J0JGg6doLfmV0yZmIB/giphy.gif?cid=ecefd82565bc1664c2b17e3e4b60d88c736d0c6b5a39d682&rid=giphy.gif', 'width': '478'}, 'preview_gif': {'height': '134', 'size': '49623', 'url': 'https://media3.giphy.com/media/J0JGg6doLfmV0yZmIB/giphy-preview.gif?cid=ecefd82565bc1664c2b17e3e4b60d88c736d0c6b5a39d682&rid=giphy-preview.gif', 
'width': '256'}}, 'import_datetime': '2020-06-15 10:01:39', 'is_anonymous': None, 'is_community': None, 'is_featured': None, 'is_hidden': None, 'is_indexable': None, 'is_realtime': None, 'is_removed': None, 'is_sticker': False, 'rating': 'g', 'slug': 'MITEF-mitefarab-asc2020-J0JGg6doLfmV0yZmIB', 'source': 'www.mitefarab.org', 'source_post_url': 'www.mitefarab.org', 'source_tld': '', 'tags': None, 'trending_datetime': '0000-00-00 00:00:00', 'type': 'gif', 'update_datetime': None, 'url': 'https://giphy.com/gifs/MITEF-mitefarab-asc2020-J0JGg6doLfmV0yZmIB', 'user': {'avatar_url': 'https://media2.giphy.com/avatars/MITEF/8FTlysEjtXzx.jpg', 'banner_url': '', 'display_name': 'MITEF Pan Arab', 'profile_url': 'https://giphy.com/MITEF/', 'twitter': None, 'username': 'MITEF'}, 'username': 'MITEF'}, {'bitly_gif_url': 'https://gph.is/g/ZdxQQpP', 'bitly_url': 'https://gph.is/g/ZdxQQpP', 'content_url': '', 'create_datetime': None, 'embed_url': 'https://giphy.com/embed/hTJF0O4vDkJsUi1h8Q', 'featured_tags': None, 'id': 'hTJF0O4vDkJsUi1h8Q', 'images': {'downsized': {'height': '480', 'size': '310971', 'url': 'https://media3.giphy.com/media/hTJF0O4vDkJsUi1h8Q/giphy.gif?cid=ecefd82565bc1664c2b17e3e4b60d88c736d0c6b5a39d682&rid=giphy.gif', 'width': '480'}, 'preview': {'height': '480', 'mp4': 'https://media3.giphy.com/media/hTJF0O4vDkJsUi1h8Q/giphy-preview.mp4?cid=ecefd82565bc1664c2b17e3e4b60d88c736d0c6b5a39d682&rid=giphy-preview.mp4', 'mp4_size': '15536', 'width': '480'}, 'preview_gif': {'height': '480', 'size': '22387', 'url': 'https://media3.giphy.com/media/hTJF0O4vDkJsUi1h8Q/giphy-preview.gif?cid=ecefd82565bc1664c2b17e3e4b60d88c736d0c6b5a39d682&rid=giphy-preview.gif', 'width': '480'}}, 'import_datetime': '2019-07-19 22:27:40', 'is_anonymous': None, 'is_community': None, 'is_featured': None, 'is_hidden': None, 'is_indexable': None, 'is_realtime': None, 'is_removed': None, 'is_sticker': False, 'rating': 'g', 'slug': 
'RecargaPay-cashback-recargapay-paguetudopelocelular-hTJF0O4vDkJsUi1h8Q', 'source': 'www.recargapay.com.br', 'source_post_url': 'www.recargapay.com.br', 'source_tld': '', 'tags': None, 'trending_datetime': '0000-00-00 00:00:00', 'type': 'gif', 'update_datetime': None, 'url': 'https://giphy.com/gifs/RecargaPay-cashback-recargapay-paguetudopelocelular-hTJF0O4vDkJsUi1h8Q', 'user': {'avatar_url': 'https://media0.giphy.com/avatars/RecargaPay/msKTiPaVkvqd.png', 'banner_url': 'https://media0.giphy.com/headers/RecargaPay/kg023vdaAaWA.gif', 'display_name': 'RecargaPay', 'profile_url': 'https://giphy.com/RecargaPay/', 'twitter': None, 'username': 'RecargaPay'}, 'username': 'RecargaPay'}], 'meta': {'msg': 'OK', 'response_id': '65bc1664c2b17e3e4b60d88c736d0c6b5a39d682', 'status': 200}, 'pagination': {'count': 2, 'offset': 0, 'total_count': 10}} ```
2020/06/21
[ "https://Stackoverflow.com/questions/62504019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12001122/" ]
```
render(request, template_name, context=None, content_type=None, status=None, using=None)
```

> `render()` combines a given template with a given context dictionary and returns an `HttpResponse` object with that rendered text.

You can use either Django's built-in `JsonResponse` class or the Django REST framework `Response` class to return JSON responses.

```
from django.http import JsonResponse
return JsonResponse(data=api_response.data)

from rest_framework.response import Response
return Response(data=api_response.data)
```

I tried it in the IPython shell and it works just fine:

```
In [15]: response = Response(api_response.data[0])

In [16]: response
Out[16]: <Response status_code=200, "text/html; charset=utf-8">
```

`response.data` gives me the serialized response.
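Since `api_response.data` actually holds `Gif` model objects rather than plain dicts, another option is to convert them before handing them to the response. A minimal sketch of that idea; `FakeGif` below is a stand-in for the real model, and the assumption (typical of swagger-generated clients such as `giphy_client`) is that model objects expose a `to_dict()` method:

```python
import json

class FakeGif:
    # stand-in for giphy_client's Gif model; assumed to expose to_dict()
    def __init__(self, url):
        self.url = url

    def to_dict(self):
        return {"url": self.url}

def serialize_gifs(gifs):
    # json.dumps falls back to the model's own to_dict() for unknown objects
    return json.dumps(gifs, default=lambda o: o.to_dict())

payload = serialize_gifs([FakeGif("https://giphy.com/a"), FakeGif("https://giphy.com/b")])
```

In a Django view the resulting string could then be returned with `HttpResponse(payload, content_type="application/json")`.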
Python has a built-in module for converting dicts to JSON.

```
import json
from django.http import HttpResponse

data = api_response.data
return HttpResponse(json.dumps(data), content_type='application/json')
```

Note that `render()` expects a template name as its second argument, so passing a JSON string to it will not work; use `HttpResponse` (or `JsonResponse`) in your return statement and it should return JSON.
9,466
16,518,142
I'm trying out threads in python. I want a spinning cursor to display while another method runs (for 5-10 mins). I've written some code, but am wondering: is this how you would do it? I don't like to use globals, so I assume there is a better way?

```
c = True

def b():
    for j in itertools.cycle('/-\|'):
        if (c == True):
            sys.stdout.write(j)
            sys.stdout.flush()
            time.sleep(0.1)
            sys.stdout.write('\b')
        else:
            return

def a():
    global c
    #code does stuff here for 5-10 minutes
    #simulate with sleep
    time.sleep(2)
    c = False

Thread(target = a).start()
Thread(target = b).start()
```

EDIT: Another issue now is that when the processing ends, the last element of the spinning cursor is still on screen, so something like `\` is printed.
2013/05/13
[ "https://Stackoverflow.com/questions/16518142", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1803007/" ]
You could use events: <http://docs.python.org/2/library/threading.html>

I tested this and it works. It also keeps everything in sync. You should avoid changing/reading the same variables in different threads without synchronizing them.

```
#!/usr/bin/python

from threading import Thread
from threading import Event

import time
import itertools
import sys

def b(event):
    for j in itertools.cycle('/-\|'):
        if not event.is_set():
            sys.stdout.write(j)
            sys.stdout.flush()
            time.sleep(0.1)
            sys.stdout.write('\b')
        else:
            return

def a(event):
    #code does stuff here for 5-10 minutes
    #simulate with sleep
    time.sleep(2)
    event.set()

def main():
    c = Event()
    Thread(target = a, kwargs = {'event': c}).start()
    Thread(target = b, kwargs = {'event': c}).start()

if __name__ == "__main__":
    main()
```

Related to 'kwargs', from the Python docs (URL in the beginning of the post):

```
class threading.Thread(group=None, target=None, name=None, args=(), kwargs={})
...
kwargs is a dictionary of keyword arguments for the target invocation. Defaults to {}.
```
This is not properly synchronized, but I will not try to explain it all to you right now, because it's a whole lot of knowledge. Try to read this: <http://effbot.org/zone/thread-synchronization.htm> In your case, though, the lack of synchronization is not that bad. The only thing that could happen is that the spinning bar spins a few ms longer than the background task actually needs.
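Regarding the EDIT in the question (the leftover `\` after the task finishes): whichever signalling mechanism is used, the spinner thread can erase its last character before returning. A small sketch using `threading.Event`; the timings here are illustrative stand-ins for the real 5-10 minute task:

```python
import itertools
import sys
import threading
import time

def spinner(stop):
    for ch in itertools.cycle('/-\\|'):
        if stop.is_set():
            sys.stdout.write('\b \b')  # erase the leftover spinner character
            sys.stdout.flush()
            return
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(0.05)
        sys.stdout.write('\b')

stop = threading.Event()
t = threading.Thread(target=spinner, args=(stop,))
t.start()
time.sleep(0.3)   # stand-in for the long-running task
stop.set()
t.join()
```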
9,469
3,076,798
I'm trying to start a text editor (nano) from inside Python, have the user enter text, and then capture the text once they writeout (Control-O). I haven't worked with the `subprocess` module before, nor pipes, so I don't know what to try next. So far I have this code: ``` a = subprocess.Popen('nano', stdout=subprocess.PIPE, shell=True) ``` Where `a` should capture the output. This code, however, doesn't bring up nano, and instead makes the python terminal behave strangely. Up and down keys (history) stop working and the backspace key becomes dysfunctional. Can someone point me in the right direction to solve this issue? I realize that I may need to read up on pipes in Python, but the only info I can find is the `pipes` module and it doesn't help much.
2010/06/19
[ "https://Stackoverflow.com/questions/3076798", "https://Stackoverflow.com", "https://Stackoverflow.com/users/104297/" ]
Control-O in Nano writes to **the file being edited**, i.e., *not* to standard output -- so, forego the attempt to capture stdout and just read the file once the user writes it out and exits Nano. E.g., on my Mac:

```
>>> import tempfile
>>> f = tempfile.NamedTemporaryFile(mode='w+t', delete=False)
>>> n = f.name
>>> f.close()
>>> import subprocess
>>> subprocess.call(['nano', n])
```

Here, I write "Hello world!" then hit control-O Return control-X , and:

```
0
>>> with open(n) as f: print f.read()
...
Hello world!

>>>
```
I'm not sure you can capture what the user has entered into `nano`. After all, that's nano's job. What you can (and I think should) do to get user input from an editor is to spawn it off with a [temporary file](http://docs.python.org/library/tempfile.html). Then, when the user has entered what he wants, he saves and quits. Your program reads the content from the file and then deletes it. Just spawn the editor using `os.system`. Your terminal is behaving funny because nano is a full-screen program and will use terminal escape sequences (probably via curses) to manipulate the screen and cursor. If you spawn it unattached to a terminal, it will misbehave. Also, you should consider opening `$EDITOR` if it's defined, rather than nano. That's what people would expect.
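The temporary-file approach sketched above can be put together like this. A sketch under stated assumptions: the function name `edit_text` and the fallback to `nano` are illustrative choices, not a standard API:

```python
import os
import subprocess
import tempfile

def edit_text(initial=""):
    """Open $EDITOR (falling back to nano) on a temp file, return its contents."""
    fd, path = tempfile.mkstemp(suffix=".txt")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(initial)
        editor = os.environ.get("EDITOR", "nano")
        subprocess.call([editor, path])  # blocks until the user quits the editor
        with open(path) as f:
            return f.read()
    finally:
        os.remove(path)
```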
9,471
61,109,903
I'm new to python and programing and I'm trying to make a code to display an image with some data from a `.fits` file. I'm first trying to make this example I found from this site: <https://docs.astropy.org/en/stable/generated/examples/io/plot_fits-image.html#sphx-glr-download-generated-examples-io-plot-fits-image-py>. When I run it, it shows everything it should, except the figure, which is the most important part. How do I make the figure show up? The code is the following: ```py import matplotlib.pyplot as plt from astropy.visualization import astropy_mpl_style plt.style.use(astropy_mpl_style) from astropy.utils.data import get_pkg_data_filename from astropy.io import fits image_file = get_pkg_data_filename('tutorials/FITS-images/HorseHead.fits') fits.info(image_file) image_data = fits.getdata(image_file, ext=0) print(image_data.shape) plt.figure() plt.imshow(image_data, cmap='gray') plt.colorbar() ```
2020/04/08
[ "https://Stackoverflow.com/questions/61109903", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13263274/" ]
Do you like it better that way? ``` if ( !(profile.name.hasError || profile.crn.hasError || profile.employeesNbr.hasError || profile.phoneNumber.hasError || profile.userRole.hasError) && profile.personCheck.hasError ) { showModal.modal.error = true; } ``` or if you prefer something more fancy: ``` if ( (Object.entries(state.profile) .filter(prop => prop[1].hasError).length === 1) && // there is exactly one error state.profile.personCheck.hasError // personCheck has error ) ```
It would be easy to simplify it into a loop if you were doing the same check for all properties in the profile. Doing a different check for `personCheck` makes it hard to reduce it like that. You can use an array for all the properties that you want the same check on. ``` if (profile.personCheck.hasError && ["name", "crn", "employeesNbr", "phoneNumber", "userRole"].every(prop => !profile[prop].hasError)) { showModal.modal.error = true; } ```
9,472
65,183,558
Let's say I want to implement some **list class** in python with extra structure, like a new constructor. I wrote:

```
import random

class Lis(list):
    def __init__(self, n):
        self = []
        for i in range(n):
            self.append(random.randint(0, 30))
```

Now doing `Lis(3)` gives me an empty list. I don't know where I went wrong.
2020/12/07
[ "https://Stackoverflow.com/questions/65183558", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4821555/" ]
The assignment `self = []` only rebinds the local name `self` to a brand-new empty list; it does not touch the instance being initialized, so all the `append` calls go to that throwaway list. Drop the assignment and it works:

```
import random

class Lis(list):
    def __init__(self, n):
        for i in range(n):
            self.append(random.randint(0, 30))
```
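An equivalent sketch that initializes the `list` base class directly, instead of appending in a loop:

```python
import random

class Lis(list):
    def __init__(self, n):
        # hand the base list an iterable of n random ints
        super().__init__(random.randint(0, 30) for _ in range(n))

values = Lis(5)
```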
If you want a call on the class itself to return a plain list, you can override `__new__` (note that `__init__` is then never called, because the returned object is not an instance of the class):

```py
class MyClass(object):
    def __init__(self):
        print("never called in this case")

    def __new__(cls):
        return [1, 2, 3]

obj = MyClass()
print(obj)  # [1, 2, 3]
```

[How to return a value from \_\_init\_\_ in Python?](https://stackoverflow.com/questions/2491819/how-to-return-a-value-from-init-in-python)

Or, if you want indexing on an object to produce the random list, override `__getitem__`:

```py
import random

class MyClass:
    def __getitem__(self, n):
        # build and return a fresh list of n random ints on each access
        return [random.randint(0, 30) for _ in range(n)]

myobj = MyClass()
myobj[3]  # e.g. [17, 4, 28]
```

[How to override the [] operator in Python?](https://stackoverflow.com/questions/1957780/how-to-override-the-operator-in-python)

Either way, rebinding `self` the way you did will not do what you expect.
9,475
12,454,675
I made a simple server and a simple client with the `socket` module in python. server:

```
# server.py
import socket

s = socket.socket()
host = socket.gethostname()
port = 1234
s.bind((host, port))
s.listen(5)

while True:
    c, addr = s.accept()
    print('Got connection from', addr)
    c.send(b'Thank you for your connecting')
    c.close()
```

and client:

```
#client.py
import socket

s = socket.socket()
host = socket.gethostname()
port = 1234
s.connect((host, port))
print(s.recv(1024))
```

I started the server and then started 4 clients, and got this output in the server's console:

```none
Got connection from ('192.168.0.99', 49170)
Got connection from ('192.168.0.99', 49171)
Got connection from ('192.168.0.99', 49172)
Got connection from ('192.168.0.99', 49173)
```

What is the second part of each tuple?
2012/09/17
[ "https://Stackoverflow.com/questions/12454675", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1477871/" ]
From the [`socket` documentation](http://docs.python.org/library/socket.html):

> A pair (host, port) is used for the AF\_INET address family, where host is a string representing either a hostname in Internet domain notation like 'daring.cwi.nl' or an IPv4 address like '100.50.200.5', and port is an integer.

So the second value is the port number used by the client side of the connection. When a TCP/IP connection is established, the client picks an (ephemeral) outgoing port number to communicate with the server; the server's return packets are addressed to that port number.
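This is easy to see with a loopback connection: the port in the tuple returned by `accept()` is exactly the ephemeral port the OS assigned to the client socket. A small self-contained sketch:

```python
import socket

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

client = socket.socket()
client.connect((host, port))
conn, addr = server.accept()

# addr is (client_ip, client_ephemeral_port); the client's own view agrees
client_view = client.getsockname()

conn.close()
client.close()
server.close()
```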
Quote from [python documentation](http://docs.python.org/library/socket.html#socket.socket.accept): > > `socket.accept()` > > > Accept a connection. The socket must be bound to an address and listening for connections. The return value is a pair `(conn, address)` where conn is a new socket object usable to send and receive data on the connection, and address is the address bound to the socket on the other end of the connection. > > > What `address` is you can find in same doc [from words "Socket addresses are represented as follows"](http://docs.python.org/library/socket.html#module-socket).
9,476
54,730,763
I am trying to understand how RxPy works, but I am getting this error:

> type object 'ObservableBase' has no attribute 'create'

I am using python 3.6 and my code is

```
from rx import Observable

stocks = [
    {'TCKR': 'APPL', 'PRICE': 200},
    {'TCKR': 'GOOG', 'PRICE': 90},
    {'TCKR': 'TSLA', 'PRICE': 120},
    {'TCKR': 'MSFT', 'PRICE': 150},
    {'TCKR': 'INTL', 'PRICE': 70},
    {'TCKR': 'ELLT', 'PRICE': 0}
]

def buy_stock_events(observer):
    for stock in stocks:
        if (stock['PRICE'] > 100):
            observer.on_next(stock['TCKR'])
        elif (stock['PRICE'] <= 0):
            observer.on_error(stock['TCKR'])
    observer.on_completed()

source = Observable.create(buy_stock_events)

source.subscribe(on_next=lambda value: print("Received Instruction to buy {0}".format(value)),
                 on_completed=lambda: print("Completed trades"),
                 on_error=lambda e: print(e))
```
2019/02/17
[ "https://Stackoverflow.com/questions/54730763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5291507/" ]
The RxPY module has been updated since that API was written. Installing version 1.6.1 (`pip install rx==1.6.1`) will solve the problem.
I have found the solution: change the code from

```
from rx import Observable

source = Observable.create(buy_stock_events)
```

to

```
import rx

source = rx.Observable.create(buy_stock_events)
```

and it's working.
9,477
54,417,893
I have a business problem: I have run a regression model in Python to predict my target value. When validating it with my test set, I found that the predicted values are very far from the actual values. What I want to extract from this model is which features played the biggest role in deviating the predicted value from the actual value (say, when the difference exceeds some threshold). I want to rank the features by impact so that I can report this to my client. Thanks
2019/01/29
[ "https://Stackoverflow.com/questions/54417893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4634959/" ]
It depends on the estimator you chose. Linear models have a `coef_` attribute you can inspect to get the coefficient assigned to each feature; given that the features are normalized, this tells you what you want to know. Tree-based models similarly expose feature importances. You can also use libraries like treeinterpreter, described here: [Interpreting Random Forest](http://blog.datadive.net/interpreting-random-forests/) [examples](http://blog.datadive.net/random-forest-interpretation-with-scikit-learn/)
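Once the coefficients (or importances) are out of the model, ranking is just a sort on their absolute values. A sketch with made-up numbers standing in for a fitted linear model's `coef_`:

```python
# hypothetical coefficients from a linear model fitted on normalized features
coefs = {"age": 0.4, "income": -1.2, "tenure": 0.1}

# larger |coefficient| means larger impact on the prediction
ranked = sorted(coefs, key=lambda name: abs(coefs[name]), reverse=True)
```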
You can have a look at this - [Feature selection](https://scikit-learn.org/stable/modules/feature_selection.html)
9,478
58,558,135
I'm writing a wrapper or pipeline to create a tfrecords dataset, to which I would like to supply a function to apply to the dataset. I would like to make it possible for the user to inject a function defined in another python file, which is called in my script to transform the data. Why? The only thing the user has to do is write the function which brings his data into the right format; the existing code does the rest. I'm aware of the fact that I could have the user write the function in the same file and call it, or have an import statement, etc. So as a minimal example, I would like to have file y.py

```py
def main(argv):
    # Parse args etc, let's assume it is there.
    dataset = tf.data.TFRecordDataset(args.filename)
    dataset = dataset.map(args.function)
    # Continue with doing stuff that is independent from actual content
```

So what I'd like to be able to do is something like this

```
python y.py --func x.py my_func
```

And use the function `my_func` defined in x.py in `dataset.map(...)`. Is there a way to do this in python, and if yes, which is the best way to do it?
2019/10/25
[ "https://Stackoverflow.com/questions/58558135", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6489953/" ]
1. Pass the name of the file (and the function name) as arguments to your script
2. Read the file into a string, possibly extracting the given function
3. Use Python [exec()](https://www.programiz.com/python-programming/methods/built-in/exec) to execute the code

An example:

```
file = "def fun(*args): \n return args"
func = "fun(1,2,3)"

def execute(func, file):
    program = file + "\nresult = " + func
    local = {}
    exec(program, local)
    return local['result']

r = execute(func, file)
print(r)
```

Similar to [here](https://stackoverflow.com/questions/23917776/how-do-i-get-the-return-value-when-using-python-exec-on-the-code-object-of-a-fun), however we pass an explicit dict as the namespace and read the result back out of it, since we are not calling `exec` in the global scope.

Note: the use of `exec` is somewhat dangerous; you should be sure that the function is safe. If you are sure, then it's fine!

Hope this helps.
Ok, so I have composed the answer myself now, using the information from the comments and [this answer](https://stackoverflow.com/a/58558215/6489953).

```py
import importlib
import inspect
import os

# path is the given path to the file, function_name is the name of the
# function, and args are the function's arguments

# Create the package and module name from the path
package = os.path.dirname(path).replace(os.path.sep, '.')
module_name = os.path.basename(path).split('.')[0]

# Import the module and get its members
module = importlib.import_module(module_name, package)
members = inspect.getmembers(module)

# Find the matching function
function = [t[1] for t in members if t[0] == function_name][0]
function(args)
```

This exactly solves the question, since I get a callable function object which I can call, pass around, and use as a normal function.
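A variant sketch that loads the module straight from the file path with `importlib.util`, so the file does not have to live in an importable package (the module name `"user_module"` is an arbitrary choice):

```python
import importlib.util
import tempfile

def load_function_from_path(path, function_name):
    # build a module object directly from the source file
    spec = importlib.util.spec_from_file_location("user_module", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, function_name)

# demo: write a tiny module to a temp file and load a function from it
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("def double(x):\n    return 2 * x\n")
    demo_path = f.name

double = load_function_from_path(demo_path, "double")
```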
9,480
45,346,418
I am trying to generate word vectors using PySpark. Using gensim I can see the words and the closest words as below:

```python
sentences = open(os.getcwd() + "/tweets.txt").read().splitlines()
w2v_input=[]
for i in sentences:
    tokenised=i.split()
    w2v_input.append(tokenised)
model = word2vec.Word2Vec(w2v_input)
for key in model.wv.vocab.keys():
    print key
    print model.most_similar(positive=[key])
```

Using PySpark

```python
inp = sc.textFile("tweet.txt").map(lambda row: row.split(" "))
word2vec = Word2Vec()
model = word2vec.fit(inp)
```

How can I generate the words from the vector space in the model? That is, what is the PySpark equivalent of gensim's `model.wv.vocab.keys()`? Background: I need to store the words and the synonyms from the model in a map so I can use them later for finding the sentiment of a tweet. I cannot reuse the word-vector model in the map functions in pyspark, as the model belongs to the spark context (error pasted below). I want the pyspark word2vec version instead of gensim because it provides better synonyms for certain test words.

```python
Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation.SparkContext can only be used on the driver, not in code that it run on workers.
```

Any alternative solution is also welcome.
2017/07/27
[ "https://Stackoverflow.com/questions/45346418", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6071445/" ]
The equivalent command in Spark is `model.getVectors()`, which again returns a dictionary. Here is a quick toy example with only 3 words (`alpha, beta, charlie`), adapted from the [documentation](https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.feature.Word2Vec):

```python
sc.version
# u'2.1.1'

from pyspark.mllib.feature import Word2Vec

sentence = "alpha beta " * 100 + "alpha charlie " * 10
localDoc = [sentence, sentence]
doc = sc.parallelize(localDoc).map(lambda line: line.split(" "))
word2vec = Word2Vec()
model = word2vec.fit(doc)

model.getVectors().keys()
# [u'alpha', u'beta', u'charlie']
```

Regarding finding synonyms, you may find [another answer of mine](https://stackoverflow.com/questions/34172242/spark-word2vec-vector-mathematics/34298583#34298583) useful. Regarding the error you mention and a possible workaround, have a look at [this answer](https://stackoverflow.com/questions/41046843/how-to-load-a-word2vec-model-and-call-its-function-into-the-mapper/41190031#41190031) of mine.
And as suggested [here](https://stackoverflow.com/questions/48062278/would-model-getvectors-keys-return-all-the-keys-from-a-model), if you want to include all the words in your document, set the `minCount` parameter accordingly (default = 5):

```
word2vec = Word2Vec()
word2vec.setMinCount(1)
```
9,481
68,821,867
I'm trying to install `pyinstaller 3.5` in `python 3.4.3` but i get this error: ``` ERROR: Command "python setup.py egg_info" failed with error code 1 in C:\Users\DTI~1.DES\AppData\Local\Temp\pip-install-_dyh3r_g\pefile\ ``` The command i use is this: ``` pip install pyinstaller==3.5 ``` I'm using the latest version that `3.4.3` can use of pip, setuptools and wheel. ``` pip 19.1.1 setuptools 43.0.0 wheel 0.33.6 ``` I appreciate all the help. this is the full log: ``` pip install pyinstaller=="3.5" DEPRECATION: Python 3.4 support has been deprecated. pip 19.1 will be the last one supporting it. Please upgrade your Python as Python 3.4 won't be maintained after March 2019 (cf PEP 429). Collecting pyinstaller==3.5 Using cached https://files.pythonhosted.org/packages/e2/c9/0b44b2ea87ba36395483a672fddd07e6a9cb2b8d3c4a28d7ae76c7e7e1e5/PyInstaller-3.5.tar.gz Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... done Collecting pywin32-ctypes>=0.2.0 (from pyinstaller==3.5) Using cached https://files.pythonhosted.org/packages/9e/4b/3ab2720f1fa4b4bc924ef1932b842edf10007e4547ea8157b0b9fc78599a/pywin32_ctypes-0.2.0-py2.py3-none-any.whl Requirement already satisfied: setuptools in c:\python34\lib\site-packages (from pyinstaller==3.5) (43.0.0) Collecting altgraph (from pyinstaller==3.5) Using cached https://files.pythonhosted.org/packages/ee/3d/bfca21174b162f6ce674953f1b7a640c1498357fa6184776029557c25399/altgraph-0.17-py2.py3-none-any.whl Collecting pefile>=2017.8.1 (from pyinstaller==3.5) Using cached https://files.pythonhosted.org/packages/f9/1e/fc4fac0169d16a98577809400bbcfac8ad1900fa792184327b360ea51fc6/pefile-2021.5.13.tar.gz ERROR: Complete output from command python setup.py egg_info: ERROR: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\DTI~1.DES\AppData\Local\Temp\pip-install-3rgqa38b\pefile\setup.py", line 86, in <module> long_description = 
"\n".join(_read_doc().split('\n')), File "C:\Users\DTI~1.DES\AppData\Local\Temp\pip-install-3rgqa38b\pefile\setup.py", line 33, in _read_doc tree = ast.parse(f.read()) File "c:\python34\lib\ast.py", line 35, in parse return compile(source, filename, mode, PyCF_ONLY_AST) File "<unknown>", line 3789 f'Export directory contains more than 10 repeated entries ' ^ SyntaxError: invalid syntax ---------------------------------------- ERROR: Command "python setup.py egg_info" failed with error code 1 in C:\Users\DTI~1.DES\AppData\Local\Temp\pip-install-3rgqa38b\pefile\ ```
2021/08/17
[ "https://Stackoverflow.com/questions/68821867", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15496915/" ]
`pefile` (one of the PyInstaller dependencies) [requires Python >= 3.6.0](https://github.com/erocarrera/pefile/blob/master/setup.py#L89). That is exactly what the `SyntaxError` in your log shows: `setup.py` fails to parse an f-string in the latest `pefile` release, and f-strings only exist from Python 3.6 on.
Try `pip install pyinstaller`. This should work; or, if you want the specific version, you can do `pip install pyinstaller==3.5` (no spaces around `==`).
9,482
8,011,087
Disclaimer: I'm working on a project with a "huge" webapp that has an API for mobile clients, so changing the API is not an option. The application was developed some time ago and several developers have worked on it. Having said that, the problem is this: in the mobile API of this site (just views that return JSON data), the code looks for a token in the request headers:

```
token = request.META.get('HTTP_TOKEN')
```

When I test this API locally it works fine, but in production it doesn't, so I tried to figure out what's going on and found this: Django converts headers, even custom headers, into keys in `request.META`. I use urllib2 and [requests](http://readthedocs.org/docs/requests/en/latest/) to test the API, and the problem is that on the production server `request.META` never has a key called `HTTP_TOKEN`. After a little debugging, I seriously think the problem is the way we serve the Django application. We are using django 1.3, nginx, gunicorn, virtualenvwrapper, and python 2.7. My prime suspect is nginx: I think it somehow receives the header but doesn't forward it to Django. I tried to research this, but I only found info about security headers and custom headers in nginx; I can't find documentation on how to tell nginx to allow that header and not remove it. I need help here. The first thing is to test whether nginx receives the header at all, but I know only a little about nginx and don't know how to make it log request headers. Thanks

**Update** [nginx conf file](http://dpaste.com/hold/652805/)
2011/11/04
[ "https://Stackoverflow.com/questions/8011087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150647/" ]
In your nginx configuration file (e.g. **mysite\_nginx.conf**), add the directive `underscores_in_headers on;` in the **server** section, so that nginx stops ignoring header names that contain underscores. For example:

```
server {
    # the port your site will be served on
    listen 8000;
    ...
    underscores_in_headers on;
}
```

And if access to Django goes through **uwsgi\_pass**, you also need the parameter `uwsgi_pass_request_headers on;` in the **location** section. For example:

```
location / {
    include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
    uwsgi_pass_request_headers on;
    uwsgi_pass django;
}
```
The answers above are enough to figure out the way, but there is still one annoying point to know about, and yes, it is very annoying:

```
proxy_set_header X_FORWARDED_FOR  # oops, it never works (1.16.1 on centos7)
proxy_set_header X-FORWARDED-FOR  # this will do the job
```

So you get it: underscores cannot be used in the custom header name here; use hyphens instead. Maybe nginx reserves underscores in some grammar cases. A pointer to the official reference from someone would be appreciated.
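For a `proxy_pass`/gunicorn setup like the one in the question (rather than uwsgi), the same idea looks roughly like this sketch (server name, port, and upstream address are illustrative, not taken from the linked conf file):

```nginx
server {
    listen 80;
    server_name example.com;

    # keep (rather than ignore) request headers whose names contain
    # underscores, so custom headers survive either naming style
    underscores_in_headers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8000;  # gunicorn
    }
}
```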
9,483
62,187,893
I am new to python and I would like to print particular words from the sentences. Below is the text file:

```
Test.txt

1. As more than one social media historian has reminded few people, the Stonewall uprising was a riot — more pointedly, a riot against police brutality, in response to an NYPD raid of Greenwich Village’s Stonewall Inn, in the early morning of June 29, 1969.

2. As more than one social media historian has reminded majority people, the Stonewall uprising was a riot — more pointedly, a riot against police brutality, in response to an NYPD raid of Greenwich Village’s Stonewall Inn, in the evening of July 12, 1979.
```

From the above text file, I want the 10th word (few), the 11th word (people), the 39th word (morning), and the 43rd word (1969) from the first line. And for the second line, also the 10th word (majority), the 11th word (people), the 39th word (evening), and the 43rd word (1979).

Note: in Test.txt, "1." means line one and "2." means line two.

For this, I tried using the split function:

```
with open('Test.txt', 'r') as file:
    for line in file:
        print (line.split('reminded',1)[1])
```

This is the output that I got:

```
few people, the Stonewall uprising was a riot - more pointedly, a riot against police brutality, in response to an NYPD raid of Greenwich Village's Stonewall Inn, in the early morning of June 29, 1969.

majority people, the Stonewall uprising was a riot — more pointedly, a riot against police brutality, in response to an NYPD raid of Greenwich Village’s Stonewall Inn, in the evening of July 12, 1979.
```

How do I print only one word after using the split function? I am pretty sure that I still need to improve this code. Any help would be appreciated. Thanks in advance.
2020/06/04
[ "https://Stackoverflow.com/questions/62187893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11144691/" ]
Here is your code:

```
datalist = ['few','people','morning','1969','majority','people','evening','1979']

with open('file.txt', 'r') as file:
    data = file.read().replace('\n', ' ')

outputlist = data.replace('.','').split(" ")

for data in outputlist:
    if data in datalist:
        print(data)
```
Using split("reminded"), you splitted your sentence into two pieces addressable using indexes 0 and 1, but if you want to separate the words of the second part you can do another split on space. For example: ``` line.split('reminded',1)[1].split(" ") ``` In this way you will have a list of string which contains the single words of the second part of your sentence. After this you can think about clean these words from special characters, punctuation, ecc. Follow this in order to do it: [Remove all special characters, punctuation and spaces from string](https://stackoverflow.com/questions/5843518/remove-all-special-characters-punctuation-and-spaces-from-string)
9,493
7,538,628
So, once again, I made a nice python program which makes my life ever easier and saves a lot of time. Of course, this involves a virtualenv, made with the `mkvirtualenv` function of virtualenvwrapper. The project has a requirements.txt file with a few required libraries (requests too :D) and the program won't run without these libraries. I am trying to add a `bin/run-app` executable shell script which would be in my path (a symlink, actually). Now, inside this script, I need to switch to the virtualenv before I can run the program. So I put this in:

```
#!/bin/bash
# cd into the project directory
workon "$(cat .venv)"
python main.py
```

A file `.venv` contains the virtualenv name. But when I run this script, I get a `workon: command not found` error. Of course, I have virtualenvwrapper.sh sourced in my bashrc, but it doesn't seem to be available in this shell script. So, how can I access those virtualenvwrapper functions here? Or am I doing this the wrong way? How do you launch your python tools, each of which has its own virtualenv!?
2011/09/24
[ "https://Stackoverflow.com/questions/7538628", "https://Stackoverflow.com", "https://Stackoverflow.com/users/151048/" ]
I can't find a way to trigger the virtualenvwrapper commands from a shell script, but this trick can help: assume your env name is `myenv`, then put the following lines at the beginning of the script:

```
ENV=myenv
source $WORKON_HOME/$ENV/bin/activate
```
Add these lines to your .bashrc or .bash\_profile ``` export WORKON_HOME=~/Envs source /usr/local/bin/virtualenvwrapper.sh ``` then reopen your terminal and try again.
9,495
69,624,176
I need to make a letter "C" print using python the code I currently have is down below but I'm not sure how to add 2 stars to the end of the letter. Needed in python. **Here is my current output:** ``` Enter an odd number 5 or greater: 5 *** * * * * *** ``` **Here is my needed Output:** ``` Enter an odd number 5 or greater: 5 *** * * * * * * *** ``` **Current Code:** ``` import math # Purpose: Draw one symbol on either side of the line # leaving (width-2) spaces between the symbols def draw_side(width,symbol,height): print(symbol + " "*(width)) # Purpose: Produce a letetr 'A' drawn with the symbol # provided using the given width and height def draw_A(width,height,symbol): print(" ") mid = math.floor(height/2) print(" " + symbol*(width)) for i in range(1, height): if i == mid: print(symbol) else: draw_side(width,symbol,height) print(" " + symbol*(width)) # Purpose: Prompt the user for an integer 5 or greater # and return valid user input def get_height(): height = int(input("Enter an odd number 5 or greater: ")) while(1): if (height % 2)==0 or (height < 5) : height = int(input("-> Error! Try again: ")) else : break; return height # Purpose: Calls helper function to get height, and calculates width. # Finally calls draw_A() to draw letter 'A' with given symbol def draw_letter(symbol): height = get_height() width = height - 2 draw_A(width,height,symbol) draw_letter('*') ```
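One possible approach (a hypothetical sketch; the function name and structure are mine, not from the thread): a "C" is a full top row, a column of single stars, and a full bottom row.

```python
def draw_C(height, symbol="*"):
    """Return the letter 'C' as a list of rows; height should be an odd number >= 5."""
    width = height - 2
    rows = [symbol * width]           # top bar
    rows += [symbol] * (height - 2)   # left side: one star per middle row
    rows.append(symbol * width)       # bottom bar
    return rows

for row in draw_C(5):
    print(row)
```

For a height of 5 this prints a 3-star top bar, three single-star rows, and a 3-star bottom bar.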
2021/10/19
[ "https://Stackoverflow.com/questions/69624176", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17187287/" ]
You can decouple authentication from authorization to allow more flexible connections between all three entities: browser, HTTP server, and DB. To make your second example work you could do: * The HTTP server (US) submits the query asynchronously to the DB (Asia) and requests an auth token for it. * The HTTP server (US) sends the auth token back to the browser (Europe), while the query is now running. * The browser (Europe) now initiates a second HTTP call against the DB (Asia) using the auth token, and maybe the queryID as well. * The DB will probably need to implement a simple token auth protocol. It should: + Authenticate the incoming auth token. + Retrieve the session. + Start streaming the query result set back to the caller. For the DB server, there are plenty of out-of-the-box slim docker images you can spin up in seconds that implement an authorization server and that can listen to the browser using nginx. As you can see, the architecture can be worked out. However, the DB server in Asia will need to be revamped to implement some kind of token authorization. The simplest and most widespread strategy is to use OAuth2, which is all the rage nowadays.
Building on @TheImpalers' answer: How about adding another table to your remote DB that is just for retrieving query results? When the client asks the backend service for a database query, the backend service will generate a UUID or other secure token and tell the DB to run the query and store its result under the given UUID. The backend service also returns the UUID to the client, who can then retrieve the associated data from the DB directly.
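A minimal in-memory sketch of the token scheme described above. The class and method names are hypothetical; in a real deployment this table would live in the remote DB, not in Python memory:

```python
import uuid

class ResultStore:
    """Stand-in for the DB-side results table keyed by a one-time token."""

    def __init__(self):
        self._results = {}

    def store(self, rows):
        # An unguessable handle for this result set.
        token = str(uuid.uuid4())
        self._results[token] = rows
        return token

    def fetch(self, token):
        # pop() makes the token single-use: a second fetch returns None.
        return self._results.pop(token, None)

store = ResultStore()
token = store.store([("alice", 1), ("bob", 2)])
print(store.fetch(token))  # [('alice', 1), ('bob', 2)]
```

The backend would hand `token` to the browser, which then presents it to the DB to stream the stored rows.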
9,505
43,556,353
I have started learning python and am using an online interpreter for python 2.9-pythontutor ``` x=5,6 if x==5: print "5" else: print "not" ``` It goes into the else branch and prints "not". Why is that? What exactly does x=5,6 mean?
2017/04/22
[ "https://Stackoverflow.com/questions/43556353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7747088/" ]
`,` is the tuple expression: `x,y` will return a tuple `(x,y)`, so the expression `5,6` will return a tuple `(5,6)`. `x` is neither `5` nor `6` but a tuple.
When you declared `x = 5, 6` you made it a tuple. Then later when you do `x == 5` this translates to `(5, 6) == 5` which is not true, so the else branch is run. If instead you did `x[0] == 5` that would be true, and print 5. Because we are accessing the 0 index of the tuple, which is equal to 5. Check out [some tutorials on tuples](https://www.tutorialspoint.com/python/python_tuples.htm) for more info.
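The behaviour described in both answers can be checked directly:

```python
x = 5, 6                  # the comma creates a tuple; parentheses are optional

print(type(x).__name__)   # tuple
print(x == 5)             # False -> this is why the else branch runs
print(x == (5, 6))        # True
print(x[0] == 5)          # True  -> index 0 of the tuple is 5
```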
9,507
8,572,830
I am building a django application which depends on a python module where a SIGINT signal handler has been implemented. Assuming I cannot change the module I am dependent from, how can I workaround the "signal only works in main thread" error I get integrating it in Django ? Can I run it on the Django main thread? Is there a way to inhibit the handler to allow the module to run on non-main threads ? Thanks!
2011/12/20
[ "https://Stackoverflow.com/questions/8572830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/898179/" ]
Django's built-in development server has auto-reload feature enabled by default which spawns a new thread as a means of reloading code. To work around this you can simply do the following, although you'd obviously lose the convenience of auto-reloading: ``` python manage.py runserver --noreload ``` You'll also need to be mindful of this when choosing your production setup. At least some of the deployment options (such as threaded fastcgi) are certain to execute your code outside main thread.
Although the question does not describe exactly the situation you are in, here is some more generic advice: The signal is only sent to the main thread. For this reason, the signal handler should be in the main thread. From that point on, the action that the signal triggers, needs to be communicated to the other threads. I usually do this using [Events](http://docs.python.org/library/threading.html#event-objects). The signal handler sets the event, which the other threads will read, and then realize that action X has been triggered. Obviously this implies that the event attribute should be shared among the threads.
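A sketch of the Event pattern described above: the handler is registered in the main thread and does nothing but set a flag, which worker threads poll between units of work.

```python
import signal
import threading
import time

stop_event = threading.Event()

def handle_sigint(signum, frame):
    # Signal handlers only run in the main thread; just record the request
    # and let the worker threads notice it via the shared Event.
    stop_event.set()

def worker():
    while not stop_event.is_set():
        time.sleep(0.01)  # one unit of work, then re-check the flag

# Registering the handler must also happen in the main thread.
signal.signal(signal.SIGINT, handle_sigint)
```

Each worker exits cleanly once the flag is set, instead of being interrupted mid-work.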
9,510
40,390,705
I would like to make a list comprehension like python does. ``` list = [1,2,3,4] newList = [ i * 2 for i in list ] ``` Using std algorithms, iterators and a lambda function, it should be possible to do the same thing in one line. ``` std::vector<int> list = {1,2,3,4}; std::vector<int> newList = ``` Could you complete it?
2016/11/02
[ "https://Stackoverflow.com/questions/40390705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2708072/" ]
[`std::transform`](http://en.cppreference.com/w/cpp/algorithm/transform) lets you transform values and put them somewhere else: ``` std::vector<int> list = {1,2,3,4}; std::vector<int> newList; std::transform( list.cbegin(), list.cend(), back_inserter(newList), [](int x) { return x * 2; }); ``` But really, C++ is not the language for conciseness.
I found this solution, but it's not very nice: ``` std::vector<int> list = {1,2,3,4}; std::vector<int> newList; std::for_each(list.begin(), list.end(),[&newList](int val){newList.push_back(val*2);}); ```
9,515
11,333,261
my views.py file code: ``` #!/usr/bin/python from django.template import loader, RequestContext from django.http import HttpResponse #from skey import find_root_tags, count, sorting_list from search.models import Keywords from django.shortcuts import render_to_response as rr def front_page(request): if request.method == 'POST' : from skey import find_root_tags, count, sorting_list str1 = request.POST['word'] fo = open("/home/pooja/Desktop/xml.txt","r") for i in range(count.__len__()): file = fo.readline() file = file.rstrip('\n') find_root_tags(file,str1,i) list.append((file,count[i])) sorting_list(list) for name, count1 in list: s = Keywords(file_name=name,frequency_count=count1) s.save() fo.close() list1 = Keywords.objects.all() t = loader.get_template('search/results.html') c = Context({'list1':list1,}) return HttpResponse(t.render(c)) else : str1 = '' list = [] template = loader.get_template('search/front_page.html') c = RequestContext(request) response = template.render(c) return HttpResponse(response) <body BGCOLOR = #9ACD32"> <ul> { % for l in list1 %} <li> {{l.file_name}}, {{l.frquency_count}}</li> { % endfor %} </ul> </body> ``` On running my app on the server, it asks me for the word and redirects to results.html, giving this output: ``` { % for l in list1 %} - List item, { % endfor %} ``` Why is this happening? Where am I going wrong? The values are getting stored in the table (I checked it through the admin page), so why aren't they displaying? Please help.
2012/07/04
[ "https://Stackoverflow.com/questions/11333261", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1493850/" ]
``` { % for l in list1 %} ``` should be ``` {% for l in list1 %} ``` and ``` { % endfor %} ``` should be ``` {% endfor %} ```
Never put a space between '{' and '%'.
9,516
13,515,471
I'm generating a bar-chart with matplotlib. It all works well but I can't figure out how to prevent the labels of the x-axis from overlapping each other. Here an example: ![enter image description here](https://i.stack.imgur.com/BCm0v.png) Here is some sample SQL for a postgres 9.1 database: ``` drop table if exists mytable; create table mytable(id bigint, version smallint, date_from timestamp without time zone); insert into mytable(id, version, date_from) values ('4084036', '1', '2006-12-22 22:46:35'), ('4084938', '1', '2006-12-23 16:19:13'), ('4084938', '2', '2006-12-23 16:20:23'), ('4084939', '1', '2006-12-23 16:29:14'), ('4084954', '1', '2006-12-23 16:28:28'), ('4250653', '1', '2007-02-12 21:58:53'), ('4250657', '1', '2007-03-12 21:58:53') ; ``` And this is my python-script: ``` # -*- coding: utf-8 -*- #!/usr/bin/python2.7 import psycopg2 import matplotlib.pyplot as plt fig = plt.figure() # for savefig() import pylab ### ### Connect to database with psycopg2 ### try: conn_string="dbname='x' user='y' host='z' password='pw'" print "Connecting to database\n->%s" % (conn_string) conn = psycopg2.connect(conn_string) print "Connection to database was established succesfully" except: print "Connection to database failed" ### ### Execute SQL query ### # New cursor method for sql cur = conn.cursor() # Execute SQL query. For more than one row use three '"' try: cur.execute(""" -- In which year/month have these points been created? -- Need 'yyyymm' because I only need Months with years (values are summeed up). Without, query returns every day the db has an entry. SELECT to_char(s.day,'yyyymm') AS month ,count(t.id)::int AS count FROM ( SELECT generate_series(min(date_from)::date ,max(date_from)::date ,interval '1 day' )::date AS day FROM mytable t ) s LEFT JOIN mytable t ON t.date_from::date = s.day GROUP BY month ORDER BY month; """) # Return the results of the query. 
Fetchall() = all rows, fetchone() = first row records = cur.fetchall() cur.close() except: print "Query could not be executed" # Unzip the data from the db-query. Order is the same as db-query output year, count = zip(*records) ### ### Plot (Barchart) ### # Count the length of the range of the count-values, y-axis-values, position of axis-labels, legend-label plt.bar(range(len(count)), count, align='center', label='Amount of created/edited points') # Add database-values to the plot with an offset of 10px/10px ax = fig.add_subplot(111) for i,j in zip(year,count): ax.annotate(str(j), xy=(i,j), xytext=(10,10), textcoords='offset points') # Rotate x-labels on the x-axis fig.autofmt_xdate() # Label-values for x and y axis plt.xticks(range(len(count)), (year)) # Label x and y axis plt.xlabel('Year') plt.ylabel('Amount of created/edited points') # Locate legend on the plot (http://matplotlib.org/users/legend_guide.html#legend-location) plt.legend(loc=1) # Plot-title plt.title("Amount of created/edited points over time") # show plot pylab.show() ``` Is there a way how I can prevent the labels from overlapping each other? Ideally in an automatic way, because I can't predict the amount of bars.
2012/11/22
[ "https://Stackoverflow.com/questions/13515471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1808868/" ]
I think you're confused on a few points about how matplotlib handles dates. You're not actually plotting dates, at the moment. You're plotting things on the x-axis with `[0,1,2,...]` and then manually labeling every point with a string representation of the date. Matplotlib will automatically position ticks. However, you're over-riding matplotlib's tick positioning functionality (Using `xticks` is basically saying: "I want ticks in exactly these positions".) At the moment, you'll get ticks at `[10, 20, 30, ...]` if matplotlib automatically positions them. However, these will correspond to the values that you used to plot them, not the dates (which you didn't use when plotting). You probably want to actually plot things using dates. Currently, you're doing something like this: ``` import datetime as dt import matplotlib.dates as mdates import numpy as np import matplotlib.pyplot as plt # Generate a series of dates (these are in matplotlib's internal date format) dates = mdates.drange(dt.datetime(2010, 01, 01), dt.datetime(2012,11,01), dt.timedelta(weeks=3)) # Create some data for the y-axis counts = np.sin(np.linspace(0, np.pi, dates.size)) # Set up the axes and figure fig, ax = plt.subplots() # Make a bar plot, ignoring the date values ax.bar(np.arange(counts.size), counts, align='center', width=1.0) # Force matplotlib to place a tick at every bar and label them with the date datelabels = mdates.num2date(dates) # Go back to a sequence of datetimes... 
ax.set(xticks=np.arange(dates.size), xticklabels=datelabels) #Same as plt.xticks # Make space for and rotate the x-axis tick labels fig.autofmt_xdate() plt.show() ``` ![enter image description here](https://i.stack.imgur.com/0Qi3s.png) Instead, try something like this: ``` import datetime as dt import matplotlib.dates as mdates import numpy as np import matplotlib.pyplot as plt # Generate a series of dates (these are in matplotlib's internal date format) dates = mdates.drange(dt.datetime(2010, 01, 01), dt.datetime(2012,11,01), dt.timedelta(weeks=3)) # Create some data for the y-axis counts = np.sin(np.linspace(0, np.pi, dates.size)) # Set up the axes and figure fig, ax = plt.subplots() # By default, the bars will have a width of 0.8 (days, in this case) We want # them quite a bit wider, so we'll make them them the minimum spacing between # the dates. (To use the exact code below, you'll need to convert your sequence # of datetimes into matplotlib's float-based date format. # Use "dates = mdates.date2num(dates)" to convert them.) width = np.diff(dates).min() # Make a bar plot. Note that I'm using "dates" directly instead of plotting # "counts" against x-values of [0,1,2...] ax.bar(dates, counts, align='center', width=width) # Tell matplotlib to interpret the x-axis values as dates ax.xaxis_date() # Make space for and rotate the x-axis tick labels fig.autofmt_xdate() plt.show() ``` ![enter image description here](https://i.stack.imgur.com/jTdzq.png)
As for your question on how to show only every 4th tick (for example) on the xaxis, you can do this: ``` import matplotlib.ticker as mticker myLocator = mticker.MultipleLocator(4) ax.xaxis.set_major_locator(myLocator) ```
9,517
37,827,920
I followed [this](http://www.samontab.com/web/2014/06/installing-opencv-2-4-9-in-ubuntu-14-04-lts/#comment-72178) to install opencv. When I tested the C and Java samples, they worked fine. But the python samples resulted in a ``` import cv2 ImportError: No module named cv2 ``` How can I fix this? I am using python 2.7 and ubuntu 14.04.
2016/06/15
[ "https://Stackoverflow.com/questions/37827920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6417704/" ]
This is a [well known bug](https://connect.microsoft.com/SQLServer/feedback/details/499608/ssms-can-not-paste-more-than-43679-characters-from-a-column-in-grid-mode) in SSMS: you can't paste more than 43,679 characters from a column in grid mode, and unfortunately this limit can't be increased. You can get around this by displaying your data in XML format instead of nvarchar.
The datatypes NCHAR, NVARCHAR and NVARCHAR(MAX) store half as many characters as CHAR, VARCHAR and VARCHAR(MAX), because these datatypes are used to store Unicode characters. Use these datatypes when you need to store data in languages other than the default one (collation). Unicode characters take 2 bytes per character; that's why the maximum length of NCHAR, NVARCHAR and NVARCHAR(MAX) is half that of CHAR, VARCHAR and VARCHAR(MAX).
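The two-bytes-per-character point can be illustrated from Python: the N-types store UTF-16 code units (assuming characters in the Basic Multilingual Plane), so every such character encodes to 2 bytes.

```python
# Each BMP character takes exactly 2 bytes in UTF-16 (little-endian,
# without a byte-order mark) -- the storage format NCHAR/NVARCHAR use.
for ch in ("A", "é", "Ж"):
    encoded = ch.encode("utf-16-le")
    print(ch, len(encoded))  # prints 2 for each of these characters
```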
9,522
65,716,401
I am trying to install numpy on a macOS Big Sur but got this error. I've tried update pip and setuptool, also update xcode, but the error still appears ``` ERROR: Command errored out with exit status 1: command: /Users/mac/opt/miniconda3/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/24/2bgc31xs4w51ksff8v5kphcc0000gn/T/pip-install-3jq2_h06/numpy_a0d6a6e5f34d4b3b887afd79c04b5c7c/setup.py'"'"'; __file__='"'"'/private/var/folders/24/2bgc31xs4w51ksff8v5kphcc0000gn/T/pip-install-3jq2_h06/numpy_a0d6a6e5f34d4b3b887afd79c04b5c7c/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/var/folders/24/2bgc31xs4w51ksff8v5kphcc0000gn/T/pip-wheel-hvs0y1qx cwd: /private/var/folders/24/2bgc31xs4w51ksff8v5kphcc0000gn/T/pip-install-3jq2_h06/numpy_a0d6a6e5f34d4b3b887afd79c04b5c7c/ ... error: Command "gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/mac/opt/miniconda3/include -arch x86_64 -I/Users/mac/opt/miniconda3/include -arch x86_64 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.9-x86_64-3.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/Users/mac/opt/miniconda3/include/python3.7m -Ibuild/src.macosx-10.9-x86_64-3.7/numpy/core/src/private -Ibuild/src.macosx-10.9-x86_64-3.7/numpy/core/src/npymath -Ibuild/src.macosx-10.9-x86_64-3.7/numpy/core/src/private -Ibuild/src.macosx-10.9-x86_64-3.7/numpy/core/src/npymath -Ibuild/src.macosx-10.9-x86_64-3.7/numpy/core/src/private -Ibuild/src.macosx-10.9-x86_64-3.7/numpy/core/src/npymath -c numpy/random/mtrand/mtrand.c -o build/temp.macosx-10.9-x86_64-3.7/numpy/random/mtrand/mtrand.o -MMD -MF 
build/temp.macosx-10.9-x86_64-3.7/numpy/random/mtrand/mtrand.o.d" failed with exit status 1 ``` Also when pip trying to reinstall numpy, error messages appears like this ``` ERROR: Command errored out with exit status 1: /Users/mac/opt/miniconda3/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/24/2bgc31xs4w51ksff8v5kphcc0000gn/T/pip-install-3jq2_h06/numpy_a0d6a6e5f34d4b3b887afd79c04b5c7c/setup.py'"'"'; __file__='"'"'/private/var/folders/24/2bgc31xs4w51ksff8v5kphcc0000gn/T/pip-install-3jq2_h06/numpy_a0d6a6e5f34d4b3b887afd79c04b5c7c/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/24/2bgc31xs4w51ksff8v5kphcc0000gn/T/pip-record-4yafo15v/install-record.txt --single-version-externally-managed --compile --install-headers /Users/mac/opt/miniconda3/include/python3.7m/numpy Check the logs for full command output. ``` Any Ideas?
2021/01/14
[ "https://Stackoverflow.com/questions/65716401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15004197/" ]
Looks like a `gcc` compiler or some system library dependency problem. Here is my `gcc` version (macOS Catalina): ``` $ which gcc ``` ``` /usr/bin/gcc ``` ``` $ gcc --version ``` ``` Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/4.2.1 Apple clang version 11.0.0 (clang-1100.0.33.16) Target: x86_64-apple-darwin19.6.0 Thread model: posix InstalledDir: /Library/Developer/CommandLineTools/usr/bin ``` Is your MacBook's architecture `x86_64`? If it is a new `ARM64` one, the architecture may be the reason. Also `build/temp.macosx-10.9-x86_64-3.7/...` and `build/src.macosx-10.9-x86_64-3.7/...` look pretty old (OS X 10.9?). I'm not familiar with pip build processes, but it looks like the `python 3.7.X` libraries haven't been updated in a long time. The newest Python version is `3.9.X`. Try to use it; that may be the solution.
According to a similar problem in this link (<https://github.com/numpy/numpy/issues/12026>), your installation tries to compile numpy on your system, which is not necessary. Try installing a concrete version of numpy, e.g. `pip3 install numpy==1.19.5`
9,524
41,769,507
I have a set of strings that's JSONish, but totally JSON uncompliant. It's also kind of CSV, but the values themselves sometimes have commas. The strings look like this: ATTRIBUTE: Value of this attribute, ATTRIBUTE2: Another value, but this one has a comma in it, ATTRIBUTE3:, another value... The only two patterns I can see that would mostly work are that the attribute names are in caps and followed by a : and a space. After the first attribute, the pattern is , name-in-caps : space. The data is stored in Redshift, so I was going to see if I can use regex to resolve this, but my regex knowledge is limited - where would I start? If not, I'll resort to python hacking.
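A hypothetical sketch built on the two patterns the question identifies (caps name, colon, space); the sample string and regex are illustrative assumptions, not from an answer in the thread. The trick is to end each value lazily at a lookahead for the next `, NAME:` marker or end of string, so commas inside values survive:

```python
import re

s = ("ATTRIBUTE: Value of this attribute, "
     "ATTRIBUTE2: Another value, but this one has a comma in it, "
     "ATTRIBUTE3: another value")

# Key: a run of caps/digits/underscores before a colon.
# Value: lazily matched text, stopped by a lookahead for ", NEXTKEY:" or end.
pairs = re.findall(r"([A-Z0-9_]+):\s*(.*?)(?=,\s*[A-Z0-9_]+:|$)", s)
for key, value in pairs:
    print(key, "=", value)
```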
2017/01/20
[ "https://Stackoverflow.com/questions/41769507", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3430943/" ]
**Mistake # 1** ``` if (a=0) // condition will be always FALSE ``` must be ``` if (a==0) ``` or better ``` if (0 == a) ``` **Mistake # 2** ``` scanf("%d", &b); // when b is float ``` instead of ``` scanf("%f", &b); ``` **UPDATE:** Actually, for the case of checking the result of `scanf` I personally prefer to compare with the number of values the last `scanf` should have read. E.g. if two comma-separated integers are required to continue the calculation, the snippet can be: ``` int x, y; int check; do{ printf("Enter x,y:"); check = scanf("%d,%d", &x, &y); // enter format is x,y while(getchar()!='\n'); // clean the input buffer }while(check != 2); ``` That loop will re-ask for input if `check` is not `2`, i.e. if it is `0` (when even the first value is incorrect, e.g. `abc,12`) or if it is `1` (when the user forgot the comma or entered something that is not a number after the comma, e.g. `12,y`).
Code with corrections and comments - also available here - <http://ideone.com/eqzRQe> ``` #include <stdio.h> #include <math.h> int main(void) { float b; // printf("Eneter a float number"); printf("Enter a float number"); // Corrected typo fflush(stdout); // Send the buffer to the console so the user can see it int a=0; // a=5; -- Not required a=scanf("%f", &b); // See the manual page for reading floats if (a==0) // Need comparison operator not assignemnt { printf("scanf error: (%d)\n",a); // A better error message could be placed here } else { printf("%g\n", b); // Just to check the input with ideone - debugging printf("%g %g %g",floor(b), round(b), ceil(b)); } return 0; // You need the semi-colon here } ``` For VenuKant Sahu benefit > > Return Value > > > These functions return the number of input items successfully matched > and assigned, which can be fewer than provided for, or even zero in > the event of an early matching failure. > > > The value EOF is returned if the end of input is reached before either > the first successful conversion or a matching failure occurs. EOF is > also returned if a read error occurs, in which case the error > indicator for the stream (see ferror(3)) is set, and errno is set > indicate the error. > > >
9,525
39,540,128
I was playing around with the `dis` library to gather information about a function (like what other functions it called). The documentation for [`dis.findlabels`](https://docs.python.org/2/library/dis.html#dis.findlabels) sounds like it would return other function calls, but I've tried it with a handful of functions and it always returns an empty list. > > > ``` > dis.findlabels(code) > > Detect all offsets in the code object code which > are jump targets, and return a list of these offsets. > > ``` > > What is this function *supposed* to do and *how* would you use it?
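`dis.findlabels` only reports jump targets, i.e. bytecode offsets that loops and branches jump to; it says nothing about function calls, so it is the wrong tool for finding callees. Note also that in CPython it operates on the raw bytecode bytes (`co_code`), not on the code object itself, which may be why it appeared to return nothing. A sketch (the sample function is mine):

```python
import dis

def f(xs):
    total = 0
    for x in xs:      # the loop compiles to jump instructions
        if x > 0:     # so does the branch
            total += x
    return total

# Pass the raw bytecode; for a function with a loop and a branch
# this returns a non-empty list of jump-target offsets.
labels = dis.findlabels(f.__code__.co_code)
print(labels)
```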
2016/09/16
[ "https://Stackoverflow.com/questions/39540128", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1547004/" ]
No, do not explicitly assign a default value to `Freight`. The warning is legitimate, because you never really assign a value to the field. You do not assign a value, because the field gets populated by magic. (Incidentally, that's why I do not like magic; but that's a different story altogether.) So, the best approach is to acknowledge the fact that the warning is legitimate but accounted for, and to explicitly suppress it. Take a look at the documentation of the `#pragma warning` directive: <https://msdn.microsoft.com/en-us/library/441722ys.aspx>
You essentially have two choices and which way to go really depends on the intent (to suggest one or the other is subjective). First, you could eliminate the warning if the design requirement of your `Orders` type dictates that it should have a null default value. ``` public string Freight = null; ``` The above merely clarifies that intent and therefore eliminates the warning. The alternative is to suppress the warning as the other answers mention. In your case, if the assumption is that the value should have been set via Reflection then this alternative seems reasonable if not preferable in such a case.
9,526
27,872,305
I have a table with 12 columns and want to select the items in the first column (`qseqid`) based on the second column (`sseqid`). Meaning that the second column (`sseqid`) is repeating with different values in the 11th and 12th columns, which are `evalue` and `bitscore`, respectively. The ones that I would like to get are those having the **lowest** `evalue` and the **highest** `bitscore` (when `evalue`s are the same, the rest of the columns can be ignored; the data is down below). So, I have made a short code which uses the second column as a key for the dictionary. I can get five different items from the second column with lists of `qseqid`+`evalue` and `qseqid`+`bitscore`. Here is the code: ``` #!usr/bin/python filename = "data.txt" readfile = open(filename,"r") d = dict() for i in readfile.readlines(): i = i.strip() i = i.split("\t") d.setdefault(i[1], []).append([i[0],i[10]]) d.setdefault(i[1], []).append([i[0],i[11]]) for x in d: print(x,d[x]) readfile.close() ``` But I am struggling to get the `qseqid` with the lowest evalue and the highest bitscore for each sseqid. Is there any good logic to solve the problem? 
The`data.txt`file (including the header row and with`»`representing tab characters) ```none qseqid»sseqid»pident»length»mismatch»gapopen»qstart»qend»sstart»send»evalue»bitscore ACLA_022040»TBB»32.71»431»258»8»39»468»24»423»2.00E-76»240 ACLA_024600»TBB»80»435»87»0»1»435»1»435»0»729 ACLA_031860»TBB»39.74»453»251»3»1»447»1»437»1.00E-121»357 ACLA_046030»TBB»75.81»434»105»0»1»434»1»434»0»704 ACLA_072490»TBB»41.7»446»245»3»4»447»3»435»2.00E-120»353 ACLA_010400»EF1A»27.31»249»127»8»69»286»9»234»3.00E-13»61.6 ACLA_015630»EF1A»22»491»255»17»186»602»3»439»8.00E-19»78.2 ACLA_016510»EF1A»26.23»122»61»4»21»127»9»116»2.00E-08»46.2 ACLA_023300»EF1A»29.31»447»249»12»48»437»3»439»2.00E-45»155 ACLA_028450»EF1A»85.55»443»63»1»1»443»1»442»0»801 ACLA_074730»CALM»23.13»147»101»4»6»143»2»145»7.00E-08»41.2 ACLA_096170»CALM»29.33»150»96»4»34»179»2»145»1.00E-13»55.1 ACLA_016630»CALM»23.9»159»106»5»58»216»4»147»5.00E-12»51.2 ACLA_031930»RPB2»36.87»1226»633»24»121»1237»26»1219»0»734 ACLA_065630»RPB2»65.79»1257»386»14»1»1252»4»1221»0»1691 ACLA_082370»RPB2»27.69»1228»667»37»31»1132»35»1167»7.00E-110»365 ACLA_061960»ACT»28.57»147»95»5»146»284»69»213»3.00E-12»57.4 ACLA_068200»ACT»28.73»463»231»13»16»471»4»374»1.00E-53»176 ACLA_069960»ACT»24.11»141»97»4»581»718»242»375»9.00E-09»46.2 ACLA_095800»ACT»91.73»375»31»0»1»375»1»375»0»732 ``` And here's a little more readable version of the table's contents: ```none 0 1 2 3 4 5 6 7 8 9 10 11 qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore ACLA_022040 TBB 32.71 431 258 8 39 468 24 423 2.00E-76 240 ACLA_024600 TBB 80 435 87 0 1 435 1 435 0 729 ACLA_031860 TBB 39.74 453 251 3 1 447 1 437 1.00E-121 357 ACLA_046030 TBB 75.81 434 105 0 1 434 1 434 0 704 ACLA_072490 TBB 41.7 446 245 3 4 447 3 435 2.00E-120 353 ACLA_010400 EF1A 27.31 249 127 8 69 286 9 234 3.00E-13 61.6 ACLA_015630 EF1A 22 491 255 17 186 602 3 439 8.00E-19 78.2 ACLA_016510 EF1A 26.23 122 61 4 21 127 9 116 2.00E-08 46.2 ACLA_023300 EF1A 29.31 447 249 12 48 437 
3 439 2.00E-45 155 ACLA_028450 EF1A 85.55 443 63 1 1 443 1 442 0 801 ACLA_074730 CALM 23.13 147 101 4 6 143 2 145 7.00E-08 41.2 ACLA_096170 CALM 29.33 150 96 4 34 179 2 145 1.00E-13 55.1 ACLA_016630 CALM 23.9 159 106 5 58 216 4 147 5.00E-12 51.2 ACLA_031930 RPB2 36.87 1226 633 24 121 1237 26 1219 0 734 ACLA_065630 RPB2 65.79 1257 386 14 1 1252 4 1221 0 1691 ACLA_082370 RPB2 27.69 1228 667 37 31 1132 35 1167 7.00E-110 365 ACLA_061960 ACT 28.57 147 95 5 146 284 69 213 3.00E-12 57.4 ACLA_068200 ACT 28.73 463 231 13 16 471 4 374 1.00E-53 176 ACLA_069960 ACT 24.11 141 97 4 581 718 242 375 9.00E-09 46.2 ACLA_095800 ACT 91.73 375 31 0 1 375 1 375 0 732 ```
2015/01/10
[ "https://Stackoverflow.com/questions/27872305", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1918515/" ]
``` #!usr/bin/python import csv DATA = "data.txt" class Sequence: def __init__(self, row): self.qseqid = row[0] self.sseqid = row[1] self.pident = float(row[2]) self.length = int(row[3]) self.mismatch = int(row[4]) self.gapopen = int(row[5]) self.qstart = int(row[6]) self.qend = int(row[7]) self.sstart = int(row[8]) self.send = int(row[9]) self.evalue = float(row[10]) self.bitscore = float(row[11]) def __str__(self): return ( "{qseqid}\t" "{sseqid}\t" "{pident}\t" "{length}\t" "{mismatch}\t" "{gapopen}\t" "{qstart}\t" "{qend}\t" "{sstart}\t" "{send}\t" "{evalue}\t" "{bitscore}" ).format(**self.__dict__) def entries(fname, header_rows=1, dtype=list, **kwargs): with open(fname) as inf: incsv = csv.reader(inf, **kwargs) # skip header rows for i in range(header_rows): next(incsv) for row in incsv: yield dtype(row) def main(): bestseq = {} for seq in entries(DATA, dtype=Sequence, delimiter="\t"): # see if a sequence with the same sseqid already exists prev = bestseq.get(seq.sseqid, None) if ( prev is None or seq.evalue < prev.evalue or (seq.evalue == prev.evalue and seq.bitscore > prev.bitscore) ): bestseq[seq.sseqid] = seq # display selected sequences keys = sorted(bestseq) for key in keys: print(bestseq[key]) if __name__ == "__main__": main() ``` which results in ``` ACLA_095800 ACT 91.73 375 31 0 1 375 1 375 0.0 732.0 ACLA_096170 CALM 29.33 150 96 4 34 179 2 145 1e-13 55.1 ACLA_028450 EF1A 85.55 443 63 1 1 443 1 442 0.0 801.0 ACLA_065630 RPB2 65.79 1257 386 14 1 1252 4 1221 0.0 1691.0 ACLA_024600 TBB 80.0 435 87 0 1 435 1 435 0.0 729.0 ```
``` filename = 'data.txt' readfile = open(filename,'r') d = dict() sseqid=[] lines=[] for i in readfile.readlines(): sseqid.append(i.rsplit()[1]) lines.append(i.rsplit()) sorted_sseqid = sorted(set(sseqid)) sdqDict={} key =None for sorted_ssqd in sorted_sseqid: key=sorted_ssqd evalue=[] bitscore=[] qseid=[] for line in lines: if key in line: evalue.append(line[10]) bitscore.append(line[11]) qseid.append(line[0]) sdqDict[key]=[qseid,evalue,bitscore] print sdqDict print 'TBB LOWEST EVALUE' + '---->' + min(sdqDict['TBB'][1]) ##I think you can do the list manipulation below to find out the qseqid readfile.close() ```
9,529
44,063,297
In python, I'm trying to extract the 4 characters before and after the '©' symbol. This code extracts the characters after ©; can anyone help with printing the characters before © (I don't want the entire string to get printed, only a few characters)? ``` import re html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" if "©" in html: symbol=re.findall(r"(?<=©).+$",html,re.M) print(symbol[0][0:100]) ```
2017/05/19
[ "https://Stackoverflow.com/questions/44063297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6228540/" ]
``` html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" html = html.split("©") print(html[0][-4:]) print(html[1][:4]) ``` Output : ``` ight Bro ```
Try doing it this way: ``` if "©" in html: pos_c = html.find("©") symbol = html[pos_c-4:pos_c] print(symbol) ```
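For completeness, a single regular expression can capture both sides of the symbol at once. This is a sketch using the example string from the question; `{0,4}` is used instead of `{4}` so the pattern still matches when fewer than 4 characters surround the symbol:

```python
import re

html = "This is all test and try things that's going on bro Copyright© Bro Code Bro"

# capture up to 4 characters on each side of the © symbol
m = re.search(r"(.{0,4})©(.{0,4})", html)
if m:
    print(m.group(1))  # 'ight'
    print(m.group(2))  # ' Bro'
```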
9,534
65,635,575
I get the following error when I attempt to load a saved `sklearn.preprocessing.MinMaxScaler` ``` /shared/env/lib/python3.6/site-packages/sklearn/base.py:315: UserWarning: Trying to unpickle estimator MinMaxScaler from version 0.23.2 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk. UserWarning) [2021-01-08 19:40:28,805 INFO train.py:1317 - main ] EXCEPTION WORKER 100: Traceback (most recent call last): ... File "/shared/core/simulate.py", line 129, in process_obs obs = scaler.transform(obs) File "/shared/env/lib/python3.6/site-packages/sklearn/preprocessing/_data.py", line 439, in transform if self.clip: AttributeError: 'MinMaxScaler' object has no attribute 'clip' ``` I trained the scaler on one machine, saved it, and pushed it to a second machine where it was loaded and used to transform input. ``` # loading and transforming import joblib from sklearn.preprocessing import MinMaxScaler scaler = joblib.load('scaler') assert isinstance(scaler, MinMaxScaler) data = scaler.transform(data) # throws exception ```
2021/01/08
[ "https://Stackoverflow.com/questions/65635575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7253453/" ]
The issue is that you trained the scaler on a machine with an older version of sklearn than the machine you're using to load the scaler. Notice the `UserWarning`: `UserWarning: Trying to unpickle estimator MinMaxScaler from version 0.23.2 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk. UserWarning)` The solution is to fix the version mismatch, either by upgrading the training machine's sklearn to `0.24.0` or by downgrading the loading machine's to `0.23.2`.
I solved this issue with `pip install scikit-learn==0.23.2` in my conda prompt or cmd. Essentially, downgrading the scikit-learn module helped.
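The mismatch reported in the warning (pickled with 0.23.2, loaded with 0.24.0) can also be guarded against explicitly before unpickling. Here is a minimal version-comparison sketch; in real code the `running` value would come from `sklearn.__version__`, and the helper only handles plain numeric versions:

```python
def parse_version(v):
    # "0.24.0" -> (0, 24, 0); tuples compare component by component
    return tuple(int(part) for part in v.split("."))

pickled_with = "0.23.2"   # version named in the warning at pickle time
running = "0.24.0"        # version doing the unpickling

if parse_version(running) != parse_version(pickled_with):
    print("version mismatch:", pickled_with, "->", running)
```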
9,541
40,511,177
I am actually new to Python. While learning it I came across this piece of code. The official Python documentation says that when a `continue` statement is encountered, control shifts back to the beginning of the loop, but in this case it shifts to the `finally` clause and executes from there onward. Is this a bug in Python, or what? Can somebody please explain this to me? Thanks. ``` def askint(): while True: try: val =int(input("pleas enter an integer ")) except: print ("it seems like you did'n enter an integer ") continue else: print ("yep that's an integer thank you") break finally: print ('control is now on finally me') print ('i am also getting executed ') askint() ```
2016/11/09
[ "https://Stackoverflow.com/questions/40511177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6250108/" ]
The `finally` code is always executed in a `try/except` block. The `continue` doesn't skip it (or it would be a bug in python).
The `finally` clause **must be executed** no matter what happens, and so it is. This is why it's called `finally`: it doesn't matter whether what you have *tried* succeeded or raised an *except*ion, it is *always* executed.
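A minimal demonstration of this, independent of the question's input loop: `finally` runs on every iteration, even when the body hits `continue` and the rest of the body is skipped.

```python
log = []
for i in range(3):
    try:
        if i == 1:
            continue  # skips the rest of the try body...
        log.append(("body", i))
    finally:
        log.append(("finally", i))  # ...but finally still runs

print(log)  # [('body', 0), ('finally', 0), ('finally', 1), ('body', 2), ('finally', 2)]
```

Note that `("body", 1)` is missing while `("finally", 1)` is present: the `continue` skipped the body but not the `finally` clause.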
9,546
30,080,491
I am trying to reproduce the algorithm described in the Isolation Forest paper in python. <http://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf?q=isolation> This is my current code: ``` import numpy as np import sklearn as sk import matplotlib.pyplot as plt import pandas as pd from sklearn.decomposition import PCA def _h(i): return np.log(i) + 0.5772156649 def _c(n): if n > 2: h = _h(n-1) return 2*h - 2*(n - 1)/n if n == 2: return 1 else: return 0 def _anomaly_score(dict_scores, n_samples): score = np.array([np.mean(dict_scores[k]) for k in dict_scores.keys()]) score = -score/_c(n_samples) return 2**score def _split_data(X): ''' split the data in the left and right nodes ''' n_samples, n_columns = X.shape n_features = n_columns - 1 feature_id = np.random.randint(low=0, high=n_features-1) feature = X[:, feature_id] split_value = np.random.choice(feature) left_X = X[feature <= split_value] right_X = X[feature > split_value] return left_X, right_X, feature_id, split_value def iTree(X, add_index=False, max_depth = np.inf): ''' construct an isolation tree and returns the number of step required to isolate an element. A column of index is added to the input matrix X if add_index=True. This column is required in the algorithm. ''' n_split = {} def iterate(X, count = 0): n_samples, n_columns = X.shape n_features = n_columns - 1 if count > max_depth: for index in X[:,-1]: n_split[index] = count return if n_samples == 1: index = X[0, n_columns-1] n_split[index] = count return else: lX, rX, feature_id, split_value = _split_data(X) # Uncomment the print to visualize a draft of # the construction of the tree #print(lX[:,-1], rX[:,-1], feature_id, split_value, n_split) n_samples_lX, _ = lX.shape n_samples_rX, _ = rX.shape if n_samples_lX > 0: iterate(lX, count+1) if n_samples_rX >0: iterate(rX, count+1) if add_index: n_samples, _ = X.shape X = np.c_[X, range(n_samples)] iterate(X) return n_split class iForest(): ''' Class to construct the isolation forest. 
-n_estimators: is the number of trees in the forest, -sample_size: is the bootstrap parameter used during the construction of the forest, -add_index: adds a column of indices to the matrix X. This is required, and add_index can be set to False only if the last column of X already contains indices. -max_depth: is the maximum depth of each tree ''' def __init__(self, n_estimators=20, sample_size=None, add_index = True, max_depth = 100): self.n_estimators = n_estimators self.sample_size = sample_size self.add_index = add_index self.max_depth = max_depth return def fit(self, X): n_samples, n_features = X.shape if self.sample_size == None: self.sample_size = int(n_samples/2) if self.add_index: X = np.c_[X, range(n_samples)] trees = [iTree(X[np.random.choice(n_samples, self.sample_size, replace=False)], max_depth=self.max_depth) for i in range(self.n_estimators)] self.all_anomaly_score_ = {k:None for k in range(n_samples)} for k in self.all_anomaly_score_.keys(): self.all_anomaly_score_[k] = np.array([tree[k] for tree in trees if k in tree]) self.anomaly_score_ = _anomaly_score(self.all_anomaly_score_, n_samples) return self ``` The main part of the code is the iTree function, which returns a dictionary with the number of steps required to isolate each sample. A column of indices is attached to the input matrix `X` in order to make it easier to understand which samples are in each node. When I compare the result obtained with my code and the ones obtained with the isolation forest available for R, I get different results. 
Consider for example the stackloss dataset: ``` data = pd.read_csv("stackloss.csv") X = data.as_matrix()[:, 1:] max_depth = 100 itree = iTree(X, add_index=True, max_depth=max_depth) #example of isolation tree iforest = iForest(n_estimators=1, max_depth=max_depth, sample_size=21) # isolation forest iforest.fit(X) sol = np.argsort(iforest.anomaly_score_) #correct sol = [10 5 4 8 12 9 11 17 6 19 7 14 13 15 18 3 20 16 2 1 0] ``` `sol` often differs from the correct solution obtained with the R software. <https://r-forge.r-project.org/projects/iforest/> The correct solution in R has been obtained with: ``` > tr = IsolationTrees(stackloss,ntree = 100000,hlim = 100, rFactor = 1) > as = AnomalyScore(stackloss, tr) > order(as$outF) [1] 11 6 5 9 13 10 12 18 7 20 8 15 14 16 19 4 21 17 3 2 1 > order(as$outF)-1 [1] 10 5 4 8 12 9 11 17 6 19 7 14 13 15 18 3 20 16 2 1 0 > ``` Where is the mistake?
2015/05/06
[ "https://Stackoverflow.com/questions/30080491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2411173/" ]
``` self.anomaly_score_ = _anomaly_score(self.all_anomaly_score_, n_samples) ``` You're calculating \_anomaly\_score with n\_samples, which is the total number of samples. However, you are building the trees with subsamples. Therefore, when you're calculating the average search length '\_c(n)' you should use sample\_size instead of n\_samples, as the trees are built with subsamples. So, I believe your code should be: ``` self.anomaly_score_ = _anomaly_score(self.all_anomaly_score_, self.sample_size) ```
There is a pull-request in scikit-learn: <https://github.com/scikit-learn/scikit-learn/pull/4163>
9,551
46,572,148
I am receiving the following error message in Spyder: Warning: You are using requests version , which is older than requests-oauthlib expects, please upgrade to 2.0.0 or later. I am not sure how to upgrade requests. I am using Python 2.7 as part of an Anaconda installation.
2017/10/04
[ "https://Stackoverflow.com/questions/46572148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7426556/" ]
**1)** To the best of my knowledge as of early Nov, 2017 you are correct: there is no commonly available way to compile Swift to WebAssembly. Maybe some enterprising hacker somewhere has made it happen but if so she hasn't shared her code with us yet. **2)** In order to enable Wasm support you will probably need to hack on a few different parts. I think you could do it without knowing much of anything about the internals of the compiler (e.g. the parser & optimizers), but you'd need to learn about how the toolchain works and how it integrates with the platform at runtime. You can learn a *ton* about what you'd need to do by studying how Swift was ported to Android. Luckily, Brian Gesiak posted a *really* detailed blog post about exactly how that port worked (warning: small Patreon donation required): <https://modocache.io/how-to-port-the-swift-runtime-to-android> Seriously, you would be nuts to embark on this project without reading that article. Though I'm *NOT* an expert, based on that port and my (basic) understanding of Swift, I think the rough overview of where you'd need to hack would be: * The Swift compiler + You'll need to teach it about the Wasm "triple" used by LLVM, so it knows how to integrate with the rest of its toolchain + You'll need to set up a WebAssembly platform so that people can write `#if os(WebAssembly)` in places that require conditional compilation + You'll also need to set up similar build-time macros. The Android article explains this sort of thing really well. * The Swift runtime + This is written in C++ and it needs to run on Wasm + Since Wasm is an unusual platform there will probably be some work here. You might need to provide compatibility shims for various system calls and the like. + Projects like Emscripten have demonstrated lots of success compiling C++ to Wasm. * The Swift standard library + In theory you can write & run Swift code that doesn't use the standard library, but who would want to? 
+ Also in theory this should "just work" if the runtime works, but you will likely need to use your `#if os(WebAssembly)` feature here to work around platform irregularities * Bonus: The Foundation and Dispatch libraries + If you want to use existing Swift code these two libraries will be essential. ### Links: * Brian Gesiak's awesome blog post: <https://modocache.io/how-to-port-the-swift-runtime-to-android> * Link to the Android port's pull request: <https://github.com/apple/swift/pull/1442> * Interesting article about the challenges and rewards of cross-platform Swift: <https://medium.com/@ephemer/how-we-put-an-app-in-the-android-play-store-using-swift-67bd99573e3c>
A WebAssembly target would be like a generic Unix target for LLVM, so I think someone needs to develop that port. Please note that Swift -> Wasm in the browser would be pretty much useless, because Wasm has no DOM or DOM API access, so you still need JavaScript to do anything meaningful; thus the question: why would anyone bother to make the port? It looks like JavaScript remains the only web language. If you don't like JavaScript, you'd better forget web development. Chances are that Swift will run on Android before it runs on the web, so stick with Swift/iOS and then port to Android whenever that becomes possible. People don't use the web/browser that much anyway.
9,556
30,229,231
I have a problem when using Python to save an image from a URL, either with a urllib2 request or with urllib.urlretrieve. The URL of the image is valid; I can download it manually using a browser. However, when I use Python to download the image, the resulting file cannot be opened. I use macOS Preview to view the image. Thank you! UPDATE: The code is as follows: ``` def downloadImage(self): request = urllib2.Request(self.url) pic = urllib2.urlopen(request) print "downloading: " + self.url print self.fileName filePath = localSaveRoot + self.catalog + self.fileName + Picture.postfix # urllib.urlretrieve(self.url, filePath) with open(filePath, 'wb') as localFile: localFile.write(pic.read()) ``` The image URL that I want to download is <http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg> This URL is valid and I can save it through the browser, but the Python code downloads a file that cannot be opened. Preview says "It may be damaged or use a file format that Preview doesn't recognize." I compared the image that I download with Python and the one that I download manually through the browser. The former is several bytes smaller, so it seems that the file is incomplete, but I don't know why Python cannot completely download it.
2015/05/14
[ "https://Stackoverflow.com/questions/30229231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4898319/" ]
This is the simplest way to download and save an image from the internet, using the **urllib.request** package. You simply pass the image URL (from where you want to download the image) and the destination path (where you want to save the downloaded image locally, including the image name with .jpg or .png). Here I have used "local-filename.jpg"; replace it with your own file name. **Python 3** ``` import urllib.request imgURL = "http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg" urllib.request.urlretrieve(imgURL, "D:/abc/image/local-filename.jpg") ``` You can download multiple images as well if you have all the image URLs. Just pass those image URLs in a for loop, and the code will download each image in turn.
download and save image to directory ==================================== ``` import requests headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-US,en;q=0.9" } img_data = requests.get(url=image_url, headers=headers).content with open(create_dir() + "/" + 'image_name' + '.png', 'wb') as handler: handler.write(img_data) ``` for creating the directory ``` import os def create_dir(): # Directory dir_ = "CountryFlags" # Parent Directory path parent_dir = os.path.dirname(os.path.realpath(__file__)) # Path path = os.path.join(parent_dir, dir_) os.mkdir(path) return path ```
9,566
31,550,622
I can call MATLAB from my system python: ``` >>> import matlab.engine >>> ``` but when I load a virtual environment, I now get a segfault: ``` >>> import matlab.engine Segmentation fault: 11 ``` I've run the [setup.py install instructions](http://www.mathworks.com/help/matlab/matlab_external/install-the-matlab-engine-for-python.html) for both system python and my virtual environment. I expected questions [like this one](http://www.mathworks.com/matlabcentral/answers/180518-why-does-matlab-engine-for-python-crash-when-using-a-non-system-default-version-of-python-on-mac), in which I have to set the `DYLD_LIBRARY_PATH` explicitly would fix things, but I don't have that environment variable set when I run my system python. What could be different between the two python implementations that would cause this segfault? EDIT: I'm using OS X Yosemite on a Late 2013 13" Macbook Pro. I'm using Python 2.7 from a freshly installed virtual environment (NOT a virtual machine).
2015/07/21
[ "https://Stackoverflow.com/questions/31550622", "https://Stackoverflow.com", "https://Stackoverflow.com/users/362703/" ]
I did this: ``` cd "matlabroot\extern\engines\python" python setup.py install --prefix="installdir" ``` where `installdir` is my virtualenv and `matlabroot` is the directory of the MATLAB install. Seems to work on my Windows server; so far, so good. Reference here: <https://www.mathworks.com/help/matlab/matlab_external/install-matlab-engine-api-for-python-in-nondefault-locations.html>
I ran the "`python setup.py install`" from `matlabroot\extern\engines\python` with my virtual environment active. Note that I did use `venv`.
9,576
37,465,816
The Getting Started docs for aiohttp give the following client example: ``` import asyncio import aiohttp async def fetch_page(session, url): with aiohttp.Timeout(10): async with session.get(url) as response: assert response.status == 200 return await response.read() loop = asyncio.get_event_loop() with aiohttp.ClientSession(loop=loop) as session: content = loop.run_until_complete( fetch_page(session, 'http://python.org')) print(content) ``` And they give the following note for Python 3.4 users: > > If you are using Python 3.4, please replace await with yield from and > async def with a @coroutine decorator. > > > If I follow these instructions I get: ``` import aiohttp import asyncio @asyncio.coroutine def fetch(session, url): with aiohttp.Timeout(10): async with session.get(url) as response: return (yield from response.text()) if __name__ == '__main__': loop = asyncio.get_event_loop() with aiohttp.ClientSession(loop=loop) as session: html = loop.run_until_complete( fetch(session, 'http://python.org')) print(html) ``` However, this will not run, because `async with` is not supported in Python 3.4: ``` $ python3 client.py File "client.py", line 7 async with session.get(url) as response: ^ SyntaxError: invalid syntax ``` How can I translate the `async with` statement to work with Python 3.4?
2016/05/26
[ "https://Stackoverflow.com/questions/37465816", "https://Stackoverflow.com", "https://Stackoverflow.com/users/58866/" ]
Just don't use the result of `session.get()` as a context manager; use it as a coroutine directly instead. The request context manager that `session.get()` produces would normally [*release* the request](http://pythonhosted.org/aiohttp/client_reference.html#aiohttp.ClientResponse.release) on exit, but [so does using `response.text()`](http://pythonhosted.org/aiohttp/client_reference.html#aiohttp.ClientResponse.text), so you could ignore that here: ``` @asyncio.coroutine def fetch(session, url): with aiohttp.Timeout(10): response = yield from session.get(url) return (yield from response.text()) ``` The request wrapper returned here doesn't have the required asynchronous methods (`__aenter__` and `__aexit__`); they're omitted entirely when not using Python 3.5 (see the [relevant source code](https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/client.py#L491-L560)). If you have more statements between the `session.get()` call and accessing the `response.text()` awaitable, you probably want to use a `try:..finally:` anyway to release the connection; the Python 3.5 release context manager also *closes* the response if an exception occurred. Because a `yield from response.release()` is needed here, this can't be encapsulated in a context manager on Python 3.4: ``` import sys @asyncio.coroutine def fetch(session, url): with aiohttp.Timeout(10): response = yield from session.get(url) try: # other statements return (yield from response.text()) finally: if sys.exc_info()[0] is not None: # on exceptions, close the connection altogether response.close() else: yield from response.release() ```
`aiohttp`'s [examples](https://github.com/KeepSafe/aiohttp/tree/master/examples) implemented using 3.4 syntax. Based on [json client example](https://github.com/KeepSafe/aiohttp/blob/master/examples/client_json.py) your function would be: ``` @asyncio.coroutine def fetch(session, url): with aiohttp.Timeout(10): resp = yield from session.get(url) try: return (yield from resp.text()) finally: yield from resp.release() ``` **Upd:** Note that Martijn's solution would work for simple cases, but may lead to unwanted behavior in specific cases: ``` @asyncio.coroutine def fetch(session, url): with aiohttp.Timeout(5): response = yield from session.get(url) # Any actions that may lead to error: 1/0 return (yield from response.text()) # exception + warning "Unclosed response" ``` Besides exception you'll get also warning "Unclosed response". This may lead to connections leak in complex app. You will avoid this problem if you'll manually call `resp.release()`/`resp.close()`: ``` @asyncio.coroutine def fetch(session, url): with aiohttp.Timeout(5): resp = yield from session.get(url) try: # Any actions that may lead to error: 1/0 return (yield from resp.text()) except Exception as e: # .close() on exception. resp.close() raise e finally: # .release() otherwise to return connection into free connection pool. # It's ok to release closed response: # https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/client_reqrep.py#L664 yield from resp.release() # exception only ``` I think it's better to follow official examples (and `__aexit__` [implementation](https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/client.py#L555)) and call `resp.release()`/`resp.close()` explicitly.
9,578
52,211,917
I'm trying to create a batch script for Windows that runs a program with Python 3 if available, else Python 2. I know the script can be executed with Python 2 via `py -2 script.py` and with Python 3 via `py -3 script.py`, and if I run `py -0`, it returns all the installed Python versions. How do I build this script? I do not want to check whether the Python directory is available or not; I'd prefer to check in a way that is Python-location agnostic.
2018/09/06
[ "https://Stackoverflow.com/questions/52211917", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5252492/" ]
Not a full solution, but a method to detect which version of Python is installed: You can check if Python 3 is installed by running `py -3 --version` and then checking the `%ERRORLEVEL%` variable in the batch script. If it is 0, then `py -3 --version` was successful, i.e. Python 3 is installed on the system. If it is nonzero, then Python 3 is not installed.
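The `%ERRORLEVEL%` check described above keys on the process return code, which is the same value Python itself exposes via `subprocess`. A quick way to see the convention in action from Python (a sketch using the current interpreter, since the `py` launcher only exists on Windows):

```python
import subprocess
import sys

# a successful `--version` call exits with code 0, which is exactly what
# the batch script's %ERRORLEVEL% comparison detects
proc = subprocess.run([sys.executable, "--version"],
                      stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(proc.returncode)  # 0
```

A nonzero return code would mean the launch failed, i.e. that interpreter is not available.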
You can do it without a temporary file: ``` set PYTHON_MAJOR_VERSION=0 for /f %%i in ('python -c "import sys; print(sys.version_info[0])"') do set PYTHON_MAJOR_VERSION=%%i ```
9,579
67,644,959
Since the code was cryptic I decided to reformulate it. This code is trying to remove the second element from a linked list (the node with number 2 on "int data"). The first parameter of remove\_node is the address of the first pointer of the list, so if I have to remove the first pointer of the list I am able to start the list from the next pointer. The problem is inside the second while loop, inside the if clause, more specifically with the free(previous->next) function, it is changing the address pointed by address\_of\_ptr (aka \*address\_of\_ptr) and I can't understand why this is happening. Could someone enlighten me? ``` #include <stdlib.h> typedef struct node { struct node *next; int data; } node; void remove_node(node **address_of_ptr, int int_to_remove) { node *previous; while (*address_of_ptr != NULL && (*address_of_ptr)->data == int_to_remove) { previous = *address_of_ptr; *address_of_ptr = (*address_of_ptr)->next; free(previous); } previous = *address_of_ptr; address_of_ptr = &((*address_of_ptr)->next); while (*address_of_ptr != NULL) { if ((*address_of_ptr)->data == int_to_remove) { address_of_ptr = &((*address_of_ptr)->next); free(previous->next); previous->next = *address_of_ptr; /*or previous->next = (*address_of_pointer)->next; free(*address_of_pointer); address_of_pointer = &previous->next; */ } else { previous = *address_of_ptr; address_of_ptr = &((*address_of_ptr)->next); } } } int main(void) { node *head = malloc(sizeof(node)); node *a = malloc(sizeof(node)); node *b = malloc(sizeof(node)); head->next = a; a->next = b; b->next = NULL; head->data = 1; a->data = 2; b->data = 1; remove_node(&head, 2); return 0; } ``` I figured it out using pythontutor.com. 
Here is an image of the execution about to replace *address\_of\_ptr* with the address of the next pointer in the list: [about to replace *address\_of\_ptr*](https://i.stack.imgur.com/NqMAr.png) In my mind what would happen is that *address\_of\_ptr* would move from *head* to *a*: [like this](https://i.stack.imgur.com/c2r9l.png) But what actually happens is this: [*address\_of\_ptr* move from head to the first node's *next* variable](https://i.stack.imgur.com/56UPn.png) Which is also the address of the next node (such as *a*). Inside the if clause: [*address\_of\_ptr* is set to the second node's *next* variable](https://i.stack.imgur.com/OjsIq.png) What I was expecting was it being set to *b*: [Like this](https://i.stack.imgur.com/TFtRv.png) Since it becomes equivalent to *previous->next*, it causes program to free *address\_of\_ptr*.
2021/05/22
[ "https://Stackoverflow.com/questions/67644959", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15996314/" ]
First of all, if the first `while` loop stops due to `*address_of_ptr == NULL`, then the line `address_of_ptr = &((*address_of_ptr)->next);` will cause undefined behavior, due to dereferencing a `NULL` pointer. However, this will not happen with the sample input provided by the function `main`, so it is not the cause of your problem. --- Your problem is the following: When first executing inside the `if` block of the second `while` loop (which will be in the first iteration of the second `while` loop with your sample input), `address_of_ptr` will point to the `next` member of the first node in the linked list, so the pointer will have the value of `&head->next`. This is because you have the line `address_of_ptr = &((*address_of_ptr)->next);` in several places in your program, and it will execute one of these lines exactly once by the time you enter the `if` block. The first line of that `if` block is also the line `address_of_ptr = &((*address_of_ptr)->next);` so that after executing that line, `address_of_ptr` will point to the `next` member of the second node in the linked list, so the pointer will have the value of `&a->next`. At that time, the value of `previous` will be `head`, because at the time `previous = *address_of_ptr;` was executed, `address_of_ptr` had the value of `&head`. Therefore, when the next line `free(previous->next);` is executed, `previous->next` will have the value of `a`, which ends the lifetime of that node. As already stated, at this time `address_of_ptr` will have the value `&a->next`, which means that `address_of_ptr` is now a dangling pointer, because it is now pointing to a freed memory location. Therefore, it is not surprising that `*address_of_ptr` changes after the `free` statement, because freed memory can change at any time. 
--- Note that the function `remove_node` can be implemented in a much simpler way: ``` void remove_node( node **pp_head, int int_to_remove ) { node **pp = pp_head, *p; //process one node per loop iteration while ( (p = *pp) != NULL ) { //check if node should be removed if ( p->data == int_to_remove ) { //unlink the node from the linked list *pp = p->next; //free the unlinked node free( p ); } else { //go to next node in the list pp = &p->next; } } } ```
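The same pointer-to-pointer idea can be mirrored in Python with a dummy head node standing in for the extra level of indirection. This is a sketch, not a line-by-line translation of the C answer; Python's garbage collector takes the place of `free`, so there is no dangling-pointer hazard:

```python
class Node:
    def __init__(self, data, nxt=None):
        self.data = data
        self.next = nxt

def remove_nodes(head, value):
    # dummy plays the role of `node **pp`: the first node can be
    # unlinked through the same code path as any other node
    dummy = Node(None, head)
    cur = dummy
    while cur.next is not None:
        if cur.next.data == value:
            cur.next = cur.next.next  # unlink the matching node
        else:
            cur = cur.next            # advance only when nothing was removed
    return dummy.next

# same list as the question's main(): 1 -> 2 -> 1, then remove the 2
head = Node(1, Node(2, Node(1)))
head = remove_nodes(head, 2)
out = []
while head is not None:
    out.append(head.data)
    head = head.next
print(out)  # [1, 1]
```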
This line: ``` begin_list = &previous->next; ``` You might be mis-thinking how pointers to pointers work. Since `begin_list` is a `t_list **`, you generally want to assign to `*begin_list`. Also, since `previous->next` is already an address, you don't want to assign the address to `*begin_list`, just the value, so this should work instead: ``` *begin_list = previous->next; ``` Same for the similar line below it.
9,581
50,187,041
I need help with my Python program. I'm writing a calculator. The numbers should be added, but for some reason they do not add up. It seems that I did everything right, but the program does not work. Please help me. [Picture](https://i.stack.imgur.com/777SB.png) Code: ``` a = input('Enter number A \n'); d = input('Enter sign operations \n') b = input('Enter number B \n') c = a + b if str(d) == "+": int(c) == "a + b" print('Answer: ' + c) ```
2018/05/05
[ "https://Stackoverflow.com/questions/50187041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9744650/" ]
Please don't post screenshots. Copy and paste text and use the {} CODE markdown. What data type is returned by input()? It's always a string. It doesn't matter what you type. Where is the variable c actually calculated in this program? Line 4. What types of data are used to compute c? Two strings. What happens when you use the "+" operation on two strings instead of two numbers? Try running your program and when it prompts you to "enter number A", type "Joe". When it prompts you to "enter number B", type "Bob". What does your program do? You need to create numerical objects from each of the strings you entered if you want to do arithmetic. I think that you tried what you thought would do that on line 7. It doesn't work though. "==" is used to test for equality, not to assign a value. The single "=" is used to bind values to variable names. You do that correctly on lines 1 through 4. Notice that the plain variable name is always by itself on the left of the "=" sign. You do all the fancy stuff on the right side of the "=". You can actually delete lines 6 and 7 and the output of the program will not change.
`a` and `b` are strings. `a + b` concatenates strings `a` and `b`. You need to convert the strings to int: `c = int(a) + int(b)` And remove the lines: ``` if str(d) == "+": int(c) == "a + b" ```
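Putting the fixes together, a corrected version of the calculator (string literals stand in for the `input()` calls so the sketch is self-contained and testable):

```python
a = "12"   # would come from input('Enter number A \n')
d = "+"    # would come from input('Enter sign operations \n')
b = "30"   # would come from input('Enter number B \n')

if d == "+":
    c = int(a) + int(b)            # convert to int before adding
    print('Answer: ' + str(c))     # Answer: 42
```

With the original code, `a + b` would have produced the string `"1230"` instead of the number `42`.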
9,585
9,776,332
I wrote a Python program that needs to call the aircrack program for some tasks, but I ran into trouble with privileges. When the aircrack program is called on the command line, it requires "sudo" at the beginning. I checked the location of the executable and found that it lives under `/usr/sbin/`. Right now my program is driven by Apache and the default user is www-data. If I execute the program as user www-data without "sudo", it gives no result. I googled this problem a little and found that we can change the permissions of the aircrack program by executing `sudo chmod 4755 /usr/sbin/airodump-ng`. I tried it and the permissions changed from `-rwxr-xr-x` to `-rwsr-xr-x`. I executed the command once again without "sudo" but it doesn't make any difference. My question is: how can I change the permissions of a program so that it can be executed by any user without "sudo"? If I run the program with "sudo", then the output files are owned by "root", which adds more complexity to my program. Please help, thanks.
2012/03/19
[ "https://Stackoverflow.com/questions/9776332", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636625/" ]
First, none of us can replace the chmod man page (so I'll start by quoting it): ``` A numeric mode is from one to four octal digits (0-7), derived by adding up the bits with values 4, 2, and 1. Omitted digits are assumed to be leading zeros. The first digit selects the set user ID (4) and set group ID (2) and restricted deletion or sticky (1) attributes. The second digit selects permissions for the user who owns the file: read (4), write (2), and execute (1); the third selects permissions for other users in the file's group, with the same values; and the fourth for other users not in the file's group, with the same values. ``` So, if you want the program to be executable by all users, you 'chmod 111 file'. However, IIRC you need read permissions as well for most things, so 'chmod 555 file'. Also, the reason I'm making the bits the same is that you said you wanted 'everyone' to be able to do so.
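The numeric-mode arithmetic quoted above can be checked with a few lines of Python; the `4755` value is the one from the question, and the bit masks simply pick out each octal digit:

```python
mode = 0o4755

setuid = bool(mode & 0o4000)   # first digit: 4 = set-user-ID bit
owner = (mode >> 6) & 0o7      # 7 = read(4) + write(2) + execute(1)
group = (mode >> 3) & 0o7      # 5 = read(4) + execute(1)
other = mode & 0o7             # 5 = read(4) + execute(1)

print(setuid, owner, group, other)  # True 7 5 5
```

That decomposition is exactly why `ls` shows `-rwsr-xr-x` after `chmod 4755`: the `s` in the owner triplet is the setuid bit.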
You have to set the suid flag for all users. ``` sudo chmod ogu+sxr /usr/sbin/airodump-ng ```
9,588
7,377,494
I'm trying to scrape a page using python. The problem is, I keep getting Errno 54 Connection reset by peer. The error comes when I run this code - ``` urllib2.urlopen("http://www.bkstr.com/webapp/wcs/stores/servlet/CourseMaterialsResultsView?catalogId=10001&categoryId=9604&storeId=10161&langId=-1&programId=562&termId=100020629&divisionDisplayName=Stanford&departmentDisplayName=ILAC&courseDisplayName=126&sectionDisplayName=01&demoKey=d&purpose=browse") ``` this happens for all the urls on this page - what is the issue?
2011/09/11
[ "https://Stackoverflow.com/questions/7377494", "https://Stackoverflow.com", "https://Stackoverflow.com/users/915672/" ]
``` $> telnet www.bkstr.com 80 Trying 64.37.224.85... Connected to www.bkstr.com. Escape character is '^]'. GET /webapp/wcs/stores/servlet/CourseMaterialsResultsView?catalogId=10001&categoryId=9604&storeId=10161&langId=-1&programId=562&termId=100020629&divisionDisplayName=Stanford&departmentDisplayName=ILAC&courseDisplayName=126&sectionDisplayName=01&demoKey=d&purpose=browse HTTP/1.0 Connection closed by foreign host. ``` You're not going to have any joy fetching that URL from python, or anywhere else. If it works in your browser then there must be something else going on, like cookies or authentication or some such. Or, possibly, the server's broken or they've changed their configuration. Try opening it in a browser that you've never accessed that site in before to check. Then log in and try it again. Edit: It was cookies after all: ``` import cookielib, urllib2 cj = cookielib.CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj)) #Need to set a cookie opener.open("http://www.bkstr.com/") #Now open the page we want data = opener.open("http://www.bkstr.com/webapp/wcs/stores/servlet/CourseMaterialsResultsView?catalogId=10001&categoryId=9604&storeId=10161&langId=-1&programId=562&termId=100020629&divisionDisplayName=Stanford&departmentDisplayName=ILAC&courseDisplayName=126&sectionDisplayName=01&demoKey=d&purpose=browse").read() ``` The output looks ok, but you'll have to check that it does what you want :)
I came across a similar error just recently. The connection was dropping out and being reset. I tried cookiejars, extended delays and different headers/useragents, but nothing worked. In the end the fix was simple. I went from urllib2 to requests. The old; ``` import urllib2 opener = urllib2.build_opener() buf = opener.open(url).read() ``` The new; ``` import requests buf = requests.get(url).text ``` After that everything worked perfectly.
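For anyone on Python 3: `cookielib` and `urllib2` were renamed (`http.cookiejar` and `urllib.request`), but the cookie-jar pattern from the answer above carries over directly. A sketch with the actual fetches left commented out, since they need network access:

```python
import http.cookiejar
import urllib.request

# cookielib -> http.cookiejar, urllib2 -> urllib.request in Python 3
cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))

# The pattern is the same as in the answer above:
# opener.open("http://www.bkstr.com/")              # first visit sets the cookie
# data = opener.open("http://www.bkstr.com/webapp/...").read()

assert len(cj) == 0  # no request made yet, so the jar starts out empty
```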
9,589
50,983,646
I have a dataframe like this: ``` df = pd.DataFrame({'timestamp':pd.date_range('2018-01-01', '2018-01-02', freq='2h', closed='right'),'col1':[np.nan, np.nan, np.nan, 1,2,3,4,5,6,7,8,np.nan], 'col2':[np.nan, np.nan, 0, 1,2,3,4,5,np.nan,np.nan,np.nan,np.nan], 'col3':[np.nan, -1, 0, 1,2,3,4,5,6,7,8,9], 'col4':[-2, -1, 0, 1,2,3,4,np.nan,np.nan,np.nan,np.nan,np.nan] })[['timestamp', 'col1', 'col2', 'col3', 'col4']] ``` which looks like this: ``` timestamp col1 col2 col3 col4 0 2018-01-01 02:00:00 NaN NaN NaN -2.0 1 2018-01-01 04:00:00 NaN NaN -1.0 -1.0 2 2018-01-01 06:00:00 NaN 0.0 NaN 0.0 3 2018-01-01 08:00:00 1.0 1.0 1.0 1.0 4 2018-01-01 10:00:00 2.0 NaN 2.0 2.0 5 2018-01-01 12:00:00 3.0 3.0 NaN 3.0 6 2018-01-01 14:00:00 NaN 4.0 4.0 4.0 7 2018-01-01 16:00:00 5.0 NaN 5.0 NaN 8 2018-01-01 18:00:00 6.0 NaN 6.0 NaN 9 2018-01-01 20:00:00 7.0 NaN 7.0 NaN 10 2018-01-01 22:00:00 8.0 NaN 8.0 NaN 11 2018-01-02 00:00:00 NaN NaN 9.0 NaN ``` Now, I want to find an efficient and pythonic way of chopping off (for each column! Not counting timestamp) before the first valid index and after the last valid index. In this example I have 4 columns, but in reality I have a lot more, 600 or so. I am looking for a way of chop of all the NaN values before the first valid index and all the NaN values after the last valid index. One way would be to loop through I guess.. But is there a better way? This way has to be efficient. I tried to "unpivot" the dataframe using melt, but then this didn't help. An obvious point is that each column would have a different number of rows after the chopping. So I would like the result to be a list of data frames (one for each column) having timestamp and the column in question. 
For instance: ``` timestamp col1 3 2018-01-01 08:00:00 1.0 4 2018-01-01 10:00:00 2.0 5 2018-01-01 12:00:00 3.0 6 2018-01-01 14:00:00 NaN 7 2018-01-01 16:00:00 5.0 8 2018-01-01 18:00:00 6.0 9 2018-01-01 20:00:00 7.0 10 2018-01-01 22:00:00 8.0 ``` **My try** I tried like this: ``` final = [] columns = [c for c in df if c !='timestamp'] for col in columns: first = df.loc[:, col].first_valid_index() last = df.loc[:, col].last_valid_index() final.append(df.loc[:, ['timestamp', col]].iloc[first:last+1, :]) ```
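The loop in "My try" is essentially the right approach. The trimming step itself can be sketched in plain Python to see the logic in isolation — `None` standing in for NaN here; only leading and trailing missing values are dropped, interior gaps survive:

```python
# col1 from the sample frame, with None standing in for NaN
col1 = [None, None, None, 1, 2, 3, 4, None, 5, 6, 7, 8, None]

def trim(values):
    """Drop leading/trailing missing entries; keep interior gaps."""
    valid = [i for i, v in enumerate(values) if v is not None]
    if not valid:
        return []
    return values[valid[0]:valid[-1] + 1]  # first..last valid index, inclusive

assert trim(col1) == [1, 2, 3, 4, None, 5, 6, 7, 8]
assert trim([None, None]) == []  # an all-missing column trims to nothing
```

`first_valid_index()`/`last_valid_index()` in the pandas loop play exactly the role of `valid[0]`/`valid[-1]` here.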
2018/06/22
[ "https://Stackoverflow.com/questions/50983646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6435921/" ]
Since it's a response header, I assume you mean this: ``` ctx.Response.Header.Set("Access-Control-Allow-Origin", "*") ```
Another option if you are not using `Context`: ``` func setResponseHeader(h http.HandlerFunc) http.HandlerFunc { return func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Access-Control-Allow-Origin", "*") h.ServeHTTP(w, r) } } ``` `setResponseHeader` is essentially a decorator of the argument `HandlerFunc` `h`. When you assemble your routes, you can do something like this: ``` http.HandleFunc("/api/endpoint", setResponseHeader(myHandlerFunc)) http.ListenAndServe(":8000", nil) ```
9,590
41,846,085
I am working on this python program and am stuck on an issue. The program cannot use the value of n\_zero when I put it in the conditional statement. Here's the program ``` import numpy as np n_zero=int(input('Insert the amount of 0: ')) n_one =int(input('Insert the amount of 1: ')) n_two =int(input('Insert the amount of 2: ')) n_three = int(input('Insert the amount of 3: ')) data = [0]*n_zero + [1]*n_one + [2]*n_two + [3]*n_three print len(data) if len(data)==(2(n_zero)-1): np.random.shuffle(data) datastring = ''.join(map(str, data)) print ("Data string is : %s " % datastring ) else: print(error) ``` this is the error ``` if len(data)==(2(n_zero)-1): TypeError: 'int' object is not callable ``` Thank you
2017/01/25
[ "https://Stackoverflow.com/questions/41846085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7456346/" ]
In Python you don't cast the variable on the left-hand side, since you never declare the type of the left-hand value. ``` n_zero=int(input('Insert the amount of 0: ')) ``` **Regarding your edit:** What exactly are you trying to achieve? If you mean multiplication, then use the operator \* ``` if len(data)==(2*(n_zero)-1): ... ```
``` if len(data) == (2(n_zero) - 1): ``` should become: ``` if len(data) == (2 * (n_zero) - 1): ``` Note the `*` in the second example. You must explicitly provide the operator; Python will not assume that you want to multiply those two numbers if you don't tell it.
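A minimal sketch that reproduces the error and the fix side by side (the `eval` is only there to show the exception without crashing the script):

```python
n_zero = 3

# 2(n_zero) tries to *call* the integer 2 -- hence "'int' object is not callable"
try:
    eval("2(n_zero)")
    msg = "no error"
except TypeError as exc:
    msg = str(exc)

assert "not callable" in msg

# with the explicit * operator it is ordinary multiplication
assert 2 * n_zero - 1 == 5
```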
9,592
62,191,477
Can somebody explain why docker does not want to run the Django server? Here is my project structure: ``` app bankproject env docker-compose.yml Dockerfile manage.py requirements.txt ``` Here is my Dockerfile: ``` # pull official base image FROM python:3.8.0-alpine # set work directory WORKDIR /app # set environment variables ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 # install dependencies RUN pip install --upgrade pip COPY ./requirements.txt requirements.txt RUN pip install -r requirements.txt # copy project COPY . /app ``` and my docker-compose.yml ``` version: '3.7' services: web: build: ./ command: python manage.py runserver 0.0.0.0:8000 volumes: - ./app/:/app ports: - 8000:8000 env_file: - ./.env.dev ``` Actually, in the folder env I have the file .env.dev and it consists of: ``` DEBUG=1 SECRET_KEY=foo DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1] ``` The error I got: web\_1 | python: can't open file 'manage.py': [Errno 2] No such file or directory app\_web\_1 exited with code 2
2020/06/04
[ "https://Stackoverflow.com/questions/62191477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13678793/" ]
You haven't declared a `ref` and passed it to `HeaderComponent`, see [`useRef` hook](https://reactjs.org/docs/hooks-reference.html#useref). ``` const App = () => { const calendarRef = useRef(); React.useEffect(() => { console.log(calendarRef.current); }, []); return ( <div className="App"> <HeaderComponent calendarRef={calendarRef} /> <Calendar ref={calendarRef} height="100vh" /> </div> ); }; // HeaderComponent function HeaderComponent({ calendarRef }) { const handleNext = () => { const calendarInstance = calendarRef.current.getInstance(); calendarInstance.next(); }; return ... } ``` [![Edit Q-62191456-RefExample](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/purple-hooks-kwgd3?fontsize=14&hidenavigation=1&theme=dark)
You should define a ref in the App component and pass it on to the HeaderComponent ``` const App = () => { let calendarRef = useRef(); return ( <div className="App"> <HeaderComponent calendarRef={calendarRef} /> <Calendar ref={calendarRef} height="100vh" /> </div> ); }; export default App; ``` --- ``` function HeaderComponent({ calendarRef }) { let handleNext = () => { let calendarInstance = calendarRef.current.getInstance(); calendarInstance.next(); }; return ( <div> <button>Prev</button> <button onClick={handleNext}>Next</button> </div> ); } export default HeaderComponent; ``` [Codesandbox](https://codesandbox.io/s/affectionate-mcclintock-v9lgw?file=/src/App.tsx)
9,593
32,080,635
I am looking into Python's **Least Recently Used (LRU) cache** implementation [here](http://code.activestate.com/recipes/578078-py26-and-py30-backport-of-python-33s-lru-cache/). Can someone please explain what the `_make_key` function is doing? ``` def _make_key(args, kwds, typed, kwd_mark = (object(),), fasttypes = {int, str, frozenset, type(None)}, sorted=sorted, tuple=tuple, type=type, len=len): 'Make a cache key from optionally typed positional and keyword arguments' key = args if kwds: sorted_items = sorted(kwds.items()) key += kwd_mark for item in sorted_items: key += item if typed: key += tuple(type(v) for v in args) if kwds: key += tuple(type(v) for k, v in sorted_items) elif len(key) == 1 and type(key[0]) in fasttypes: return key[0] return _HashedSeq(key) ```
2015/08/18
[ "https://Stackoverflow.com/questions/32080635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4677617/" ]
The function is converting a set of function arguments to a sequence that (a) can be hashed and (b) returns the same hash value for the same arguments, if a function is invoked multiple times with the same arguments. Obviously, `args` and `kwargs` are intended to be the positional and keyword arguments of a generic function definition: ``` def foo(*args, **kwargs): pass ``` `args` is a tuple, which is hashable as long as all its elements are hashable; `kwargs` is however a `dict`, which is not a hashable type. So the function uses `.items()` to convert it to a list of key-value pairs; it is then sorted because the hash value of a tuple is sensitive to the order of its elements, and a `dict` can list its items in arbitrary order. The purpose of `kwd_mark` is to ensure that keyword arguments cannot be confused with positional arguments that might happen to consist of the same key-value pair as an item of `kwargs`. As a default argument, its value is assigned when the function is defined; this guarantees that its sentinel object will never appear as a function argument.
Summary ------- The *\_make\_key()* function flattens the arguments into a compact tuple that can be used to determine whether the arguments of two calls are the same. Generated keys -------------- The calls `f(10, 20, x=30, y=40)` and `f(10, 20, y=40, x=30)` both produce the same key: ``` (10, 20, <object object at 0x7fae2bb25040>, 'x', 30, 'y', 40) ``` The *10* and *20* positional arguments are recorded in the order specified. The *kwd\_mark* separates the keyword arguments from the positional arguments. The keyword arguments are sorted so that `x=30, y=40` is recognized as being the same as `y=40, x=30`. If *typed* is true, then the key also records the argument types: ``` (10, 20, # positional args <object object at 0x7fae2bb25040>, # kwd_mark 'x', 30, 'y', 40, # keyword args <class 'int'>, <class 'int'>, # types of the args <class 'int'>, <class 'int'>) ```
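A simplified sketch of the key-building logic (dropping the `typed` handling and the `_HashedSeq` wrapper) makes both properties easy to check: keyword order doesn't matter, and `kwd_mark` keeps keyword arguments from colliding with positional ones:

```python
kwd_mark = (object(),)  # sentinel, created once, like the default argument

def make_key(args, kwds):
    """Simplified sketch of _make_key: no typing, no _HashedSeq."""
    key = args
    if kwds:
        key += kwd_mark
        for item in sorted(kwds.items()):
            key += item
    return key

k1 = make_key((10, 20), {'x': 30, 'y': 40})
k2 = make_key((10, 20), {'y': 40, 'x': 30})
assert k1 == k2 and hash(k1) == hash(k2)  # keyword order is irrelevant

# positional args that "look like" the keyword pairs still get a distinct key
k3 = make_key((10, 20, 'x', 30, 'y', 40), {})
assert k1 != k3  # kwd_mark keeps the two call shapes apart
```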
9,594
51,361,441
I want to rewrite a piece of `python` code that calculates the accumulated result of `max`. I referred to the numpy [documentation](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.accumulate.html) Input: `[7200,7050,7300,7500,7440,7200,7300,7280,7400]` Output: `[7200, 7200, 7300, 7500, 7500, 7500, 7500, 7500, 7500]` I understand that I can do it in a loop, but I'm looking for a compact, one-line solution if possible.
2018/07/16
[ "https://Stackoverflow.com/questions/51361441", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3911211/" ]
Something like this does it in one line, more or less: ```js var max = 0; var input = [7200, 7050, 7300, 7500, 7440, 7200, 7300, 7280, 7400]; var output = input.map(function(e) { return max > e ? max : max = e; }); console.log(output); ``` Inspired by [Pierre's answer](https://stackoverflow.com/a/51361720/2959522), here's another one-line solution with map ```js var input = [7200, 7050, 7300, 7500, 7440, 7200, 7300, 7280, 7400]; var output = input.map(function(e, i) { return Math.max(...input.slice(0, i + 1)) }); console.log(output); ```
One line solution with `Array.from()` : ```js var arr = [7200, 7050, 7300, 7500, 7440, 7200, 7300, 7280, 7400]; var output = Array.from({length:arr.length}, (el,i) => Math.max(...arr.slice(0,i+1))) console.log(output); ```
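For comparison, in Python itself (the question's starting point) the cumulative maximum is a one-liner with `itertools.accumulate`; with numpy, `np.maximum.accumulate(data)` from the linked documentation does the same:

```python
from itertools import accumulate

data = [7200, 7050, 7300, 7500, 7440, 7200, 7300, 7280, 7400]

# accumulate's optional func replaces the default operator.add
out = list(accumulate(data, max))
assert out == [7200, 7200, 7300, 7500, 7500, 7500, 7500, 7500, 7500]
```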
9,595
53,057,646
Here is my code for uploading a file to an S3 bucket using boto3 in python. ``` import boto3 def upload_to_s3(backupFile, s3Bucket, bucket_directory, file_format): s3 = boto3.resource('s3') s3.meta.client.upload_file(backupFile, s3Bucket, bucket_directory.format(file_format)) upload_to_s3('/tmp/backup.py', 'bsfbackup', 'pfsense/{}', 'hello.py') ``` My question is: I want to print "Upload success" once the upload succeeds, and print "Upload failed" along with the error stack if the upload fails. Any help? Thanks.
2018/10/30
[ "https://Stackoverflow.com/questions/53057646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3440631/" ]
A CAN transceiver is just a high speed step down converter (on a basic level). The CAN protocol works with a variety of voltage ranges. MCP2551 is a CAN transceiver suitable for 12V and 24V systems, with added features to help with the physical layer like `externally-controlled slope` for reduced RFI emissions, `detection of ground fault`, `voltage brown-out protection`, etc. It has no dependency on the CAN logic; it is just there to help you with the bare physical layer. To answer your question: As RishabhHardas recommended, use the HAL library provided by STM32 through CubeMx. **Using CubeMx** This is software provided by ST-Micro to help you set up the boilerplate code for any peripheral application. You can also check out the example projects provided by STM in the Cube. This will give you a kick-start in understanding CAN on STM32 > > STM32Cube\_FW\_F4\_V1.9.0\Projects\STM324xG\_EVAL\Examples\CAN\CAN\_Networking > > > After setting it up, you'll be able to call `HAL_CAN_Transmit()` and `HAL_CAN_Receive()` by including the header. Check out [this](https://community.st.com/s/question/0D50X00009XkWCJSA3/can-bus-cubemx-settings-and-transmit-f091cc) discussion on STM32-Community.
For software, look for the CANtact open source project on Github. It is an implementation for the STM32F042. I had to adapt the project to build it under Atollic but it was not too hard and it works. It provides a SLCAN type of interface over a virtual COM port over USB, which is very fast and convenient. There is also CAN code for the STM32F103 (Bluepill) (Google "lawicel-slcan") but that chip is not as convenient because you cannot use both CAN and USB at the same time (they share RAM buffers) so if you want CAN, you will not have USB, and routing the CAN messages over a UART will severely limit the bandwidth. That may be OK if your entire application runs on the STM32.
9,597
59,433,681
I have the following data in a dataframe ``` data = pd.DataFrame({'colA': ['a', 'c', 'a', 'e', 'c', 'c'], 'colB': ['b', 'd', 'b', 'f', 'd', 'd'], 'colC':['SD100', 'SD200', 'SD300', 'SD400', 'SD500', 'SD600']}) ``` I want the output as shown in the attached image. I want to achieve this using a pandas dataframe in python. Can somebody help me?
2019/12/21
[ "https://Stackoverflow.com/questions/59433681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12054665/" ]
You can try grouping on the two columns and aggregating the third into lists. Given: ``` Column A Column B Column C 0 a b SD100 1 c d SD200 2 a b SD300 3 e f SD400 4 c d SD500 5 c d SD600 ``` --- ``` >>> df.groupby(['Column A', 'Column B']).agg(list) Column C Column A Column B a b [SD100, SD300] c d [SD200, SD500, SD600] e f [SD400] ```
I don't know why you want to make a multiindex, but you can simply `sort_values` or use `groupby`. ```py import pandas as pd df = pd.DataFrame({"ColumnA":['a','c','a','e','c','c'], "ColumnB":['b','d','b','f','d','d'], "ColumnC":['SD100','SD200','SD300','SD400','SD500','SD600']}) print(df) ``` ``` ColumnA ColumnB ColumnC 0 a b SD100 1 c d SD200 2 a b SD300 3 e f SD400 4 c d SD500 5 c d SD600 ``` ```py df = df.sort_values(by=['ColumnA','ColumnB']) df.set_index(['ColumnA', 'ColumnB','ColumnC'], inplace=True) df ```
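If the dict-of-lists shape is all that's needed, the same grouping can be sketched without pandas — this is essentially what `groupby(...).agg(list)` computes:

```python
from collections import defaultdict

# the rows of the sample frame as (colA, colB, colC) tuples
rows = [('a', 'b', 'SD100'), ('c', 'd', 'SD200'), ('a', 'b', 'SD300'),
        ('e', 'f', 'SD400'), ('c', 'd', 'SD500'), ('c', 'd', 'SD600')]

groups = defaultdict(list)
for a, b, c in rows:
    groups[(a, b)].append(c)  # key on the (colA, colB) pair

assert groups[('a', 'b')] == ['SD100', 'SD300']
assert groups[('c', 'd')] == ['SD200', 'SD500', 'SD600']
assert groups[('e', 'f')] == ['SD400']
```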
9,598
10,308,639
I want to install the newest version of `numpy` (a numerical library for Python), and the version (v1.6.1) is not yet in the [Ubuntu Oneiric repositories](https://launchpad.net/ubuntu/oneiric/+source/python-numpy). When I went ahead to manually install it, I read in the [INSTALL](https://github.com/numpy/numpy/blob/master/INSTALL.txt) file that `numpy` needs to be built with the same compiler that built `LAPACK` (a fortran lib used by `numpy`). Unfortunately, I don't know which compiler that is. I didn't install `LAPACK` myself - `apt-get` did, back when I installed an older `numpy` (v1.5.1) using `apt`. If I had to guess, I'd say `gfortran`, but I'd rather not mess this up. How do I figure out which compiler built my current installation of `LAPACK`? Is there any easy way - perhaps running some fortran code that uses it and examining the output? Thanks!
2012/04/25
[ "https://Stackoverflow.com/questions/10308639", "https://Stackoverflow.com", "https://Stackoverflow.com/users/781938/" ]
From the same `INSTALL` file you referenced... ``` How to check the ABI of blas/lapack/atlas ----------------------------------------- One relatively simple and reliable way to check for the compiler used to build a library is to use ldd on the library. If libg2c.so is a dependency, this means that g77 has been used. If libgfortran.so is a a dependency, gfortran has been used. If both are dependencies, this means both have been used, which is almost always a very bad idea. ``` If I had to guess, I would probably guess gfortran as well, since the only two free fortran compilers that I know of are g77 and gfortran, and g77 development is pretty much dead as far as I know ... Another thing to check is that g77 (by default) appends two underscores to symbols whereas gfortran (by default) only appends one. This is probably the thing that is most important for numpy to know ... although there may be other subtle differences (if numpy is doing some dirty hacking to get at information stored in a common block, for instance).
I know of no easy way, though you may find `readelf -a /usr/lib/$SHARED_OBJECT` illuminating, where `$SHARED_OBJECT` is something like `/usr/lib/atlas-base/liblapack_atlas.so.3gf.0` (you'll have to look in your `/usr/lib` to see what your exact filename is). However, there is another, quite different way to get information, since you are using Ubuntu, a flavor of Debian. 1. Find out which to binary package $BINARY\_PKG your Lapack shared object belongs by `dpkg -l | grep -E '(lapack|atlas)` and/or `dpkg -S $SHARED_OBJECT`. 2. Find out from which source package $SOURCE\_PKG the binary package $BINARY\_PKG was built by `dpkg -s $BINARY_PACKAGE`. In the output, look for the `Source:` line. 3. Change into some temporary working directory. 4. Issue `apt-get source $SOURCE_PKG`. (Alternately, `apt-get source $BINARY_PKG` has the same effect.) 5. Issue `ls` and notice the Lapack source directory. 6. Change into the Lapack source directory. 7. Examine the file `debian/control`, paying special attention to the relevant `Build-Depends:` line. This will tell what was necessary to build the package. 8. Issue `dpkg -s build-essential`, which will give you additional information about compilers available to build every package in your distribution (you may have to install the `build-essential` package, first). All this is of course a lot of work, and none of it is in the nature of a simple formula that just gives you the answer you seek; but it does give you places to look for the answer. Good luck.
9,601
69,102,556
I'm having trouble grouping my list, so let's say I have this: ``` data = [ {'records-0': '1'}, {'records-0-item1': '2'}, {'records-0-item2': '3'},{'records-0-item3': '4'}, {'records-1': '1'}, {'records-1-item1': '2'}, {'records-1-item2': '3'}, ] ``` What I'm trying to get is my list grouped based on the index that is in my key (if that makes any sense). So my expected output would be kinda like this. ``` sortedData = [ {0: [{'records-0': '1'}, {'records-0-item1': '2'}, {'records-0-item2': '3'},{'records-0-item3': '4'},]}, {1: [{'records-1': '1'}, {'records-1-item1': '2'}, {'records-1-item2': '3'},]}, ] ``` I've tried looking up how to group lists, but everything I found is based on one specific key, like this <https://www.geeksforgeeks.org/group-list-of-dictionary-data-by-particular-key-in-python/> Any ideas? **EDIT**: The data in the list is based on user input, so there could also be a: ``` {'records-2': '1'}, {'records-3': '1'}, {'records-4': '1'}, etc ```
2021/09/08
[ "https://Stackoverflow.com/questions/69102556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11868566/" ]
One way is to use [defaultdict](https://docs.python.org/3/library/collections.html#collections.defaultdict). First, just define a helper method to get the key from the string: ``` def get_key_from(string): return int(string.split('-')[1]) ``` Then ``` from collections import defaultdict sortedData_res = defaultdict(list) for h in data: k = next(iter(h)) sortedData_res[get_key_from(k)].append({k: ''}) ``` So you finally get this result: ``` sortedData_res #defaultdict(list, # {0: [{'records-0': ''}, # {'records-0-item1': ''}, # {'records-0-item2': ''}, # {'records-0-item3': ''}], # 1: [{'records-1': ''}, # {'records-1-item1': ''}, # {'records-1-item2': ''}]}) ``` --- Or adjust the code to get something different. For example if you `.append(h)` instead you get: ``` #defaultdict(list, # {0: [{'records-0': '1'}, # {'records-0-item1': '2'}, # {'records-0-item2': '3'}, # {'records-0-item3': '4'}], # 1: [{'records-1': '1'}, # {'records-1-item1': '2'}, # {'records-1-item2': '3'}]}) ```
I suggest you reformat your sortedData and remove the lists; you could gather your data into a dict of dicts instead. *Edited* example: (should work as is) ```py import re def sort_data(l_: list): d_ = dict() for d in l_: for k, v in d.items(): i = re.split('-', k)[1] if not d_.get(int(i)): d_[int(i)] = dict() d_[int(i)][k] = v continue d_[int(i)][k] = v return d_ ``` ```py >>> data = [ {'records-0': '1'}, {'records-0-item1': '2'}, {'records-0-item2': '3'}, {'records-0-item3': '4'}, {'records-1': '1'}, {'records-1-item1': '2'}, {'records-1-item2': '3'}, ] >>> print(sort_data(data)) { 0: { 'records-0': '1', 'records-0-item1': '2', 'records-0-item2': '3', 'records-0-item3': '4' }, 1: { 'records-1': '1', 'records-1-item1': '2', 'records-1-item2': '3' } } ```
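`itertools.groupby` is another option if the data is sorted by record index first — a sketch on a shortened version of the sample data:

```python
from itertools import groupby

data = [{'records-0': '1'}, {'records-0-item1': '2'}, {'records-0-item2': '3'},
        {'records-1': '1'}, {'records-1-item1': '2'}]

def index_of(d):
    # 'records-0-item1' -> 0, 'records-1' -> 1, ...
    return int(next(iter(d)).split('-')[1])

# groupby only merges adjacent items, so sort by the same key first
grouped = {k: list(g) for k, g in groupby(sorted(data, key=index_of), key=index_of)}

assert grouped[0] == [{'records-0': '1'}, {'records-0-item1': '2'},
                      {'records-0-item2': '3'}]
assert grouped[1] == [{'records-1': '1'}, {'records-1-item1': '2'}]
```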
9,602
24,946,479
Are most functions for http requests synchronous by default? I came from Javascript and usage of AJAX and just started working with http requests in Python. To my surprise, it seems as though http request functions by default are synchronous, so I do not need to deal with any asynchronous behavior. For example, I'm working with the [Requests](http://docs.python-requests.org/en/latest/) http library, and although the docs don't explicitly say whether http requests are synchronous or not, by my own testing the calls seem to be synchronous. So is it that most http requests are by default synchronous in nature, or most functions are designed that way? And my first experience with http requests (JS AJAX) just happened to be of an asynchronous nature?
2014/07/25
[ "https://Stackoverflow.com/questions/24946479", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1502780/" ]
It's the language: in Python most APIs are synchronous by default, and the async version is usually an "advanced" topic. If you were using nodejs, they would be async. Even in JavaScript, the reason AJAX is asynchronous is the nature of the browser.
`requests` is completely synchronous/blocking. [`grequests`](https://github.com/kennethreitz/grequests) is what you are looking for: > > GRequests allows you to use Requests with Gevent to make asynchronous > HTTP Requests easily. > > > See also: * [Asynchronous Requests with Python requests](https://stackoverflow.com/questions/9110593/asynchronous-requests-with-python-requests) * [Ideal method for sending multiple HTTP requests over Python?](https://stackoverflow.com/questions/10555292/ideal-method-for-sending-multiple-http-requests-over-python) * [Multiple (asynchronous) connections with urllib2 or other http library?](https://stackoverflow.com/questions/4119680/multiple-asynchronous-connections-with-urllib2-or-other-http-library)
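Without extra dependencies, a thread pool also gives concurrency for blocking calls. A sketch with a stand-in fetch function, since a real fetch would need network access — swap `fetch` for something like `requests.get(url).text` in practice:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # stand-in for a blocking call such as requests.get(url).text
    return "body of " + url

urls = ["http://a.example", "http://b.example", "http://c.example"]

# the blocking calls run concurrently; map preserves input order
with ThreadPoolExecutor(max_workers=3) as pool:
    bodies = list(pool.map(fetch, urls))

assert bodies == ["body of http://a.example",
                  "body of http://b.example",
                  "body of http://c.example"]
```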
9,603
40,981,120
I have installed python 3.5, and need to install pywin (pywin32) however, pip cannot find it. Note, i have just PIP install'ed send2trash and gitpython successfully ``` Could not find a version that satisfies the requirement pywin32 (from versions: ) ``` A few possibly relevant data points: * new install of python 3.5 * windows 7 x64 * python 2.7 was previously installed on the machine * as mentioned, several other packages were installed fine via PIP * running these commands from git-bash, which came from the git windows installer, installed some time ago. -- I have gnu grep in my path, so i believe i selected the git installer option to put the whole mysys toolchain in my path full --verbose output: ``` C:\Users\USER>pip install pywin32 --proxy http://proxy.COMPANY.com:8080 Collecting pywin32 Could not find a version that satisfies the requirement pywin32 (from versions: ) No matching distribution found for pywin32 C:\Users\USER>pip install pywin32 --proxy http://proxy.COMPANY.com:8080 --verbose Config variable 'Py_DEBUG' is unset, Python ABI tag may be incorrect Config variable 'WITH_PYMALLOC' is unset, Python ABI tag may be incorrect Collecting pywin32 1 location(s) to search for versions of pywin32: * https://pypi.python.org/simple/pywin32/ Getting page https://pypi.python.org/simple/pywin32/ Looking up "https://pypi.python.org/simple/pywin32/" in the cache Current age based on date: 61 Freshness lifetime from max-age: 600 Freshness lifetime from request max-age: 600 The response is "fresh", returning cached response 600 > 61 Analyzing links from page https://pypi.python.org/simple/pywin32/ Could not find a version that satisfies the requirement pywin32 (from versions: ) Cleaning up... 
No matching distribution found for pywin32 Exception information: Traceback (most recent call last): File "c:\users\USER\appdata\local\programs\python\python35\lib\site-packages\pip\basecommand.py", line 215, in main status = self.run(options, args) File "c:\users\USER\appdata\local\programs\python\python35\lib\site-packages\pip\commands\install.py", line 324, in run requirement_set.prepare_files(finder) File "c:\users\USER\appdata\local\programs\python\python35\lib\site-packages\pip\req\req_set.py", line 380, in prepare_files ignore_dependencies=self.ignore_dependencies)) File "c:\users\USER\appdata\local\programs\python\python35\lib\site-packages\pip\req\req_set.py", line 554, in _prepare_file require_hashes File "c:\users\USER\appdata\local\programs\python\python35\lib\site-packages\pip\req\req_install.py", line 278, in populate_link self.link = finder.find_requirement(self, upgrade) File "c:\users\USER\appdata\local\programs\python\python35\lib\site-packages\pip\index.py", line 514, in find_requirement 'No matching distribution found for %s' % req pip.exceptions.DistributionNotFound: No matching distribution found for pywin32 ```
2016/12/05
[ "https://Stackoverflow.com/questions/40981120", "https://Stackoverflow.com", "https://Stackoverflow.com/users/309433/" ]
I think you could use `pip install pypiwin32` instead.
If you are using Python 3.5+ then you could add `pypiwin32==223` to your requirements.txt file instead of pywin32.
9,604
70,456,516
Here is the minimal code needed to reproduce the problem. I call an API with a callback function that prints what comes out of the API call. If I run this code in Jupyter, I get the output. If I run it with `python file.py` I don't get any output. I already checked the API's code, but it does nothing weird. Setting `DEBUGGING` to `True` isn't helpful either. Note that there is one dependency: the Bitvavo API. Install it with `pip install python-bitvavo-api` ``` # %% from python_bitvavo_api.bitvavo import Bitvavo # %% def generic_callback(response): print(f"log function=get_markets, {response=}") bitvavo = Bitvavo({"DEBUGGING": False}) websocket = bitvavo.newWebsocket() # %% websocket.markets(options={"market": "BTC-EUR"}, callback=generic_callback) ``` Here is the expected output I get from Jupyter: ``` log function:get_markets, response={'market': 'BTC-EUR', 'status': 'trading', 'base': 'BTC', 'quote': 'EUR', 'pricePrecision': 5, 'minOrderInBaseAsset': '0.0001', 'minOrderInQuoteAsset': '5', 'orderTypes': ['market', 'limit', 'stopLoss', 'stopLossLimit', 'takeProfit', 'takeProfitLimit']} ```
2021/12/23
[ "https://Stackoverflow.com/questions/70456516", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12210524/" ]
Because you're using a websocket, the callback is executed by a different thread, which means that if you don't wait, the main thread (which is supposed to receive the `print`'s output) will already have been killed. Add a `sleep(1)` (that's seconds, not ms) at the end and the output will show. PS: The reason Jupyter *does* show the output is that Jupyter keeps an interactive session open the entire time, even long after you've run the code :)
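The race can be reproduced with a plain thread — `join()` is the deterministic version of the `sleep`, and in real code waiting on the worker (or an `Event`) is usually preferable to a fixed delay:

```python
import threading

results = []

def generic_callback(response):
    # stand-in for the websocket callback; here we record instead of print
    results.append(response)

# the websocket library invokes the callback from a worker thread like this one
worker = threading.Thread(target=generic_callback, args=("market data",))
worker.start()
worker.join()  # wait for the worker; without this, main may exit first

assert results == ["market data"]
```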
``` import time from python_bitvavo_api.bitvavo import Bitvavo # %% def generic_callback(response): print(f"log function=get_markets, {response=}") bitvavo = Bitvavo({"DEBUGGING": False}) websocket = bitvavo.newWebsocket() # Wait N.1 required to receive output, otherwise the main thread is killed time.sleep(1) # %% websocket.markets(options={"market": "BTC-EUR"}, callback=generic_callback) # Wait N.2 required to receive output, otherwise the main thread is killed time.sleep(1) ```
9,614
28,622,452
I want to print the files in a subdirectory which is 2 levels down from the root directory. In the shell I can use the find command below ``` find -mindepth 3 -type f ./one/sub1/sub2/a.txt ./one/sub1/sub2/c.txt ./one/sub1/sub2/b.txt ``` In Python, how can I accomplish this? I know the basic syntax of os.walk, glob and fnmatch, but I don't know how to specify the depth limits (like mindepth and maxdepth in find).
2015/02/20
[ "https://Stackoverflow.com/questions/28622452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4566486/" ]
You could use `.count()` method to find the depth: ``` import os def files(rootdir='.', mindepth=0, maxdepth=float('inf')): root_depth = rootdir.rstrip(os.path.sep).count(os.path.sep) - 1 for dirpath, dirs, files in os.walk(rootdir): depth = dirpath.count(os.path.sep) - root_depth if mindepth <= depth <= maxdepth: for filename in files: yield os.path.join(dirpath, filename) elif depth > maxdepth: del dirs[:] # too deep, don't recurse ``` Example: ``` print('\n'.join(files(mindepth=3))) ``` [The answer to the related question uses the same technique](https://stackoverflow.com/a/234329/4279).
You cannot specify any of this to [os.walk](https://docs.python.org/2/library/os.html#os.walk). However, you can write a function that does what you have in mind.

```
import os

def list_dir_custom(mindepth=0, maxdepth=float('inf'), starting_dir=None):
    """
    Lists all files in `starting_dir` starting from a `mindepth` and
    ranging to `maxdepth`.

    If `starting_dir` is `None`, the current working directory is taken.
    """
    def _list_dir_inner(current_dir, current_depth):
        if current_depth > maxdepth:
            return
        dir_list = [os.path.relpath(os.path.join(current_dir, x))
                    for x in os.listdir(current_dir)]
        for item in dir_list:
            if os.path.isdir(item):
                _list_dir_inner(item, current_depth + 1)
            elif current_depth >= mindepth:
                result_list.append(item)

    if starting_dir is None:
        starting_dir = os.getcwd()
    result_list = []
    _list_dir_inner(starting_dir, 1)
    return result_list
```

EDIT: Added the corrections, reducing unnecessary variable definitions.

2nd EDIT: Included 2Ring's suggestion to make it list the very same files as `find`, i.e. `maxdepth` is exclusive.

3rd EDIT: Added other remarks by 2Ring; also changed the path to `relpath` to return the output in the same format as `find`.
9,615
45,445,455
In Python I might have a function like this:

```
def sum_these(x, y=None):
    if y is None:
        y = 1
    return x + y
```

What is the equivalent usage in Julia? To be exact, I know I could probably do:

```
function sum_these(x, y=0)
    if y == 0
        y = 1
    end
    x + y
end
```

However, I'd rather not use zero; instead I want some value with the same meaning as `None` in Python.

**EDIT**

Just for clarity's sake, the result of these example functions isn't important. The events (any) that happen if y is None are important, and for my cases setting y=[some number] is undesirable. After some searching I think, unless someone provides a better solution, the way to go is something like:

```
function sum_these(x, y=nothing)
    if y == nothing
        # do stuff
    end
    return something
end
```
2017/08/01
[ "https://Stackoverflow.com/questions/45445455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6411264/" ]
Maybe use multiple dispatch with an empty fallback?

```
function f(x, y=nothing)
    ...
    do_something(x, y)
    ...
    return something
end

do_something(x, y) = nothing

function do_something(x, y::Void)
    ...
end
```

add other relevant vars to `do_something` as necessary, and return something or mutate as necessary.
Your question is a bit confusing. It seems like what you want is

```
function sum_these(x, y=1)
    return x + y
end
```

But that doesn't quite do what you are asking either, since even if you call `sum_these(3, 0)` in your example, it replaces 0 with 1. Also in Python, I would use

```
def sum_these(x, y=1):
    return x + y
```

Perhaps I misunderstand your question.
9,616
48,216,974
The curve is:

```
import numpy as np
import scipy.stats as sp
from scipy.optimize import curve_fit
from lmfit import minimize, Parameters, Parameter, report_fit
import xlwings as xw
import os
import pandas as pd
```

I tried running a simple curve fit from scipy. This returns:

```
Out[156]: 
(array([ 1.,  1.]), array([[ inf,  inf],
        [ inf,  inf]]))
```

Ok, so I thought I need to start bounding my parameters, which is what solver does. I used lmfit to do this:

```
params = Parameters()
params.add
```

I tried to nudge python in the right direction by changing the starting parameters, or by using curve_fit.
2018/01/11
[ "https://Stackoverflow.com/questions/48216974", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4504877/" ]
The curve fitting runs smoothly when we provide a good starting point. We can get one

* by linear regression on `sp.norm.ppf(x_data)` and `np.log(y_data)`
* or by fitting the free (non-clipped) model first

Alternatively, if you want the computer to find the solution without "help"

* use a stochastic algorithm like basin hopping (it's unfortunately not 100% autonomous, I had to increase the step size from its default value)
* brute force it (this requires the user to provide a search grid)

All four methods yield the same result and it is better than the excel result.

```
import numpy as np
import scipy.stats as sp
from scipy.optimize import curve_fit, basinhopping, brute

def func1(x, n1, n2):
    return np.clip(np.exp(n2 + n1*(sp.norm.ppf(x))), 25e4, 10e6)

def func_free(x, n1, n2):
    return np.exp(n2 + n1*(sp.norm.ppf(x)))

def sqerr(n12, x, y):
    return ((func1(x, *n12) - y)**2).sum()

x_data = np.array((1e-04, 9e-01, 9.5835e-01, 9.8e-01, 9.9e-01, 9.9e-01))
y_data = np.array((250e3, 1e6, 2.5e6, 5e6, 7.5e6, 10e6))

# get a good starting point
# either by linear regression
lin = sp.linregress(sp.norm.ppf(x_data), np.log(y_data))
# or by using the free (non-clipped) version of the formula
(n1f, n2f), infof = curve_fit(func_free, x_data, y_data, (1, 1))

# use those on the original problem
(n1, n2), info = curve_fit(func1, x_data, y_data, (lin.slope, lin.intercept))
(n12, n22), info2 = curve_fit(func1, x_data, y_data, (n1f, n2f))

# OR
# use basin hopping
hop = basinhopping(sqerr, (1, 1),
                   minimizer_kwargs=dict(args=(x_data, y_data)), stepsize=10)

# OR
# brute force it
brt = brute(sqerr, ((-100, 100), (-100, 100)), (x_data, y_data), 201,
            full_output=True)

# all four solutions are essentially the same:
assert np.allclose((n1, n2), (n12, n22))
assert np.allclose((n1, n2), hop.x)
assert np.allclose((n1, n2), brt[0])

# we are actually a bit better than excel
n1excel, n2excel = 1.7925, 11.6771
print('solution', n1, n2)
print('error', ((func1(x_data, n1, n2) - y_data)**2).sum())
print('excel', ((func1(x_data, n1excel, n2excel) - y_data)**2).sum())
```

Output:

```
solution 2.08286042997 11.1397332743
error 3.12796761241e+12
excel 5.80088578059e+12
```

Remark: One easy optimization, which I left out for simplicity and because things are fast enough anyway, would have been to pull `sp.norm.ppf` out of the model function. This is possible because it does not depend on the fit parameters. So when any of our solvers calls the function it always does the exact same computation, `sp.norm.ppf(x_data)`, first, so we might as well have precomputed it. This observation is also our reason for using `sp.norm.ppf(x_data)` in the linear regression.
I think that part of the problem is that you have only 5 observations, 2 at the same value of `x`, and the model does not perfectly represent your data. I also recommend trying to fit the log of the model to the log of the data. And, if you expect `n2` to be ~10, you should use that as a starting value.

Arbitrarily applying `min` and `max` to the calculated model, especially with values so close to your data range, makes very little sense. Doing this will prevent the fit method from exploring the effect of changing the parameter values on the model function.

With some modifications to your code to omit the first data point (which seems to be throwing off the model), I find this:

```
import numpy as np
import scipy.stats as sp
from lmfit import Parameters, minimize, fit_report
import matplotlib.pyplot as plt

x_data = np.array((1e-04, 9e-01, 9.5835e-01, 9.8e-01, 9.9e-01, 9.9e-01))
y_data = np.array((250e3, 1e6, 2.5e6, 5e6, 7.5e6, 10e6))

x_data = x_data[1:]
y_data = y_data[1:]

def func(params, x, data, m1=250e3, m2=10e6):
    n1 = params['n1'].value
    n2 = params['n2'].value
    model = n2 + n1*(sp.norm.ppf(x))
    return np.log(data) - model

params = Parameters()
params.add('n1', value=1, min=0.01, max=20)
params.add('n2', value=10, min=0, max=20)

result = minimize(func, params, args=(x_data[1:-1], y_data[1:-1]))
print(fit_report(result))

n1 = result.params['n1'].value
n2 = result.params['n2'].value

ym = np.exp(n2 + n1*(sp.norm.ppf(x_data)))

plt.plot(x_data, y_data, 'o')
plt.plot(x_data, ym, '-')
plt.legend(['data', 'fit'])
plt.show()
```

gives a report of

```
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 15
    # data points      = 3
    # variables        = 2
    chi-square         = 0.006
    reduced chi-square = 0.006
    Akaike info crit   = -14.438
    Bayesian info crit = -16.241
[[Variables]]
    n1:  1.85709072 +/- 0.190473 (10.26%) (init= 1)
    n2:  11.5455736 +/- 0.390805 (3.38%) (init= 10)
[[Correlations]] (unreported correlations are < 0.100)
    C(n1, n2) = -0.993
```

and a plot of

[![enter image description here](https://i.stack.imgur.com/LUmrQ.png)](https://i.stack.imgur.com/LUmrQ.png)
9,619
10,745,138
I'm new to Python. I wrote a script to connect to a host and execute one command:

```
try:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=user, password=pw)

    print 'running remote command'
    stdin, stdout, stderr = ssh.exec_command(command)
    stdin.close()
    for line in stdout.read().splitlines():
        print '%s$: %s' % (host, line)
        if outfile != None:
            f_outfile.write("%s\n" % line)
    for line in stderr.read().splitlines():
        print '%s$: %s' % (host, line + "\n")
        if outfile != None:
            f_outfile.write("%s\n" % line)
    ssh.close()
    if outfile != None:
        f_outfile.close()
    print 'connection to %s closed' % host
except:
    e = sys.exc_info()[1]
    print '%s' % e
```

This works fine when the remote command doesn't need a tty. I found an invoke\_shell example at [Nested SSH session with Paramiko](https://stackoverflow.com/questions/1911690/nested-ssh-session-with-paramiko). I'm not happy with this solution, because if a server has a prompt that isn't specified in my script, I get an infinite loop, and if a specified prompt occurs as a string in the returned text, not all data will be received. Is there a better solution, perhaps one where stdout and stderr are sent back like in my script?
2012/05/24
[ "https://Stackoverflow.com/questions/10745138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1008764/" ]
There is extensive paramiko API documentation you can find at: <http://docs.paramiko.org/en/stable/index.html>

I use the following method to execute commands on a password protected client:

```
import paramiko

nbytes = 4096
hostname = 'hostname'
port = 22
username = 'username'
password = 'password'
command = 'ls'

client = paramiko.Transport((hostname, port))
client.connect(username=username, password=password)

stdout_data = []
stderr_data = []
session = client.open_channel(kind='session')
session.exec_command(command)
while True:
    if session.recv_ready():
        stdout_data.append(session.recv(nbytes))
    if session.recv_stderr_ready():
        stderr_data.append(session.recv_stderr(nbytes))
    if session.exit_status_ready():
        break

print 'exit status: ', session.recv_exit_status()
print ''.join(stdout_data)
print ''.join(stderr_data)

session.close()
client.close()
```
ThePracticalOne, you are a hero! I had problems with exec\_command (which is a member of Client). I tried to run PowerShell commands over SSH on a Windows server, and only your example with

```
client = paramiko.Transport((hostname, port))
client.connect(username=username, password=password)
```

and

```
while True:
    if session.recv_ready():
        stdout_data.append(session.recv(nbytes))
    if session.recv_stderr_ready():
        stderr_data.append(session.recv_stderr(nbytes))
    if session.exit_status_ready():
        break
```

helped me!
9,622
25,086,088
I've spent the better part of an afternoon trying to import the xlrd module. It works when I do it in the shell, but when I try to run any file I get an import error. Could somebody please provide a solution? (I'm a beginner, so please be excruciatingly specific.)

This code:

```
#!/usr/bin/python
import os
os.chdir("C:/Users/User/Documents/Python/xlrd")
import xlrd
```

returns the error:

```
Traceback (most recent call last):
  File "C:\Users\User\Documents\Python\Programs\Radiocarbon27.py", line 4 in <module>
    import xlrd
ImportError: No module named xlrd
```

The path of the directory which contains the setup.py file is C:\Users\User\Documents\Python\xlrddocs

Thanks!
2014/08/01
[ "https://Stackoverflow.com/questions/25086088", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3900424/" ]
They are overlapping because you've given them all absolute position and left 0. Absolute position removes the element from the normal flow of the page and puts it exactly where you indicate using the top/left/right/bottom properties. They will overlap as long as they have the same parent and same position properties.
They are overlapping because you are using position absolute. Instead, place the divs at the top of the HTML page and do this:

```
<div id="left" style="float:left;width:60%;height:100%;background:#e6e6e6;">
  <div id="map" style="float:left;width:60%;height:400px">Map goes here.</div>
  <div id="details" style="float:left;left:0px;width:60%;height:400px">Details</div>
</div>
<div id="description" style="float:left;width:20%;height:100%;background:#ffffff;">
</div>
<div id="resource" style="float:left;width:20%;height:100%;background:#ffffff;">
</div>
```

This will place the divs beside each other.
9,632
55,550,259
I would like to find a way to reverse the bits of each character in a string using python. For example, if my first character was `J`, this is ASCII `0x4a` or `0b01001010`, so would be reversed to `0x52` or `0b01010010`. If my second character was `K`, this is `0b01001011`, so would be reversed to `0xd2` or `0b11010010`, etc. The end result should be returned as a `string`. Speed is my number one priority here, so I am looking for a fast way to accomplish this.
2019/04/06
[ "https://Stackoverflow.com/questions/55550259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4263190/" ]
If speed is your goal and you are working with ASCII so you only have 256 8-bit values to handle, calculate the reversed-byte values beforehand and put them in a `bytearray`, then look them up by indexing into the `bytearray`.
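A sketch of that lookup-table idea (function and variable names here are illustrative, not from the answer): precompute all 256 reversed byte values once, then let `bytes.translate` do the per-character lookup in C, which is about as fast as this gets in pure Python.

```python
def reverse_bits(byte):
    """Reverse the 8 bits of a single byte value (0-255)."""
    result = 0
    for _ in range(8):
        result = (result << 1) | (byte & 1)
        byte >>= 1
    return result

# Build the 256-entry table once, up front.
TABLE = bytes(reverse_bits(i) for i in range(256))

def reverse_string_bits(s):
    # latin-1 maps code points 0-255 straight to byte values and back,
    # so every reversed byte round-trips to a character.
    return s.encode("latin-1").translate(TABLE).decode("latin-1")

print(hex(ord(reverse_string_bits("J"))))  # 0x52, matching the question's example
```

Applying the function twice returns the original string, which is a cheap sanity check.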
```
a = bin(ord("a"))
'0b' + a[::-1][0:len(a) - 2]
```

If you want to do it for a lot of characters: there are only 256 ASCII characters, so store the reversed strings in a hashmap and do lookups on the hashmap. The time complexity of those lookups is O(1), but there's a fixed setup time.
9,634
22,384,783
I am trying to use C# classes from Python, using python.net on mono / ubuntu. So far I have managed to make a simple function call with one argument work. What I am now trying to do is pass a Python callback to the C# function call. I tried the variations below; none worked. Can someone show how to make that work?

```
// C# - testlib.cs
class MC {
    public double method1(int n) {
        Console.WriteLine("Executing method1");
        /* .. */
    }

    public double method2(Delegate f) {
        Console.WriteLine("Executing method2");
        /* ... do f() at some point ... */
        /* also tried f.DynamicInvoke() */
        Console.WriteLine("Done executing method2");
    }
}
```

Python script:

```
import testlib, System

mc = testlib.MC()
mc.method1(10)  # that works

def f():
    print "Executing f"

mc.method2(f)  # does not know of method2 with that signature, fair enough...

# is this the right way to turn it into a callback?
f2 = System.AssemblyLoad(f)

# no error message, but f does not seem to be invoked
mc.method2(f2)
```
2014/03/13
[ "https://Stackoverflow.com/questions/22384783", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1477436/" ]
Try to pass an `Action` or `Func` instead of just a raw function. I used IronPython here (because right now I don't have mono installed on any of my machines), but according to the Python.NET [documentation](http://pythonnet.sourceforge.net/readme.html) I think it should work.

Actually your code is almost OK, but you need to import the `Action` or `Func` delegate, depending on what you need.

Python code:

```
import clr
from types import *
from System import Action

clr.AddReferenceToFileAndPath(r"YourPath\TestLib.dll")
import TestLib

print("Hello")
mc = TestLib.MC()
print(mc.method1(10))

def f(fakeparam):
    print "exec f"

mc.method2(Action[int](f))
```

This is the console output:

```
Hello
Executing method1
42.0
Executing method2
exec f
Done executing method2
```

C# code:

```
using System;

namespace TestLib
{
    public class MC
    {
        public double method1(int n)
        {
            Console.WriteLine("Executing method1");
            return 42.0; /* .. */
        }

        public double method2(Delegate f)
        {
            Console.WriteLine("Executing method2");
            object[] paramToPass = new object[1];
            paramToPass[0] = new int();
            f.DynamicInvoke(paramToPass);
            Console.WriteLine("Done executing method2");
            return 24.0;
        }
    }
}
```

I read the docs for Python.NET [Using Generics](http://pythonnet.sourceforge.net/readme.html#generics) again and also found this: [Python.NET Naming and resolution of generic types](https://mail.python.org/pipermail/pythondotnet/2006-April/000472.html). It looks like you need to specify the parameter type explicitly. [Quote from there](https://mail.python.org/pipermail/pythondotnet/2006-April/000472.html):

> a (reflected) generic type definition (if there exists a generic type definition with the given base name, and no non-generic type with that name). This generic type definition can be bound into a closed generic type using the [] syntax. Trying to instantiate a generic type def using () raises a TypeError.
It looks like you should define your delegate explicitly:

```
class MC {
    // Define a delegate type
    public delegate void Callback();

    public double method2(Callback f) {
        Console.WriteLine("Executing method2");
        /* ... do f() at some point ... */
        /* also tried f.DynamicInvoke() */
        Console.WriteLine("Done executing method2");
    }
}
```

Then from the Python code (this is a rough guess based on the [docs](http://pythonnet.sourceforge.net/readme.html#delegates)):

```
def f():
    print "Executing f"

# instantiate a delegate
f2 = testlib.MC.Callback(f)
# use it
mc.method2(f2)
```
9,636
56,692,868
I am trying to write a mock lottery simulator as a thought exercise and for some introductory Python practice, where each team has 2x the odds of getting the first pick as the team that preceded it in the standings. The code below works (although I am sure there is a more efficient way to write it), but now I would like to figure out each individual team's odds of getting a certain draft spot based on its odds. I believe there are 12! ways for the order to be set, so it would be next to impossible to compute by hand. Is there a way to run a simulation in Python (say 10 million times) and see what percentage of those times each team ends up in a certain spot of the shuffled list?

```
import random
from time import sleep

first = [1*['team 1']]
second = [2*['team 2']]
third = [4*['team 3']]
fourth = [8*['team 4']]
fifth = [16*['team 5']]
sixth = [32*['team 6']]
seventh = [64*['team 7']]
eighth = [128*['team 8']]
ninth = [256*['team 9']]
tenth = [512*['team 10']]
eleventh = [1024*['team 11']]
twelfth = [2048*['team 12']]

total = []

for i in first:
    for x in i:
        total.append(x)
for i in second:
    for x in i:
        total.append(x)
for i in third:
    for x in i:
        total.append(x)
for i in fourth:
    for x in i:
        total.append(x)
for i in fifth:
    for x in i:
        total.append(x)
for i in sixth:
    for x in i:
        total.append(x)
for i in seventh:
    for x in i:
        total.append(x)
for i in eighth:
    for x in i:
        total.append(x)
for i in ninth:
    for x in i:
        total.append(x)
for i in tenth:
    for x in i:
        total.append(x)
for i in eleventh:
    for x in i:
        total.append(x)
for i in twelfth:
    for x in i:
        total.append(x)

random.shuffle(total)

order = []
for i in total:
    if i not in order:
        order.append(i)

print('the twelfth pick goes to {}'.format(order[11]))
sleep(1)
print('the eleventh pick goes to {}'.format(order[10]))
sleep(1)
print('the tenth pick goes to {}'.format(order[9]))
sleep(1)
print('the ninth pick goes to {}'.format(order[8]))
sleep(1)
print('the eighth pick goes to {}'.format(order[7]))
sleep(1)
print('the seventh pick goes to {}'.format(order[6]))
sleep(2)
print('the sixth pick goes to {}'.format(order[5]))
sleep(2)
print('the fifth pick goes to {}'.format(order[4]))
sleep(2)
print('the fourth pick goes to {}'.format(order[3]))
sleep(3)
print('the third pick goes to {}'.format(order[2]))
sleep(3)
print('the second pick goes to {}'.format(order[1]))
sleep(3)
print('the first pick goes to {}'.format(order[0]))
```
2019/06/20
[ "https://Stackoverflow.com/questions/56692868", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10918881/" ]
Instead of doing your sampling like this, I would use a discrete distribution for the probability of getting team i, and sample using `random.choices`. We update the distribution after each draw by discarding all the tickets from that team (since it cannot appear again).

```
from random import choices

ticket_amounts = [2**i for i in range(12)]
teams = [i + 1 for i in range(12)]

for i in range(12):
    probabilities = [count/sum(ticket_amounts) for count in ticket_amounts]
    team_picked = choices(teams, probabilities)[0]
    print("team picked is {}".format(team_picked))
    # update probabilities by discarding all the tickets
    # belonging to the picked team
    ticket_amounts[team_picked-1] = 0
```
Here is what you want, I think, but as the other comment said it runs very slowly, and I would not try 10 million times; it's slow enough as is.

```
from collections import Counter

countList = []
for i in range(1, 10000):
    random.shuffle(total)
    countList.append(total[0])

print(Counter(countList))
```

Add the for loop to the end of your code and the import at the top.
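To estimate each team's probability of landing in each draft slot, as the question asks, a weighted draw without replacement avoids building the 4095-element ticket list entirely. This is a sketch, not the original poster's code; names and the trial count are illustrative.

```python
import random
from collections import Counter

teams = ['team {}'.format(i + 1) for i in range(12)]
weights = [2 ** i for i in range(12)]  # team 12 holds 2048x team 1's tickets

def draw_order(rng=random):
    """Return one full draft order by drawing teams without replacement."""
    pool = list(zip(teams, weights))
    order = []
    while pool:
        names, w = zip(*pool)
        pick = rng.choices(names, weights=w)[0]
        order.append(pick)
        # remove the picked team (all its tickets) from the pool
        pool = [(n, wt) for n, wt in pool if n != pick]
    return order

# estimate the first-pick probabilities from repeated trials
trials = 10000
first_pick = Counter(draw_order()[0] for _ in range(trials))
for team, count in first_pick.most_common(3):
    print(team, count / trials)
```

Team 12 holds 2048 of the 4095 tickets, so its estimated first-pick share should come out near 0.50; increasing `trials` tightens the estimate.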
9,637
56,533,066
I'm trying to write a request using Python Requests which sends a request to Docusign. I need to use the legacy authorization header, but unfortunately it seems most documentation for this has been removed. When I send the request I get an error as stated in the title. From research, I found that special characters in the password can cause this issue, so I've confirmed that my password has no special characters, and that my API key is correct.

I am currently sending the header as a stringified dictionary as shown below. I have tried it several other ways, and this seems to be the closest, but it still results in the error. Other ways I've tried include attempting to write out the header as a single string (not forming a dictionary first), but that didn't seem to work any better.

```
docusign_auth_string = {}
docusign_auth_string["Username"] = docusign_user
docusign_auth_string["Password"] = docusign_password
docusign_auth_string["IntegratorKey"] = docusign_key
docusign_auth_string = str(docusign_auth_string)

headers = {'X-DocuSign-Authentication': docusign_auth_string}
response = requests.post(docusign_url, headers=headers, data=body_data)
```

The above code returns a 401 with the message, INVALID\_TOKEN\_FORMAT "The security token format does not conform to expected schema."

The header I am sending looks as follows:

```
{'X-DocuSign-Authentication': "{'Username': 'test@test.com', 'Password': 'xxxxxxxxxx', 'IntegratorKey': 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'}"}
```

When I send the request via Postman, it works just fine. In Postman I enter the header name as X-Docusign-Authentication, and the value as:

```
{"Username":"{{ds_username}}","Password":"{{ds_password}}","IntegratorKey":"{{ds_integrator_key}}"}
```

(subbing the same variable values as in the python code). Therefore it definitely has something to do with the way Requests is sending the header. Does anyone know why I might be getting the above error?
2019/06/10
[ "https://Stackoverflow.com/questions/56533066", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6490748/" ]
I'm able to reproduce this behavior: It looks like DocuSign doesn't accept Single Quotes around the sub-parameters of the x-DocuSign-Authentication header value. Your example fails: ``` {'Username': 'test@test.com', 'Password': 'xxxxxxxxxx', 'IntegratorKey': 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'} ``` This has more success: ``` {"Username": "test@test.com", "Password": "xxxxxxxxxx", "IntegratorKey": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"} ``` I'm not familiar enough with Python to advise if there's a different code structure you can follow to use double quotes instead of single. Worst case scenario, you may need to manually set the Header Value to follow that format.
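One way to make Python emit double quotes automatically (a sketch, not verified against DocuSign itself): serialize the header dict with `json.dumps` instead of `str()`. JSON always uses double quotes, which matches the format that succeeded above.

```python
import json

# illustrative credentials, matching the shape used in the question
docusign_auth = {
    "Username": "test@test.com",
    "Password": "xxxxxxxxxx",
    "IntegratorKey": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
}

# json.dumps produces double-quoted keys and values, unlike str(dict)
header_value = json.dumps(docusign_auth)
headers = {"X-DocuSign-Authentication": header_value}
print(header_value)
```

The resulting value round-trips through `json.loads`, so it is valid JSON rather than a Python `repr` string.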
Since you are having success using Postman, it will help to see exactly what is being sent by your request. For this, use:

```py
response = requests.get(your_url, headers=your_headers)
x = response.request.headers
print(x)
```

This will show you exactly what requests is preparing and sending off. If you post that response here, I'd be happy to help more.

[How can I see the entire HTTP request that's being sent by my Python application?](https://stackoverflow.com/questions/10588644/how-can-i-see-the-entire-http-request-thats-being-sent-by-my-python-application) The second answer shows all the possible parameters of your response object.
9,639
18,139,910
I want to save an ID between requests, using the Flask `session` cookie, but I'm getting an `Internal Server Error` as a result when I perform a request. I prototyped a simple Flask app to demonstrate my problem:

```
#!/usr/bin/env python
from flask import Flask, session

app = Flask(__name__)

@app.route('/')
def run():
    session['tmp'] = 43
    return '43'

if __name__ == '__main__':
    app.run()
```

Why can't I store the value in the `session` cookie when I perform the request?
2013/08/09
[ "https://Stackoverflow.com/questions/18139910", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2236401/" ]
According to [Flask sessions documentation](http://flask.pocoo.org/docs/quickstart/#sessions):

> ...
> What this means is that the user could look at the contents of your
> cookie but not modify it, unless they know the secret key used for
> signing.
>
> In order to use sessions you **have to set a secret key**.

Set a *secret key*. And you should return string, not int.

```
#!/usr/bin/env python
from flask import Flask, session

app = Flask(__name__)

@app.route('/')
def run():
    session['tmp'] = 43
    return '43'

if __name__ == '__main__':
    app.secret_key = 'A0Zr98j/3yX R~XHH!jmN]LWX/,?RT'
    app.run()
```
Under `app = Flask(__name__)` place this: `app.secret_key = os.urandom(24)` (remember to `import os` first).
9,641
52,113,440
Looking for a data splitter, line by line, using Python:

* RegEx?
* Contains?

As an example, the file "file" contains:

```
X
X
Y
Z
Z
Z
```

I need a clean way to split this file into 3 different ones, based on the letter.

**As a sample:**

```
def split_by_platform(FILE_NAME):
    with open(FILE_NAME, "r+") as infile:
        Data = infile.read()
        # If the line contains "X" write to x.txt
        # If the line contains "Y" write to y.txt
        # If the line contains "Z" write to z.txt
```

**x.txt** file will look like:

```
X
X
```

**y.txt** file will look like:

```
Y
```

**z.txt** file will look like:

```
Z
Z
Z
```
2018/08/31
[ "https://Stackoverflow.com/questions/52113440", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
EDIT thanks to @bruno desthuilliers, who reminded me of the correct way to go here: iterate over the file object (not 'readlines'):

```
def split_by_platform(FILE_NAME, out1, out2, out3):
    with open(FILE_NAME, "r") as infile, open(out1, 'a') as of1, \
         open(out2, 'a') as of2, open(out3, 'a') as of3:
        for line in infile:
            if "X" in line:
                of1.write(line)
            elif "Y" in line:
                of2.write(line)
            elif "Z" in line:
                of3.write(line)
```

EDIT on a hint of @dim: here the more general approach for an arbitrary-length list of flag chars:

```
def loop(infilename, flag_chars):
    with open(infilename, 'r') as infile:
        for line in infile:
            for c in flag_chars:
                if c in line:
                    with open(c + '.txt', 'a') as outfile:
                        outfile.write(line)
```
This should do it:

```
with open('my_text_file.txt') as infile, \
     open('x.txt', 'w') as x, open('y.txt', 'w') as y, open('z.txt', 'w') as z:
    for line in infile:
        if line.startswith('X'):
            x.write(line)
        elif line.startswith('Y'):
            y.write(line)
        elif line.startswith('Z'):
            z.write(line)
```
9,644
54,721,703
I am following this tutorial (<https://www.tensorflow.org/install/pip>) to install TensorFlow, but the last command:

```
python -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))"
```

gives me this result:

```
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
ImportError: numpy.core.multiarray failed to import

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 980, in _find_and_load
SystemError: <class '_frozen_importlib._ModuleLockManager'> returned a result with an error set
ImportError: numpy.core._multiarray_umath failed to import
ImportError: numpy.core.umath failed to import
2019-02-16 12:56:50.178364: F tensorflow/python/lib/core/bfloat16.cc:675] Check failed: PyBfloat16_Type.tp_base != nullptr
```

I have already installed `numpy`, as you can see:

```
pip3 install numpy
Requirement already satisfied: numpy in c:\programdata\anaconda3\lib\site-packages (1.15.4)
```

So why do I get this error message, and how can I fix it on Windows 10?
2019/02/16
[ "https://Stackoverflow.com/questions/54721703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3486308/" ]
Upgrade the numpy to solve the error ``` pip install numpy --upgrade ```
Ensure that you're using Python 3.x by running it as:

```py
python3 -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))"
```
9,646
69,837,913
I am trying to unpickle a file but I get this error while running the following code:

```
import pickle
import pandas as pd
import numpy

unpickled_df = pd.read_pickle("./ToyData.pickle")
unpickled_df
```

or

```
import pickle

# load : get the data from file
data = pickle.load(open('ToyData.pickle', "rb"))
```

Error output:

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-3-4f30cc427816> in <module>
      1 import pickle
      2 # load : get the data from file
----> 3 data = pickle.load(open('ToyData.pickle', "rb"))
      4 # loads : get the data from var
      5 #data = pickle.load(var)

AttributeError: Can't get attribute 'PandasIndexAdapter' on <module 'xarray.core.indexing' from 'C:\\Users\\User\\anaconda3\\lib\\site-packages\\xarray\\core\\indexing.py'>
```

How can I solve this? I have tried to install xarray, dask and other xarray dependencies using the commands below:

```
python -m pip install "xarray[complete]"
python -m pip install "xarray[io]"        # Install optional dependencies for handling I/O
#python -m pip install "xarray[accel]"    # Install optional dependencies for accelerating xarray
#python -m pip install "xarray[parallel]" # Install optional dependencies for dask arrays
#python -m pip install "xarray[viz]"      # Install optional dependencies for visualization
conda install xarray-0.16.1-py_0
```

I used an Anaconda Jupyter notebook to run the script above. I am unable to read the pickle file.
2021/11/04
[ "https://Stackoverflow.com/questions/69837913", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17326771/" ]
You cannot pickle.load files in a newly updated version of xarray that were made in a previous version of xarray. This is a known error that has no solution as "pickling is not recommended for long-term storage". <https://github.com/pydata/xarray/discussions/5642> .hdf or .json are better alternatives for long-term storage of your data since those are more likely to be supported in future versions.
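As a sketch of the "portable format instead of pickle" idea: a plain-data format like JSON stores values rather than class internals, so it survives library upgrades. (For xarray objects specifically, `Dataset.to_netcdf` / `xarray.open_dataset` is the usual long-term route; this stdlib example just shows the JSON side with illustrative data.)

```python
import json

# plain data survives library upgrades; pickled class internals may not
data = {"temperature": [21.5, 22.1, 19.8], "units": "degC"}

# write the data as JSON instead of pickling it
with open("toy_data.json", "w") as f:
    json.dump(data, f)

# any future Python (or any other language) can read it back
with open("toy_data.json") as f:
    restored = json.load(f)

print(restored == data)  # True
```

The trade-off is that JSON only holds basic types (dicts, lists, strings, numbers), so structured array data is usually better served by netCDF or HDF5.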
I had the same problem with `results = torch.load("results.pth.tar")`, getting "`AttributeError: Can't get attribute 'PandasIndexAdapter' on <module 'xarray.core.indexing'`".

I solved it by changing the xarray version on my computer to the version the file.pth.tar was saved with. In my case the file was saved with xarray version '0.14.1' and on my computer I had '2022.3.0'. I changed it with `pip install xarray==0.14.1` and it works!

If you are able to find the version "ToyData.pickle" was saved with, install that version in the environment of your notebook. Otherwise you can try every single version of xarray (good luck)!

I hope it helps!
9,656
68,964,555
I don't know why I am getting this error; the official documentation reference is <https://scikit-learn.org/stable/modules/generated/sklearn.metrics.det_curve.html#sklearn.metrics.det_curve>

**Code:**

```
import numpy as np
from sklearn.metrics import det_curve

fpr, fnr, thresholds = det_curve(y_test, y_pred)
print(fpr, fnr, thresholds)
```

**Error:**

```
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-46-d8d6f0b546ca> in <module>()
     10 
     11 import numpy as np
---> 12 from sklearn.metrics import det_curve
     13 
     14 fpr, fnr, thresholds = det_curve(y_test['cEXT'], y_pred)

ImportError: cannot import name 'det_curve' from 'sklearn.metrics' (/usr/local/lib/python3.7/dist-packages/sklearn/metrics/__init__.py)

---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
```
2021/08/28
[ "https://Stackoverflow.com/questions/68964555", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1385107/" ]
`det_curve` was only added in scikit-learn 0.24, so this `ImportError` usually means an older version is installed. Try reinstalling so you pick up a recent release: ``` pip uninstall -v scikit-learn pip install -v scikit-learn ``` This might install the related dependencies along with it.
The same kind of problem happened to me in my Jupyter notebook. I uninstalled and re-installed both scikit-learn and imblearn, but it didn't work. Then **restarting the kernel** and running the cell again solved the problem.
9,657
36,314,411
Given a file with resolution-compressed binary data, I would like to convert the sub-byte bits into their integer representations in python. By this I mean I need to interpret `n` bits from a file as an integer. Currently I am reading the file into `bitarray` objects, and am converting subsets of the objects into integers. The process works but is fairly slow and cumbersome. Is there a better way to do this, perhaps with the `struct` module? ``` import bitarray bits = bitarray.bitarray() with open('/dir/to/any/file.dat','r') as f: bits.fromfile(f,2) # read 2 bytes into the bitarray ## bits 0:4 represent a field field1 = int(bits[0:4].to01(), 2) # Converts to a string of 0s and 1s, then int()s the string ## bits 5:7 represent a field field2 = int(bits[4:7].to01(), 2) ## bits 8:16 represent a field field3 = int(bits[7:16].to01(), 2) print """All bits: {bits}\n\tfield1: {b1}={field1}\n\tfield2: {b2}={field2}\n\tfield3: {b3}={field3}""".format( bits=bits, b1=bits[0:4].to01(), field1=field1, b2=bits[4:7].to01(), field2=field2, b3=bits[7:16].to01(), field3=field3) ``` Outputs: ``` All bits: bitarray('0000100110000000') field1: 0000=0 field2: 100=4 field3: 110000000=384 ```
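For comparison, here is a Python 3, stdlib-only sketch of the same extraction: treat the two bytes as one big-endian integer via `int.from_bytes`, then pull the fields out with shifts and masks (the byte values reproduce the `0000100110000000` example above):

```python
# Two bytes whose bits are 00001001 10000000, matching the question's output.
raw = bytes([0b00001001, 0b10000000])

value = int.from_bytes(raw, "big")  # one 16-bit big-endian integer

field1 = value >> 12            # top 4 bits
field2 = (value >> 9) & 0b111   # next 3 bits
field3 = value & 0x1FF          # bottom 9 bits

print(field1, field2, field3)   # 0 4 384
```

`struct.unpack('>H', raw)[0]` produces the same 16-bit integer if you prefer the `struct` module.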
2016/03/30
[ "https://Stackoverflow.com/questions/36314411", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1176806/" ]
The whole solution :) Only javascript section is modified. ``` <div id="divCountries" class="fieldRow"> <div class="leftLabel labelWidth20"> <label for="txtcountries">Country:</label> </div> <div class="LeftField"> <div class="formField34"> <select id="txtCountries" type="text" name="Countries" alt="Countries" title="Countries" onchange="disableState()"> <option value=""></option> <option value="Afghanistan">Afghanistan</option> <option value="Albania">Albania</option> <option value="Algeria">Algeria</option> <option value="UnitedStates">United States</option> </select> </div> </div> <div class="leftLabel labelWidth20"> <label for="txtCustomerStates">State (USA):</label> </div> <div class="LeftField"> <div class="formField40"> <select id="txtCustomerStates" type="text" name="State" alt="United States" title="United States"> <option value=""></option> <option value="Alabama">Alabama</option> <option value="Alaska">Alaska</option> <option value="American Samoa">American Samoa</option> <option value="Arizona">Arizona</option> <option value="Arkansas">Arkansas</option> </select> </div> </div> <div id="divCityProvince" class="fieldRow"> <div class="leftLabel labelWidth20"> <label for="txtCityProvince">City/Province/Town<br>(International): </label> </div> <div class="formField34"> <input id="txtCityProvince" type="text" class="textfield" alt="City/Province/Town (International)" title="City/Province/Town (International)"> </div> </div> <script> function disableState() { if (document.getElementById("txtCountries").value === "UnitedStates") { document.getElementById("txtCustomerStates").disabled=false; document.getElementById("txtCityProvince").disabled='true'; } else { document.getElementById("txtCityProvince").disabled=false; document.getElementById("txtCustomerStates").disabled='true'; } } </script> ```
See this [fiddle](https://jsfiddle.net/lalu050/Lcu4jp91/) --------------------------------------------------------- That was because the value that was returned from `document.getElementById("txtCountries").value` was `UnitedStates` and not `United States`. Please note that the option for United States was as follows ``` <option value="UnitedStates">United States</option> ``` Notice that there is no space between the `United` and `States` in the value. **JS** ``` function disableState() { if (document.getElementById("txtCountries").value === "UnitedStates") { document.getElementById("txtCityProvince").disabled = 'true'; } else { document.getElementById("txtCustomerStates").disabled = 'true'; } } ```
9,658
59,307,832
I'm slowly trying to get my head around classes. I have a few working examples which i kinda understand but can someone please explain to me why this doesn’t work? ``` class python: def __init__(self,name): self.name=name def changename(self,newname): self.name=newname abc=python('python') print abc.name abc.changename = 'anaconda' print abc.name ``` All I’m trying to do here is change the value of `abc.name` at some point later in the code (it doesn’t need to be name if that’s a special word, but I did try name2, etc, same results...) If I do `print abc.changename` then I get the output anaconda but that’s not really what i wanted. Any help would be much appreciated. Also...Is it possible to do something like that at a later point in the life cycle of the code? ``` def newstuff(self, value1, value2): self.newvalue1 = value1 self.newvalue2 = value2 ``` So that I would have access to 2 new ‘things’ `abc.newvalue1` and `abc.newvalue2`. Does that make sense?? Sorry for the ‘things; I’m still trying to grasp which is an attribute, object, method, item, etc...
2019/12/12
[ "https://Stackoverflow.com/questions/59307832", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6037160/" ]
You can have an HTML5 `audio` tag in base64 encoding as so: ```html <audio controls autoplay loop src="data:audio/ogg;base64,BASE64CODE" /> ``` No need for a `type`! :)
If your audio files are over about 20MB then you might run into performance problems and errors in some browsers (notably Firefox). For this reason I recommend converting the Base64 into a binary Blob as described [here](https://stackoverflow.com/a/40329529/1417989) using the [convertDataURIToBinary](https://gist.github.com/borismus/1032746) snippet and then setting that as the `audio.src`.
9,660
2,162,914
I have the following Python script that reads numbers and outputs an error if the input is not a number. ``` import fileinput import sys for line in (txt.strip() for txt in fileinput.input()): if not line.isdigit(): sys.stderr.write("ERROR: not a number: %s\n" % line) ``` If I get the input from stdin, I have to press `Ctrl` + `D` **twice** to end the program. Why? I only have to press `Ctrl` + `D` once when I run the Python interpreter by itself. ``` bash $ python test.py 1 2 foo 4 5 <Ctrl+D> ERROR: not a number: foo <Ctrl+D> bash $ ```
2010/01/29
[ "https://Stackoverflow.com/questions/2162914", "https://Stackoverflow.com", "https://Stackoverflow.com/users/46821/" ]
I wrote an explanation about this in my answer to this question. [How to capture Control+D signal?](https://stackoverflow.com/questions/1516122/how-to-capture-controld-signal) In short, Control-D at the terminal simply causes the terminal to flush the input. This makes the `read` system call return. The first time it returns with a non-zero value (if you typed something). The second time, it returns with 0, which is code for "end of file".
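The "read returns 0 means end of file" behaviour described above can be reproduced without a terminal by using a pipe (a sketch of the system-call semantics, not of the tty line buffering itself):

```python
import os

r, w = os.pipe()
os.write(w, b"hello\n")
os.close(w)               # analogous to the final Ctrl+D: no more data coming

first = os.read(r, 1024)  # returns the buffered bytes
second = os.read(r, 1024) # returns b'', the 0-byte "end of file" result
os.close(r)
```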
The first time it considers it to be input, the second time it's for keeps! This only occurs when the input is from a tty. It is likely because of the terminal settings where characters are buffered until a newline (carriage return) is entered.
9,661
43,518,430
How to convert ``` json_decode = [{"538":["1,2,3","hello world"]},{"361":["0,9,8","x,x,y"]}] ``` to ``` {"538":["1,2,3","hello world"],"361":["0,9,8","x,x,y"]} ``` in python?
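Assuming the later keys should simply be merged into one dict, a minimal sketch:

```python
json_decode = [{"538": ["1,2,3", "hello world"]},
               {"361": ["0,9,8", "x,x,y"]}]

merged = {}
for d in json_decode:
    merged.update(d)   # fold each single-key dict into the result

print(merged)
```

If the input is still a JSON string rather than a Python list, pass it through `json.loads` first.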
2017/04/20
[ "https://Stackoverflow.com/questions/43518430", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6742101/" ]
***Try like this:*** ``` @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); EditText edittext = (EditText) findViewbyId(R.id.edittext); editText.setText("DefaultValue"); } ```
Add in xml layout file ``` android:text="defaultVal" ``` In onClick method or constructor/init method in java ``` editText.setText("DefaultValue"); ```
9,669
17,803,254
I want to find my public IP address from a Python program. So far <http://www.whatismyip.com/> and <http://whatismyip.org/> are the only sites that give the IP without a proxy; all the rest give the proxy's address. Now the .org site is using an image, and the first one writes the IP across many span elements, so I can't grab it with urllib. Any other idea or site so that I can get my IP?
2013/07/23
[ "https://Stackoverflow.com/questions/17803254", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1667349/" ]
I usually use <http://httpbin.org/>: ``` import requests ip = requests.get('http://httpbin.org/ip').json()['origin'] ```
Use [lxml](http://lxml.de/) ``` import urllib import lxml.html u = urllib.urlopen('http://www.whatismyip.com/') html = u.read() u.close() root = lxml.html.fromstring(html) print ''.join(x.text for x in root.cssselect('#greenip *')) ```
9,670
14,140,089
I have already posted a question today and it had 2 problems on it. One of which was solved perfectly, then it got a little complicated. So forgive me but I am posting the other question separately as it confused some peeps: I am new to python so apologies in advance. Any help is much appreciated. I have been stuck on this code for 2 weeks now and I have tunnel vision and cannot work it out: Basically our assignment was to get to grips with Object-Oriented Programming. We unfortunately have to use "get" and "set" which I've learnt a lot of people dislike, however, as per our tutor we have to do it like that. We were told to create a program whereby the user is presented with a screen with 3 options: 1. adding a student, 2. viewing a student, and 3. removing a student. Within my AddStudent function I have to ask the user to enter fname Lname age degree studying id number (these are the easy bits) and also module name and grade for each module. I have managed to create a loop whereby it will ask the user over and over to enter modules and corresponding grades and will break from said loop when the user enters -1 into the moduleName field. However, when trying to save it to a list named students[] ... (which is at the very top of my code above all functions, to apparently make it global) it saves all input from the user re: age name etc but when it comes to saving module names and grades it only saves the last input and not the multiple inputs I need it to. I am unsure if it is within my AddStudent function where it isn't saving or within my ViewStudent function: Both are below (remember I HAVE to use the GET and SET malarky) ;) ``` students = [] # Global List def addStudent(): print print "Adding student..." 
student = Student() firstName = raw_input("Please enter the student's first name: ") lastName = raw_input("Please enter the student's last name: ") degree = raw_input("Please enter the name of the degree the student is studying: ") studentid = raw_input("Please enter the students ID number: ") age = raw_input("Please enter the students Age: ") while True: moduleName = raw_input("Please enter module name: ") if moduleName == "-1": break grade = raw_input ("Please enter students grade for " + moduleName+": ") student.setFirstName(firstName) # Set this student's first name student.setLastName(lastName) student.setDegree(degree)# Set this student's last name student.setGrade(grade) student.setModuleName(moduleName) student.setStudentID(studentid) student.setAge(age) students.append(student) print "The student",firstName+' '+lastName,"ID number",studentid,"has been added to the system." ``` ........................ ``` def viewStudent(): print "Printing all students in database : " for person in students: print "Printing details for: " + person.getFirstName()+" "+ person.getLastName() print "Age: " + person.getAge() print "Student ID: " + person.getStudentID() print "Degree: " + person.getDegree() print "Module: " + person.getModuleName() print "Grades: " + person.getGrade() ```
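For reference, the usual fix for "only the last module is saved" is to store the modules in a list (or dict) on the student object and append inside the input loop, instead of overwriting a single moduleName/grade attribute after the loop ends. A Python 3 sketch of that idea (the class and method names are illustrative, not the assignment's required getter/setter API):

```python
class Student:
    """Illustrative class; not the assignment's required getters/setters."""

    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name
        self.modules = []          # one (name, grade) entry per module

    def add_module(self, name, grade):
        # Append inside the input loop instead of overwriting one attribute,
        # so every module entered by the user is kept.
        self.modules.append((name, grade))

s = Student("Ada", "Lovelace")
s.add_module("Maths", "A")
s.add_module("Computing", "A+")
```

In viewStudent, the per-module print then becomes a loop over `person.modules`.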
2013/01/03
[ "https://Stackoverflow.com/questions/14140089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1945133/" ]
You can use [String.Join](http://msdn.microsoft.com/en-us/library/dd783876(v=vs.100).aspx?cs-save-lang=1&cs-lang=csharp#code-snippet-1). ``` string.Join("\n", errorMessages); ```
Use join ``` string.Join(System.Environment.NewLine, errorMessages); ```
9,671
40,613,480
[![enter image description here](https://i.stack.imgur.com/vIEKx.png)](https://i.stack.imgur.com/vIEKx.png) I was defining a function for Heikin-Ashi, which is one of the popular chart types in Technical Analysis. I was writing a function for it using Pandas but ran into a little difficulty. This is how Heikin-Ashi [HA] is calculated- ``` Heikin-Ashi Candle Calculations HA_Close = (Open + High + Low + Close) / 4 HA_Open = (previous HA_Open + previous HA_Close) / 2 HA_Low = minimum of Low, HA_Open, and HA_Close HA_High = maximum of High, HA_Open, and HA_Close Heikin-Ashi Calculations on First Run HA_Close = (Open + High + Low + Close) / 4 HA_Open = (Open + Close) / 2 HA_Low = Low HA_High = High ``` There is a lot of stuff available on various websites using for loops and pure Python, but I think Pandas can also do the job well. This is my progress- ``` def HA(df): df['HA_Close']=(df['Open']+ df['High']+ df['Low']+ df['Close'])/4 ha_o=df['Open']+df['Close'] #Creating a Variable #(for 1st row) HA_O=df['HA_Open'].shift(1)+df['HA_Close'].shift(1) #Another variable #(for subsequent rows) df['HA_Open']=[ha_o/2 if df['HA_Open']='nan' else HA_O/2] #(error Part Where am i going wrong?) df['HA_High']=df[['HA_Open','HA_Close','High']].max(axis=1) df['HA_Low']=df[['HA_Open','HA_Close','Low']].min(axis=1) return df ``` Can anyone help me with this please? It doesn't work.... 
I tried on this- ``` import pandas_datareader.data as web import HA import pandas as pd start='2016-1-1' end='2016-10-30' DAX=web.DataReader('^GDAXI','yahoo',start,end) ``` This is the new code I wrote ``` def HA(df): df['HA_Close']=(df['Open']+ df['High']+ df['Low']+df['Close'])/4 ha_o=df['Open']+df['Close'] df['HA_Open']=0.0 HA_O=df['HA_Open'].shift(1)+df['HA_Close'].shift(1) df['HA_Open']= np.where( df['HA_Open']==np.nan, ha_o/2, HA_O/2 ) df['HA_High']=df[['HA_Open','HA_Close','High']].max(axis=1) df['HA_Low']=df[['HA_Open','HA_Close','Low']].min(axis=1) return df ``` But the HA_Open result was still not satisfactory
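As a sanity check, the Heikin-Ashi recurrence can be verified with plain Python lists before involving pandas at all (the OHLC values below are made up):

```python
op = [10, 12]; hi = [15, 14]; lo = [9, 11]; cl = [12, 13]

ha_close = [(op[i] + hi[i] + lo[i] + cl[i]) / 4.0 for i in range(len(op))]

ha_open = [(op[0] + cl[0]) / 2.0]       # first-run rule
for i in range(1, len(op)):
    # subsequent rows use the *previous HA* open/close, not the raw open
    ha_open.append((ha_open[i - 1] + ha_close[i - 1]) / 2.0)

ha_high = [max(hi[i], ha_open[i], ha_close[i]) for i in range(len(op))]
ha_low = [min(lo[i], ha_open[i], ha_close[i]) for i in range(len(op))]
```

Any vectorized pandas version should reproduce these numbers on the same inputs.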
2016/11/15
[ "https://Stackoverflow.com/questions/40613480", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7053990/" ]
Here is the fastest, accurate and efficient implementation as per my tests: ``` def HA(df): df['HA_Close']=(df['Open']+ df['High']+ df['Low']+df['Close'])/4 idx = df.index.name df.reset_index(inplace=True) for i in range(0, len(df)): if i == 0: df.set_value(i, 'HA_Open', ((df.get_value(i, 'Open') + df.get_value(i, 'Close')) / 2)) else: df.set_value(i, 'HA_Open', ((df.get_value(i - 1, 'HA_Open') + df.get_value(i - 1, 'HA_Close')) / 2)) if idx: df.set_index(idx, inplace=True) df['HA_High']=df[['HA_Open','HA_Close','High']].max(axis=1) df['HA_Low']=df[['HA_Open','HA_Close','Low']].min(axis=1) return df ``` Here is my test algorithm (essentially I used the algorithm provided in this post to benchmark the speed results): ``` import quandl import time df = quandl.get("NSE/NIFTY_50", start_date='1997-01-01') def test_HA(): print('HA Test') start = time.time() HA(df) end = time.time() print('Time taken by set and get value functions for HA {}'.format(end-start)) start = time.time() df['HA_Close_t']=(df['Open']+ df['High']+ df['Low']+df['Close'])/4 from collections import namedtuple nt = namedtuple('nt', ['Open','Close']) previous_row = nt(df.ix[0,'Open'],df.ix[0,'Close']) i = 0 for row in df.itertuples(): ha_open = (previous_row.Open + previous_row.Close) / 2 df.ix[i,'HA_Open_t'] = ha_open previous_row = nt(ha_open, row.Close) i += 1 df['HA_High_t']=df[['HA_Open_t','HA_Close_t','High']].max(axis=1) df['HA_Low_t']=df[['HA_Open_t','HA_Close_t','Low']].min(axis=1) end = time.time() print('Time taken by ix (iloc, loc) functions for HA {}'.format(end-start)) ``` Here is the output I got on my i7 processor (please note the results may vary depending on your processor speed but I assume that the results will be similar): ``` HA Test Time taken by set and get value functions for HA 0.05005788803100586 Time taken by ix (iloc, loc) functions for HA 0.9360761642456055 ``` My experience with Pandas shows that functions like `ix`, `loc`, `iloc` are slower in comparison to `set_value` 
and `get_value` functions. Moreover computing value for a column on itself using `shift` function gives erroneous results.
Numpy version working with Numba. Note the recurrence uses the previous HA open/close, per the formula in the question: ``` @jit(nopython=True) def heiken_ashi_numpy(c_open, c_high, c_low, c_close): ha_close = (c_open + c_high + c_low + c_close) / 4 ha_open = np.empty_like(ha_close) ha_open[0] = (c_open[0] + c_close[0]) / 2 for i in range(1, len(c_close)): ha_open[i] = (ha_open[i - 1] + ha_close[i - 1]) / 2 ha_high = np.maximum(np.maximum(ha_open, ha_close), c_high) ha_low = np.minimum(np.minimum(ha_open, ha_close), c_low) return ha_open, ha_high, ha_low, ha_close ```
9,681
37,187,962
I have a problem with Python Selenium PhantomJS which I couldn't solve: `element.location` returns the wrong location. When I look at the cropped image, it shows part of the desired image and also an unwanted part. It works perfectly on Firefox but does not work on PhantomJS. Here is the code: ``` def screenOfElement(self, _element): _location = _element.location _size = _element.size _wholePage = Image.open(StringIO.StringIO(base64.decodestring(self.webdriver.get_screenshot_as_base64()))) _left = _location['x'] _top = _location['y'] _right = _location['x'] + _size['width'] _bottom = _location['y'] + _size['height'] return _wholePage.crop((_left, _top, _right, _bottom)) ``` Thanks.
2016/05/12
[ "https://Stackoverflow.com/questions/37187962", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3425211/" ]
You can use square brackets to create a reference to an array: ``` pass_in( $str, [qw(A B C D E)]); ``` [perldoc perlref](http://perldoc.perl.org/perlref.html#Making-References)
In order to pass an in array, you have must an array to pass! `qw()` does not create an array. It just puts a bunch of scalars on the stack. That for which you are looking is `[ ]`. It conveniently creates an array, initializes the array using the expression within, and returns a reference to the array. ``` pass_in( $str, [qw( A B C D E )] ); ``` Alternatively, you could rewrite your subroutine to accept a list of values. ``` sub pass_in { my $str = shift; for my $e (@_) { print "I see str $str and list elem: $e\n"; } return 0; } pass_in( "hello", qw( A B C D E ) ); ```
9,691
23,183,868
I was going through a very simple python3 guide to using string operations and then I ran into this weird error: ``` In [4]: # create string string = 'Let\'s test this.' # test to see if it is numeric string_isnumeric = string.isnumeric() Out [4]: AttributeError Traceback (most recent call last) <ipython-input-4-859c9cefa0f0> in <module>() 3 4 # test to see if it is numeric ----> 5 string_isnumeric = string.isnumeric() AttributeError: 'str' object has no attribute 'isnumeric' ``` The problem is that, as far as I can tell, `str` **[DOES](http://www.tutorialspoint.com/python/string_isnumeric.htm)** have an attribute, `isnumeric`.
2014/04/20
[ "https://Stackoverflow.com/questions/23183868", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2935984/" ]
No, `str` objects do not have an `isnumeric` method. `isnumeric` is only available for unicode objects. In other words: ``` >>> d = unicode('some string', 'utf-8') >>> d.isnumeric() False >>> d = unicode('42', 'utf-8') >>> d.isnumeric() True ```
`isnumeric()` only works on Unicode strings. To define a string as Unicode you could change your string definitions like so: ``` In [4]: s = u'This is my string' isnum = s.isnumeric() ``` This will now store False. Note: I also changed your variable name in case you imported the module string.
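In Python 3 the distinction goes away: `str` is Unicode, so the method exists directly. This also shows the question's traceback must have come from a Python 2 interpreter:

```python
# Python 3: every str is Unicode, so isnumeric() is a plain str method.
has_method = hasattr(str, "isnumeric")
digits = "42".isnumeric()
words = "forty two".isnumeric()

print(has_method, digits, words)  # True True False
```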
9,692
46,257,064
I'm trying to create a piece of code in python that allows the user to enter their username, password and date of birth and then allows them to change this information. This is what I have so far. ``` import sqlite3 conn=sqlite3.connect("Database.db") cursor=conn.cursor() def createTable(): cursor.execute("CREATE TABLE IF NOT EXISTS userInfo(username TEXT, password TEXT, dateOfBirth TEXT)") def enterData(): inputUser=input("Enter your username") inputPass=input("Enter your password") inputDoB=input("Enter your date of birth") cursor.execute("INSERT INTO userInfo (username, password, dateOfBirth) VALUES (?, ?, ?)", (inputUser, inputPass, inputDoB)) conn.commit() def modifyData(): update=input("would you like to change your username") if update=="yes": newUsername=input("Update your username") cursor.execute("UPDATE userInfo SET username='?' WHERE username='?'",(newUsername, user)) conn.commit createTable() enterData() modifyData() ``` It allows me to enter the data but I don't know the specific syntax for updating the data value to a variable. When I try and run this code, this error appears: ``` sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 0, and there are 7 supplied. ``` Any help would be greatly appreciated.
2017/09/16
[ "https://Stackoverflow.com/questions/46257064", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8619721/" ]
When I've copied it and run the first error that occurred said that variable `user` is not defined. Therefore you need to either send it to function `modifyData` from `enterData` or ask the user for it. Then I've got the error you've mentioned. Just remove the single quotes around `?` Code: ``` def modify_data(): update = input('Would you like to change your username[y/N]: ') if update.lower() == 'y': old_username = input('Old username: ') new_username = input('Update your username: ') cursor.execute('UPDATE userInfo SET username=? WHERE username=?', (new_username, old_username)) conn.commit() ``` By the way, I've found [this tool](http://sqlitebrowser.org/) that can help you visualize it. (It's free and open source)
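Here is a self-contained variant of the corrected UPDATE, run against an in-memory database so it can be executed as-is. Note the `?` placeholders carry no quotes: the driver binds the values itself.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE userInfo (username TEXT, password TEXT, dateOfBirth TEXT)")
cursor.execute("INSERT INTO userInfo VALUES (?, ?, ?)", ("alice", "pw", "2000-01-01"))

# Unquoted placeholders: values are bound, never spliced into the SQL text.
cursor.execute("UPDATE userInfo SET username=? WHERE username=?", ("alice2", "alice"))
conn.commit()

row = cursor.execute("SELECT username FROM userInfo").fetchone()
print(row)  # ('alice2',)
```

Writing `'?'` with quotes turns the placeholder into a literal string, which is why sqlite3 reported 0 bindings expected while 2 (or more) were supplied.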
You probably want `raw_input`, not `input`. `raw_input` reads data from standard input and returns it. `input()` is equivalent to `eval(raw_input())`: i.e., it evaluates the input as Python code. I'm not 100% sure if this is your root problem though, if you've already been using quotes around your input? Or maybe you're not -- can you provide all the input and output from a run of this program?
9,697
67,670,537
I have an Amazon S3 server filled with multiple buckets, each bucket containing multiple subfolders. There are easily 50,000 files in total. I need to generate an excel sheet that contains the path/url of each file in each bucket. For eg, If I have a bucket called b1, and it has a file called f1.txt, I want to be able to export the path of f1 as b1/f1.txt. This needs to be done for every one of the 50,000 files. I have tried using S3 browsers like Expandrive and Cyberduck, however they require you to select each and every file to copy their urls. I also tried exploring the boto3 library in python, however I did not come across any in built functions to get the file urls. I am looking for any tool I can use, or even a script I can execute to get all the urls. Thanks.
2021/05/24
[ "https://Stackoverflow.com/questions/67670537", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8018877/" ]
Do you have access to the aws cli? `aws s3 ls --recursive {bucket}` will list all nested files in a bucket. Eg this bash command will list all buckets, then recursively print all files in each bucket: ``` aws s3 ls | while read x y bucket; do aws s3 ls --recursive $bucket | while read x y z path; do echo $path; done; done ``` (the 'read's are just to strip off uninteresting columns). nb I'm using v1 CLI.
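To get from a listing to the spreadsheet the question asks for, the path-building part can be kept separate from the S3 call. The helper below is a sketch that takes (bucket, key) pairs — in practice they would come from the CLI output above or from boto3's `list_objects_v2` paginator — and writes `bucket/key` rows to a CSV file, which Excel opens directly (the bucket and key names here are made up):

```python
import csv
import io

def write_paths(pairs, fileobj):
    """Write one 'bucket/key' path per row; pairs is any (bucket, key) iterable."""
    writer = csv.writer(fileobj)
    writer.writerow(["path"])
    for bucket, key in pairs:
        writer.writerow(["{}/{}".format(bucket, key)])

# Demonstration with made-up object keys.
buf = io.StringIO()
write_paths([("b1", "f1.txt"), ("b1", "sub/f2.txt")], buf)
output = buf.getvalue()
```

For real use, pass an `open("paths.csv", "w", newline="")` file object instead of the StringIO buffer.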
Amazon S3 Inventory can help you with this use case; it is worth evaluating. Refer: <https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-inventory.html>
9,698
52,169,443
Currently, I'm facing an issue with uploading (using Python) emoji data to BigQuery. This is sample data which I'm trying to upload to BQ: ``` {"emojiCharts":{"emoji_icon":"\ud83d\udc4d","repost": 4, "doc": 4, "engagement": 0, "reach": 0, "impression": 0}} {"emojiCharts":{"emoji_icon":"\ud83d\udc49","repost": 4, "doc": 4, "engagement": 43, "reach": 722, "impression": 4816}} {"emojiCharts":{"emoji_icon":"\u203c","repost": 4, "doc": 4, "engagement": 0, "reach": 0, "impression": 0}} {"emojiCharts":{"emoji_icon":"\ud83c\udf89","repost": 5, "doc": 5, "engagement": 43, "reach": 829, "impression": 5529}} {"emojiCharts":{"emoji_icon":"\ud83d\ude34","repost": 5, "doc": 5, "engagement": 222, "reach": 420, "impression": 2805}} {"emojiCharts":{"emoji_icon":"\ud83d\ude31","repost": 3, "doc": 3, "engagement": 386, "reach": 2868, "impression": 19122}} {"emojiCharts":{"emoji_icon":"\ud83d\udc4d\ud83c\udffb","repost": 5, "doc": 5, "engagement": 43, "reach": 1064, "impression": 7098}} {"emojiCharts":{"emoji_icon":"\ud83d\ude3b","repost": 3, "doc": 3, "engagement": 93, "reach": 192, "impression": 1283}} {"emojiCharts":{"emoji_icon":"\ud83d\ude2d","repost": 6, "doc": 6, "engagement": 212, "reach": 909, "impression": 6143}} {"emojiCharts":{"emoji_icon":"\ud83e\udd84","repost": 8, "doc": 8, "engagement": 313, "reach": 402, "impression": 2681}} {"emojiCharts":{"emoji_icon":"\ud83d\ude18","repost": 7, "doc": 7, "engagement": 0, "reach": 8454, "impression": 56366}} {"emojiCharts":{"emoji_icon":"\ud83d\ude05","repost": 5, "doc": 5, "engagement": 74, "reach": 1582, "impression": 10550}} {"emojiCharts":{"emoji_icon":"\ud83d\ude04","repost": 5, "doc": 5, "engagement": 73, "reach": 3329, "impression": 22206}} ``` The issue is that BigQuery cannot see any of these emoji (`\ud83d\ude04`) and will display them only in the escaped format (`\u203c`). Even if the field is **STRING**, it displays 2 black rhombi. Why can't BQ convert the escape sequence into the actual emoji and display it as a string? 
**Questions:** Is there any way to upload emoji to BigQuery so that they load correctly? (They "**will be used in Google Data Studio**".) Should I manually (hard-coded) change all the emoji codes to acceptable ones, and which format is acceptable?
2018/09/04
[ "https://Stackoverflow.com/questions/52169443", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
As user 'numeral' mentions in their comment: > > Check out [charbase.com/1f618-unicode-face-throwing-a-kiss](http://charbase.com/1f618-unicode-face-throwing-a-kiss) What you want is to convert the javascript escape characters to actual unicode data. > > > , you need to change the encoding of the emojis for them to be accurately represented as one character: ``` SELECT "\U0001f604 \U0001f4b8" -- , "\ud83d\udcb8" -- , "\ud83d\ude04" ``` The 2nd and 3d line fail with an error like `Illegal escape sequence: Unicode value \ud83d is invalid at [2:7]`, but the first line gives the correct display in BigQuery and Data Studio: [![enter image description here](https://i.stack.imgur.com/COVg7.png)](https://i.stack.imgur.com/COVg7.png) [![enter image description here](https://i.stack.imgur.com/qls2X.png)](https://i.stack.imgur.com/qls2X.png) Additional thoughts about this: * <https://stackoverflow.com/search?q=%5Cud83d>
Python does not support "surrogate characters" representation which is composed of multiple UTF-16 characters and some emojis (over `0xFFFF`) use them. For example, can be represented by `\U0001f3e6` (UTF-32) in Python and some languages uses `\ud83c\udfe6`. For those values are less than `0xFFFF`, python and other languages both use the same representation, e.g. `\u3020` (〠). To solve the encoding issue, you can manually convert the emoji characters or consider using some libraries, e.g. <https://github.com/hartwork/surrogates> to convert them to UTF-32. Also, BigQueqry Python client's `load_table_from_json` had a bug about those characters whose values are over `0xFFFF`, even you use correct UTF-32 representation. It just released a new version to fix it a couple of days ago. ref: <https://github.com/googleapis/python-bigquery/releases/tag/v2.24.0> Some references about Bank emoji listing different representation: * <https://www.fileformat.info/info/unicode/char/1f3e6/index.htm> * <https://charbase.com/1f3e6-unicode-bank>
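In Python 3 specifically, a string that still contains the UTF-16 surrogate escapes can be repaired with the `surrogatepass` error handler before the data is sent to BigQuery:

```python
# "\ud83d\ude04" is a surrogate pair; round-tripping it through UTF-16 with
# surrogatepass joins the pair into the single code point U+1F604.
broken = "\ud83d\ude04"
fixed = broken.encode("utf-16", "surrogatepass").decode("utf-16")

print(len(broken), len(fixed))  # 2 1
```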
9,700
68,633,577
I am a beginner in Python and I have started with web scraping. I want to extract data from a tourist site: I need the names of the hotels, the arrangements available in each hotel, and the price, but I got stuck on the list of arrangements. Each hotel can have several arrangements, but my loop doesn't work and I don't know why. Here are my code and the output it provides; if any of you can help me, thank you in advance. ``` from time import sleep from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait PATH = "C:\\Users\\marketing2\\Documents\\chromedriver.exe" driver = webdriver.Chrome(PATH) driver.get('https://tn.tunisiebooking.com/') wait = WebDriverWait(driver, 20) # write script //Your Script Seems fine script = "document.getElementById('ville_des').value ='Sousse';document.getElementById('depart').value ='05/08/2021';document.getElementById('checkin').value ='05/08/2021';document.getElementById('select_ch').value = '1';" # Execute script driver.execute_script(script) # click bouton search btn_rechercher = driver.find_element_by_id('boutonr') btn_rechercher.click() sleep(10) # click bouton details btn_plus = driver.find_element_by_id('plus_res') btn_plus.click() sleep(10) #getting the hotel names and by xpath in a loop hotels=[] pensions=[] for v in range(1, 5): hotel = driver.find_element_by_xpath('/html/body/div[6]/div[2]/div[1]/div/div[2]/div/div[4]/div[' + str(v) + ']/div/div[3]/div[1]/div[1]/span/a/h3').get_attribute('innerHTML') for j in range (1,3): pension= driver.find_element_by_xpath('/html/body/div[6]/div[2]/div[1]/div/div[2]/div/div[4]/div[1]/div/div[3]/div[3]/div[1]/div[1]/form/div[1]/div[' + str(j) + ']/u').get_attribute('innerHTML') pensions.append((pension)) hotels.append((hotel,pensions)) print(hotels) ```
2021/08/03
[ "https://Stackoverflow.com/questions/68633577", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16571797/" ]
You are assigning `0` to `stringArray`, `booleanArray` and `numberArray`. A `0` isn't an array, so you can't add values to it like that. ([Assigning values to a primitive type is a noop](https://stackoverflow.com/a/5201148).) Assign an array to those values and use `.push()` on them to add values to those arrays. (Using `[i]` will leave gaps in your array, because each iteration `i` will be incremented but only one of the arrays will get a value at that index, leaving the other two blank.) ```js let data = [ 538, 922, "725", false, 733, 170, 294, "336", false, "538", 79, 390, true, "816", "50", 920, "845", "728", "133", "468", false, true, 520, 747, false, "540", true, true, "262", 625, "614", 316, "978", "865", "506", 552, 187, "444", 676, 544, 840, 691, "50", 21, "926", "10", 37, "704", "36", 978, 830, 438, true, "713", 497, "371", 243, 638, 454, 291, "185", "941", "581", 336, 458, "585", false, 870, "639", "360", 599, "644", "767", 878, "646", 117, "933", 48, 934, "155", "749", 808, 922, 94, "360", "395", false, 407, true, false, "97", "233", 70, "609", false, false, "13", "698", 797, "789" ]; let sortedData = { stringArray: [], booleanArray: [], numberArray: [], } for (let i = 0; i < data.length; i++) { if (typeof data[i] == 'string') { sortedData.stringArray.push(data[i]); } else if (typeof data[i] == 'boolean') { sortedData.booleanArray.push(data[i]); } else if (typeof data[i] == 'number') { sortedData.numberArray.push(data[i]); } } console.log(sortedData); ```
Change the values of zero to an array, like this:

```
let sortedData = {
  stringArray: [],
  booleanArray: [],
  numberArray: [],
}
```

Then set a value on an array item with the assignment operator (`=`, not the comparison operator `===`):

```
// set a value on an array item
//          ||
//          \/
sortedData.stringArray[i] = 'something';
```
9,701
15,021,877
I am developing a GUI with PyQt, to perform visual analysis of the data collected during some experiments. The GUI asks the user to indicate the directory where the data to be analyzed is located:

```
class ExperimentAnalyzer(QtGui.QMainWindow):

    ## other stuff here

    def loadExperiment(self):
        directory = QtGui.QFileDialog.getExistingDirectory(self, "Select Directory")
        ## load data from directory here
```

The GUI provides a ***play*** functionality, by means of which the user can see how experimental data changes over time. This is implemented through a **QTimer**:

```
    def playOrPause(self):
        ## play
        if self.appStatus.timer is None:
            self.appStatus.timer = QtCore.QTimer(self)
            self.appStatus.timer.connect(self.appStatus.timer, QtCore.SIGNAL("timeout()"), self.nextFrame)
            self.appStatus.timer.start(40)
        ## pause
        else:
            self.appStatus.timer.stop()
            self.appStatus.timer = None
```

If I ***play*** the temporal sequence of data, then ***pause***, then try to change the directory to load data from a new experiment, I experience a **segmentation fault**. Using some debug prints, I found out that the application crashes when I call

```
QtGui.QFileDialog.getExistingDirectory(self, "Select Directory")
```

in the ***loadExperiment*** method. I'm pretty new to Qt and I think I'm not handling the timer properly.

I'm using PyQt 4.9 and Python 2.7.3 on Ubuntu 10.04.

**Edit-1**:
-----------

After Luke's answer, I went back to my code. Here's the ***nextFrame*** method, invoked every time the timer emits the ***timeout*** signal. It updates a QGraphicsScene element contained in the GUI:

```
    def nextFrame(self):
        image = Image.open("<some jpg>")
        w, h = image.size
        imageQt = ImageQt.ImageQt(image)
        pixMap = QtGui.QPixmap.fromImage(imageQt)
        self.scene.clear()
        self.scene.addPixmap(pixMap)
        self.view.fitInView(QtCore.QRectF(0, 0, w, h), QtCore.Qt.KeepAspectRatio)
```

where the self.scene and self.view objects have been previously instantiated in the GUI constructor as

```
self.view = QtGui.QGraphicsView(self)
self.scene = QtGui.QGraphicsScene()
self.view.setScene(self.scene)
```

I found out that after commenting out this line:

```
# self.scene.addPixmap(pixMap)
```

and repeating the same sequence of operations reported above, the segmentation fault does not occur anymore.

**Edit-2**:
-----------

Running the application with gdb (with python-dbg), it turns out that the segmentation fault occurs after a call to QPainter::drawPixmap:

```
(gdb) bt
#0 0xb6861f1d in ?? () from /usr/lib/i386-linux-gnu/libQtGui.so.4
#1 0xb685d491 in ?? () from /usr/lib/i386-linux-gnu/libQtGui.so.4
#2 0xb693bcd3 in ?? () from /usr/lib/i386-linux-gnu/libQtGui.so.4
#3 0xb69390bc in ?? () from /usr/lib/i386-linux-gnu/libQtGui.so.4
#4 0xb6945c77 in ?? () from /usr/lib/i386-linux-gnu/libQtGui.so.4
#5 0xb68bd424 in QPainter::drawPixmap(QPointF const&, QPixmap const&) () from /usr/lib/i386-linux-gnu/libQtGui.so.4
```

Therefore, it's not a problem related to timer handling, as I believed at first. The segmentation fault happens because I'm doing something wrong with the pixMap.
2013/02/22
[ "https://Stackoverflow.com/questions/15021877", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1759574/" ]
You can do it this way:

```
$(function(){
  $('.yourelem, .targetDiv').click(function(ev){
    $('.targetDiv').slideDown('fast');
    ev.stopPropagation();
  });
  $(document).click(function(){
    $('.targetDiv').slideUp('fast');
  });
});
```

[See the action in jsbin](http://jsbin.com/ayuhol/1/edit)
Try this:

HTML:

```
<a id="show" href="#">show</a>
<div class="test" style="display: none;">
  hey
</div>
```

JS:

```
$('a#show').click(function(event) {
  event.stopPropagation();
  $('.test').toggle();
});

$('html').click(function() {
  $('.test').hide();
});
```
9,706
43,222,378
I'm working on a Python script to extract multiple strings from a .csv file, but I can't recover the Spanish characters (like á, é, í) after I open the file and read the lines. This is my code so far:

```
import csv

list_text = []
with open(file, 'rb') as data:
    reader = csv.reader(data, delimiter='\t')
    for row in reader:
        print row[0]
        list_text.extend(row[0])
print list_text
```

And I get something like this:

```
'Vivió el sueño, ESPAÑOL...'
['Vivi\xc3\xb3 el sue\xc3\xb1o, ESPA\xc3\x91OL...']
```

I don't know why it prints in the correct form, but is not correct when I append it to the list.

**Edited:** The problem is that I need to recover the characters because, after I read the file, the list has thousands of words in it. I don't need to print it; I need to use regex to get rid of the punctuation, but the regex also deletes the backslashes and leaves the words incomplete.
2017/04/05
[ "https://Stackoverflow.com/questions/43222378", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7817760/" ]
When you print a list, Python shows the `repr` of each element, with non-ASCII bytes escaped — that way characters like `\n` don't throw off the list display. The string itself still holds the correct UTF-8 bytes, so if you decode it (or print the string directly) it will work properly:

```
'Vivi\xc3\xb3 el sue\xc3\xb1o, ESPA\xc3\x91OL...'.decode('utf-8')
```
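For what it's worth, the same behaviour is easy to reproduce — here in Python 3 `bytes` syntax (an assumption on my part; the question uses Python 2 `str`, where `.decode('utf-8')` works the same way):

```python
# the raw UTF-8 bytes, exactly as they appear escaped inside the printed list
raw = b'Vivi\xc3\xb3 el sue\xc3\xb1o, ESPA\xc3\x91OL...'

# decoding the bytes as UTF-8 recovers the accented characters
text = raw.decode('utf-8')
print(text)  # Vivió el sueño, ESPAÑOL...
```

Any regex cleanup should then be run on the decoded string, not on the escaped byte representation.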
1. Use unicodecsv instead of csv — the csv module doesn't support unicode well.
2. Open the file with codecs and 'utf-8'.

See the code below:

```
import unicodecsv as csv
import codecs

list_text = []
with codecs.open(file, 'rb', 'utf-8') as data:
    reader = csv.reader(data, delimiter='\t')
    for row in reader:
        print row[0]
        list_text.extend(row[0])
print list_text
```
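On Python 3, the standard `csv` module handles this natively via the `encoding` argument of `open`, with no extra packages. A minimal, self-contained sketch (the file name and contents are made up for illustration):

```python
import csv
import os
import tempfile

# write a small UTF-8 tab-separated file to read back (stand-in for the real data)
path = os.path.join(tempfile.mkdtemp(), "ejemplo.tsv")
with open(path, "w", encoding="utf-8", newline="") as f:
    f.write("Vivió el sueño\tESPAÑOL\n")

# read it back: text mode + encoding='utf-8' yields proper str values
list_text = []
with open(path, encoding="utf-8", newline="") as data:
    reader = csv.reader(data, delimiter="\t")
    for row in reader:
        list_text.append(row[0])  # append keeps whole strings together

print(list_text)  # ['Vivió el sueño']
```

Note the use of `append` rather than `extend`: `extend` on a string adds it character by character, which is another pitfall in the original snippet.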
9,711
4,673,166
I'm trying to use httplib to send credit card information to authorize.net. When I try to post the request, I get the following traceback:

```
File "./lib/cgi_app.py", line 139, in run
  res = method()
File "/var/www/html/index.py", line 113, in ProcessRegistration
  conn.request("POST", "/gateway/transact.dll", mystring, headers)
File "/usr/local/lib/python2.7/httplib.py", line 946, in request
  self._send_request(method, url, body, headers)
File "/usr/local/lib/python2.7/httplib.py", line 987, in _send_request
  self.endheaders(body)
File "/usr/local/lib/python2.7/httplib.py", line 940, in endheaders
  self._send_output(message_body)
File "/usr/local/lib/python2.7/httplib.py", line 803, in _send_output
  self.send(msg)
File "/usr/local/lib/python2.7/httplib.py", line 755, in send
  self.connect()
File "/usr/local/lib/python2.7/httplib.py", line 1152, in connect
  self.timeout, self.source_address)
File "/usr/local/lib/python2.7/socket.py", line 567, in create_connection
  raise error, msg
gaierror: [Errno -2] Name or service not known
```

I build my request like so:

```
mystring = urllib.urlencode(cardHash)
headers = {"Content-Type": "text/xml", "Content-Length": str(len(mystring))}
conn = httplib.HTTPSConnection("secure.authorize.net:443", source_address=("myurl.com", 443))
conn.request("POST", "/gateway/transact.dll", mystring, headers)
```

To add another layer to this, it was working on our development server, which has httplib 2.6 and without the source\_address parameter in httplib.HTTPSConnection. Any help is greatly appreciated.

**EDIT:** I can run it from the command line, so apparently this is some sort of permissions issue. Any ideas what permissions I would need to grant to which users to make this happen? Possibly Apache can't open the port?
2011/01/12
[ "https://Stackoverflow.com/questions/4673166", "https://Stackoverflow.com", "https://Stackoverflow.com/users/355689/" ]
> gaierror: [Errno -2] Name or service not known

This error often indicates a failure of your DNS resolver. Does `ping secure.authorize.net` return successful replies from the same server that receives the gaierror? Does the hostname have a typo in it?
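One way to separate "DNS is broken" from "the request is malformed" is to attempt resolution directly: `gaierror` is raised by the resolver before any HTTP traffic happens. A sketch in Python 3 syntax (in the question's Python 2, `socket.getaddrinfo` behaves the same):

```python
import socket

def can_resolve(host, port=443):
    """Return True if the resolver can turn `host` into an address."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        # the same error class as "[Errno -2] Name or service not known"
        return False

# ".invalid" is a reserved TLD that is guaranteed never to resolve,
# so this call demonstrates the failure path.
print(can_resolve("name.invalid"))
```

If this returns `False` for the real hostname from inside the Apache process but `True` from the command line, the problem is environment-specific (e.g. a chroot or SELinux policy blocking the resolver), not the code.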
Pass the port separately from the host:

```
conn = httplib.HTTPSConnection("secure.authorize.net", 443, ....)
```
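In Python 3 the class lives in `http.client`, but the constructor shape is the same: host and port as separate positional arguments. Constructing the object makes no network connection yet, so it's easy to check that the parts were parsed as intended:

```python
import http.client

# host and port passed separately; nothing is sent until .request()/.connect()
conn = http.client.HTTPSConnection("secure.authorize.net", 443, timeout=10)
print(conn.host, conn.port)  # secure.authorize.net 443
```

With the original `"secure.authorize.net:443"` string *plus* a `source_address` keyword, the library can end up trying to resolve a malformed name, which is consistent with the gaierror in the traceback.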
9,714
63,587,626
I am receiving the following error when I attempt to install Jupyter Notebook on Windows:

```
ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: 'C:\\Users\\*redacted*\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python38\\site-packages\\jedi\\third_party\\django-stubs\\django-stubs\\contrib\\contenttypes\\management\\commands\\remove_stale_contenttypes.pyi'
```

I located the `commands` folder and the file `remove_stale_contenttypes.pyi` was not present. I did a file search of my whole PC and the file was not found in any other location. I have never used Python, pip, or Jupyter before; I am attempting to install them in preparation for a class.
2020/08/25
[ "https://Stackoverflow.com/questions/63587626", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14166558/" ]
Try uninstalling virtualenv or pipenv (whichever you are using) and then reinstalling. If this doesn't work, try installing conda. There are two versions of it:

* Anaconda
* Miniconda

I would recommend going with Miniconda as it is a lightweight installation, though it does not have a GUI. Here is the [link](https://docs.conda.io/en/latest/miniconda.html) to install it.

Afterwards, to create a virtual environment: go to the conda terminal or cmd and type in `conda create --name myenv` (changing the name of the env to whatever you like). This should create your environment. To activate it, type in `conda activate name` (name is again what you put above).

That's it — you have now created a conda env. Whenever you want to access this environment again, use the activate command.

As for installing Jupyter Notebook, first activate your env and then run this:

```
conda install -c conda-forge notebook
```

This should install Jupyter Notebook in that environment. To access it again later, always activate the environment and then type in `jupyter notebook`.

If this seems a bit too much for you: after a successful install you should actually have a program named Jupyter Notebook (env name) on your computer. Just click on that and it will handle everything for you. Please let me know if you have trouble doing this.
The easiest way to set up Jupyter Notebook is using pip, if you do not require conda. Since you are new to Python, first create a new virtual environment using virtualenv.

* Installing pip (skip if already installed): download get-pip.py for Windows and run `python get-pip.py`
* Installing virtualenv: `pip install virtualenv`
* Creating a new virtual environment: `virtualenv your_env_name`
* Activating the virtualenv: `your_env_name\Scripts\activate`
* Installing Jupyter Notebook: `pip install notebook`

You can launch the notebook server using: `jupyter notebook`
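If you'd rather avoid the extra virtualenv dependency, recent Pythons ship the equivalent as the built-in `venv` module, which can also be driven from code. A small sketch (the directory name is arbitrary):

```python
import tempfile
import venv
from pathlib import Path

# create the environment in a throwaway directory
env_dir = Path(tempfile.mkdtemp()) / "myenv"
venv.create(env_dir, with_pip=False)  # with_pip=False just keeps the sketch fast

# a valid environment always contains a pyvenv.cfg marker file
print((env_dir / "pyvenv.cfg").exists())  # True
```

On Windows the activation script would then be `myenv\Scripts\activate`, matching the steps above; with `with_pip=True` (the slower default setup), `pip install notebook` works inside it just as with virtualenv.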
9,720