Columns: qid (int64), question (string), date (string), metadata (list), response_j (string), response_k (string), __index_level_0__ (int64)
56,331,413
I am wondering how I can save whatever I added to a list when I close a python file. For example, in this "my contact" program that I wrote below, if I add information about 'Jane Doe', what could I do so that next time I open up the same file, Jane Doe still exists. ``` def main(): myBook = Book([{"name": 'John Doe', "phone": '123-456-7890', "address": '1000 Constitution Ave'}]) class Book: def __init__(self, peoples): self.peoples = peoples self.main_menu() def main_menu(self): print('Main Menu') print('1. Display Contact Names') print('2. Search For Contacts') print('3. Edit Contact') print('4. New Contact') print('5. Remove Contact') print('6. Exit') self.selection = input('Enter a # form the menu: ') if (self.selection == "1"): self.display_names() if (self.selection == "2"): self.search() if (self.selection == "3"): self.edit() if (self.selection == "4"): self.new() if (self.selection == "5"): self.delete() if (self.selection == "6"): self.end() def display_names(self): for people in self.peoples: print("Name: " + people["name"]) self.main_menu() def search(self): searchname = input('What is the name of your contact: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): print("Name: " + self.peoples[index]["name"]) print("Address: " + self.peoples[index]["address"]) print("Phone: " + self.peoples[index]["phone"]) self.main_menu() def edit(self): searchname = input('What is the name of the contact that you want to edit: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, "address": address}) self.main_menu() def new(self): name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, "address": address}) self.main_menu() def delete(self): searchname = input('What is the name of the contact that you want to delete: ') for index in reversed(range(len(self.peoples))): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) print(searchname, 'has been removed') self.main_menu() def end(self): print('Thank you for using the contact book, have a nice day') print('Copyright Carson147 2019©, All Rights Reserved') main() ```
2019/05/27
[ "https://Stackoverflow.com/questions/56331413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11563937/" ]
Use a module from the [Data Persistence](https://docs.python.org/3/library/persistence.html) section of the standard library, or save it as [json](https://docs.python.org/3/library/json.html), or as a [csv file](https://docs.python.org/3/library/csv.html).
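For the contact-book program in the question, a minimal json-based sketch could look like this (the `contacts.json` file name is just an assumption for illustration):

```
import json
import os

CONTACTS_FILE = 'contacts.json'  # hypothetical file name for the saved data

def load_contacts():
    # Return the saved list of contacts, or the default entry on first run
    if os.path.exists(CONTACTS_FILE):
        with open(CONTACTS_FILE) as f:
            return json.load(f)
    return [{"name": "John Doe", "phone": "123-456-7890",
             "address": "1000 Constitution Ave"}]

def save_contacts(peoples):
    # Write the current list back to disk so it survives the next run
    with open(CONTACTS_FILE, 'w') as f:
        json.dump(peoples, f, indent=2)
```

Calling `save_contacts(self.peoples)` from `end()` (or after every change) and building the `Book` from `load_contacts()` in `main()` would make Jane Doe persist between runs.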
You can just convert your list to a NumPy array and save it:

```
import numpy as np

np.save('path/to/save', np.array(your_list))
```

To load it back:

```
arr = np.load('path/to/save.npy').tolist()
```

I hope this helps.
14,974
73,542,262
I've created 3 files, `snek.py`, `requirements.txt` and `runsnek.py`. `runsnek.py` installs all the required modules in `requirements.txt` with pip and runs `snek.py`. Everything works fine on Windows 10, but when trying to run on Ubuntu (WSL2), an error is thrown: ``` ❯ python runsnek.py Requirement already up-to-date: pathlib in /home/rootuser/.local/lib/python3.8/site-packages (from -r requirements.txt (line 2)) (1.0.1) Traceback (most recent call last): File "snek.py", line 1, in <module> from pathlib import Path ImportError: No module named pathlib ``` I'm not sure what could've caused the problem on Linux. It might be some kind of pip modules path that isn't defined. `printenv` does not show anything containing the word python. files ===== Here are all of the mentioned files. `runsnek.py`: ``` import os, platform os.system('pip install --upgrade -r requirements.txt') if platform.system() == 'Windows': os.system('py snek.py') elif '': raise Warning('snek could not be ran, try running snek.py instead') else: os.system('python snek.py') ``` `requirements.txt`: ``` # pip reqs pathlib ``` `snek.py`: ``` from pathlib import Path cwd = Path('.') # [...] ```
2022/08/30
[ "https://Stackoverflow.com/questions/73542262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17747971/" ]
It seems you are using Python 2 in your WSL2 instance. In the line `os.system('python snek.py')`, the `python` command runs Python 2 instead of Python 3. To correct the problem, change this line to `os.system('python3 snek.py')`.
Your `run` file can be simplified: ``` import sys, os print('Running with ' + sys.executable) os.system(sys.executable + ' -m pip install --upgrade -r requirements.txt') os.system(sys.executable +' snek.py') ``` `sys.executable` always contains the path of the python interpreter running the current script. Using `python -m pip install` also ensures that the same python interpreter is used for pip installing, which solves your original problem
14,982
72,097,284
How can you set the desktop background to a solid color programmatically in python? The reason I want this is to make myself a utility which changes the background color depending on which of several virtual desktops I'm using.
2022/05/03
[ "https://Stackoverflow.com/questions/72097284", "https://Stackoverflow.com", "https://Stackoverflow.com/users/169774/" ]
With a little help of `row_number` analytic function: ``` SQL> ALTER TABLE dummy_test_table ADD batch_id VARCHAR2 (10); Table altered. SQL> UPDATE dummy_test_table a 2 SET a.batch_id = 3 (WITH 4 temp 5 AS 6 (SELECT seq_no, 7 ROW_NUMBER () OVER (ORDER BY seq_no) batch_id 8 FROM (SELECT DISTINCT seq_no 9 FROM dummy_test_table)) 10 SELECT LPAD (t.batch_id, 3, '0') 11 FROM temp t 12 WHERE t.seq_no = a.seq_no); 9 rows updated. ``` Result: ``` SQL> SELECT * 2 FROM dummy_test_table 3 ORDER BY seq_no, batch_id; SEQ_NO BATCH_ID ---------- ---------- 0000000957 001 0000000957 001 0000000957 001 0000000958 002 0000000958 002 0000000958 002 0000000959 003 0000000969 004 0000000969 004 9 rows selected. SQL> ```
One option is to use `DENSE_RANK()` analytic function within a MERGE DML statement such as ```sql MERGE INTO dummy_test_table d1 USING (SELECT seq_no, LPAD(DENSE_RANK() OVER(ORDER BY seq_no), 3, '0') AS dr FROM dummy_test_table) d2 ON (d1.rowid = d2.rowid) WHEN MATCHED THEN UPDATE SET d1.batch_id = dr ``` `[Demo](https://dbfiddle.uk/?rdbms=oracle_21&fiddle=f420bf79feb8379cfe2d11a5c3bfa4a0)` In my opinion, no need to add an extra column and populate it. Rather, you can use such a query or create a SQL-view(*and query that whenever needed*) : ```sql --CREATE OR REPLACE v_dts AS SELECT seq_no, LPAD(DENSE_RANK() OVER(ORDER BY seq_no), 3, '0') AS batch_id FROM dummy_test_table ```
14,983
24,027,579
I am working on a project where I have a client server model in python. I set up a server to monitor requests and send back data. PYZMQ supports: tcp, udp, pgm, epgm, inproc and ipc. I have been using tcp for interprocess communication, but have no idea what i should use for sending a request over the internet to a server. I simply need something to put in: ``` socket.bind(BIND_ADDRESS) ``` [DIAGRAM: Client Communicating over internet to server running a program](http://i.imgur.com/oqUaId4.png)
2014/06/04
[ "https://Stackoverflow.com/questions/24027579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2383037/" ]
Any particular reason you're not using `ipc` or `inproc` for interprocess communication? Other than that, generally, you can consider `tcp` the universal communicator; it's not always the best choice, but no matter what (so long as you actually have an IP address) it will work. Here's what you need to know when making a choice between transports:

1. PGM/EPGM are multicast transports - the idea is that you send one message and it gets delivered as a single message until the last possible moment where it will be broken up into multiple messages, one for each receiver. Unless you *absolutely know* you need this, you don't need this.
2. IPC/Inproc are for interprocess communication... if you're communicating between different threads in the same process, or different processes on the same logical host, then these might be appropriate. You get the benefit of a little less overhead. If you might ever add new logical hosts, this is probably not appropriate.
3. Russell Borogove enumerates the difference between TCP and UDP well. Typically you'll want to use TCP. Only if absolute speed is more important than reliability should you use UDP. It was always my understanding that UDP wasn't supported by ZMQ, so if it's there it's probably added by the pyzmq binding.

Also, I took a look at your diagram - you *probably* want the server ZMQ socket to `bind` and the client ZMQ socket to `connect`... there are some reasons why you might reverse this, but as a general rule the server is considered the "reliable" peer, and the client is the "transient" peer, and you want the "reliable" peer to bind and the "transient" peer to connect.
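As a concrete illustration of the bind/connect advice above (a minimal sketch; the port number and addresses are placeholders, not from the original post):

```
import zmq

ctx = zmq.Context()

# --- server process: binds, since it is the stable/"reliable" peer ---
server = ctx.socket(zmq.REP)
server.bind("tcp://*:5555")                 # listen on all interfaces, port 5555

# --- client process (normally a separate program on another host) ---
client = ctx.socket(zmq.REQ)
client.connect("tcp://203.0.113.10:5555")   # the server's public IP or hostname
```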
Over the internet, TCP or UDP are the usual choices. I don't know if pyzmq has its own delivery guarantees on top of the transport protocol. If it doesn't, TCP will guarantee in-order delivery of all messages, while UDP may drop messages if the network is congested. If you don't know what you want, TCP is the simplest and safest choice.
14,984
42,972,184
I am new to python. As part of my project, I am working with python2.7. I am dealing with multiple files in python. Here I am facing a problem to use a variable of particular function from another file which was I already imported in my current file. Please help me to achieve this. ``` file1.py class connect(): # Contains different definitions def output(): a = "Hello" data = // some operations return data file2.py from file1 import * # Here I need to access both 'a' and 'data' variable from output() ```
2017/03/23
[ "https://Stackoverflow.com/questions/42972184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7167331/" ]
So you have edited it quite a bit since I started writing about conventions so I have started again. First, your return statement is out of indentation, it should be indented into the output method. ``` def output(): a = "Hello" data = // some operations return data ``` Second, the convention in Python regarding class names is CamelCase, which means your class should be called "Connect". There is also no need to add the round brackets when your class doesn't inherit anything. Third, right now you can only use "data" since only data is returned. What you can do is return both a and data by replacing your return statement to this: ``` return a, data ``` Then in your second file, all you have to do is write `a_received, data_received = connect.output()` Full code example: file1.py ``` class Connect: def output(): a = "Hello" data = "abc" return a, data ``` file2.py ``` from file1 import Connect a_received, data_received = Connect.output() # Print results print(a_received) print(data_received) ``` Fourth, there are other ways to combat this, like create **instance variables** for example and then there is no need for return. file1.py ``` class Connect: def output(self): self.a = "Hello" self.data = "abc" ``` file2.py ``` from file1 import Connect connection = Connect() connection.output() print(connection.a) print(connection.data) ``` There is also the **class variable** version. file1.py ``` class Connect: def output(): Connect.a = "Hello" Connect.data = "abc" ``` file2.py ``` from file1 import Connect Connect.output() print(Connect.a) print(Connect.data) ``` Eventually, the "right" way to do it depends on the use.
One option you have is to return all the data you need from the function: file1.py

```
class connect():
    # Contains different definitions
    def output(self):
        a = "Hello"
        data = ...  # some operations
        return a, data  # Return all the variables as a tuple
```

file2.py

```
from file1 import connect

c = connect()
a, data = c.output()
# Now you have local variables 'a' and 'data' from output()
```
14,985
17,438,469
This python3.3 code on win 7, why I got error: ``` import random guesses_made = 0 name = raw_input('Hello! What is your name?\n') number = random.randint(1, 20) print "Well, {0}, I am thinking of a number between 1 and 20" # error here !!! **print "Well, {0}, I am thinking of a number between 1 and 20" ^ SyntaxError: invalid syntax** ``` Thanks !!!
2013/07/03
[ "https://Stackoverflow.com/questions/17438469", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2420472/" ]
Two things: In python 3, `raw_input()` [has been changed](http://docs.python.org/3.0/whatsnew/3.0.html#builtins) to `input()`. Also, [`print` is no longer a statement but a function](http://docs.python.org/3.0/whatsnew/3.0.html#print-is-a-function), so you must do: ``` print("Well, {0}, I am thinking of a number between 1 and 20") ```
I think that last line should read: ``` print("Well, {0}, I am thinking of a number between 1 and 20".format(name)) ``` This was tested. I am pretty new to p3.3, so go easy on me :)
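Putting the two answers together, the start of the program rewritten for Python 3 might look like this (a sketch):

```
import random

guesses_made = 0
name = input('Hello! What is your name?\n')   # raw_input() became input() in Python 3
number = random.randint(1, 20)
print("Well, {0}, I am thinking of a number between 1 and 20".format(name))  # print is a function
```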
14,986
5,823,163
I'm currently in the process of programming a server which can let clients interact with a piece of hardware. For the interested readers it's a device which monitors the wavelength of a set of lasers concurrently (and controls the lasers). The server should be able to broadcast the wavelengths (a list of floats) on a regular basis and let the clients change the settings of the device through dll calls. My initial idea was to write a custom protocol to handle the communication, but after thinking about how to handle TCP fragmentation and data encoding I bumped into Twisted, and it looks like most of the work is already done if I use perspective broker to share the data and call server methods directly from the clients. This solution might be a bit overkill, but for me it appeared obvious, what do you think? My main concern arose when I thought about the clients. Basically I need two types of clients, one which just displays the wavelengths (this should be straightforward) and a second which can change the device settings and get feedback when they're changed. My idea was to create a single client capable of both, but thinking about combining it with our previous system got me thinking... The second client should be controlled from an already rather complex Python framework which controls a lot of independent hardware with relatively strict timing requirements, and the settings of the wavelength meter should then be changed within this sequential code. Now the thing is, how do I mix this with the Twisted client? As I understand it, Twisted is not thread-safe, so I can't simply spawn a new thread running the reactor and then interact with it from my main thread, can I? Any suggestions for writing this server/client framework through different means than Twisted are very welcome! Thanks
2011/04/28
[ "https://Stackoverflow.com/questions/5823163", "https://Stackoverflow.com", "https://Stackoverflow.com/users/729885/" ]
You can start the reactor in a dedicated thread, and then issue calls to it with [`blockingCallFromThread`](http://twistedmatrix.com/documents/11.0.0/api/twisted.internet.threads.html#blockingCallFromThread) from your existing "sequential" code. Also, I'd recommend [AMP](http://twistedmatrix.com/documents/11.0.0/api/twisted.protocols.amp.html) for the protocol rather than PB, since AMP is more amenable to heterogeneous environments ([see amp-protocol.net for independent protocol information](http://amp-protocol.net/)), and it sounds like you have a substantial amount of other technology you might want to integrate with this system.
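A minimal sketch of that setup (the function names here are illustrative, not from the original post; `some_remote_call` stands in for whatever PB/AMP call returns a Deferred):

```
import threading
from twisted.internet import reactor, threads

def start_reactor():
    # Run the reactor in a background thread; signal handlers only work in
    # the main thread, so they must be disabled here.
    t = threading.Thread(target=reactor.run,
                         kwargs={'installSignalHandlers': False},
                         daemon=True)
    t.start()
    return t

# Later, from the sequential control code in the main thread, schedule work
# on the reactor thread and block until it completes:
# result = threads.blockingCallFromThread(reactor, some_remote_call, new_setting)
```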
Have you tried [zeromq](http://www.zeromq.org/)? It's a library that simplifies working with sockets. It can operate over TCP and implements several topologies, such as publisher/subscriber (for broadcasting data, such as your laser readings) and request/response (that you can use for you control scheme). There are bindings for several languages and the site is full of examples. Also, it's amazingly fast. Good stuff.
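For the broadcasting part specifically, a publisher/subscriber sketch with pyzmq could look roughly like this (the port, host name, and wavelength values are made-up placeholders):

```
import zmq

ctx = zmq.Context()

# Publisher (server side): broadcast the latest readings to anyone listening
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")                             # placeholder port
pub.send_json({"wavelengths": [780.241, 852.347]})   # example payload

# Subscriber (client side, separate process):
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://server-host:5556")                # placeholder host
sub.setsockopt_string(zmq.SUBSCRIBE, "")             # subscribe to everything
reading = sub.recv_json()
```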
14,987
11,274,290
I have made a `.deb` of my app using [fpm](https://github.com/jordansissel/fpm/wiki): ``` fpm -s dir -t deb -n myapp -v 9 -a all -x "*.git" -x "*.bak" -x "*.orig" \ --after-remove debian/postrm --after-install debian/postinst \ --description "Automated build." -d mysql-client -d python-virtualenv home ``` Among other things, the `postinst` script is supposed to create a user for the app: ``` #!/bin/sh set -e APP_NAME=myapp case "$1" in configure) virtualenv /home/$APP_NAME/local #supervisorctl start $APP_NAME ;; # http://www.debian.org/doc/manuals/securing-debian-howto/ch9.en.html#s-bpp-lower-privs install|upgrade) # If the package has default file it could be sourced, so that # the local admin can overwrite the defaults [ -f "/etc/default/$APP_NAME" ] && . /etc/default/$APP_NAME # Sane defaults: [ -z "$SERVER_HOME" ] && SERVER_HOME=/home/$APP_NAME [ -z "$SERVER_USER" ] && SERVER_USER=$APP_NAME [ -z "$SERVER_NAME" ] && SERVER_NAME="" [ -z "$SERVER_GROUP" ] && SERVER_GROUP=$APP_NAME # Groups that the user will be added to, if undefined, then none. ADDGROUP="" # create user to avoid running server as root # 1. create group if not existing if ! getent group | grep -q "^$SERVER_GROUP:" ; then echo -n "Adding group $SERVER_GROUP.." addgroup --quiet --system $SERVER_GROUP 2>/dev/null ||true echo "..done" fi # 2. create homedir if not existing test -d $SERVER_HOME || mkdir $SERVER_HOME # 3. create user if not existing if ! getent passwd | grep -q "^$SERVER_USER:"; then echo -n "Adding system user $SERVER_USER.." adduser --quiet \ --system \ --ingroup $SERVER_GROUP \ --no-create-home \ --disabled-password \ $SERVER_USER 2>/dev/null || true echo "..done" fi # … and a bunch of other stuff. ``` It seems like the `postinst` script is being called with `configure`, but not with `install`, and I am trying to understand why. In `/var/log/dpkg.log`, I see the lines I would expect: ``` 2012-06-30 13:28:36 configure myapp 9 9 2012-06-30 13:28:36 status unpacked myapp 9 2012-06-30 13:28:36 status half-configured myapp 9 2012-06-30 13:28:43 status installed myapp 9 ``` I checked that `/etc/default/myapp` does not exist. The file `/var/lib/dpkg/info/myapp.postinst` exists, and if I run it manually with `install` as the first parameter, it works as expected. Why is the `postinst` script not being run with `install`? What can I do to debug this further?
2012/06/30
[ "https://Stackoverflow.com/questions/11274290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17498/" ]
I think the example script you copied is simply wrong. `postinst` is not supposed to be called with any `install` or `upgrade` argument, ever. The authoritative definition of the dpkg format is the Debian Policy Manual. The current version describes `postinst` in [chapter 6](http://www.debian.org/doc/debian-policy/ch-maintainerscripts.html) and only lists `configure`, `abort-upgrade`, `abort-remove`, and `abort-deconfigure` as possible first arguments. I don't have complete confidence in my answer, because your bad example is still up on debian.org and it's hard to believe such a bug could slip through.
I believe the answer provided by Alan Curry is incorrect, at least as of 2015 and beyond. There must be some fault with the way the that your package is built or an error in the `postinst` file which is causing your problem. You can debug your install by adding the `-D` (debug) option to your command line i.e.: ``` sudo dpkg -D2 -i yourpackage_name_1.0.0_all.deb ``` `-D2` should sort out this type of issue for the record the debug levels are as follows: ``` Number Description 1 Generally helpful progress information 2 Invocation and status of maintainer scripts 10 Output for each file processed 100 Lots of output for each file processed 20 Output for each configuration file 200 Lots of output for each configuration file 40 Dependencies and conflicts 400 Lots of dependencies/conflicts output 10000 Trigger activation and processing 20000 Lots of output regarding triggers 40000 Silly amounts of output regarding triggers 1000 Lots of drivel about e.g. the dpkg/info dir 2000 Insane amounts of drivel ``` The `install` command calls the `configure` option and in my experience the `postinst` script will always be run. One thing that may trip you up is that the `postrm` script of the "old" version, if upgrading a package, will be run **after** your current packages `preinst` script, this can cause havoc if you don't realise what is going on. From the dpkg man page: Installation consists of the following steps: ``` 1. Extract the control files of the new package. 2. If another version of the same package was installed before the new installation, execute prerm script of the old package. 3. Run preinst script, if provided by the package. 4. Unpack the new files, and at the same time back up the old files, so that if something goes wrong, they can be restored. 5. If another version of the same package was installed before the new installation, execute the postrm script of the old pack‐ age. Note that this script is executed after the preinst script of the new package, because new files are written at the same time old files are removed. 6. Configure the package. Configuring consists of the following steps: 1. Unpack the conffiles, and at the same time back up the old conffiles, so that they can be restored if something goes wrong. 2. Run postinst script, if provided by the package. ```
14,988
39,103,057
Ok So ive been able to send mail and read mail but I am now trying to attach an attachment to the mail and it doesnt seem to append the document as expected. I dont get any errors but I also dont get the mail if I attempt to add the attachment. The library im using is [here](https://github.com/Narcolapser/python-o365) The returned value frome the function is `True` but an email never arrives if i remove the `m.attachments.append('/path/to/data.xls')` line the email arrives as expected (without an attachment of course). **Code** ``` def sendAddresses(username, password): try: authenticiation = (username, password) m = Message(auth=authenticiation) m.attachments.append('/path/to/data.xls') m.setRecipients("email@address.com") m.setSubject("Test Subject") m.setBody("Test Email") m.sendMessage() except Exception, e: print e return False return True ```
2016/08/23
[ "https://Stackoverflow.com/questions/39103057", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3449832/" ]
I've reproduced your attached command as a ``` public class MyBehavior : Behavior<ListBox> { ``` to a XAML ``` <ListBox SelectedItem="SelCust" Name="MyListBox" Loaded="MyListBox_Loaded" IsSynchronizedWithCurrentItem="True" DisplayMemberPath="Name" ItemsSource="{Binding Customers}"> <i:Interaction.Behaviors> <local:MyBehavior/> </i:Interaction.Behaviors> <i:Interaction.Triggers> <i:EventTrigger EventName="Loaded"> <i:InvokeCommandAction Command="{Binding Path=LoadCommand}"/> </i:EventTrigger> </i:Interaction.Triggers> </ListBox> ``` where I've also added a binding for the Load event to the ViewModel ``` public CustomerViewModel() { IList<Customer> customers = Customer.GetCustomers().ToList(); _customerView = CollectionViewSource.GetDefaultView(customers); _customerView.MoveCurrentToLast(); _customerView.CurrentChanged += CustomerSelectionChanged; } private void CustomerSelectionChanged(object sender, EventArgs e) { // React to the changed selection Debug.WriteLine("Here"); var sel = (sender as CollectionView).CurrentItem; if ( sel!= null) { //Do Something } } private DelegateCommand loadCommand; public ICommand LoadCommand { get { if (loadCommand == null) { loadCommand = new DelegateCommand(VMLoad); } return (ICommand)loadCommand; } } bool IsLoaded = false; private void VMLoad(object obj) { IsLoaded = true; } ``` and in the code-behind ``` public MainWindow() { InitializeComponent(); DataContext = new CustomerViewModel(); } private void MyListBox_Loaded(object sender, RoutedEventArgs e) { MyListBox.ScrollIntoView(MyListBox.Items[MyListBox.Items.Count - 1]); } ``` When I debug it, I see that the this is the sequence of events fired: 1. `CurrentChanged` wiht the last item of the collection 2. `Loaded` handler **in the code-behind** 3. `LoadCommand` in the ViewModel and only **after** that 4. `ScrollIntoView` from the `AssociatedObject_SelectionChanged` So basically I'm suggesting a couple of things: * add (another) `ScrollIntoView` (for the last item of the collection) from the `Loaded` handler in the code-behind * for whatever action you need to perform *when you have to detect if some items are already visible to user*, first check `IsLoaded` to exclude any transient effect
Why not simply scroll to the last value in your collection after you set the DataContext or ItemSource? No data will render until you set your data context, and until you exit the constructor. To my understanding if you do the following to steps in sequence in the constructor, it should work as expected. ```cs listBox.DataContext = _items; listBox.ScrollIntoView(_items.Last()); ```
14,991
8,029,363
I am new to Django. I have version 1.3.1 installed. I have created two projects, **projectone** and **projecttwo**, using django-admin.py. In **projectone** I have an app called **blog** created using python manage.py startapp. In **projecttwo**'s settings.py file, when I put the following in INSTALLED_APPS:

```
INSTALLED_APPS = (
    # other code goes here...
    'projectone.blog'
)
```

and then run projecttwo using manage.py, I get:

```
Error: No module named projectone.blog
```

I have `__init__.py` files placed correctly. I just cannot figure out why. Maybe because projectone is not on the python path? Is that what django-admin.py does, and it's not doing it on mine for some reason? I am not sure.
2011/11/06
[ "https://Stackoverflow.com/questions/8029363", "https://Stackoverflow.com", "https://Stackoverflow.com/users/614954/" ]
Look at what manage.py does: <https://docs.djangoproject.com/en/dev/ref/django-admin/#django-admin-py-and-manage-py> It dynamically adds your apps to the python path when you use it - i.e. when you are using **runserver** during development. You have two separate projects, so when you run either one you will only have the apps from that particular project on the python path. To use an app from one project 'outside' of it, you need to manually add that app to your global python path, as sketched below.
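A minimal sketch of that (the `/home/you/projects` path is an assumption; use wherever projectone actually lives):

```
# projecttwo/settings.py
import sys
sys.path.insert(0, '/home/you/projects')   # the directory that contains projectone/

INSTALLED_APPS = (
    # ...
    'projectone.blog',
)
```

Exporting `PYTHONPATH=/home/you/projects` before running `manage.py` achieves the same thing without editing settings.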
You are trying to install a **Project** in your INSTALLED_APPS in settings.py, but those are different projects. Instead you need to create just one project and create different apps inside it. Remember that apps are meant to be reusable, so if you need the blog app in another project, just reuse it. If you are new to Django you should read the Tutorial in the [documentation](https://docs.djangoproject.com/en/1.3/).
14,992
7,459,766
I got the below failure while trying to get MySQL-python installed on my Ubuntu/Linux Box.From the below it seem like the issue is `sh: mysql_config: not found` Could someone advice me on what to do? ``` rmicro@ubuntu:~$ pip install MySQL-python Downloading/unpacking MySQL-python Downloading MySQL-python-1.2.3.tar.gz (70Kb): 70Kb downloaded Running setup.py egg_info for package MySQL-python sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found Complete output from command python setup.py egg_info: sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found ---------------------------------------- Command python setup.py egg_info failed with error code 1 ```
2011/09/18
[ "https://Stackoverflow.com/questions/7459766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/618677/" ]
On Ubuntu it is advised to use the distributions repository. So installing python-mysqldb should be straight forward: ``` sudo apt-get install python-mysqldb ``` If you actually want to use pip to install, which is as mentioned before not the suggested path but possible, please have a look at this previously asked question and answer: [pip install mysql-python fails with EnvironmentError: mysql\_config not found](https://stackoverflow.com/questions/5178292/pip-install-mysql-python-show-error) Here is a very comprehensive guide by the developer: <http://mysql-python.blogspot.no/2012/11/is-mysqldb-hard-to-install.html> To get all the prerequisites for python-mysqld to install it using pip (which you will want to do if you are using virtualenv), run this: ``` sudo apt-get install build-essential python-dev libmysqlclient-dev ```
In python3 with virtualenv on a Ubuntu Bionic machine the following commands worked for me: ``` sudo apt install build-essential python-dev libmysqlclient-dev sudo apt-get install libssl-dev pip install mysqlclient ```
14,993
21,979,134
If I do `print(1 >> 2)` I get 0. If I do `print(2 << 1)` I get 4. If I do `print(9 << 3)` I get 72 If I do `print(3 >> 9)` I get 0 What do `>>` and `<<` do in python?
2014/02/24
[ "https://Stackoverflow.com/questions/21979134", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2840324/" ]
They are bitwise shift operators. For example, 2 has the binary equivalent 00000010, so 2 << 1 is 00000010 shifted left 1 time. This yields 00000100, which is 4. 1 >> 2 is 00000001 shifted right 2 times which is 00000000 (the 1 falls off the end after the first shift though, so 1>>1 is also 0), obviously that is 0.
Bitwise shift left and bitwise shift right. They're roughly equivalent to doubling (`<<`) or halving (`>>`) just like decimal shift left is roughly equivalent to multiplying by 10 and decimal shift right is roughly equivalent to dividing by 10.
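A quick interactive check of what both answers describe:

```
>>> 2 << 1        # 0b10 shifted left one bit -> 0b100
4
>>> 9 << 3        # equivalent to 9 * 2**3
72
>>> 1 >> 2        # the single set bit falls off the end
0
>>> 13 >> 1       # equivalent to 13 // 2
6
```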
15,003
1,529,002
So, I was playing around with Python while answering [this question](https://stackoverflow.com/questions/1528932/how-to-create-inline-objects-with-properties-in-python/), and I discovered that this is not valid: ``` o = object() o.attr = 'hello' ``` due to an `AttributeError: 'object' object has no attribute 'attr'`. However, with any class inherited from object, it is valid: ``` class Sub(object): pass s = Sub() s.attr = 'hello' ``` Printing `s.attr` displays 'hello' as expected. Why is this the case? What in the Python language specification specifies that you can't assign attributes to vanilla objects? --- For other workarounds, see [How can I create an object and add attributes to it?](https://stackoverflow.com/questions/2827623).
2009/10/07
[ "https://Stackoverflow.com/questions/1529002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14902/" ]
To support arbitrary attribute assignment, an object needs a `__dict__`: a dict associated with the object, where arbitrary attributes can be stored. Otherwise, there's nowhere to *put* new attributes. An instance of `object` does **not** carry around a `__dict__` -- if it did, before the horrible circular dependence problem (since `dict`, like most everything else, inherits from `object`;-), this would saddle *every* object in Python with a dict, which would mean an overhead of *many* bytes per object that currently doesn't have or need a dict (essentially, all objects that don't have arbitrarily assignable attributes don't have or need a dict). For example, using the excellent `pympler` project (you can get it via svn from [here](http://code.google.com/p/pympler/source/checkout)), we can do some measurements...: ``` >>> from pympler import asizeof >>> asizeof.asizeof({}) 144 >>> asizeof.asizeof(23) 16 ``` You wouldn't want every `int` to take up 144 bytes instead of just 16, right?-) Now, when you make a class (inheriting from whatever), things change...: ``` >>> class dint(int): pass ... >>> asizeof.asizeof(dint(23)) 184 ``` ...the `__dict__` *is* now added (plus, a little more overhead) -- so a `dint` instance can have arbitrary attributes, but you pay quite a space cost for that flexibility. So what if you wanted `int`s with just *one* extra attribute `foobar`...? It's a rare need, but Python does offer a special mechanism for the purpose... ``` >>> class fint(int): ... __slots__ = 'foobar', ... def __init__(self, x): self.foobar=x+100 ... >>> asizeof.asizeof(fint(23)) 80 ``` ...not *quite* as tiny as an `int`, mind you! (or even the two `int`s, one the `self` and one the `self.foobar` -- the second one can be reassigned), but surely much better than a `dint`. When the class has the `__slots__` special attribute (a sequence of strings), then the `class` statement (more precisely, the default metaclass, `type`) does **not** equip every instance of that class with a `__dict__` (and therefore the ability to have arbitrary attributes), just a finite, rigid set of "slots" (basically places which can each hold one reference to some object) with the given names. In exchange for the lost flexibility, you gain a lot of bytes per instance (probably meaningful only if you have zillions of instances gallivanting around, but, there *are* use cases for that).
It is simply due to optimization. Dicts are relatively large. ``` >>> import sys >>> sys.getsizeof((lambda:1).__dict__) 140 ``` Most (maybe all) classes that are defined in C do not have a dict for optimization. If you look at the [source code](http://svn.python.org/view/python/trunk/Objects/object.c?revision=74457&view=markup) you will see that there are many checks to see if the object has a dict or not.
15,004
68,968,534
In Python 2, there is a comparison function. > > A comparison function is any callable that accept two arguments, compares them, and returns a negative number for less-than, zero for equality, or a positive number for greater-than. > > > In Python 3, the comparison function is replaced with a key function. > > A key function is a callable that accepts one argument and returns another value to be used as the sort key. > > > Now, I've a list of `tuple[int, int, str]` that I want to sort, and the string can be `S` or `E`. There are some tie breaker rules that use values from two tuples. Given a tuple x: int, y: int, z: str, the rules are as follows: 1. If x are equal, and both z = S, larger y comes first. 2. If x are equal, and both z = E, smaller y comes first. 3. If x are equal, and one z = E another z = S, S record comes first. A Python 2 style implementation is as follows: ``` def _cmp(coord1: tuple[int, int, str], coord2: tuple[int, int, str]) -> int: x1, y1, type1 = coord1 x2, y2, type2 = coord2 if x1 != x2: return x1 - x2 if type1 == type2: return y2 - y1 if type1 == "S" else y1 - y2 return -1 if type1 == "S" else 1 ``` Expected output: ``` [(0, 3, "S"), (0, 2, "S"), (1, 2, "E"), (2, 3, "E")], [(3, 3, "S"), (4, 2, "S"), (5, 2, "E"), (5, 3, "E")], [(6, 2, "S"), (7, 3, "S"), (7, 2, "E"), (8, 3, "E")] ``` I'm aware that a comparison function can be converted to a key function using [functools.cmp\_to\_key](https://docs.python.org/3/library/functools.html#functools.cmp_to_key); however, I'm wondering if there's a way to implement it directly as a key function since the docs say: > > This function is primarily used as a transition tool for programs being converted from Python 2 which supported the use of comparison functions. > > >
2021/08/28
[ "https://Stackoverflow.com/questions/68968534", "https://Stackoverflow.com", "https://Stackoverflow.com/users/839733/" ]
Consider how tuples *normally* compare: element by element, going to the next element when the current values are equal (sometimes called *lexicographic order*). Our required comparison algorithm, rewritten in steps that match that general approach, is: * First, we want to compare the `x` values, putting them in ascending order. * Then we want to compare the `z` values; we want tuples with an S to come first. This is the opposite of what normally happens, and we can't easily specify reverse order for only part of the key, and we can't negate a string value. However, since only S and E are possible, we can simply map S to 0 and E to 1. Or more simply, S can map to False and E to True, since those are *numerically* equal to 0 and 1 respectively. * Finally, if the `z` values were equal, we want to compare the `y` values - in normal order if we have a E, and in reverse order (so, negate the numeric values) if we have a S. So, we create a key that corresponds to that rule: * The first element is `x`. * The second element is our translation of the original `z` value. * The third element is the `y` value, possibly negated depending on `z`. In code: ``` def fancy_key(xyz: tuple[int, int, str]) -> tuple[int, bool, int]: x, y, z = xyz is_e = z == 'E' return (x, is_e, (y if is_e else -y)) ```
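A short usage check against the first group of tuples from the expected output in the question (the shuffled input order here is just an illustration):

```
data = [(1, 2, "E"), (0, 2, "S"), (2, 3, "E"), (0, 3, "S")]
print(sorted(data, key=fancy_key))
# [(0, 3, 'S'), (0, 2, 'S'), (1, 2, 'E'), (2, 3, 'E')]
```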
Since the original version of this was rec'd for deletion as supposedly not actually answering the question due to it... being too long, I guess?, here's a shorter version of the exact same answer that gives less insight but uses fewer words. (Yes, it was too long. It also answered the question. "omg tl;dr" shouldn't be sufficient cause to claim that it didn't.) The accepted answer uses the entire original tuple as the key, and simply reorders it so that the built-in sort can handle it correctly. It is, in my opinion, the best solution, as even though that's not really a "key" by any meaningful typical definition of a "key" (and it certainly doesn't give the efficiency benefits of a key-based sort) the efficiency of the sort itself is probably as close to the original compare-based sort as you can get it. *Additionally*, there is another way to solve the problem of converting the comparison function to a key function, which is the way `cmp_to_key` solves it (as the OP alluded to). That is to use the key function to instead define a class with a single sort item (here, your tuple) as the sole member variable and define the class's `__lt__()` built-in (and ideally the other comparison built-ins as well) by having them call the comparison function with self as the first parameter. To be exceedingly clear about this: that is a *second solution* to the problem. I don't know that it can be made any clearer that the above approach -- the approach that a Python standard library function itself uses -- does indeed solve the problem stated by the OP. Hopefully the review agrees, I suppose. How and why does that work, and what are the pros and cons of going about it that way instead? Well, I'd explain, but it has unfortunately been deemed outside of the scope of the question to do so by the deletion review, which suggested asking a separate question to get more detail on how it works. As the OP seems more than capable and can likely figure it out themselves, I'll just leave that part out and make the reviewers happy. I've learned my lesson from how they treated the version of the question the OP posted yesterday. (Ironically, lengthening the answer by including a code example that would basically be just a class definition and would only be useful to those that don't know how to define one already would seem to be unobjectionable, perhaps even preferable, to reviewers -- for those that need that, I'm sure there are already asked-and-answered questions elsewhere on SO regarding how to define and instantiate a class.) Now that it is hopefully clear that an additional answer to the question was provided, I'll also note firstly that I still think the currently accepted answer is better for this specific case, and secondly that the `cmp_to_key` approach doesn't really create a traditional "key function" by any commonly accepted plain-face meaning of the term, and there likely really isn't any simple way to programmatically construct a "real" key function from a comparison function in the general case as is implied by the portion of the `cmp_to_key` docs quoted by the OP in the question itself (hopefully noting that it was specifically stated in the question is sufficient indication to reviewers that it is a relevant observation, despite it being seemingly insufficient the first time around).
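For completeness, the wrapper-class approach described above (essentially what `functools.cmp_to_key` generates) might be sketched like this, reusing the `_cmp` function from the question:

```
class CmpKey:
    """Wrap one item and delegate ordering to the old-style comparison function."""
    def __init__(self, item):
        self.item = item

    def __lt__(self, other):
        return _cmp(self.item, other.item) < 0

    def __eq__(self, other):
        return _cmp(self.item, other.item) == 0

# Usage: sorted(data, key=CmpKey)
```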
15,014
62,633,601
I want to re-implement a certain API client, which is written in Python, in JavaScript. I fail to replicate the HMAC SHA256 signing function. For some keys the output is identical, but for some it is different. It appears that the output is the same when the key consists of printable characters after decoding its Base64 representation. Python ------ ``` #!/usr/bin/env python3 import base64 import hashlib import hmac def sign_string(key_b64, to_sign): key = base64.b64decode(key_b64) signed_hmac_sha256 = hmac.HMAC(key, to_sign.encode(), hashlib.sha256) digest = signed_hmac_sha256.digest() return base64.b64encode(digest).decode() print(sign_string('VGhpcyBpcyBhIHByaW50YWJsZSBzdHJpbmcuCg==', "my message")) print(sign_string('dGhlIHdpbmQgb2YgTXQuIEZ1amkK', "my message")) print(sign_string('pkmNNJw3alrpIBi5t5Pxuym00M211oN86IhLZVT8', "my message")) ``` JavaScript ---------- ``` <script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/crypto-js.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/hmac-sha256.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/enc-base64.min.js"></script> <script> function sign_string(key_b64, to_sign) { key = atob(key_b64) var hash = CryptoJS.HmacSHA256(to_sign, key); var hashInBase64 = CryptoJS.enc.Base64.stringify(hash); document.write(hashInBase64 + '<br>'); } sign_string('VGhpcyBpcyBhIHByaW50YWJsZSBzdHJpbmcuCg==', "my message") sign_string('dGhlIHdpbmQgb2YgTXQuIEZ1amkK', "my message") sign_string('pkmNNJw3alrpIBi5t5Pxuym00M211oN86IhLZVT8', "my message") </script> ``` Outputs ------- **Python** ``` TdhfUQfym16HyWQ8wxQeNVvJKr/tp5rLKHYQSpURLpw= pQ5NzK3KIWjqc75AXBvWgLK8X0kZvjRHXrLAdxIN+Bk= 8srAvMucCd91CWI7DeCFjxJrEYllaaH63wmVlMk0W+I= ``` **JavaScript** ``` TdhfUQfym16HyWQ8wxQeNVvJKr/tp5rLKHYQSpURLpw= pQ5NzK3KIWjqc75AXBvWgLK8X0kZvjRHXrLAdxIN+Bk= 31QxOpifnpFUpx/sn336ZmmjkYbLlNrs8NP9om6nPeY= ``` As you can see the first two are the same, while the last is different. **How can I change the JavaScript code to behave the same as the python code?**
2020/06/29
[ "https://Stackoverflow.com/questions/62633601", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1447243/" ]
The base64 encoded secret you are trying to give to CryptoJs does not represent a valid UTF-8 string, which CryptoJS requires. You can use [this tool](https://onlineutf8tools.com/convert-hexadecimal-to-utf8) to check for validity. `atob()` is encoding agnostic and just converts byte by byte, and does not check if it's valid UTF-8. Here I did the decoding of the base64 secret with CryptoJS's own decoder and it throws an error saying it's invalid UTF-8: ``` <script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/crypto-js.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/hmac-sha256.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/enc-base64.min.js"></script> <script> function sign_string(key_b64, to_sign) { var key = CryptoJS.enc.Base64.parse(key_b64).toString(CryptoJS.enc.Utf8); var hash = CryptoJS.HmacSHA256(to_sign, key); var hashInBase64 = CryptoJS.enc.Base64.stringify(hash); document.write(hashInBase64 + '<br>'); } sign_string('VGhpcyBpcyBhIHByaW50YWJsZSBzdHJpbmcuCg==', "my message") sign_string('dGhlIHdpbmQgb2YgTXQuIEZ1amkK', "my message") sign_string('pkmNNJw3alrpIBi5t5Pxuym00M211oN86IhLZVT8', "my message") </script> ``` I also found a way you can use raw bytes for the key. This works for the last key but not for the first two. ``` var key = CryptoJS.enc.Hex.parse(toHex(atob(key_b64))); ``` Now if you combine these two methods you can have a real solution. This final code gives identical output as python: ``` <script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/crypto-js.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/hmac-sha256.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/enc-base64.min.js"></script> <script> function sign_string(key_b64, to_sign) { try { var key = CryptoJS.enc.Base64.parse(key_b64).toString(CryptoJS.enc.Utf8); } catch { var key = CryptoJS.enc.Hex.parse(toHex(atob(key_b64))); } var hash = CryptoJS.HmacSHA256(to_sign, key); var hashInBase64 = CryptoJS.enc.Base64.stringify(hash); document.write(hashInBase64 + '<br>'); } function toHex(str) { var result = ''; for (var i=0; i<str.length; i++) { if (str.charCodeAt(i).toString(16).length === 1) { result += '0' + str.charCodeAt(i).toString(16); } else { result += str.charCodeAt(i).toString(16); } } return result; } sign_string('VGhpcyBpcyBhIHByaW50YWJsZSBzdHJpbmcuCg==', "my message") sign_string('dGhlIHdpbmQgb2YgTXQuIEZ1amkK', "my message") sign_string('pkmNNJw3alrpIBi5t5Pxuym00M211oN86IhLZVT8', "my message") sign_string('xTsHZGfWUmnIpSu+TaVraECU88O3j9qVjlwTWGb/C8k=', "my message") </script> ```
In the third example you are using different parameters in the python and JavaScript versions.

In python:

```
sign_string('xTsHZGfWUmnIpSu+TaVraECU88O3j9qVjlwTWGb/C8k=', "my message")
```

In JavaScript:

```
sign_string('pkmNNJw3alrpIBi5t5Pxuym00M211oN86IhLZVT8', "my message")
```
15,017
43,216,256
I am trying to do some deep learning work. For this, I first installed all the packages for deep learning in my Python environment. Here is what I did. In Anaconda, I created an environment called `tensorflow` as follows ``` conda create -n tensorflow ``` Then installed the data science Python packages, like Pandas, NumPy, etc., inside it. I also installed TensorFlow and Keras there. Here is the list of packages in that environment ``` (tensorflow) SFOM00618927A:dl i854319$ conda list # packages in environment at /Users/i854319/anaconda/envs/tensorflow: # appdirs 1.4.3 <pip> appnope 0.1.0 py36_0 beautifulsoup4 4.5.3 py36_0 bleach 1.5.0 py36_0 cycler 0.10.0 py36_0 decorator 4.0.11 py36_0 entrypoints 0.2.2 py36_1 freetype 2.5.5 2 html5lib 0.999 py36_0 icu 54.1 0 ipykernel 4.5.2 py36_0 ipython 5.3.0 py36_0 ipython_genutils 0.2.0 py36_0 ipywidgets 6.0.0 py36_0 jinja2 2.9.5 py36_0 jsonschema 2.5.1 py36_0 jupyter 1.0.0 py36_3 jupyter_client 5.0.0 py36_0 jupyter_console 5.1.0 py36_0 jupyter_core 4.3.0 py36_0 Keras 2.0.2 <pip> libpng 1.6.27 0 markupsafe 0.23 py36_2 matplotlib 2.0.0 np112py36_0 mistune 0.7.4 py36_0 mkl 2017.0.1 0 nbconvert 5.1.1 py36_0 nbformat 4.3.0 py36_0 notebook 4.4.1 py36_0 numpy 1.12.1 <pip> numpy 1.12.1 py36_0 openssl 1.0.2k 1 packaging 16.8 <pip> pandas 0.19.2 np112py36_1 pandocfilters 1.4.1 py36_0 path.py 10.1 py36_0 pexpect 4.2.1 py36_0 pickleshare 0.7.4 py36_0 pip 9.0.1 py36_1 prompt_toolkit 1.0.13 py36_0 protobuf 3.2.0 <pip> ptyprocess 0.5.1 py36_0 pygments 2.2.0 py36_0 pyparsing 2.1.4 py36_0 pyparsing 2.2.0 <pip> pyqt 5.6.0 py36_2 python 3.6.1 0 python-dateutil 2.6.0 py36_0 pytz 2017.2 py36_0 PyYAML 3.12 <pip> pyzmq 16.0.2 py36_0 qt 5.6.2 0 qtconsole 4.3.0 py36_0 readline 6.2 2 scikit-learn 0.18.1 np112py36_1 scipy 0.19.0 np112py36_0 setuptools 34.3.3 <pip> setuptools 27.2.0 py36_0 simplegeneric 0.8.1 py36_1 sip 4.18 py36_0 six 1.10.0 <pip> six 1.10.0 py36_0 sqlite 3.13.0 0 tensorflow 1.0.1 <pip> terminado 0.6 py36_0 testpath 0.3 py36_0 Theano 0.9.0 <pip> tk 8.5.18 0 tornado 4.4.2 py36_0 traitlets 4.3.2 py36_0 wcwidth 0.1.7 py36_0 wheel 0.29.0 <pip> wheel 0.29.0 py36_0 widgetsnbextension 2.0.0 py36_0 xz 5.2.2 1 zlib 1.2.8 3 (tensorflow) SFOM00618927A:dl i854319$ ``` You can see that `jupyter` is also installed. Now, when I open up the Python interpreter in this environment and I run the basic TensorFlow command, it all works fine. However, I wanted to do the same thing in the Jupyter notebook. So, I created a new directory (outside of this environment). ``` mkdir dl ``` In that, I activated `tensorflow` environment ``` SFOM00618927A:dl i854319$ source activate tensorflow (tensorflow) SFOM00618927A:dl i854319$ conda list ``` And I can see the same list of packages in that. Now, I open up a Jupyter notebook ``` SFOM00618927A:dl i854319$ source activate tensorflow (tensorflow) SFOM00618927A:dl i854319$ jupyter notebook ``` It opens up a new notebook in the browser. But when I just import basic python libraries in that, like pandas, it says "no packages available". I am not sure why is that when the same environment has all those packages and in the same directory, if I use Python interpreter it shows all packages. ``` import pandas --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-4-d6ac987968b6> in <module>() ----> 1 import pandas ModuleNotFoundError: No module named 'pandas' ``` Why jupyter notebook is not picking up these modules? 
So, Jupyter notebook doesn't show env as the interpreter [![enter image description here](https://i.stack.imgur.com/whyaq.png)](https://i.stack.imgur.com/whyaq.png)
2017/04/04
[ "https://Stackoverflow.com/questions/43216256", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2769240/" ]
I have found a fairly simple way to do this. Initially, through your Anaconda Prompt, follow the steps on the official TensorFlow site - [here](https://www.tensorflow.org/install/install_windows). You have to follow the steps as-is, with no deviation. Then open Anaconda Navigator and go to the "Applications on ..." section. In its drop-down list, after following the steps above, you should see an entry for tensorflow. Select tensorflow and let the environment load. Then select Jupyter Notebook in this new context and install it, letting the installation finish. After that you can run Jupyter Notebook like a regular notebook, inside the tensorflow environment.
It is better to create a new environment with a new name ($newenv): `conda create -n $newenv tensorflow`. Then, using Anaconda Navigator under the Environments tab, you can find $newenv in the middle column. Click the play button to open a terminal and type: `activate $newenv`. Then install tensorflow inside the new environment by typing: `pip install tensorflow`. Now you have tensorflow inside the new environment, so install jupyter by typing: `pip install jupyter notebook`. Then simply type `jupyter notebook` to run the Jupyter Notebook. Inside the jupyter notebook type: `import tensorflow as tf`. To test tf you can use [THIS LINK](https://www.tensorflow.org/tutorials)
15,018
19,151,734
I have data in the following format: ``` user,item,rating 1,1,3 1,2,2 2,1,2 2,4,1 ``` and so on I want to convert this in matrix form So, the out put is like this ``` Item--> 1,2,3,4.... user 1 3,2,0,0.... 2 2,0,0,1 ``` ....and so on.. How do I do this in python? THanks
2013/10/03
[ "https://Stackoverflow.com/questions/19151734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/902885/" ]
```
import numpy as np

data = [
    (1, 1, 3),
    (1, 2, 2),
    (2, 1, 2),
    (2, 4, 1),
]

# import csv
# with open('data.csv') as f:
#     next(f)  # Skip header
#     data = [map(int, row) for row in csv.reader(f)]
#     # Python 3.x: map(int, row) -> tuple(map(int, row))

n = max(max(user, item) for user, item, rating in data)  # Get size of matrix
matrix = np.zeros((n, n), dtype=int)
for user, item, rating in data:
    matrix[user-1][item-1] = rating  # Convert to 0-based index.

for row in matrix:
    print(row)
```

prints

```
[3 2 0 0]
[2 0 0 1]
[0 0 0 0]
[0 0 0 0]
```
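As an alternative (not from the original answer), pandas can build the same kind of matrix with a pivot, which also handles the CSV header for you:

```
import pandas as pd

df = pd.read_csv('data.csv')   # columns: user, item, rating
matrix = (df.pivot(index='user', columns='item', values='rating')
            .reindex(columns=range(1, df['item'].max() + 1))  # include items with no ratings
            .fillna(0)
            .astype(int))
print(matrix)
```

This yields a users × items table indexed by the actual ids rather than a padded square matrix.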
A different approach from @falsetru's - are you reading from a file and writing to a file? It may be easier to work with a dictionary:

```
from collections import defaultdict

# infile / outfile are assumed to be already-open file objects
valdict = defaultdict(int)
nuser = 0
nitem = 0
for line in infile:
    user, item, rating = map(int, line.strip().split(","))
    valdict[user, item] = rating
    nuser = max(nuser, user)
    nitem = max(nitem, item)

# Header row: the item ids
towrite = ",".join(str(j) for j in range(1, nitem + 1)) + "\n"
# One row per user: user id followed by the rating for each item (0 if missing)
for i in range(1, nuser + 1):
    towrite += str(i)
    for j in range(1, nitem + 1):
        towrite += "," + str(valdict[i, j])
    towrite += "\n"

outfile.write(towrite)
```
15,028
74,165,151
Let's say I have the following python code:

```
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = plt.axes(projection='3d')
x = y = np.linspace(1, 10, 100)
X, Y = np.meshgrid(x, y)
Z = np.sin(X)**3 + np.cos(Y)**3
ax.plot_surface(X, Y, Z)
plt.show()
```

How do I calculate the gradient from this code and plot it? I am also confused about what the numpy.gradient() function exactly returns. Here is the graph of the function. [![enter image description here](https://i.stack.imgur.com/eAzOE.png)](https://i.stack.imgur.com/eAzOE.png)
2022/10/22
[ "https://Stackoverflow.com/questions/74165151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12548284/" ]
The gradient is a vector. It has 2 components (in this case, since we are dealing with a function ℝ²→ℝ, X,Y↦Z(X,Y)): one is ∂Z/∂X (also a function of X and Y), the other is ∂Z/∂Y. So, np.gradient returns both. `np.gradient(Z)`, called with a 100×100 array of Z, returns a list [∂Z/∂X, ∂Z/∂Y], both being also 100×100 arrays of values: a 100×100 array of ∂Z/∂X values, and a 100×100 array of ∂Z/∂Y values. As for how to plot it, it is up to you. How would you like to plot it? You could use the gradient to alter colors, for example. Or draw arrows.
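A quick sketch of what that looks like for the arrays from the question:

```
import numpy as np

x = y = np.linspace(1, 10, 100)
X, Y = np.meshgrid(x, y)
Z = np.sin(X)**3 + np.cos(Y)**3

d0, d1 = np.gradient(Z, x, y)     # derivatives along the two array axes
print(d0.shape, d1.shape)         # (100, 100) (100, 100)
```

Note that the two components come back in array-axis order, so it is worth checking which one corresponds to X and which to Y for your meshgrid before plotting.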
Here is a practical way to achieve it with Python (note that the original snippet never evaluated `Z` and `Zh`; the two assignment lines for them are added below so the code runs):

```
import numpy as np
import matplotlib.pyplot as plt

# Some scalar function of interest:
def z(x, y):
    return np.power(np.sin(x), 3) + np.power(np.cos(y), 3)

# Grid for gradient:
xmin, xmax = -7, 7
x = y = np.linspace(xmin, xmax, 100)
X, Y = np.meshgrid(x, y)
Z = z(X, Y)                 # evaluate the function on the coarse grid

# Compute gradient:
dZ = np.gradient(Z, x, y)

# Gradient magnitude (arrow colors):
M = np.hypot(*dZ)

# Grid for contour:
xh = yh = np.linspace(xmin, xmax, 400)
Xh, Yh = np.meshgrid(xh, yh)
Zh = z(Xh, Yh)              # evaluate the function on the fine grid

# Plotting gradient & contour:
fig, axe = plt.subplots(figsize=(12, 12))
axe.contour(Xh, Yh, Zh, 30, cmap="jet", linewidths=0.75)
axe.quiver(X, Y, dZ[1], dZ[0], M, cmap="jet",
           units='xy', pivot='tail', width=0.03, scale=5)
axe.set_aspect("equal")  # Don't stretch the scale
axe.grid()
```

It renders:

[![enter image description here](https://i.stack.imgur.com/Fg42y.jpg)](https://i.stack.imgur.com/Fg42y.jpg)

There are two visualizations of interest to see the gradient:

* Quiver, which renders the vector field; see [`plt.quiver`](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.quiver.html) and [advanced quiver](https://matplotlib.org/stable/gallery/images_contours_and_fields/quiver_demo.html#sphx-glr-gallery-images-contours-and-fields-quiver-demo-py) to tune arrows;
* Contour, which renders isopleth curves (levels); see [`plt.contour`](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.contour.html) to adapt levels.

As expected, the [gradient is orthogonal to contour](https://math.stackexchange.com/questions/1059293/proving-gradient-of-a-function-is-always-perpendicular-to-the-contour-lines) curves.
15,029
60,473,135
I am using python 3.7 on Spyder. Here is my simple code to store string elements ['a','b'] in a list L as sympy symbols. As output, I have new list L with two Symbols [a,b] in it. But when I try to use these symbols in my calculation I get an error saying a & b are not defined. Any suggestions on how can I fix this? Basically, what I want to do is use string elements in a list as symbols for sympy calculations. Any suggestions on other methods to do this are welcomed. Thank you. ``` import sympy as sm L=['a','b'] print(L) for j in range(len(L)): L[j] = sm.symbols(L[j]) B=sm.solve(a**2 - 1, a) print(B) ``` Here is the error: ``` runfile('C:/Users/bhise/.spyder-py3/temp.py', wdir='C:/Users/bhise/.spyder-py3') ['a', 'b'] Traceback (most recent call last): File "<ipython-input-43-6826047bb7df>", line 1, in <module> runfile('C:/Users/bhise/.spyder-py3/temp.py', wdir='C:/Users/bhise/.spyder-py3') File "C:\Users\bhise\Anaconda3\lib\site- packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile execfile(filename, namespace) File "C:\Users\bhise\Anaconda3\lib\site- packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/bhise/.spyder-py3/temp.py", line 10, in <module> B=sm.solve(a**2 - 1, a) NameError: name 'a' is not defined ```
2020/03/01
[ "https://Stackoverflow.com/questions/60473135", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11039234/" ]
Not knowing the second template argument to `std::array<>` means your `test` class should be templated as well.

```
#include <array>
#include <cstddef>

template <std::size_t N>
class test {
  void function(const std::array<int, N> & myarr) {
    // ...
  }
};
```

By the way, it's better to pass `myarr` as `const &`.
You could use an approach like: ``` #include<array> using namespace std; template <size_t N> class test { void function(const array<int, N> & myarr) { /* My code */ } }; ``` But keep in mind that `std::array` is not a dynamic array. You have to know the sizes at compile time. If you get to know the sizes later at runtime of your program you should consider using `std::vector` instead: ``` #include<vector> using namespace std; class test { void function(const vector<int> & myvec) { /* My code */ } }; ``` In that variant you don't need to pass the size at all.
15,030
59,410,455
I have an application with python, flask, and flask\_mysqldb. When I execute the first query, everything works fine, but the second query always throws an error (2006, server has gone away). Everything I found on the web says this error is a timeout issue, which doesn't seem to be my case because: 1 - I run the second query just a few seconds after running the first 2 - My timeout configuration is set to 8 hours I don't know what else this could be, here is the code that I am running: ``` import os from flask import Flask from flask import render_template from flaskext.mysql import MySQL import endpoints.usuario as usuario app = Flask(__name__, static_folder='/root/sftp/atom-projects/flask-example/public/') app.config['MYSQL_HOST'] = '123' app.config['MYSQL_USER'] = '123' app.config['MYSQL_PASSWORD'] = '123' app.config['MYSQL_DB'] = '123' app.add_url_rule('/usuarios', 'usuarios', usuario.list_all, methods=['GET']) @app.errorhandler(404) def not_found(e): return app.send_static_file('index.html') ``` here is the code for the usuarios file: ``` from flask_mysqldb import MySQL from flask import Flask, make_response from flask import current_app from flask import request import bcrypt def list_all(): mysql = MySQL(current_app) cursor = mysql.connection.cursor() cursor.execute("select * from usuario") records = cursor.fetchall() usuarios = [] for row in records: usuarios.append({"id":row[0], "nome":row[1], "email":row[2], "senha":row[3], "tipo":row[4]}) for usuario in usuarios: tipo = None cursor.execute("select * from tipo_usuario where id = %s", [usuario['tipo']]) records = cursor.fetchall() for row in records: usuario['tipo'] = {"id":row[0], "permissao":row[1]} return make_response({"msg":'', "error":False, "data":usuarios}) ``` I have this running on nginx + gunicorn, here is the log : ``` gunicorn -w 1 --reload main:app [2019-12-19 12:53:21 +0000] [5356] [INFO] Starting gunicorn 20.0.4 [2019-12-19 12:53:21 +0000] [5356] [INFO] Listening at: http://127.0.0.1:8000 (5356) [2019-12-19 12:53:21 +0000] [5356] [INFO] Using worker: sync [2019-12-19 12:53:21 +0000] [5359] [INFO] Booting worker with pid: 5359 [2019-12-19 12:53:28 +0000] [5359] [ERROR] Error handling request /usuarios Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/gunicorn/workers/sync.py", line 134, in handle self.handle_request(listener, req, client, addr) File "/usr/local/lib/python3.5/dist-packages/gunicorn/workers/sync.py", line 175, in handle_request respiter = self.wsgi(environ, resp.start_response) File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 2463, in __call__ return self.wsgi_app(environ, start_response) File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 2457, in wsgi_app ctx.auto_pop(error) File "/usr/local/lib/python3.5/dist-packages/flask/ctx.py", line 452, in auto_pop self.pop(exc) File "/usr/local/lib/python3.5/dist-packages/flask/ctx.py", line 438, in pop app_ctx.pop(exc) File "/usr/local/lib/python3.5/dist-packages/flask/ctx.py", line 238, in pop self.app.do_teardown_appcontext(exc) File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 2320, in do_teardown_appcontext func(exc) File "/usr/local/lib/python3.5/dist-packages/flask_mysqldb/__init__.py", line 100, in teardown ctx.mysql_db.close() MySQLdb._exceptions.OperationalError: (2006, '') ``` If I run it with more workers, I can run a few more (depending on how many workers) queries, what could be causing this?
2019/12/19
[ "https://Stackoverflow.com/questions/59410455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5844134/" ]
> 
> I've noticed that the same data stored inside integer had reversed byte order than when stored as char
> 
> 

This implies that the file was stored with different byte endianness than what the CPU uses. In the example output, you can see that the CPU uses little-endian (least significant byte first). Given that the order was the opposite in the file, we can deduce that the file uses big-endian (most significant byte first). Big-endian is commonly used in data exchange formats.

> 
> I would like to know why is that and eventualy how to change that.
> 
> 

POSIX has standard functions for converting big endian to native endianness (the `ntoh` family of functions). Standard C++ does not, but it is fairly straightforward to implement. However, there are some mistakes that are easy to make, so it will be safer to use an existing library.
As @Mat briefly explained, you're running into something called "endianness". There's "Big Endian", where the most significant bits are at the beginning?! (yes, it's a bit counter-intuitive), and "Little Endian", where the least significant bits are at the beginning. > > For example: Arabic numerals are big endian. "1234" is "one-thousand two hundred thirty four", not "four thousand three hundred twenty one". The most significant digits come first. > > > I'd be shocked to find that there aren't dozens of different open source functions handling this problem out there. A quick google search turned up: <https://www.boost.org/doc/libs/1_61_0/libs/endian/doc/index.html> This is caused by different CPU architectures. Some are big endian, some are little. There's almost certainly a list at Mat's linked wikipedia page. When they write out their bits to their own storage, they often write them "natively", in their own endian format. This could be a big problem when a server talks to clients using a variety of cpu types (every web-server ever, most cross-platform networked games, etc.). In those cases, the communication protocol must specify which endianness they're using and then the software must translate as needed. ### Edit the edit: "Endianness" should be called "startianness". Counter intuitive names are bad. "Principle of Least Surprise" good. Ah well. When it matters just use an existing library. POSIX has a collection of not-terribly-standardized-names for functions that do the work. There's the boost library I linked above. I've used proprietary libraries on a couple projects. I'm quite sure there are others out there as well, many open sourced.
15,031
26,266,437
I just installed python 2.7 and also pip to the 2.7 site package. When I get the version with: ``` pip -V ``` It shows: ``` pip 1.3.1 from /usr/lib/python2.6/site-packages (python 2.6) ``` How do I use the 2.7 version of pip located at: ``` /usr/local/lib/python2.7/site-packages ```
2014/10/08
[ "https://Stackoverflow.com/questions/26266437", "https://Stackoverflow.com", "https://Stackoverflow.com/users/84885/" ]
There should be a binary called "pip2.7" installed at some location included within your $PATH variable. You can find that out by typing ``` which pip2.7 ``` This should print something like '/usr/local/bin/pip2.7' to your stdout. If it does not print anything like this, it is not installed. In that case, install it by running ``` $ wget https://bootstrap.pypa.io/pip/2.7/get-pip.py $ sudo python2.7 get-pip.py ``` Now, you should be all set, and ``` which pip2.7 ``` should return the correct output.
An alternative is to call the `pip` module by using python2.7, as below: ``` python2.7 -m pip <commands> ``` For example, you could run `python2.7 -m pip install <package>` to install your favorite python modules. Here is a reference: <https://stackoverflow.com/a/50017310/4256346>. In case the pip module has not yet been installed for this version of python, you can run the following: ``` python2.7 -m ensurepip ``` Running this command will "bootstrap the pip installer". Note that running this may require administrative privileges (i.e. `sudo`). Here is a reference: <https://docs.python.org/2.7/library/ensurepip.html> and another reference <https://stackoverflow.com/a/46631019/4256346>.
15,032
64,882,005
I have a python list shown below. I want to remove all the elements after a specific character `''` Note1: The number of elements before `''` can vary. I am developing a generic code. Note2: There can be multiple `''` I want to remove after the first `''` Note3: Slice is not applicable because it supports only integers Can someone please help me with this? Thank you very much. ``` ['iter,objective,inf_pr,inf_du,lg(mu),||d||,lg(rg),alpha_du,alpha_pr,ls', '0,8.5770822e+000,1.35e-002,1.73e+001,-1.0,0.00e+000,-,0.00e+000,0.00e+000,0', '1,8.3762931e+000,1.29e-002,1.13e+001,-1.0,9.25e+000,-,9.86e-001,4.62e-002f,2', '5,8.0000031e+000,8.86e-010,1.45e-008,-5.7,1.88e-004,-,1.00e+000,1.00e+000h,1', '6,7.9999994e+000,1.28e-013,2.18e-012,-8.6,2.31e-006,-,1.00e+000,1.00e+000h,1', '', 'Number,of,Iterations....:,6', '', '(scaled),(unscaled)', 'Objective...............:,7.9999994450134029e+000,7.9999994450134029e+000', 'Dual,infeasibility......:,2.1781026770818554e-012,2.1781026770818554e-012', 'Constraint,violation....:,1.0658141036401503e-013,1.2789769243681803e-013', 'Complementarity.........:,2.5067022522763431e-009,2.5067022522763431e-009', 'Overall,NLP,error.......:,2.5067022522763431e-009,2.5067022522763431e-009', '', '', ```
2020/11/17
[ "https://Stackoverflow.com/questions/64882005", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14606112/" ]
``` list = ['a', 'b', 'c', '', 'd', 'e'] list = list[:list.index('')] #list is now ['a', 'b', 'c'] ``` Explanation: `list.index('')` finds the first instance of `''` in the list. `list[:x]` gives the first `x` elements of the list. This code will throw an exception if `''` is not in the list.
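If you'd rather not risk the exception (and avoid shadowing the built-in `list` name), a small sketch of a defensive variant:

```
values = ['a', 'b', 'c', '', 'd', 'e']
try:
    values = values[:values.index('')]
except ValueError:
    pass  # no '' in the list, keep it unchanged

#values is now ['a', 'b', 'c']
```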
You have a list and want to delete everything after a value that meets some sort of criteria. You can enumerate the list, searching for that value, and delete the remaining slice. `list.index` will tell you the index of the first value that exactly matches some object like `""`, so deleting from that index onward removes it and everything after it.

```
test = ["foo", "bar", "baz", "", "quux"]

try:
    del test[test.index(""):]
except ValueError:
    pass
```

If you have more complex criteria, you can do your own scan

```
test = ["foo", "bar", " % baz", "", "quux"]

for i, val in enumerate(test):
    if "%" in val:
        del test[i:]
        break
```

If this list is really a file, you could look for the value as you read and short-circuit.

```
test = []
with open("foo.txt") as f:
    for line in f:
        line = line.strip()
        if line == "":
            break
        test.append(line)
```
15,038
30,744,415
Like many before me I don´t succeed in installing a few Python packages (mysql, pycld2, etc.) on Windows. I have a Windows 8 machine, 64-bit, and Python 3.4. At first I got the well-known error "can´t find vcvarsall.bat - install VS C++ 10.0". This I tried to solve by installing MinGW and use that as compiler. This did not work. Then finally I found an installer for this VS C++ 10.0 here <http://microsoft-visual-cpp-express.soft32.com/free-download/>. This doesn´t work too good either. Now it seems to find the vcvarsall file but instead gives me a couple of new errors ``` nclude -IC:\Python34\include /Tc_mysql.c /Fobuild\temp.win32-3.4\Release\_mysql. obj /Zl_mysql.c_mysql.c(42) : fatal error C1083: Cannot open include file: 'config-win.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2 ``` And: ``` pycldmodule.cc bindings\pycldmodule.cc(16) : fatal error C1083: Cannot open include file: ' strings.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2 ``` So now it doesn´t find strings.h and config-win.h and I´m too new to these sorts of problems to know what to look for. Anyone knows what I should do? The thing is that I could just not use Windows and go over to Ubuntu as, for what I´ve understood, works painlessly with python. However, I have to use the win32com package which doesn´t exist on Ubuntu (have I understood that right?). If I can´t solve these installing hassles on Windows, would a solution be to use a Windows virtual machine for the win32com part and do the rest on a host Ubuntu (or the other way around)? Would there be anyway to communicate between the two in that case? I.e. sending strings or arrays of data.
2015/06/09
[ "https://Stackoverflow.com/questions/30744415", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2069136/" ]
I would recommend installing Ubuntu (as a Ubuntu user), you can dual-boot. However, that isn't an answer. MySQLClient (the fork for Python3) is available a precompiled binary from here: <http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient> Try to find precompiled binaries for simplicity sake. As far as troubleshooting the install goes, I've tried the recommend VC Studio 9.0 on fresh installs and it cannot find stdint.h (which, like yours, suggests it's more than broken).
I grew frustrated with trying to get python and other packages to compile/play nicely on Windows as well. Switching over to Ubuntu was a breath of fresh air, for sure. The win32com package is made specifically for Windows hosts, so that could not longer be used, but there are other ways to achieve the same thing in Ubuntu. Are you trying to target Windows specifically? What are you using win32com for?
15,040
32,330,838
I am new to Python. I have a code both in python 3.x & python 2.x (Actually, it is a library which has been written in 2.x). I am calling a function in python 2.x from python 3.x. The library return a HTTPResponse (from python 2.x). I am not able to parse the HTTPResponse in my code (In Python 3.x). **My request is :** ``` jsonData = {'string': post_message_data['Message']} url = "%s/testurl/" % (settings.STIX_API_URL) response = requests.post(url, jsonData) ``` **Processing request in Python 2.x** In python 2.x I am processing this request & sending back http response which is plain text reply parsed from email. ``` htmldata = request.body strdata = json.loads(htmldata) html = strdata['string'] reply = quotations.extract_from(html, 'text/html') reply = quotations.extract_from_html(html) return HttpResponse(json.dumps({'reply':reply}), mimetype='application/json') ``` Now my question is "how to get that json data in json format in the function called in 3.x" I have tried response.read() , response.readall() , response.content. Each time getting different errors.
2015/09/01
[ "https://Stackoverflow.com/questions/32330838", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5268513/" ]
This should work: ``` beans = { myBean(MyBeanImpl) { bean -> bean.scope = 'prototype' someProperty = 42 otherProperty = "blue" bookService = ref("bookService") } } ```
I agree with Jeff Scott Brown. How do you know it doesn't work? We're using Grails 2.3.9. I have this in my resources.groovy: ``` httpBuilderPool(HTTPBuilder) { bean -> bean.scope = 'prototype' // A new service is created every time it is injected into another class client = ref('httpClientPool') } ... ``` and this in a Spock Integration Test: ``` import grails.test.spock.IntegrationSpec import org.apache.http.impl.client.CloseableHttpClient import org.apache.log4j.Logger class JukinHttpBuilderSpec extends IntegrationSpec { private final Logger log = Logger.getLogger(getClass()) def httpBuilderPool def grailsApplication void "test jukinHttpBuilder instantiates"() { expect: httpBuilderPool httpBuilderPool.client instanceof CloseableHttpClient } void "test httpBuilderPool is prototype instaance"() { when: 'we call getBean on the application context' def someInstanceIds = (1..5).collect { grailsApplication.mainContext.getBean('httpBuilderPool').toString() }.toSet() log.info someInstanceIds.toString() then: 'we should get a new instance every access' someInstanceIds.size() == 5 } void "test injected httpBuilderPool is prototype instance"() { when: 'we access the injeted httpBuilderPool' def someInstanceIds = (1..5).collect { httpBuilderPool.toString() }.toSet() log.info someInstanceIds.toString() then: 'it uses the same instance every time' someInstanceIds.size() == 1 } } ``` which shows me it works in 2.3.9.
15,048
30,364,874
I'm trying to teach myself python and I'm quite new to parsing concepts. I'm trying to parse the output from my fire service pager, it seems to follow a consistent pattern as follows: ``` (UNIT1, UNIT2, UNIT3) 911-STRU (Box# 12345) aBusiness 12345 Street aTown (Xstr CrossStreet1/CrossStreet2) building fire, persons reported #F123456 ``` It seems that each section is separated by the use of () brackets the fields break down as follows ``` (Responded trucks) CallSource-JobClassification (Box number if available) Building Name, Building Address (Cross streets) Description of job #JobNumber ``` Scrap that, just got a call while writing this. If no box number is provided then that section is skipped entirely meaning that it goes straight to the address section, therefore I can't count on parsing using the brackets. So to the parsing experts out there, can I attack this with pyparsing or will I need a custom parser? Furthermore can I target specific sections with a parser so it doesn't matter what order they appear in, as is the case with the Box# being an optional field? My goal is to take this input, tidy it up with parsing and then send it via Twitter, SMS, email or all of the above. Many thanks in advance EDIT: I've got this 99% working using the following code ``` import re sInput = ('(UNIT123, UNIT1234) AMB-MED APPLE HEADQUARTERS 1 INFINITE LOOP CUPERTINO. (XStr DE ANZA BLVD/MARIANI AVE) .42YOM CARDIAC ARREST. #F9876543') #sInput = '(UNIT123, UNIT1234) ALARM-SPRNKLR (Alarm Type MANUAL/SMOKE) (Box 12345) APPLE HEADQUARTERS 1 INFINITE LOOP CUPERTINO. (XStr DE ANZA BLVD/MARIANI AVE) #F9876544' # Matches truck names using the consistent four uppercase letters followed by three - four numbers. pAppliances = re.findall(r'\w[A-Z]{3}\d[0-9]{2,3}', sInput) # Matches source and job type using the - as a guide, this section is always proceeded by the trucks on the job # therefore is always proceeded by a ) and a space. Allows between 3-9 characters either side of the - this is # to allow such variations as 911-RESC, FAA-AIRCRAFT etc. pJobSource = re.findall(r'\) ([A-Za-z1-9]{2,8}-[A-Za-z1-9]{2,8})', sInput) # Gets address by starting at (but ignoring) the job source e.g. -RESC and capturing everything until the next . period # the end of the address section always has a period. Uses ?; to ignore up to two sets of brackets that may appear in # the string for things such as box numbers or alarm types. pAddress = re.findall(r'-[A-Z1-9]{2,8} (.*?)\. \(', sInput) pAddressOptionTwo = re.findall(r'-[A-Z1-9]{2,8}(?: \(.*?\))(?: \(.*?\)) (.*?)\. \(', sInput) # Finds the specified cross streets as they are always within () brackets, each bracket has a space immediately # before or after and the work XStr is always present. pCrossStreet = re.findall(r' \((XStr.*?)\) ', sInput) # The job details / description is always contained between two . periods e.g. .42YOM CARDIAC ARREST. each period # has a space either immediately before or after. pJobDetails = re.findall(r' \.(.*?)\. ', sInput) # Job number is always in the format #F followed by seven digits. The # is always proceeded by a space. Allowed # between 1 and 8 digits for future proofing. 
pJobNumber = re.findall(r' (#F\d{0,7})', sInput) print pAppliances print pJobSource print pAddress print pCrossStreet print pJobDetails print pJobNumber ``` When run on the uncommented sInput string it returns the following ``` ['UNIT123', 'UNIT1234'] ['AMB-MED'] ['APPLE HEADQUARTERS 1 INFINITE LOOP CUPERTINO'] ['XStr DE ANZA BLVD/MARIANI AVE'] ['42YOM CARDIAC ARREST'] ['#F9876543'] ``` However when I run it on the commented sInput string I get the following ``` ['UNIT123', 'UNIT1234'] ['ALARM-SPRNKLR'] ['(Alarm Type MANUAL/SMOKE) (Box 12345) APPLE HEADQUARTERS 1 INFINITE LOOP CUPERTINO'] ['XStr DE ANZA BLVD/MARIANI AVE'] [] ['#F9876544'] ``` This is because two option bracket sets have been included in this message. I managed to correct this using the pAddressOptionTwo line however when the first string is then applied it returns no address at all as it didn't find the brackets. So the new refocused question is: How can I make an optional argument in the regex line. If there are brackets present ignore them and their contents and return the rest of the string OR if there are no brackets present continue as per normal.
2015/05/21
[ "https://Stackoverflow.com/questions/30364874", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4889516/" ]
I think your best/easiest option is to use [regular expressions](https://docs.python.org/2/library/re.html), defining a pattern that will match all or parts of your input string and extract the pieces that you're interested in. [PyParsing](http://pyparsing.wikispaces.com/) will probably work fine too. I have not used it myself but the first few examples looks like some kind of higher level wrapper around regex, although I would expect it differs in many aspects once you delve deeper into it. Another option is to define a [lexer](http://en.wikipedia.org/wiki/Lexical_analysis) and create a parser from it using [PLY](http://www.dabeaz.com/ply/). That would probably be overkill for your use case however, as it is aimed more at parsing programming language and natural language syntax.
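For the optional-brackets part of the refocused question, here is a rough regex sketch (the pattern and sample lines are only illustrative, not a drop-in for the full pager format):

```
import re

lines = [
    '(UNIT1, UNIT2, UNIT3) 911-STRU (Box# 12345) aBusiness 12345 Street aTown (Xstr CrossStreet1/CrossStreet2) building fire, persons reported #F123456',
    '(UNIT1, UNIT2, UNIT3) 911-STRU aBusiness 12345 Street aTown (Xstr CrossStreet1/CrossStreet2) building fire, persons reported #F123457',
]

# (?:\s*\([^)]*\))* skips zero or more "(...)" groups (Box number, alarm type, ...)
# so the address is captured whether or not those optional sections are present.
pattern = re.compile(r'-\w+(?:\s*\([^)]*\))*\s+(.*?)\s*\(Xstr')

for line in lines:
    m = pattern.search(line)
    if m:
        print(m.group(1))   # -> aBusiness 12345 Street aTown
```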
If you know pyparsing, then it might be easier to go with it. The `()` can always be treated as optional. Pyparsing will make certain things easier out of the box. If you are not so familiar with pyparsing, and your main goal is learning python, then hand craft your own parser in pure python. Nothing better at learning a new language than re-inventing some wheels :-)
15,049
21,331,730
I want to install PHP on the server, and I want to install it with a Python script. Can I include PHP (some version number) in the requirements.txt file and install it on the server? If not, then how can I install PHP on the server using a Python script?
2014/01/24
[ "https://Stackoverflow.com/questions/21331730", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1162512/" ]
You can get the counts by using ``` df.groupby([df.index.date, 'action']).count() ``` or you can plot directly using this method ``` df.groupby([df.index.date, 'action']).count().plot(kind='bar') ``` You could also just store the results to `count` and then plot it separately. This is assuming that your index is already in datetimeindex format, otherwise follow the directions of @mkln above.
Starting from ``` mydate col_name 0 2000-12-29 00:10:00 action1 1 2000-12-29 00:20:00 action2 2 2000-12-29 00:30:00 action2 3 2000-12-29 00:40:00 action1 4 2000-12-29 00:50:00 action1 5 2000-12-31 00:10:00 action1 6 2000-12-31 00:20:00 action2 7 2000-12-31 00:30:00 action2 ``` You can do ``` df['mydate'] = pd.to_datetime(df['mydate']) df = df.set_index('mydate') df['day'] = df.index.date counts = df.groupby(['day', 'col_name']).agg(len) ``` but perhaps there's an even more straightforward way. the above should work anyway. If you want to use counts as a DataFrame, I'd then transform it back ``` counts = pd.DataFrame(counts, columns=['count']) ```
15,050
40,821,733
I'm currently using vagrant and set it up to connect to my local computer's port 5000 and when I move to localhost:5000, the default ubuntu webpage appears to confirm that I'm connected. However, it tells me to manipulate the app using the index.html in there but I already have a whole Python flask app stored somewhere on github that I want to just git install and run using flask. My flask app works on vagrant because I've tested it out already. How do I change the pages localhost:5000 is displaying to that of my flask app? For reference, here's my flask app python code (html templates are in their own folders and didn't include): ``` import os from angular_flask import app def runserver(): port = int(os.environ.get('PORT', 5000)) app.run(host='0.0.0.0', port=port) if __name__ == '__main__': runserver() ``` I've also added this to my vagrantfile ``` config.vm.network "forwarded_port", guest: 80, host: 5000 ``` which allows me to see the pages on my localhost but I want to change the pages viewed to that of the same thing I set up here: <https://cs3319asst3.herokuapp.com/>
2016/11/26
[ "https://Stackoverflow.com/questions/40821733", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4297337/" ]
Create a static field inside the class, and increment it in the constructor. Something like this:

```
class A
{
public:
   A() : itemnumber(nextNum)
   {
      ++nextNum;
   }
private:
   int itemnumber;
   static int nextNum;
};

// in CPP file initialize it
int A::nextNum = 1;
```

Also, don't forget to increment the static field in copy and move constructors/operators.
with a static variable like

```
#include <iostream>
using namespace std;

class rect{
    public:
    static int num;
    rect(){num++;}
};
int rect::num = 0;
int main(){
    rect a;   // note: "rect a();" would declare a function, not construct an object
    cout << rect::num;
}
```
15,053
49,469,409
I am relatively new to programming. I'm trying to run the following: ``` from mpl_toolkits.basemap import Basemap import matplotlib.pyplot as plt import numpy as np my_map = Basemap(projection = 'ortho', lat_0=50, lon_0=-100, resolution = 'l', area_thresh=1000.0) my_map.drawcoastlines() my_map.drawcountries() my_map.fillcontinents(color='red') plt.show() ``` However, I get "AttributeError: 'AxesSubplot' object has no attribute 'get\_axis\_bgcolor'" I'm using python 3.6, matplotlib 2.2.0, basemap 1.0.7. They were downloaded using Anaconda. OS - Mac 10.12.4 How do I get rid of this error?
2018/03/24
[ "https://Stackoverflow.com/questions/49469409", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9545845/" ]
Matplotlib deprecated get\_axis\_bgcolor. You'll need to update basemap to version 1.1.0 to fix this error. It's installable via conda-forge:

```
conda install -c conda-forge basemap
```

In case you get an error like "Unable to open boundary dataset file. Only the 'crude' and 'low', resolution datasets are installed by default.", install the additional files via:

```
conda install -c conda-forge basemap-data-hires
```
In addition to [@user45237841](https://stackoverflow.com/users/8861059/user45237841)'s answer, you can also change the `resolution` to `c` or `l` to resolve this error `Unable to open boundary dataset file. Only the 'crude' and 'low', resolution datasets are installed by default.` ``` my_map = Basemap(projection = 'ortho', lat_0=50, lon_0=-100, resolution = 'c', area_thresh=1000.0) # c is for crude and l is for low ```
15,054
64,641,472
Accidentally my python script has made a table with name as "ext\_data\_content\_modec --replace" which we want to delete. However BQ doesn't seem to recognize the table with spaces and keywords(--replace). We have tried many variants of bq rm , as well as tried deleting the from BQ console but it doesn't work For example, see below (etlt\_dsc is dataset name). ``` $ bq rm 'etlt_dsc.ext_data_content_modec --replace' BigQuery error in rm operation: Not found: Table boeing-prod-atm-next-dsc:etlt_dsc.ext_data_content_modec --replace ``` Besides above we tried below commands but nothing worked ``` bq rm "etlt_dsc.ext_data_content_modec --replace" bq rm [etlt_dsc.ext_data_content_modec --replace] bq rm [etlt_dsc.ext_data_content_modec --replace'] bq rm etlt_dsc.ext_data_content_modec \--replace ``` Would anyone has input for us please ?
2020/11/02
[ "https://Stackoverflow.com/questions/64641472", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13078109/" ]
You would do something like this: ``` Map<String, B> bById = ... Map<String, B> bByName = ... for (A a : listA) { B b = bById.getOrDefault(a.id, bByName.get(a.name)); if (b != null) { a.setPersonalDetails(b.getPersonalDetails); } } ```
You can use a comparator to achieve this. An example pseudo-code is given below

```java
Collections.sort(listA, Comparator.comparing(A::getId)
                        .thenComparing(A::getName)
                        .thenComparing(A::getAge));
```
15,059
50,863,799
I'm pretty new to Python, but have Python 3.6 installed, and running a few other programs perfectly. I'm trying to pull data using the pandas\_datareader module but keep running into this issue. Operating system: OSX.I've visited the other threads on similar errors and tried their methods to no avail. Additional concern: When using Sublime Text, if I run it as a Python (instead of Python3) build, it funcitons fine, but all my other accompanying programs are written on Python3. Is there a way of making this work on 3.6 that I'm missing? I have already visited the 'is\_list\_like' error question, and have changed the fred.py file to pandas.api.types in the import line. ``` Traceback (most recent call last): File "/Users/scottgolightly/Desktop/python_work/data_read_practice.py", line 3, in <module> import pandas_datareader.data as web File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site- packages/pandas_datareader/__init__.py", line 2, in <module> from .data import (DataReader, Options, get_components_yahoo, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site- packages/pandas_datareader/data.py", line 14, in <module> from pandas_datareader.fred import FredReader File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site- packages/pandas_datareader/fred.py", line 1, in <module> from pandas.core.common import is_list_like ImportError: cannot import name 'is_list_like' ```
2018/06/14
[ "https://Stackoverflow.com/questions/50863799", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7075311/" ]
As has been noted, `is_list_like` has been moved from `pandas.core.common` to `pandas.api.types`. There are several paths forward for you. 1. My (highly) recommended solution: download Conda and set up an environment with a version of Pandas prior to v0.23.0. 2. You can install the development version of Pandas, with a patch in place: `pip install git+https://github.com/pydata/pandas-datareader.git` 3. Since you say that you have a version of Pandas in a different environment that works, I suspect the Python calling it is version 2.X. If so, try using [past.autotranslate](http://python-future.org/translation.html) to import the older version of Pandas. 4. If this working version of Pandas actually belongs to a Python 3.X site-packages, then you can manually import it using: `sys.path.insert(0, '/path/to/other/pandas')`
Small workaround is to define it like this: ``` import pandas as pd pd.core.common.is_list_like = pd.api.types.is_list_like import pandas_datareader ```
15,060
4,645,822
I've been struggling with the [cutting stock problem](http://en.wikipedia.org/wiki/Cutting_stock_problem) for a while, and I need to write a function that, given an array of values, gives me an array of arrays for all the possible combinations. I'm trying to write this function, but (as with everything in Python) I think someone must have done it better :). I think the name of the function is combination. Does anyone know what's the best way to do this, and what's the best module and function for it? P.s. I have read some papers on the matter, but the mathematical terms dazzle me :)
2011/01/10
[ "https://Stackoverflow.com/questions/4645822", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
``` >>> from itertools import permutations >>> x = range(3) >>> list(permutations(x)) [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)] >>> ```
Do you mean [itertools.combinations](http://docs.python.org/library/itertools.html#itertools.combinations)?
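For example, a quick sketch of what it produces (the sample values are just illustrative):

```
>>> from itertools import combinations
>>> x = [10, 20, 30]
>>> [list(c) for r in range(1, len(x) + 1) for c in combinations(x, r)]
[[10], [20], [30], [10, 20], [10, 30], [20, 30], [10, 20, 30]]
```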
15,061
22,476,489
We can convert a `datetime` value into a decimal using the following function.

```
import time
from datetime import datetime

t = datetime.now()
t1 = t.timetuple()
print time.mktime(t1)
```

Output:

```
Out[9]: 1395136322.0
```

Similarly, is there a way to convert strings into a `decimal` using Python? Example string:

```
"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:27.0) Gecko/20100101 Firefox/27.0"
```
2014/03/18
[ "https://Stackoverflow.com/questions/22476489", "https://Stackoverflow.com", "https://Stackoverflow.com/users/461436/" ]
If you want an integer to uniquely identify a string, I'd go for hashing functions, like SHA. They return the same value for the same input. ``` import hashlib def sha256_hash_as_int(s): return int(hashlib.sha256(s).hexdigest(), 16) ``` If you use Python 3, you first have to encode `s` to some concrete encoding, like UTF-8. Furthermore, take a look at the `hashlib` module and decide if you really need an integer, or if the output of `hexdigest()` isn't OK for you, too.
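For instance, a Python 3 variant of the same idea might look like this:

```
import hashlib

def sha256_hash_as_int(s):
    # encode to bytes first, since hashlib works on bytes in Python 3
    return int(hashlib.sha256(s.encode('utf-8')).hexdigest(), 16)

print(sha256_hash_as_int("Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:27.0) Gecko/20100101 Firefox/27.0"))
```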
You can use [hash](http://docs.python.org/2.7/library/functions.html#hash) function: ``` >>> hash("Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:27.0) Gecko/20100101 Firefox/27.0") 1892010093 ```
15,063
37,737,098
I have a string time coming from a third party (external to my python program), and I need to compare that time to right now. How long ago was that time? How can I do this? I've looked at the `datetime` and `time` libraries, as well as `pytz`, and can't find an obvious way to do this. It should automatically incorporate DST because the third party doesn't explicitly state its offset, only the timezone (US/Eastern). I've tried this, and it fails: ``` dt = datetime.datetime.strptime('June 10, 2016 12:00PM', '%B %d, %Y %I:%M%p') dtEt = dt.replace(tzinfo=pytz.timezone('US/Eastern')) now = datetime.datetime.now() now - dtEt ``` TypeError: can't subtract offset-naive and offset-aware datetimes
2016/06/09
[ "https://Stackoverflow.com/questions/37737098", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6365333/" ]
Signing a commit will change the commit metadata, and thus change the underlying SHA1 commit ID. As you probably know, for Git, this has the same consequences as trying to change the contents of your history.

If you want to just re-sign your last commit you could run:

`git commit -S --amend`

If you want to re-sign a commit in the middle of your history you could do a couple of things, all of them being a bit nasty if you ask me:

1. You could `reset --soft` to the commit you want to sign. Run `git commit -S --amend` and then commit all the staged changes. This would *merge* all your history after that commit into a single commit.
2. Branch out (for safety) and `reset --hard` to the commit you want to sign. Sign it, and if you want to preserve commit history you could now `git cherry-pick NEXTCOMMIT -S` to re-build the whole signed history.
If you want to sign all the existing commits on the branch without do any changes to them: ``` git rebase --exec 'git commit --amend --no-edit -n -S' -i origin/HEAD ```
15,064
22,486,519
I am trying to create a fabric script that will install the erlang solutions R15B02 package and am having some difficulty. I have the following code in my fabric script: ``` sudo("apt-get update") sudo("apt-get -qy install python-software-properties") sudo('add-apt-repository "deb http://packages.erlang-solutions.com/debian quantal contrib"') sudo('add-apt-repository "deb http://packages.erlang-solutions.com/debian precise contrib"') sudo('add-apt-repository "deb http://packages.erlang-solutions.com/debian oneiric contrib"') sudo('add-apt-repository "deb http://packages.erlang-solutions.com/debian lucid contrib"') sudo("wget http://packages.erlang-solutions.com/debian/erlang_solutions.asc") sudo("sudo apt-key add erlang_solutions.asc") sudo("apt-get update") sudo("apt-get -qy install ca-certificates-java default-jre-headless fontconfig fontconfig-config hicolor-icon-theme icedtea-6-jre-cacao icedtea-6-jre-jamvm java-common libatk1.0-0 libatk1.0-data libavahi-client3 libavahi-common-data libavahi-common3 libcairo2 libcups2 libdatrie1 libfontconfig1 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-common libgl1-mesa-dri libgl1-mesa-glx libglapi-mesa libgstreamer-plugins-base0.10-0 libgstreamer0.10-0 libgtk2.0-0 libgtk2.0-bin libgtk2.0-common libice6 libjasper1 libjpeg-turbo8 libjpeg8 libllvm3.0 libnspr4 libnss3 libnss3-1d liborc-0.4-0 libpango1.0-0 libpixman-1-0 libsm6 libthai-data libthai0 libtiff4 libwxbase2.8-0 libwxgtk2.8-0 libx11-xcb1 libxcb-glx0 libxcb-render0 libxcb-shm0 libxcomposite1 libxcursor1 libxdamage1 libxfixes3 libxft2 libxi6 libxinerama1 libxrandr2 libxrender1 libxxf86vm1 openjdk-6-jre-headless openjdk-6-jre-lib shared-mime-info ttf-dejavu-core tzdata-java x11-common tzdata") sudo("apt-get -qy install erlang") ``` This works wonderful for installing 16B but, one of the applications I need to install on these servers has some incompatabilities with 16B currently. Is there a way that I can specify the R15B02 package? When I run apt-cache showpkg erlang, I only see packages for 16B and 14B.
2014/03/18
[ "https://Stackoverflow.com/questions/22486519", "https://Stackoverflow.com", "https://Stackoverflow.com/users/232337/" ]
You can also use one of these projects for installing and managing different versions of Erlang on the same computer: * <https://github.com/spawngrid/kerl> * <https://github.com/metadave/erln8>
If you can find the file 'esl-erlang\_15.b.2-1~ubuntu~precise\_i386.deb' or the 64-bit version, those could be installed with dpkg. If you find these, to install both at once, extract the .deb with `dpkg -x esl-erlang_15.b.2-1~ubuntu~precise_i386.deb` and move the binaries inside somewhere else.

If you can't find that .deb file, you can download the source and compile it, configuring it to install somewhere different with `./configure --prefix=/path/to/old/lib/install/path`. You could put the old version in a different directory and call it with the variable `LD_PRELOAD` set to, for example, `/usr/old/path/to/old/version/of/erlang/SharedObjectFile.so`. So to run the program that takes the old version, do this:

`~$ LD_PRELOAD=/usr/old/path/to/old/version/of/erlang/oldErlangLib.so ProgramToRun`

I hope this is what you meant. Every time you run the program with the old dependencies this variable will have to be set. You can also set multiple preloads with a space between the libraries to override. Be sure to escape these spaces with double quotes or `\(space character goes here)`.
15,067
68,076,036
In my main directory I have two programs: `main.py` and a myfolder (which is a directory). The `main.py` file has these 2 lines: ``` from myfolder import Adding print(Adding.execute()) ``` Inside the myfolder directory, I have 3 python files only: `__init__.py`, `abstract_math_ops.py`, and `adding.py`. The `__init__.py` file as just one line: ``` from myfolder.adding import Adding ``` The `abstract_math_ops.py` file is just an abstract class with one method: ``` from abc import ABC, abstractmethod class AbstractMathOps(ABC): @staticmethod @abstractmethod def execute(*args): pass ``` The `adding.py` file is: ``` from abstract_math_ops import AbstractMathOps class Adding(AbstractMathOps): @staticmethod def execute(*args): --doing some addition -- ``` When I execute `main.py`, I get the error: ``` ~\myfolder\adding.py in <module> ---> 1 from abstract_math_ops import AbstractMathOps ModuleNotFoundError: No module named 'abstract_math_ops ' ``` If I put the abstract class inside the adding.py file, I do not get any errors. However, I need to put it in a different file as other python files (i.e. substracting.py, multipying.py) can be created following the footprint of the AbstractMathOps abstract class. Also, that way I do not have everything in a single file. How can I solve this problem?
2021/06/22
[ "https://Stackoverflow.com/questions/68076036", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7705108/" ]
Your folder is a package, and you can't import sibling-submodules of a package in the way you're trying to do in `adding.py`. You either need to use an absolute import (`from myfolder.abstract_math_ops import AbstractMathOps`, which works the same anywhere), or use an explicit relative import (`from .abstract_math_ops import AbstractMathOps`, which only works from within the same package). But if the two modules in your package are really as short as you've shown, maybe you should reconsider making `myfolder` a package at all. You could very easily define all of your classes in single `myfolder.py` file, and it would be easier to make sense of since the classes are so interrelated.
Try with: ``` from .abstract_math_ops import AbstractMathOps ``` You need to add the relative location of the file for the import to work in this case.
15,068
51,156,919
I'm currently reading a dummy.txt, the content showing as below: ``` 8t1080 0.077500 0.092123 -0.079937 63mh9j 0.327872 -0.074191 -0.014623 63l2o3 0.504010 0.356935 -0.275896 64c97u 0.107409 0.021140 -0.000909 ``` Now, I am reading it using python like below: ``` lines = open("dummy.txt", "r").readlines() ``` I wanted to make a structure so that I can have a list... or array (doesn't matter) of arrays. Each smaller array will have the 0th element as string, and the following decimals will be a float. In order to do that, I'm currently trying: ``` for line in lines: line = line.split() for x in range(1, len(line)): line[x] = float(line[x]) ``` Interestingly, this doesn't work, since ``` for line in lines: line = line.split() ``` wouldn't actually split the line, and change the read data (lines variable, to be specific). Meanwhile, below works, and successfully modifies the read data (lines variable). ``` for x in range(0, len(lines)): lines[x] = lines[x].split() for x in range(1, len(line)): line[x] = float(line[x]) ``` So what is the difference between the two for loop that has two different results?
2018/07/03
[ "https://Stackoverflow.com/questions/51156919", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8221657/" ]
You would greatly benefit from [`pandas`](https://pandas.pydata.org/) in this case: ``` import pandas as pd df = pd.read_csv('dummy.txt', sep=' ', header=None) >>> df.values array([['8t1080', 0.0775, 0.092123, -0.079937], ['63mh9j', 0.327872, -0.074191, -0.014622999999999999], ['63l2o3', 0.5040100000000001, 0.356935, -0.27589600000000003], ['64c97u', 0.10740899999999999, 0.02114, -0.000909]], dtype=object) ``` Or all in one go (without saving it your text file as a dataframe object): ``` my_array = pd.read_csv('dummy.txt', sep=' ', header=None).values >>> my_array array([['8t1080', 0.0775, 0.092123, -0.079937], ['63mh9j', 0.327872, -0.074191, -0.014622999999999999], ['63l2o3', 0.5040100000000001, 0.356935, -0.27589600000000003], ['64c97u', 0.10740899999999999, 0.02114, -0.000909]], dtype=object) ```
In your first case it IS working; however, each time the `for` loop iterates, the `line` variable is rebound to the next value, and its current (modified) value is lost when it receives the next one.

```
aux = []
for line in lines:  # here the loop rebinds line to the next element
    line = line.split()  # here you change the value of line
    for x in range(1, len(line)):
        line[x] = float(line[x])
    aux.append(line)
```

Using an auxiliary variable you can "save" your values for later use.
15,069
43,113,717
I have the text file like this ``` Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : example.com IPv6 Address. . . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Temporary IPv6 Address. . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Link-local IPv6 Address . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx IPv4 Address. . . . . . . . . . . : 10.0.6.106 Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : example.com IPv6 Address. . . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Temporary IPv6 Address. . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Link-local IPv6 Address . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx 10.0.0.1 Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : example.com IPv6 Address. . . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Temporary IPv6 Address. . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Link-local IPv6 Address . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx IPv4 Address. . . . . . . . . . . : 10.0.6.107 Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx 10.0.0.1 Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : example.com IPv6 Address. . . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Temporary IPv6 Address. . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Link-local IPv6 Address . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx IPv4 Address. . . . . . . . . . . : 10.0.6.108 Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : example.com IPv6 Address. . . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Temporary IPv6 Address. . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Link-local IPv6 Address . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx 10.0.0.1 Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : example.com IPv6 Address. . . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Temporary IPv6 Address. . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Link-local IPv6 Address . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx IPv4 Address. . . . . . . . . . . : 10.0.6.109 Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx 10.0.0.1 ``` I want to print all the IPv4 Addresses from this file using python script. Currently I am able to print only first IPv4 (10.0.6.106) address only with the below python script. **ip = open("ip.txt").read().split("IPv4 ")[1].split(":")[1].split("\n")[0].strip() print ip** please help me to print all the IPv4 adresses.
2017/03/30
[ "https://Stackoverflow.com/questions/43113717", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3636467/" ]
Yes, using the @IdClass annotation.

```
@Entity
@IdClass(EmployeeKey.class)
public class Employee {
    @Id
    private int id;
    @Id
    private int departmentId;
}

public class EmployeeKey implements Serializable {
    private int id;
    private int departmentId;
}

public interface EmployeeRepository extends JpaRepository<Employee, EmployeeKey> {}
```
Even if the underlying table does not have an explicit primary key specified, I am sure there is at least one column that is defined as unique (or has a unique index specified for it). You can add the @Id annotation to the entity field relevant to that column and that will the sufficient for the persistence provider. You probably would not want to specify any generator of course. If you happen to have a few unique columns which could form a sort of natural key then you can specify a composite key using the `@EmbeddedId` strategy or the `@IdClass` strategy in your entity.
15,071
30,625,787
This might seem simple but it has flummoxed me for a day, so I'm turning to ya'll. I have a valid Python dictionary: ``` {'numeric_int2': {'(113.7, 211.4]': 3, '(15.023, 113.7]': 4, '(211.4, 309.1]': 5, '(309.1, 406.8]': 4, '(406.8, 504.5]': 5, '(504.5, 602.2]': 7, '(602.2, 699.9]': 4, '(699.9, 797.6]': 5, '(797.6, 895.3]': 4, '(895.3, 993]': 6}} ``` I want to convert it to valid JSON, which requires removing the single quotes. The desired end result is: ``` {"numeric_int2": {"(113.7, 211.4]": 3, "(15.023, 113.7]": 4, "(211.4, 309.1]": 5, "(309.1, 406.8]": 4, "(406.8, 504.5]": 5, "(504.5, 602.2]": 7, "(602.2, 699.9]": 4, "(699.9, 797.6]": 5, "(797.6, 895.3]": 4, "(895.3, 993]": 6}} ``` I've tried every way I can think of from json.dumps() or anything else. How can I do this? Points if it is fast. I should add, when I try to use json.dumps() on the dictionary, I get an error: ``` TypeError: 1 is not JSON serializable ``` This is my complete code: ``` In [17]: import pandas as pd import numpy as np import itertools import simplejson ​ raw_data = { 'numeric_float1': list([np.random.random() for _ in range(0, 47)]+[np.nan]), 'numeric_float2': list([np.random.random() for _ in range(0, 47)]+[np.nan]), 'numeric_float3': list([np.random.random() for _ in range(0, 47)]+[np.nan]), } ​ df = pd.DataFrame(raw_data) ​ df_labels = [ 'category1:category', 'numeric_float1:numeric', 'numeric_float2:numeric', 'numeric_float3:numeric' ] ​ columns = list(zip([w.split(':')[0] for w in df_labels],[w.split(':')[1] for w in df_labels])) In [18]: def count_by_value(df,columns): numeric = [c[0] for c in columns if c[1] == 'numeric'] output = {} for column in df[numeric]: output[column] = pd.cut(df[column],10).value_counts().to_dict() output = simplejson.dumps(output) return output In [19]: # Test the function count_by_value(df,columns) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-19-02e2e6cb949b> in <module>() 1 # Test the function ----> 2 count_by_value(df,columns) <ipython-input-18-c2d882f5652d> in count_by_value(df, columns) 9 output[column] = pd.cut(df[column],10).value_counts().to_dict() 10 ---> 11 output = simplejson.dumps(output) 12 13 return output /Users/antonnobla/anaconda/lib/python3.4/site-packages/simplejson/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, encoding, default, use_decimal, namedtuple_as_object, tuple_as_array, bigint_as_string, sort_keys, item_sort_key, for_json, ignore_nan, int_as_string_bitcount, **kw) 368 and not kw 369 ): --> 370 return _default_encoder.encode(obj) 371 if cls is None: 372 cls = JSONEncoder /Users/antonnobla/anaconda/lib/python3.4/site-packages/simplejson/encoder.py in encode(self, o) 268 # exceptions aren't as detailed. The list call should be roughly 269 # equivalent to the PySequence_Fast that ''.join() would do. --> 270 chunks = self.iterencode(o, _one_shot=True) 271 if not isinstance(chunks, (list, tuple)): 272 chunks = list(chunks) /Users/antonnobla/anaconda/lib/python3.4/site-packages/simplejson/encoder.py in iterencode(self, o, _one_shot) 350 Decimal=decimal.Decimal) 351 try: --> 352 return _iterencode(o, 0) 353 finally: 354 key_memo.clear() /Users/antonnobla/anaconda/lib/python3.4/site-packages/simplejson/encoder.py in default(self, o) 245 246 """ --> 247 raise TypeError(repr(o) + " is not JSON serializable") 248 249 def encode(self, o): TypeError: 4 is not JSON serializable ```
2015/06/03
[ "https://Stackoverflow.com/questions/30625787", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2935984/" ]
**it appears both simplejson and json work as expected to me**, however simplejson is faster than json(by quite a bit) and it seems to work fine with your data ``` import simplejson,json print simplejson.dumps({'numeric_int2': {'(113.7, 211.4]': 3, '(15.023, 113.7]': 4, '(211.4, 309.1]': 5, '(309.1, 406.8]': 4, '(406.8, 504.5]': 5, '(504.5, 602.2]': 7, '(602.2, 699.9]': 4, '(699.9, 797.6]': 5, '(797.6, 895.3]': 4, '(895.3, 993]': 6}}) print json.dumps({'numeric_int2': {'(113.7, 211.4]': 3, '(15.023, 113.7]': 4, '(211.4, 309.1]': 5, '(309.1, 406.8]': 4, '(406.8, 504.5]': 5, '(504.5, 602.2]': 7, '(602.2, 699.9]': 4, '(699.9, 797.6]': 5, '(797.6, 895.3]': 4, '(895.3, 993]': 6}}) ```
Found the answer. Here is the function that works: ``` # Count the frequency of each value def count_by_value(df,columns): # Selects appropriate columns for the action numeric = [c[0] for c in columns if c[1] == 'numeric'] # Returns 0 if none of the appropriate columns exists if len(numeric) == 0: return 0 output = pd.DataFrame() for column in df[numeric]: output[column] = pd.cut(df[column],10).value_counts().to_dict() output = output.to_json() # output results return output ```
15,072
59,146,674
I have a batch file which is running a python script and in the python script, I have a subprocess function which is being ran. I have tried `subprocess.check_output`, `subprocess.run`, `subprocess.Popen`, all of them returns me an empty string only when running it using a batch file. If I run it manually or using an IDE, I get the response correctly. Below is the code for `subprocess.run`: ``` response = subprocess.run(fileCommand, shell=True, cwd=pSetTableauExeDirectory, capture_output=True) self.writeInLog(' Command Response: \t' + str(response)) ``` ***Response is in stdout=b''*** **When ran in batch file and from task scheduler:** > > Command Response: CompletedProcess(args='tableau refreshextract > --config-file "Z:\XXX\tableau\_config\SampleSuperStore.txt"', > returncode=0, stdout=b'', stderr=b'') > > > **When ran manually or in IDE:** > > Command Response: CompletedProcess(args='tableau refreshextract > --config-file "Z:\XXX\tableau\_config\SampleSuperStore.txt"', > returncode=0, stdout=b'Data source refresh completed.\r\n0 rows uploaded.\r\n', stderr=b'') > > > Batch file which runs the python program. Parameters are parsed to the python application ``` SET config=SampleSuperStore.txt CALL C:\XXX\AppData\Local\Continuum\anaconda3\Scripts\activate.bat C:\XXX\AppData\Local\Continuum\anaconda3\python.exe Z:\XXX\pMainManual.py "%config%" ``` Why is that?? --Complete python code--- ``` try: from pWrapper import wrapper import sys except Exception as e: print(str(e)) class main: def __init__(self): self.tableauPath = 'C:\\Program Files\\Tableau\\Tableau 2018.3\\bin\\' self.tableauCommand = 'tableau refreshextract --config-file' def runJob(self,argv): self.manual_sProcess(argv[1]) def manual_sProcess(self,tableauConfigFile): new_wrapper = wrapper() new_wrapper.tableauSetup(self.tableauPath,self.tableauCommand) if new_wrapper.tableauConfigExists(tableauConfigFile): new_wrapper.tableauCommand(tableauConfigFile) if __name__ == "__main__": new_main = main() new_main.runJob(sys.argv) ``` Wrapper class: ``` def tableauCommand(self,tableauConfigFile): command = self.setTableauExeDirectory + ' ' + self.refreshConfigCommand + ' "' + tableauConfigFile + '"' self.new_automateTableauExtract.runCommand(tableauConfigFile,command,self.refreshConfigCommand,self.tableauFilePath,self.setTableauExeDirectory) ``` Automate Class: ``` def runCommand(self,pConfig,pCommand,pRefreshConfigCommand,pFilePath,pSetTableauExeDirectory): try: fileCommand = pRefreshConfigCommand + ' "' + pFilePath + '"' response = subprocess.run(fileCommand, shell=True, cwd=pSetTableauExeDirectory, capture_output=True) self.writeInLog(' Command Response: \t' + str(response)) except Exception as e: self.writeInLog('Exception in function runCommand: ' + str(e)) ``` UPDATE: I initially thought that the bat file was causing this issue but it looks like it works when running manually a batch file but not when it is set on task scheduler
2019/12/02
[ "https://Stackoverflow.com/questions/59146674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2865368/" ]
Updated ------- First of all, if there is a need to run `anaconda-prompt` by calling `activate.bat` file, you can simply do as follows: ``` import subprocess def call_anaconda_venv(): subprocess.call('python -m venv virtual.env') subprocess.call('cmd.exe /k /path/venv/Scripts/activate.bat') if __name__ == "__main__": call_anaconda_venv() ``` * The result of the above code would be a **running instance** of `anaconda-prompt` as required. --- Now as Problem Seems Like: --- > > I have a `batch file` which is **running a python script** and in the python script, I have a `subprocess` function which is being run. > > > I have implemented the same program as required; Suppose we have * **Batch File** ---> `script.bat` \*\*\*\* includes a *command* to run python script i.e `test.py`. \*\*\*\* * **Python Script File** ---> `test.py` \*\*\*\* includes a *method* to run commands using `subprocess`. \*\*\*\* * **Batch File** ---> `sys_info.bat` \*\*\*\* includes a *command* which would give the system information of my computer. \*\*\*\* --- Now First, `script.bat` includes a command that will run the required python script as given below; ``` python \file_path\test.py pause ``` Here, `pause` command is used to prevent auto-closing console after execution. Now we have `test.py`, python script which includes `subprocess` method to run required commands and get their **output**. --- ``` from subprocess import check_output class BatchCommands: @staticmethod def run_commands_using_subprocess(commands): print("Running commands from File: {}".format(commands)) value = check_output(commands, shell=True).decode() return value @staticmethod def run(): commands_from_file = "\file-path\sys_info.bat" print('##############################################################') print("Shell Commands using >>> subprocess-module <<<") print('##############################################################') values = BatchCommands.run_commands_using_subprocess(commands_from_file) print(values) if __name__ == '__main__': BatchCommands.run() ``` --- Now, in the end, I have a `sys_info.bat` file which includes commands to renew the IP-Adress of my computer. Commands in `sys_info.bat` file are as follows; ``` systeminfo ``` Place multiple commands in `sys_info.bat` file, then you can also run multiple commands at a time like: ``` ipconfig/all ipconfig/release ipconfig/reset ipconfig/renew ipconfig ``` Before to use the file, set all files `directory paths`, and run the batch file i.e `script.py` in `command-prompt` as follows; * Run command-prompt or terminal as an `administrator`. ``` run \file_path\script.py ``` Here is the result after running the batch file in the `terminal`. [![Output of Given Batch File using Task Schedular which runs a sub-process method.](https://i.stack.imgur.com/OOGOC.gif)](https://i.stack.imgur.com/OOGOC.gif)
This is happening because your IDE is not running in a shell that works in the way the subprocess call is expecting. If you set `shell=False` and specify the absolute path to the batch file, it will run. You might still need the `cwd` if the batch file requires it.
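A rough sketch of that suggestion, reusing the paths already shown in the question; the exact executable name (`tableau.exe` inside that bin directory) is an assumption, so adjust it if the real binary differs:

```
import subprocess

# Paths taken from the question; the executable name is an assumption.
exe = r"C:\Program Files\Tableau\Tableau 2018.3\bin\tableau.exe"
config = r"Z:\XXX\tableau_config\SampleSuperStore.txt"

# With shell=False (the default) the command is a list and no intermediate
# cmd.exe is involved, so the captured stdout no longer depends on how the
# parent process (IDE, batch file, Task Scheduler) set up its shell.
result = subprocess.run(
    [exe, "refreshextract", "--config-file", config],
    shell=False,
    capture_output=True,
)
print(result.returncode, result.stdout.decode(errors="replace"))
```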
15,073
34,339,867
I am trying to match the following strings: ``` 2 match virtual-address 172.29.210.119 tcp eq www 4 match virtual-address 172.29.210.147 tcp any ``` The expected output: ``` 172.29.210.119 tcp www 172.29.210.147 tcp any ``` I am using pattern: ``` match virtual-address (\d+\.\d+\.\d+\.\d+)\s?(\w+)? (?>eq)?\s?(\d+|\w+) ``` I get the expected output with that pattern testing in: <https://regex101.com/> But when I use the same pattern to match in python, I get the following error: ``` Traceback (most recent call last): File ".\ace2f5_parser.py", line 119, in <module> virtual_ip_proto_port = re.findall(pattern_virtual_ip_proto_port, line) File "C:\Users\hpokhare\AppData\Local\Programs\Python\Python35-32\lib\re.py", line 213, in findall return _compile(pattern, flags).findall(string) File "C:\Users\hpokhare\AppData\Local\Programs\Python\Python35-32\lib\re.py", line 293, in _compile p = sre_compile.compile(pattern, flags) File "C:\Users\hpokhare\AppData\Local\Programs\Python\Python35-32\lib\sre_compile.py", line 536, in compi p = sre_parse.parse(p, flags) File "C:\Users\hpokhare\AppData\Local\Programs\Python\Python35-32\lib\sre_parse.py", line 829, in parse p = _parse_sub(source, pattern, 0) File "C:\Users\hpokhare\AppData\Local\Programs\Python\Python35-32\lib\sre_parse.py", line 437, in _parse_ itemsappend(_parse(source, state)) File "C:\Users\hpokhare\AppData\Local\Programs\Python\Python35-32\lib\sre_parse.py", line 767, in _parse len(char) + 1) sre_constants.error: unknown extension ?> at position 53 ``` What does the error mean. Doesn't it support ?>. Any ideas on how to resolve the issue.
2015/12/17
[ "https://Stackoverflow.com/questions/34339867", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4611991/" ]
You can use this regex in Python: ``` \bmatch virtual-address (\d+\.\d+\.\d+\.\d+)\s?(\w+) (?:eq\s+)?(\w+) ``` [RegEx Demo](https://regex101.com/r/tX8mB2/1) Python regex doesn't support *Atomic Group* syntax `(?>..)` like PCRE.
If you modify the flavor of regex101 for "python", you will see that you can not use `(?>eq)?` An alternative to what you want is to use `$`, to assert position at end of a line. Using `(\w+)$` will catch the last of the string sentence. ``` import re text = [ '2 match virtual-address 172.29.210.119 tcp eq www', '4 match virtual-address 172.29.210.147 tcp any' ] regexp = re.compile(r'match virtual-address (\d+\.\d+\.\d+\.\d+)\s(\w+).*?\s(\w+)$') for i in text: ip, protocol, url = regexp.search(i).groups() print(ip, protocol, url, '', sep='\n') ```
15,074
22,976,523
I'm working on a small app that pulls data out of a list stored in a list, passes it through a class init, and then displays it / allows the user to work. Everything was going fine until I tried to format the original 'list' in IDLE so it was easier to read (for me), so I'd change 9 to 09, 8 to 08, etc. It was a simple formatting/spacing change and it broke the entire god damn program, citing 'invalid token'. WTF is this, I thought.

So then I opened the interpreter and started typing:

```
>x = [5,5] #Control
>x
[5, 5]
>>> y=[05,05]   #control2
>>> y
[5, 5]
>>> z = [05, "ge"]  #test. 'Integer', before string, apparantly works. 
>>> z
[5, 'ge']
> a = ["ge", 09]  #test2. String, before 'integer', cocks things up. 

SyntaxError: invalid token
>>> b= ["ge", 9]  #test3, this works fine. 
>>> b
['ge', 9]
```

I guess my question is... why does this occur? Why does Python interpret these integers as 'tokens' when they follow strings, but as integers when they follow integers?
2014/04/10
[ "https://Stackoverflow.com/questions/22976523", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2584933/" ]
It's nothing to do with lists or strings. When you prefix a number with `0`, it's interpreted as [octal](http://en.wikipedia.org/wiki/Octal). And 9 is not a valid octal digit! ``` Python 2.7.6 Type "help", "copyright", "credits" or "license" for more information. >>> 09 File "<stdin>", line 1 09 ^ SyntaxError: invalid token >>> 011 9 ``` Note that in Python3, this gives you the error for any 0-prefixed number, presumably to reduce confusion of the type you are experiencing. To specify octal in Python3, you must use `0o` as a prefix. ``` Python 3.3.3 Type "help", "copyright", "credits" or "license" for more information. >>> 09 File "<stdin>", line 1 09 ^ SyntaxError: invalid token >>> 011 File "<stdin>", line 1 011 ^ SyntaxError: invalid token >>> 0o11 9 >>> 0o9 File "<stdin>", line 1 0o9 ^ SyntaxError: invalid token >>> ```
It's not just Python, it's most programming languages. Starting a number with a zero signifies that the number is in octal, which means only digits `0-7` are valid. Thus, ``` 5 == 05 6 == 06 7 == 07 8 == 010 9 == 011 ... 15 == 017 16 == 020 ... 255 == 0377 ``` Similarly, prefix `0x` means the number is hexadecimal (so, valid digits are `0-9` and `a-f`: `255 == 0xff`)
15,075
23,120,865
Apologies if this is a basic question, but let us say I have a tab delimited file named `file.txt` formatted as follows: ``` Label-A [tab] Value-1 Label-B [tab] Value-2 Label-C [tab] Value-3 [...] Label-i [tab] Value-n ``` I want [xlrd](https://pypi.python.org/pypi/xlrd) or [openpyxl](http://pythonhosted.org/openpyxl/) to add this data to the excel worksheet named `Worksheet` in the file `workbook.xlsx` such that the cells contain the following values. I do not want to affect the contents of any other part of `workbook.xlsx` other than the two columns that are affected ``` A1=Label-A B1=Value-1 A2=Label-B B2=Value-2 [etc.] ``` EDIT: Solution ``` import sys import csv import openpyxl tab_file = sys.stdin.readlines() reader = csv.reader(tab_file, delimiter='\t') first_row = next(reader) num_cols = len(first_row) try: workbook = sys.argv[1] write_sheet = sys.argv[2] except Exception: raise sys.exit("ERROR") try: first_col = int(sys.argv[3]) except Exception: first_col = 0 tab_reader = csv.reader(tab_file, delimiter='\t') xls_book = openpyxl.load_workbook(filename=workbook) sheet_names = xls_book.get_sheet_names() xls_sheet = xls_book.get_sheet_by_name(write_sheet) for row_index, row in enumerate(tab_reader): number = 0 col_number = first_col while number < num_cols: cell_tmp = xls_sheet.cell(row = row_index, column = col_number) cell_tmp.value = row[number] number += 1 col_number += 1 xls_book.save(workbook) ```
2014/04/16
[ "https://Stackoverflow.com/questions/23120865", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3543052/" ]
Since you said you are used to working in Bash, I'm assuming you're using some kind of Unix/Linux, so here's something that will work on Linux. Before pasting the code, I'd like to point a few things: Working with Excel in Unix (and Python) is not that straightforward. For instance, you can't open an Excel sheet for reading and writing at the same time (at least, not as far as I know, although I must recognize that I have never worked with the `openpyxl` module). Python has two well known modules (that I am used to working with **:-D** ) when it comes to handling Excel sheets: One is for reading Excel sheets ([xlrd](http://www.lexicon.net/sjmachin/xlrd.html)) and the second one for writing them ([xlwt](https://secure.simplistix.co.uk/svn/xlwt/trunk/xlwt/doc/xlwt.html?p=4966)) With those two modules, if you want to modify an existing sheet, as I understand you want to do, you need to read the existing sheet, copying it to a writable sheet and edit that one. Check the question/answers in [this other S.O. question](https://stackoverflow.com/questions/2725852/writing-to-existing-workbook-using-xlwt) that explain it with some more detail. Reading *whatever*-separated files is much easier thanks to the [csv](https://docs.python.org/2/library/csv.html) module (its prepared for comma-separated files, but it can be easily tweaked for other separators). Check it out. Also, I wasn't very sure from your example if the contents of the tab-separated file indicate somehow the row indexes on the Excel sheet or they're purely positional. When you say that in the tab-separated file you have `Value-2`, I wasn't sure if that `2` meant the second row on the Excel file or it was just an example of some text. I assumed the latest (which is easier to deal with), so whatever pair *Label Value* appears on the first row of your tab-separated file will be the first pair on the first row of the Excel file. It this is not the case, leave a comment a we will deal with it **;-)** Ok, so let's assume the following scenario: You have a tab-separated file like this: *stack37.txt*: ``` Label-A Value-1 Label-B Value-2 Label-C Value-3 ``` The excel file you want to modify is *stack37.xls*. It only has one sheet (or better said, the sheet you want to modify is the first one in the file) and it initially looks like this (in LibreOffice Calc): ![enter image description here](https://i.stack.imgur.com/q6iHX.png) Now, this is the python code (I stored it in a file called *stack37.py* and it's located in the same directory of the tab-separated file and the excel file): ``` import csv import xlwt import xlrd from xlutils import copy as xl_copy with open('stack37.txt') as tab_file: tab_reader = csv.reader(tab_file, delimiter='\t') xls_readable_book = xlrd.open_workbook('stack37.xls') xls_writeable_book = xl_copy.copy(xls_readable_book) xls_writeable_sheet = xls_writeable_book.get_sheet(0) for row_index, row in enumerate(tab_reader): xls_writeable_sheet.write(row_index, 0, row[0]) xls_writeable_sheet.write(row_index, 1, row[1]) xls_writeable_book.save('stack37.xls') ``` After you run this code, the file *stack37.xls* will look like this: ![enter image description here](https://i.stack.imgur.com/aKkIB.png) What I meant about not knowing what you exactly wanted to do with the values in your tab-separated file is that regardless of what you name your items in there, it will modify the first row of the excel sheet, then the second... 
(even if your first `Value` is called `Value-2`, the code above will not put that value on the second row of the Excel sheet, but on the fist row) It just assumes the first line in the tab-separated file corresponds with the values to set on the first row of the Excel sheet. Let explain with an slightly modified example: Let's assume your original Excel file looks like the original excel file on my screenshot (the full of `| Hello-Ax | Bye-Bx |`) but your tab-separated file now looks like this: *stack37.txt*: ``` foo bar baz baz2 ``` After you run *stack37.py*, this is how your Excel will look like: ![enter image description here](https://i.stack.imgur.com/0bAl5.png) (see? first row of the tab-separated file goes to the first row in the Excel file) **UPDATE 1**: I'm trying the `openpyxl` module myself... Theoretically (according to the documentation) the following should work (note that I've changed the extensions to Excel 2007/2010 `.xlsx`): ``` import csv import openpyxl with open('stack37.txt') as tab_file: tab_reader = csv.reader(tab_file, delimiter='\t') xls_book = openpyxl.load_workbook(filename='stack37.xlsx') sheet_names = xls_book.get_sheet_names() xls_sheet = xls_book.get_sheet_by_name(sheet_names[0]) for row_index, row in enumerate(tab_reader): cell_tmp1 = xls_sheet.cell(row = row_index, column = 0) cell_tmp1.value = row[0] cell_tmp2 = xls_sheet.cell(row = row_index, column = 1) cell_tmp2.value = row[1] xls_book.save('stack37_new.xlsx') ``` But if I do that, my LibreOffice refuses to open the newly generated file `stack37_new.xlsx` (maybe is because my LibreOffice is old? I'm in a Ubuntu 12.04, LibreOffice version 3.5.7.2... who knows, maybe is just that)
That's a job for VBA, but if I had to do it in Python I would do something like this: ``` import Excel xl = Excel.ExcelApp(False) wb = xl.app.Workbooks("MyWorkBook.xlsx") wb.Sheets("Ass'y").Cells(1, 1).Value2 = "something" wb.Save() ``` With an helper `Excel.py` class like this: ``` import win32com.client class ExcelApp(object): def __init__(self, createNewInstance, visible = False): self._createNewInstance=createNewInstance if createNewInstance: self.app = win32com.client.Dispatch('Excel.Application') if visible: self.app.Visible = True else: self.app = win32com.client.GetActiveObject("Excel.Application") def __exit__(self): if self.app and self._createNewInstance: self.app.Quit() def __del__(self): if self.app and self._createNewInstance: self.app.Quit() def quit(self): if self.app: self.app.Quit() ```
15,078
2,830,953
I have a script which contains two classes. (I'm obviously deleting a lot of stuff that I don't believe is relevant to the error I'm dealing with.) The eventual task is to create a decision tree, as I mentioned in [this](https://stackoverflow.com/questions/2726167/parse-a-csv-file-using-python-to-make-a-decision-tree-later) question. Unfortunately, I'm getting an infinite loop, and I'm having difficulty identifying why. I've identified the line of code that's going haywire, but I would have thought the iterator and the list I'm adding to would be different objects. Is there some side effect of list's .append functionality that I'm not aware of? Or am I making some other blindingly obvious mistake? ``` class Dataset: individuals = [] #Becomes a list of dictionaries, in which each dictionary is a row from the CSV with the headers as keys def field_set(self): #Returns a list of the fields in individuals[] that can be used to split the data (i.e. have more than one value amongst the individuals def classified(self, predicted_value): #Returns True if all the individuals have the same value for predicted_value def fields_exhausted(self, predicted_value): #Returns True if all the individuals are identical except for predicted_value def lowest_entropy_value(self, predicted_value): #Returns the field that will reduce <a href="http://en.wikipedia.org/wiki/Entropy_%28information_theory%29">entropy</a> the most def __init__(self, individuals=[]): ``` and ``` class Node: ds = Dataset() #The data that is associated with this Node links = [] #List of Nodes, the offspring Nodes of this node level = 0 #Tree depth of this Node split_value = '' #Field used to split out this Node from the parent node node_value = '' #Value used to split out this Node from the parent Node def split_dataset(self, split_value): #Splits the dataset into a series of smaller datasets, each of which has a unique value for split_value. Then creates subnodes to store these datasets. fields = [] #List of options for split_value amongst the individuals datasets = {} #Dictionary of Datasets, each one with a value from fields[] as its key for field in self.ds.field_set()[split_value]: #Populates the keys of fields[] fields.append(field) datasets[field] = Dataset() for i in self.ds.individuals: #Adds individuals to the datasets.dataset that matches their result for split_value datasets[i[split_value]].individuals.append(i) #<---Causes an infinite loop on the second hit for field in fields: #Creates subnodes from each of the datasets.Dataset options self.add_subnode(datasets[field],split_value,field) def add_subnode(self, dataset, split_value='', node_value=''): def __init__(self, level, dataset=Dataset()): ``` My initialisation code is currently: ``` if __name__ == '__main__': filename = (sys.argv[1]) #Takes in a CSV file predicted_value = "# class" #Identifies the field from the CSV file that should be predicted base_dataset = parse_csv(filename) #Turns the CSV file into a list of lists parsed_dataset = individual_list(base_dataset) #Turns the list of lists into a list of dictionaries root = Node(0, Dataset(parsed_dataset)) #Creates a root node, passing it the full dataset root.split_dataset(root.ds.lowest_entropy_value(predicted_value)) #Performs the first split, creating multiple subnodes n = root.links[0] n.split_dataset(n.ds.lowest_entropy_value(predicted_value)) #Attempts to split the first subnode. ```
2010/05/13
[ "https://Stackoverflow.com/questions/2830953", "https://Stackoverflow.com", "https://Stackoverflow.com/users/27290/" ]
This is fairly easy. ``` var timerID; $("#left").hover(function() { timerID = setInterval(slideLeft, 1000); }, function() { clearInterval(timerID); }); function slideLeft() { $("#slider").animate({left: -30}); } ``` and similar for right. You only need to use `hover()` if there's something you need to stop when the mouse leaves the area. Otherwise you can simply use the `mouseover()` event. **Note:** the actual effect I've put in there probably isn't right but it depends entirely on the slider plugin you're using. Adjust as required.
You don't have to check where the mouse is, as the `mouseout` event will be triggered when the mouse leaves the element. To make the movement repeat while the mouse is hovering the element, start an interval that you stop when the mouse leaves the element: ``` $(function(){ var moveInterval; $('#moveLeft').hover(function(){ moveInterval = window.setInterval(function(){ // here you move the slider }, 100); }, function(){ window.clearInterval(moveInterval); }); }); ```
15,080
69,512,596
I've recently started learning how to code in python. I wanted to know if there is a norm or specific rule for the position of statements while using functions. eg: ``` def example(x): y = 7 print("Default value is", y) print("Value entered is", x) a = int(input("Enter a value: ")) example(a) ``` Would it be better to move the input statement inside the function? Will this pose any problems in more complex programs?
2021/10/10
[ "https://Stackoverflow.com/questions/69512596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17117924/" ]
In larger programs, pretty much everything will be in functions (or methods, which are a kind of function). The only code at the top level will be a couple of lines to call the first function (often called `main`). The question then is whether to put the `input` into the same function, or into separate functions. That's a more general question of how to organise code, how large or small to make the functions and how to separate concerns. For `input()` in particular, it's probably better to put it in a separate function from the calculation; that way, you will be able to use the same calculation functions (a) with user-supplied input; (b) with values coming from elsewhere in the program; and, importantly (c) from tests. Similarly, probably separate the output from the calculation; again, you'll then be able to (a) print it out; (b) further process it or write it to file, database, etc; and (c) check it in tests. That way, you'll have three functions, each with a separate concern: one to get the input from the user and convert it to `int`; a second one to do whatever calculation is required; and a third one to format up the result neatly and print it for the user.
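A minimal sketch of that three-way split, reusing the example function from the question; the function names here are just illustrative:

```
def read_value():
    # Input: get a number from the user and convert it to int.
    return int(input("Enter a value: "))

def example(x, y=7):
    # Calculation: no input() or print() here, so it is easy to test.
    return {"default": y, "entered": x}

def show_result(result):
    # Output: formatting and printing only.
    print("Default value is", result["default"])
    print("Value entered is", result["entered"])

def main():
    a = read_value()
    show_result(example(a))

if __name__ == "__main__":
    main()
```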
If the result of the input statement will only be used inside one function, then moving the statement into that function might be better later on, when your code becomes more complex.
15,081
4,646,659
How can I convert a web site developed in Django (Python) into a desktop application? I am new to Python and Django, can you please help me out? Thanks in advance.
2011/01/10
[ "https://Stackoverflow.com/questions/4646659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/569806/" ]
I would try to replicate the Django application functionality with the [PyQt toolkit](http://www.riverbankcomputing.co.uk/software/pyqt/intro). You can in fact embed web content in PyQt applications, with the help of QtWebKit. I would post some potentially useful links, but apparently I have too low a reputation to post more than one :)
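A rough sketch of that idea in PyQt5. Note that PyQt5 ships QtWebEngine rather than the older QtWebKit mentioned above, so this assumes the `PyQtWebEngine` package is installed and a Django dev server is already running locally on port 8000:

```
import sys

from PyQt5.QtCore import QUrl
from PyQt5.QtWidgets import QApplication
from PyQt5.QtWebEngineWidgets import QWebEngineView

app = QApplication(sys.argv)

# Embed the running Django site in a desktop window.
view = QWebEngineView()
view.setWindowTitle("My Django app")
view.load(QUrl("http://127.0.0.1:8000/"))
view.resize(1024, 768)
view.show()

sys.exit(app.exec_())
```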
I have `django manage.py runserver` in a .bat file and a localhost bookmark in the browser's bookmark bar, and voilà: a Django desktop app. Or make your own browser that opens localhost. [Creating a web-browser with Python and PyQT](https://pythonspot.com/creating-a-webbrowser-with-python-and-pyqt-tutorial/)
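A small Python variant of the same trick, as a sketch rather than a packaged desktop app; it assumes the script sits next to `manage.py` and that port 8000 is free:

```
import subprocess
import sys
import time
import webbrowser

# Start the Django development server as a child process...
server = subprocess.Popen([sys.executable, "manage.py", "runserver", "8000"])

# ...give it a moment to come up, then open the "app" in the default browser.
time.sleep(2)
webbrowser.open("http://127.0.0.1:8000/")

try:
    server.wait()          # keep running until the server is stopped
except KeyboardInterrupt:
    server.terminate()
```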
15,082
11,170,478
I have a command line program developed in C. Let's say I have a parser written in C. Now I am developing a project with a GUI in Python and I need that parser for the Python project. In C we can invoke a system call and redirect the output to standard output or a file. Is there any way to do this in Python? I have both the code and the executable file of the C program. Thanks in advance.
2012/06/23
[ "https://Stackoverflow.com/questions/11170478", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1135245/" ]
I would not expect MySQL to give that error message, but many other databases do. In other databases you can work around it by repeating the column definition:

```
SELECT amount1 + amount2 as totalamount 
FROM Donation 
WHERE amount1 + amount2 > 1000
```

Or you can use a subquery to avoid the repetition:

```
SELECT totalamount
FROM (
    select amount1 + amount2 as totalamount
    ,      *
    from   Donation
    ) as SubQueryAlias
WHERE totalamount > 1000
```

[Live example at SQL Fiddle.](http://sqlfiddle.com/#!2/ce40c/2/0)
No way. **WHERE** filters columns, while **HAVING** filters on aggregates. See [SQL Having](http://www.w3schools.com/sql/sql_having.asp)
15,092
6,418,199
I was looking up the pypy project (Python in Python), and started pondering the issue of what is running the outer layer of python? Surely, I conjectured, it can't be as the old saying goes "turtles all the way down"! Afterall, python is not valid x86 assembly! Soon I remembered the concept of bootstrapping, and looked up compiler bootstrapping. "Ok", I thought, "so it can be either written in a different language or hand compiled from assembly". In the interest of performance, I'm sure C compilers are just built up from assembly. This is all well, but the question still remains, how does the computer get that assembly file?! Say I buy a new cpu with nothing on it. During the first operation I wish to install an OS, which runs C. What runs the C compiler? Is there a miniature C compiler in the BIOS? Can someone explain this to me?
2011/06/20
[ "https://Stackoverflow.com/questions/6418199", "https://Stackoverflow.com", "https://Stackoverflow.com/users/322900/" ]
> 
> In the interest of performance, I'm sure C compilers are just built up from assembly.
> 
> 

C compilers are, nowadays, (almost?) completely written in C (or higher-level languages - Clang is C++, for instance). Compilers gain little to nothing from including hand-written assembly code. The things that take the most time are as slow as they are because they solve very hard problems, where "hard" means "big computational complexity" - rewriting in assembly brings at most a constant speedup, but those constants don't really matter anymore at that level. Also, most compilers want high portability, so architecture-specific tricks in the front and middle end are out of the question (and in the backends, they're not desirable either, because they may break cross-compilation).

> 
> Say I buy a new cpu with nothing on it. During the first operation I wish to install an OS, which runs C. What runs the C compiler? Is there a miniature C compiler in the BIOS?
> 
> 

When you're installing an OS, there's (usually) no C compiler run. The setup CD is full of readily-compiled binaries for that architecture. If there's a C compiler included (as is the case with many Linux distros), that's an already-compiled executable too.

And those distros that make you build your own kernel etc. also have at least one executable included - the compiler. That is, of course, unless you have to compile your own kernel on an existing installation of anything with a C compiler.

If by "new CPU" you mean a new architecture that isn't backwards-compatible to anything that's yet supported, self-hosting compilers can follow the usual porting procedure: first write a backend for that new target, then compile the compiler for it, and suddenly you've got a mature compiler with a battle-hardened (it compiled a whole compiler) native backend on the new platform.
If you buy a new machine with a pre-installed OS, it doesn't even need to include a compiler anywhere, because all the executable code has been compiled on some other machine, by whoever provides the OS - your machine doesn't need to compile anything itself. How do you get to this point if you have a completely new CPU architecture? In this case, you would probably start by writing a new code generation back-end for your new CPU architecture (the "target") for an existing C compiler that runs on some other platform (the "host") - a [cross-compiler](http://en.wikipedia.org/wiki/Cross_compiler). Once your cross-compiler (running on the host) works well enough to generate a correct compiler (and necessary libraries, etc.) that will run on the target, then you can compile the compiler with itself on the target platform, and end up with a target-native compiler, which runs on the target and generates code which runs on the target. It's the same principle with a new language: you have to write code in an existing language that you do have a toolchain for, which will compile your new language into something that you can work with (let's call this the "bootstrap compiler"). Once you get this working well enough, you can write a compiler in your new language (the "real compiler"), and then compile the real compiler with the bootstrap compiler. At this point you're writing the compiler for your new language in the new language itself, and your language is said to be "self-hosting".
15,095
37,959,217
I'm using PM2 to run a Python program in the background like so `pm2 start helloworld.py` and it works perfectly fine. However, within `helloworld.py` I have several print statements that act as logs. For example, when a network request comes in or if a database value is updated. When I run `helloworld.py` like so: `python3 helloworld.py` all these print statements are visible and I can debug my application. However, when running `pm2 logs helloworld` none of these print statements show up.
2016/06/22
[ "https://Stackoverflow.com/questions/37959217", "https://Stackoverflow.com", "https://Stackoverflow.com/users/896112/" ]
This question is a few months old, so maybe you figured this out a while ago, but it was one of the top google hits when I was having the same problem so I thought I'd add what I found. Seems like it's an issue with how python buffers sys.stdout. In some platforms/instances, when called by say pm2 or nohup, the sys.stdout stream may not get flushed until the process exits. Passing the "-u" argument to the python interpreter stops it from buffering sys.stdout. In the process.json for pm2 I added "interpreter\_args": "-u" and I'm getting logs normally now.
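If you would rather not touch the pm2 interpreter arguments, the same effect can be had from inside the script by flushing stdout explicitly (or by setting the `PYTHONUNBUFFERED=1` environment variable). A small sketch, independent of pm2:

```
import sys

# Option 1: flush on each print call (Python 3.3+).
print("request received", flush=True)

# Option 2: write to stdout and flush the stream manually.
sys.stdout.write("database value updated\n")
sys.stdout.flush()
```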
Check the folder #HOME/.pm2/logs See for example the folder structure section here: <http://pm2.keymetrics.io/docs/usage/quick-start/> Also consider using a configuration file with an explicit logs folder that is relative to your scripts. (Note this folder must exist before pm2 can use it.) See <http://pm2.keymetrics.io/docs/usage/application-declaration/> ``` { "apps": [ { "script": "app/server.js", "log_date_format": "YYYY-MM-DD HH:mm Z", "error_file": "logs/server.web.error.log", "out_file": "logs/server.web.out.log", ... ``` Nice way to follow these log files is to run tail ``` tail -f logs/*.log ``` UPDATE: To be clear, using a configuration file works for python scripts. Just create a json configuration file that specifies your script and where you want the output to go. For example ``` { "apps": [ { "name": "Test Python", "script": "test.py", "out_file": "test.out.log", } ] } ``` Then run it pm2 start test.json Look for the process id in the results. Use this process ID to stop your process and to see where the log file is. E.g. pm2 show 3
15,098
41,247,600
For the following two dataframes: ``` df1 = pd.DataFrame({'name': pd.Series(["A", "B", "C"]), 'value': pd.Series([1., 2., 3.])}) name value 0 A 1.0 1 B 2.0 2 C 3.0 df2 = pd.DataFrame({'name': pd.Series(["A", "C", "D"]), 'value': pd.Series([1., 3., 5.])}) name value 0 A 1.0 1 C 3.0 2 D 5.0 ``` I would like to keep only the rows in `df2` where the value in the `name` column overlaps with a value in the `name` column of `df1`, i.e. produce the following dataframe: ``` name value 0 A 1.0 1 C 3.0 ``` I have tried a number of approaches but I am new to python and pandas and don't appreciate the syntax coming from R. Why does this line of code not work, and what would? ``` df2[df2["name"] in df1["name"]] ```
2016/12/20
[ "https://Stackoverflow.com/questions/41247600", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4424484/" ]
You can use [`isin`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html): ``` print (df2[df2["name"].isin(df1["name"])]) name value 0 A 1.0 1 C 3.0 ``` Another faster solution with [`numpy.intersect1d`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.intersect1d.html): ``` val = np.intersect1d(df2["name"], df1["name"]) print (val) ['A' 'C'] print (df2[df2.name.isin(val)]) name value 0 A 1.0 1 C 3.0 ```
A slightly different method that might be useful on your actual data: you could use an "inner join" (the intersection), a la SQL. This is more useful if your columns aren't duplicated in both data frames (e.g. merging two different data sets with some common key).

```
import pandas as pd

df1 = pd.DataFrame({'name': pd.Series(["A", "B", "C"]), 'value': pd.Series([1., 2., 3.])})
df2 = pd.DataFrame({'name': pd.Series(["A", "C", "D"]), 'value': pd.Series([1., 3., 5.])})

# supposedly for the join you should be able to tell join on='<column_name>', 'name' here,
# but wasn't working for me.
df1.set_index('name', inplace=True)
df2.set_index('name', inplace=True)
df1.join(df2, how='inner', rsuffix='_other')

#        value  value_other
# name
# A        1.0          1.0
# C        3.0          3.0
```

Changing `how` to `outer` would give you the union of the two, `left` for just `df1` rows, `right` for `df2`.
15,099
70,026,043
I'm trying to figure out why I'm getting this error message. I'm running webdriver.Chrome() with selenium in a Windows environment.

When I run:

```
driver.get("http://www.google.com")
```

or any URL for that matter, Python returns an HTML doc to the terminal saying:

```
Access denied, your system policy has denied access to the requested URL. 
For assistance contact your network support team 
```

I'm not sure what the issue is, so I need to figure out the error so I can send it to my IT. This is on a company managed device.
2021/11/18
[ "https://Stackoverflow.com/questions/70026043", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16909626/" ]
An alternative to using `upvar` is to use `set <var_name>` to retrieve the value of var\_name. When <var\_name> is `${mod}_sig`, then you can use `set` to retrieve the value of the variable without the possibility of altering the value of the original variable (like `upvar`) ``` set modules {moduleA moduleB moduleC} set moduleA_sig {1 2 3 4} set moduleB_sig {11 22 33 44} set moduleC_sig {111 222 333 444} foreach mod $modules { # Get the value of the variable named ${mod}_sig. puts "$mod: [set ${mod}_sig]" } ```
This is where you want the [`upvar`](https://www.tcl-lang.org/man/tcl8.6/TclCmd/upvar.htm) command to "alias" one variable to another. ``` set modules {moduleA moduleB moduleC} set moduleA_sig {1 2 3 4} set moduleB_sig {11 22 33 44} set moduleC_sig {111 222 333 444} foreach mod $modules { upvar 0 ${mod}_sig this puts [list $mod $this] } ``` outputs ```none moduleA {1 2 3 4} moduleB {11 22 33 44} moduleC {111 222 333 444} ``` Here, use level 0 to indicate the current stack frame. Any modifications you make to `$this` will be reflected in `$moduleX_sig`. ``` set x 42 upvar 0 x y incr y puts $x ;# => 43 ```
15,100
41,789,133
Tensorflow r0.12's documentation for tf.nn.rnn\_cell.LSTMCell describes this as the init:

```
tf.nn.rnn_cell.LSTMCell.__call__(inputs, state, scope=None)
```

where `state` is as follows:

> 
> state: if state\_is\_tuple is False, this must be a state Tensor, 2-D, batch x state\_size. If state\_is\_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c\_state and m\_state.
> 
> 

What are `c_state` and `m_state` and how do they fit into LSTMs? I cannot find any reference to them anywhere in the documentation.

[Here is a link to that page in the documentation.](https://web.archive.org/web/20170223030652/https://www.tensorflow.org/versions/r0.11/api_docs/python/rnn_cell/rnn_cells_for_use_with_tensorflow_s_core_rnn_methods)
2017/01/22
[ "https://Stackoverflow.com/questions/41789133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5299052/" ]
I've stumbled upon same question, here's how I understand it! Minimalistic LSTM example: ``` import tensorflow as tf sample_input = tf.constant([[1,2,3]],dtype=tf.float32) LSTM_CELL_SIZE = 2 lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_CELL_SIZE, state_is_tuple=True) state = (tf.zeros([1,LSTM_CELL_SIZE]),)*2 output, state_new = lstm_cell(sample_input, state) init_op = tf.global_variables_initializer() sess = tf.Session() sess.run(init_op) print sess.run(output) ``` Notice that `state_is_tuple=True` so when passing `state` to this `cell`, it needs to be in the `tuple` form. `c_state` and `m_state` are probably "Memory State" and "Cell State", though I honestly am NOT sure, as these terms are only mentioned in the docs. In the code and papers about `LSTM` - letters `h` and `c` are commonly used to denote "output value" and "cell state". <http://colah.github.io/posts/2015-08-Understanding-LSTMs/> Those tensors represent combined internal state of the cell, and should be passed together. Old way to do it was to simply concatenate them, and new way is to use tuples. OLD WAY: ``` lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_CELL_SIZE, state_is_tuple=False) state = tf.zeros([1,LSTM_CELL_SIZE*2]) output, state_new = lstm_cell(sample_input, state) ``` NEW WAY: ``` lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_CELL_SIZE, state_is_tuple=True) state = (tf.zeros([1,LSTM_CELL_SIZE]),)*2 output, state_new = lstm_cell(sample_input, state) ``` So, basically all we did, is changed `state` from being 1 tensor of length `4` into two tensors of length `2`. The content remained the same. `[0,0,0,0]` becomes `([0,0],[0,0])`. (This is supposed to make it faster)
Maybe this excerpt from the code will help ``` def __call__(self, inputs, state, scope=None): """Long short-term memory cell (LSTM).""" with vs.variable_scope(scope or type(self).__name__): # "BasicLSTMCell" # Parameters of gates are concatenated into one multiply for efficiency. if self._state_is_tuple: c, h = state else: c, h = array_ops.split(1, 2, state) concat = _linear([inputs, h], 4 * self._num_units, True) # i = input_gate, j = new_input, f = forget_gate, o = output_gate i, j, f, o = array_ops.split(1, 4, concat) new_c = (c * sigmoid(f + self._forget_bias) + sigmoid(i) * self._activation(j)) new_h = self._activation(new_c) * sigmoid(o) if self._state_is_tuple: new_state = LSTMStateTuple(new_c, new_h) else: new_state = array_ops.concat(1, [new_c, new_h]) return new_h, new_state ```
15,102
71,524,462
I'm coding a little tool that displays the key presses on the screen with Tkinter, useful for screen recording. **Is there a way to get a listener for all key presses of the system *globally* with Tkinter?** (for every keystroke including `F1`, `CTRL`, ..., even when the Tkinter window does not have the focus) I currently know a solution with `pyHook.HookManager()`, `pythoncom.PumpMessages()`, and also solutions from [Listen for a shortcut (like WIN+A) even if the Python script does not have the focus](https://stackoverflow.com/questions/59191177/listen-for-a-shortcut-like-wina-even-if-the-python-script-does-not-have-the-f) but is there a 100% `tkinter` solution? Indeed, [`pyhook`](https://pypi.org/project/pyHook/) is only for Python 2, and [`pyhook3`](https://github.com/gggfreak2003/PyHook3) seems to be abandoned, so I would prefer a **built-in Python3 / Tkinter solution** for Windows.
2022/03/18
[ "https://Stackoverflow.com/questions/71524462", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1422096/" ]
**Solution 1**: if you need to catch keyboard events in your current window, you can use: ``` from tkinter import * def key_press(event): key = event.char print(f"'{key}' is pressed") root = Tk() root.geometry('640x480') root.bind('<Key>', key_press) mainloop() ``` **Solution 2**: if you want to capture keys regardless of which window has focus, you can use [`keyboard`](https://github.com/boppreh/keyboard)
As suggested in [tkinter using two keys at the same time](https://stackoverflow.com/questions/39606019/tkinter-using-two-keys-at-the-same-time), you can detect all keys pressed at the same time with the following:

```py
from tkinter import Tk, StringVar, Label

history = []

def keyup(e):
    print(e.keycode)
    if e.keycode in history:
        history.pop(history.index(e.keycode))
        var.set(str(history))

def keydown(e):
    if e.keycode not in history:
        history.append(e.keycode)
        var.set(str(history))

root = Tk()
# var holds the list of currently pressed keycodes shown in the window
# (a StringVar displayed in a Label, as in the linked answer)
var = StringVar(root)
Label(root, textvariable=var).pack()
root.bind("<KeyPress>", keydown)
root.bind("<KeyRelease>", keyup)
root.mainloop()
```
15,108
56,600,918
When you call DataFrame.to\_numpy(), pandas will find the NumPy dtype that can hold all of the dtypes in the DataFrame. But how to perform the reverse operation? I have an 'numpy.ndarray' object 'pred'. It looks like this: > > [[0.00599913 0.00506044 0.00508315 ... 0.00540191 0.00542058 0.00542058]] > > > I am trying to do like this: ``` pred = np.uint8(pred) print("Model predict:\n", pred.T) ``` But I get: > > [[0 0 0 ... 0 0 0]] > > > Why, after the conversion, I do not get something like this: > > 0 0 0 0 0 0 ... 0 0 0 0 0 0 > > > And how to write the pred to a file? ``` pred.to_csv('pred.csv', header=None, index=False) pred = pd.read_csv('pred.csv', sep=',', header=None) ``` Gives an error message: ``` AttributeError Traceback (most recent call last) <ipython-input-68-b223b39b5db1> in <module>() ----> 1 pred.to_csv('pred.csv', header=None, index=False) 2 pred = pd.read_csv('pred.csv', sep=',', header=None) AttributeError: 'numpy.ndarray' object has no attribute 'to_csv' ``` Please help me figure this out.
2019/06/14
[ "https://Stackoverflow.com/questions/56600918", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11376406/" ]
`pred` is an `ndarray`. It does not have a `to_csv` method. That's something a `pandas` `DataFrame` has. But lets look at the first stuff. Copying your array display, adding commas, lets me make a list: ``` In [1]: alist = [[0.00599913, 0.00506044, 0.00508315, 0.00540191, 0.00542058, 0. ...: 00542058]] In [2]: alist Out[2]: [[0.00599913, 0.00506044, 0.00508315, 0.00540191, 0.00542058, 0.00542058]] ``` and make an array from that: ``` In [3]: arr = np.array(alist) In [8]: print(arr) [[0.00599913 0.00506044 0.00508315 0.00540191 0.00542058 0.00542058]] ``` or the `repr` display that `ipython` gives as the default: ``` In [4]: arr Out[4]: array([[0.00599913, 0.00506044, 0.00508315, 0.00540191, 0.00542058, 0.00542058]]) ``` Because of the double brackets, this is a 2d array. Its transpose will have shape (6,1). ``` In [5]: arr.shape Out[5]: (1, 6) ``` Conversion to `uint8` works as expected (I prefer the `astype` version). But ``` In [6]: np.uint8(arr) Out[6]: array([[0, 0, 0, 0, 0, 0]], dtype=uint8) In [7]: arr.astype('uint8') Out[7]: array([[0, 0, 0, 0, 0, 0]], dtype=uint8) ``` The converted shape is as before (1,6). The conversion is nearly meaningless. The values are all small between 1 and 0. Converting to small (1 byte) unsigned integers predictably produces all 0s.
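To actually get the array into and out of a CSV file, two common options are numpy's own text writer or wrapping the array in a `DataFrame` first. A short sketch along those lines, using a small made-up `pred`:

```
import numpy as np
import pandas as pd

pred = np.array([[0.00599913, 0.00506044, 0.00508315]])

# Option 1: numpy's own CSV writer works directly on the ndarray.
np.savetxt("pred.csv", pred.T, delimiter=",")

# Option 2: wrap it in a DataFrame to get the to_csv / read_csv round trip.
pd.DataFrame(pred.T).to_csv("pred.csv", header=False, index=False)
df = pd.read_csv("pred.csv", sep=",", header=None)
print(df)
```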
```py import numpy as np import pandas as pd x = [1,2,3,4,5,6,7] x = np.array(x) y = pd.Series(x) print(y) y.to_csv('a.csv') ```
15,109
56,651,258
I am trying to install the **owlready2** lib in Ubuntu by following the method below but I face a problem. * I updated the system and applications * Installed Python 3 and made it the working version (default) * Installed pip3 * Used pip and pip3 to install the owlready2 lib But I faced the below problem which seems to be a problem with the library package: > > error: can't copy './hermit/org/semanticweb/hermiT/hierarchy': > doesn't exist or not a regular file" > > > Command /usr/bin/python3 -c "import setuptools, > tokenize;**file**='/tmp/pip\_buil > d\_root/owlready2/setup.py';exec(compile(getattr(tokenize, 'open', > open)(**file** ).read().replace('\r\n', '\n'), **file**, 'exec'))" > install --record /tmp/pip-lq v533ik-record/install-record.txt > --single-version-externally-managed --compile f ailed with error code 1 in /tmp/pip\_build\_root/owlready2 Storing debug log for failure in > /home/ubuntu/.pip/pip.log > > > Does anyone have any ideas how to resolve this?
2019/06/18
[ "https://Stackoverflow.com/questions/56651258", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5851759/" ]
Try to install your package with the following command:

```
python3 -m pip install -I owlready2
```

If pip3 does not work, you can also install Owlready2 manually: download the sources, then run in a terminal:

```
cd /path/to/Owlready2
python setup.py build
python setup.py install # as root
```

Also, it would be a good idea to install pip3 and try to install your package with `pip3`; commands below:

```
curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
python3 get-pip.py --user
```
I encountered the same problem. It seems that the issue might lie in something that was added in version 0.14 (at the time of writing the newest version is 0.19). If the owlready2 version is newer than 0.13 then you will encounter the problem.

I have tested these Python versions - 3.7.3 (works), 3.6.8 (works), 3.5.2 (works until v0.13), 3.4.3 (works until v0.13).

To install version v0.13 of owlready2:

```
pip install owlready2==0.13
```
15,112
12,199,819
I'm a beginner programmer, and i've been trying to use the python markdown library in my web app. everything works fine, except the nl2br extension. When I tried to convert text file to html using md.convert(text), it doesn't see to convert newlines to `<br>`. for example, before I convert, the text is: ``` Puerto Rico =========== ------------------------------ ### Game Rules hello world! ``` after I convert, I get: ``` <h1>Puerto Rico</h1> <hr /> <h3>Game Rules</h3> <p>hello world!</p> ``` My understanding is that the blank spaces are represented by '\n' and should be converted to `<br>`, but I'm not getting that result. Here's my code: ``` import markdown md = markdown.Markdown(safe_mode='escape',extensions=['nl2br']) html = md.convert(text) ``` Please let me know if you have any idea or can point me in the right direction. Thank you.
2012/08/30
[ "https://Stackoverflow.com/questions/12199819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1623886/" ]
Try adding two or more white spaces at the end of a line to insert `<br/>` tags.

Example:

```
hello  
world
```

results in

```
<p>hello <br>
 world</p>
```

Notice that there are two spaces after the word hello. This only works if you have some text before the two spaces at the end of a line.

But this has nothing to do with your nl2br extension; this is standard Markdown. My advice is, if you don't explicitly have to do this conversion, just don't do it. Using paragraphs, i.e. `<p>` tags, is the cleaner way to separate text regions. If you simply want to have more space after your `<h3>` headlines then define some css for it:

```
h3 {
   margin-bottom: 4em;
}
```

Imagine if you do spacing with `<br>` tags after your headlines in all your 500 wiki pages and later you decide that it's 20px too much space. Then you have to edit all your pages by hand and remove two `<br>` tags on every page. Otherwise you just edit one line in a css file.
Found this question looking for a clarification myself. Hence adding an update despite being 7 years late. --- Reference thread on the Python Markdown Project: <https://github.com/Python-Markdown/markdown/issues/707> Turns out that this is indeed the expected behaviour, and hence, the `nl2br` extension converts only *single* newlines occurring *within* a block, **not** around it. Which means, ``` This is a block This is a different block ``` gets converted to ```html <p>This is<br/>block</p>\n<p>This is a different<br/>block</p> ``` but when you have **distinct, separate blocks**, ``` Puerto Rico =========== ------------------------------ ### Game Rules hello world! ``` all surrounding newlines are collapsed, and no `<br/>` tags are injected. ```html <h1>Puerto Rico</h1> <hr /> <h3>Game Rules</h3> <p>hello world!</p> ```
15,113
44,548,111
I am working on a **SaaS** solution currently provisioning **sonarqube and gerrit** applications on kubernetes. As part of that I want to create a new schema in my postgres database for every new application that I provision. The application connects using the following connection string (with instance1, instance2, instance3... and so on):

```
jdbc:postgresql://localhost/gerrit?user=instance1&password=instance1&currentSchema=instance1
```

The solution works fine for the first occurrence of provisioning gerrit and sonarqube, creating the associated tables in the new schema. However, it fails the second time with another new schema in the same database; these failures are most likely caused by the application trying to create the associated tables when they **already exist**.

I am creating the schemas with the following SQL.

```
create user instance1 with login password 'instance1';
CREATE SCHEMA instance1 AUTHORIZATION instance1;
ALTER ROLE instance1 SET search_path=instance1;

create user instance2 with login password 'instance2';
CREATE SCHEMA instance2 AUTHORIZATION instance2;
ALTER ROLE instance2 SET search_path=instance2;
```

I am having difficulty understanding this behavior: how could two separate applications configured against two different schemas of the same database see each other's tables?

In order to reproduce this problem I quickly wrote a python script to connect to two different schemas of the same database and create the same table, and it works fine.

```
import psycopg2
import sys
import random

_user = raw_input("user: ")
con = None

try:
    con = psycopg2.connect(database='gerrit', user=_user, password=_user, host='localhost')
    cur = con.cursor()
    cur.execute('SELECT version()')
    ver = cur.fetchone()
    print ver
    table_name = 'tbl_%d' %(1)#random.randint(1,100))
    cur.execute('CREATE TABLE %s (id serial, name varchar(32));' %(table_name))
    cur.execute('INSERT INTO %s values (1, \'%s\');' %(table_name, table_name+_user))
    con.commit()
    cur.execute('SELECT * from %s' %(table_name))
    ver = cur.fetchone()
    print ver

except psycopg2.DatabaseError, e:
    print 'Error %s' % e
    sys.exit(1)

finally:
    if con:
        con.close()
```

Output is as follows:

```
$ python pg_test_connect.py
user: instance1
(1, 'tbl_1instance1')
$ python pg_test_connect.py
user: instance2
(1, 'tbl_1instance2')
```

Since I am able to verify this workflow from Python, is this a limitation of **JDBC** or of the applications (Gerrit & SonarQube)? Has anyone come across this problem with Postgres?
2017/06/14
[ "https://Stackoverflow.com/questions/44548111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4189388/" ]
For those kinds of operations it would be better to use collections; the [removeAll()](https://docs.oracle.com/javase/8/docs/api/java/util/List.html#removeAll-java.util.Collection-) method will filter the data containers. From the doc:

> 
> Removes from this list all of its elements that are contained in the
> specified collection (optional operation).
> 
> 

```
List<Integer> myVarListA = Arrays.asList(26, 13, 88, 9);
List<Integer> myVarListB = Arrays.asList(26, 1, 8, 12);

List<Integer> myVarListAcomplementB = new ArrayList<>(myVarListA);
List<Integer> myVarListBcomplementA = new ArrayList<>(myVarListB);

myVarListAcomplementB.removeAll(myVarListB);
myVarListBcomplementA.removeAll(myVarListA);

System.out.println("elements in A but not in B: " + myVarListAcomplementB);
System.out.println("elements in B but not in A: " + myVarListBcomplementA);

myVarListBcomplementA.addAll(myVarListAcomplementB);
System.out.println("both together: " + myVarListBcomplementA);
```
You may try this: ``` a.removeAll(b); ```
15,114
70,545,797
I have this image for a treeline crop. I need to find the general direction in which the crop is aligned. I'm trying to get the Hough lines of the image, and then find the mode of distribution of angles.[![tree crop line](https://i.stack.imgur.com/8GWAX.jpg)](https://i.stack.imgur.com/8GWAX.jpg) I've been following [this tutorial](https://medium.com/@jamesthesken/crop-row-detection-using-python-and-opencv-93e456ddd974)on crop lines, however in that one, the crop lines are sparse. Here they are densely pack, and after grayscaling, blurring, and using canny edge detection, this is what i get ```py import cv2 import numpy as np import matplotlib.pyplot as plt img = cv2.imread('drive/MyDrive/tree/sample.jpg') gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) gauss = cv2.GaussianBlur(gray, (3,3), 3) plt.figure(figsize=(15,15)) plt.subplot(1,2,1) plt.imshow(gauss) gscale = cv2.Canny(gauss, 80, 140) plt.subplot(1,2,2) plt.imshow(gscale) plt.show() ``` (Left side blurred image without canny, left one preprocessed with canny) [![preprocessed image](https://i.stack.imgur.com/NvjOJ.png)](https://i.stack.imgur.com/NvjOJ.png) After that, I followed the tutorial and "skeletonized" the preprocessed image ```py size = np.size(gscale) skel = np.zeros(gscale.shape, np.uint8) ret, gscale = cv2.threshold(gscale, 128, 255,0) element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3,3)) done = False while not done: eroded = cv2.erode(gscale, element) temp = cv2.dilate(eroded, element) temp = cv2.subtract(gscale, temp) skel = cv2.bitwise_or(skel, temp) gscale = eroded.copy() zeros = size - cv2.countNonZero(gscale) if zeros==size: done = True ``` Giving me [![skeletonized](https://i.stack.imgur.com/uahSo.png)](https://i.stack.imgur.com/uahSo.png) As you can see, there are a bunch of curvy lines still. When using the HoughLines algorithm on it, there are 11k lines scattered everywhere ```py lines = cv2.HoughLinesP(skel,1,np.pi/180,130) a,b,c = lines.shape for i in range(a): rho = lines[i][0][0] theta = lines[i][0][1] a = np.cos(theta) b = np.sin(theta) x0 = a*rho y0 = b*rho x1 = int(x0 + 1000*(-b)) y1 = int(y0 + 1000*(a)) x2 = int(x0 - 1000*(-b)) y2 = int(y0 - 1000*(a)) cv2.line(img,(x1,y1),(x2,y2),(0,0,255),2, cv2.LINE_AA)#showing the results: plt.figure(figsize=(15,15)) plt.subplot(121)#OpenCV reads images as BGR, this corrects so it is displayed as RGB plt.plot() plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) plt.title('Row Detection') plt.xticks([]) plt.yticks([]) plt.subplot(122) plt.plot() plt.imshow(skel,cmap='gray') plt.title('Skeletal Image') plt.xticks([]) plt.yticks([]) plt.show() ``` [![hough lines](https://i.stack.imgur.com/qawm9.png)](https://i.stack.imgur.com/qawm9.png) I am a newbie when it comes to cv2, so I have 0 clue what to do. Searched and tried a bunch of stuff but none works. How can I remove the mildly big dots, and remove the squiggly lines?
2021/12/31
[ "https://Stackoverflow.com/questions/70545797", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15230180/" ]
You can use a **2D FFT** to find the general direction in which the crop is aligned (as proposed by mozway in the comments). The idea is that the general direction can be easily extracted from *centred beaming rays appearing in the magnitude spectrum* when the input contains many lines in the same direction. You can find more information about how it works in [this previous post](https://stackoverflow.com/questions/37382491/how-to-find-orientation-of-image-using-fourier-transform). It works directly with the input image, but it is better to apply the Gaussian + Canny filters. Here is the interesting part of the magnitude spectrum of the filtered gray image: [![magnitude spectrum](https://i.stack.imgur.com/hmBZ0.png)](https://i.stack.imgur.com/hmBZ0.png) The main beaming ray can be easily seen. You can extract its angle by iterating over many lines with an increasing angle and sum the magnitude values on each line as in the following figure: [![lines](https://i.stack.imgur.com/1azP7.png)](https://i.stack.imgur.com/1azP7.png) Here is the magnitude sum of each line plotted against the angle (in radian) of the line: [![sum plotted against the angle](https://i.stack.imgur.com/L6u96.png)](https://i.stack.imgur.com/L6u96.png) Based on that, you just need to find the angle that maximize the computed sum. Here is the resulting code: ```py def computeAngle(arr): # Naive inefficient algorithm n, m = arr.shape yCenter, xCenter = (n-1, m//2-1) lineLen = m//2-2 sMax = 0.0 bestAngle = np.nan for angle in np.arange(0, math.pi, math.pi/300): i = np.arange(lineLen) y, x = (np.sin(angle) * i + 0.5).astype(np.int_), (np.cos(angle) * i + 0.5).astype(np.int_) s = np.sum(arr[yCenter-y, xCenter+x]) if s > sMax: bestAngle = angle sMax = s return bestAngle # Load the image in gray img = cv2.imread('lines.jpg') gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Apply some filters gauss = cv2.GaussianBlur(gray, (3,3), 3) gscale = cv2.Canny(gauss, 80, 140) # Compute the 2D FFT of real values freqs = np.fft.rfft2(gscale) # Shift the frequencies (centering) and select the low frequencies upperPart = freqs[:freqs.shape[0]//4,:freqs.shape[1]//2] lowerPart = freqs[-freqs.shape[0]//4:,:freqs.shape[1]//2] filteredFreqs = np.vstack((lowerPart, upperPart)) # Compute the magnitude spectrum magnitude = np.log(np.abs(filteredFreqs)) # Correct the angle magnitude = np.rot90(magnitude).copy() # Find the major angle bestAngle = computeAngle(magnitude) ```
Just for completeness I would like to post the Sobel Gradient Angle way as well. General idea: 1. for every pixel, compute X and Y gradient (e.g. with Sobel) 2. Compute the angle from the X and Y gradient information and their distribution 3. find the modes e.g. with a histogram and select the highest one Written in C++ but probably easily convertable to python: ``` int main() { try { cv::Mat img = cv::imread("C:/data/StackOverflow/cropLines/lines.jpg", cv::IMREAD_GRAYSCALE); // tests with artificial lines: //img = cv::Mat::zeros(img.size(), CV_8UC1); //img = cv::Mat(img.size(), CV_8UC1, cv::Scalar::all(255)); //cv::line(img, cv::Point(0, img.rows), cv::Point(img.cols, 0), cv::Scalar::all(255), 10); // sample to check angles //cv::line(img, cv::Point(img.cols, img.rows), cv::Point(0, 0), cv::Scalar::all(255), 10); // sample to check angles //cv::line(img, cv::Point(img.cols, img.rows/2), cv::Point(0, img.rows/2), cv::Scalar::all(255), 10); // sample to check angles //cv::line(img, cv::Point(img.cols/2, img.rows), cv::Point(img.cols/2, 0), cv::Scalar::all(255), 10); // sample to check angles //cv::line(img, cv::Point(img.cols / 2, img.rows), cv::Point(img.cols / 2, 0), cv::Scalar::all(255), 10); // sample to check angles //cv::line(img, cv::Point(img.cols / 2, img.rows), cv::Point(img.cols / 2, 0), cv::Scalar::all(0), 10); // sample to check angles cv::imshow("img", img); cv::Mat gradX, gradY, mag, angle; cv::Sobel(img, gradX, CV_32F, 1, 0, 3); cv::Sobel(img, gradY, CV_32F, 0, 1, 3); cv::cartToPolar(gradX, gradY, mag, angle, true); cv::Mat magMask = mag > 0; // dont use pixels where angle is 0 just because there is no gradient. float scaleX = 3; float scaleY = 0.15; float maxValueY = 3000; cv::Mat chart = cv::Mat(maxValueY * scaleY, 360 * scaleX, CV_8UC3); cv::Point prev(-1, -1); double window = 5.0; // window size 1 is much more noisy but still works // this loop can probably be optimized with an optimized histogram compuation from any library for (int i = 0; i <= 360; i = i + 1) { double amount = cv::countNonZero((angle >= i-window/2 & angle < i + window/2) & (magMask)); std::cout << i << "°: " << amount << std::endl; cv::Point current(i*scaleX, chart.rows - amount*scaleY/window); if (prev.x >= 0) cv::line(chart, prev, current, cv::Scalar(0, 0, 255), 1); prev = current; } cv::line(chart, cv::Point(45 * scaleX, 0), cv::Point(45 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1); cv::line(chart, cv::Point(90 * scaleX, 0), cv::Point(90 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1); cv::line(chart, cv::Point(135 * scaleX, 0), cv::Point(135 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1); cv::line(chart, cv::Point(180 * scaleX, 0), cv::Point(180 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1); cv::line(chart, cv::Point(225 * scaleX, 0), cv::Point(225 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1); cv::line(chart, cv::Point(270 * scaleX, 0), cv::Point(270 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1); cv::line(chart, cv::Point(315 * scaleX, 0), cv::Point(315 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1); cv::line(chart, cv::Point(360 * scaleX, 0), cv::Point(360 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1); cv::imshow("chart", chart); cv::imwrite("C:/data/StackOverflow/cropLines/chart.png", chart); cv::imwrite("C:/data/StackOverflow/cropLines/input.png", img); cv::waitKey(0); } catch (std::exception& e) { std::cout << e.what() << std::endl; } } ``` Giving this result, where every blue line at the top of the image is 45°. 
The maximum is at 52° (and its multiples of 180°), where the gradient is rotated by 90° compared to the line direction. So the line direction in that image is 142° clockwise from the x axis or 38° counter-clockwise.

[![enter image description here](https://i.stack.imgur.com/ZAHoA.png)](https://i.stack.imgur.com/ZAHoA.png)
15,119
10,308,340
I have a list of rules for a given input file for my function. If any of them are violated in the file given, I want my program to return an error message and quit.

* Every gene in the file should be on the same chromosome

Thus, for lines such as:

```
NM_001003443 chr11 + 5997152 5927598 5921052 5926098 1 5928752,5925972, 5927204,5396098,
NM_001003444 chr11 + 5925152 5926098 5925152 5926098 2 5925152,5925652, 5925404,5926098,
NM_001003489 chr11 + 5925145 5926093 5925115 5926045 4 5925151,5925762, 5987404,5908098,
```

etc. Each line in the file will be a variation of this line. Thus, I want to make sure every line in the file is on chr11. Yet I may be given a file with a different list of chr (and any number of numbers). Thus I want to write a function that will make sure whatever number is found on chr in the line is the same for every line.

Should I use a regular expression for this, or what should I do? This is in Python, by the way. Such as: `chr\d+`? I am unsure how to make sure that whatever is matched is the same in every line though...

I currently have:

```
from re import *

for line in file:
    r = 'chr\d+'
    i = search(r, line)
    if i in line:
```

but I don't know how to make sure it is the same in every line...

**In reference to sajattack's answer**

```
fp = open(infile, 'r')
for line in fp:
    filestring = ''
    filestring += line

chrlist = search('chr\d+', filestring)
chrlist = chrlist.group()

for chr in chrlist:
    if chr != chrlist[0]:
        print('Every gene in file not on same chromosome')
```
2012/04/25
[ "https://Stackoverflow.com/questions/10308340", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1348509/" ]
Just read the file and have a while loop check each line to make sure it contains `chr11`. There are string functions to search for substrings in a string. As soon as you find a line that returns false (does not contain `chr11`) then break out of the loop and set a flag `valid = false`.

```
import re

fp = open(infile, 'r')
fp.readline()
tar = re.findall(r'chr\d+', fp.readline())[0]
for line in fp:
    if (line.find(tar) == -1):
        print("Not valid")
        break
```

This should search for a number in the line and check for validity.
Is it safe to assume that the first chr is the correct one? If so, use this:

```
import re

chrlist = re.findall("chr[0-9]+", open('file').read())
# ^ this is a list with all chr(whatever numbers)

for chr in chrlist:
    if chr != chrlist[0]:
        print("Chr does not match")
        break
```
15,120
71,186,021
I'm new to Python, so I appreciate any help. I'm trying to develop code that can search for a specific word in a CSV file, but I don't know why it doesn't recognize a word that I know is in the file. I'm always getting "Não encontrei". My code:

```
#Definir perfis
def pilar():
    pilar = input("Perfil do pilar:")
    csv_file=csv.reader(open(r"C:\Users\tomas\Documents\ISEP\5º Ano\TESE\PROGRAMA\PERFIS.csv"))
    for row in csv_file:
        if pilar in csv_file:
            print("Pilar: ", pilar)
        else:
            print("Não encontrei")

pilar()
```
2022/02/19
[ "https://Stackoverflow.com/questions/71186021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
An enum without associated values conforms implicitly to Equatable **and** Hashable. From the documentation: [Hashable](https://developer.apple.com/documentation/swift/hashable/) > > When you define an enumeration without associated values, it gains Hashable conformance automatically, and you can add Hashable conformance to your other custom types by implementing the `hash(into:)` method. > > > [Equatable](https://developer.apple.com/documentation/swift/equatable) > > An enum without associated values has Equatable conformance even without the declaration. > > > If you don't trust the documentation prove it: ``` enum TestEnum { case one, two, three, four, five } ``` Equatable: ``` TestEnum.one == TestEnum.one // true TestEnum.one == TestEnum.three // false ``` Hashable: ``` let dict : [TestEnum:Int] = [.one:1, .two:2] // compiles, a dictionary key must be hashable ``` And `hashValue` is discouraged in Swift anyway.
To chime in: If you *did* want to implement `==` yourself, the way to do it would be with pattern matching. Surprisingly (to me), the code below seems to call neither `~=` nor `==`. ``` enum TestEnum: Equatable { case one, two, three, four, five static func ==(lhs: TestEnum, rhs: TestEnum) -> Bool { switch (lhs, rhs) { case (.one, .one),(.two, .two), (.three, .three), (.four, .four), (.five, .five): return true default: return false } } } ``` That does kind of make sense. At bottom, you need the system to provide a way to compare these values. If one didn't exist all, it would be impossible for you to build one. By analogy, imagine how you might implement `==` for `Int`, if it wasn't already built-in. It would be impossible. I'll also add: it doesn't make any sense to implement `==` based off `hashValue`. `hashValue` speeds up search algorithms by allowing them to rapidly cull a massive part of their search space. For example, say you're looking for milk in the grocery store. A good hashValue for a grocery product might be the section its in. If you saw that your milk's hashValue was "dairy", then you know right away that there's no point searching the deli, meat, produce and frozen food isles. But once you're in front of the dairy isle, you start linearly searching through all the items there. This is where `==` comes in. It needs a more granular notion of equality than the heuristic that `hashValue` provides. Butter and milk are both dairy, but you need to be able to discern them apart.
15,123
66,396,659
I am using imbalanced dataset(54:38:7%) with RFECV for feature selection like this: ``` # making a multi logloss metric from sklearn.metrics import log_loss, make_scorer log_loss_rfe = make_scorer(score_func=log_loss, greater_is_better=False) # initiating Light GBM classifier lgb_rfe = LGBMClassifier(objective='multiclass', learning_rate=0.01, verbose=0, force_col_wise=True, random_state=100, n_estimators=5_000, n_jobs=7) # initiating RFECV rfe = RFECV(estimator=lgb_rfe, min_features_to_select=2, verbose=3, n_jobs=2, cv=3, scoring=log_loss_rfe) # fitting it rfe.fit(X=X_train, y=y_train) ``` And I got an error, presumably because the subsamples sklearn's RFECV has made doesn't have all of the classes from my data. I had no issues fitting the very same data outside of RFECV. Here's the complete error: ``` --------------------------------------------------------------------------- _RemoteTraceback Traceback (most recent call last) _RemoteTraceback: """ Traceback (most recent call last): File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/joblib/externals/loky/process_executor.py", line 431, in _process_worker r = call_item() File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/joblib/externals/loky/process_executor.py", line 285, in __call__ return self.fn(*self.args, **self.kwargs) File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 595, in __call__ return self.func(*args, **kwargs) File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/joblib/parallel.py", line 262, in __call__ return [func(*args, **kwargs) File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/joblib/parallel.py", line 262, in <listcomp> return [func(*args, **kwargs) File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/utils/fixes.py", line 222, in __call__ return self.function(*args, **kwargs) File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/feature_selection/_rfe.py", line 37, in _rfe_single_fit return rfe._fit( File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/feature_selection/_rfe.py", line 259, in _fit self.scores_.append(step_score(estimator, features)) File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/feature_selection/_rfe.py", line 39, in <lambda> lambda estimator, features: _score( File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/model_selection/_validation.py", line 674, in _score scores = scorer(estimator, X_test, y_test) File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/metrics/_scorer.py", line 199, in __call__ return self._score(partial(_cached_call, None), estimator, X, y_true, File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/metrics/_scorer.py", line 242, in _score return self._sign * self._score_func(y_true, y_pred, File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/utils/validation.py", line 63, in inner_f return f(*args, **kwargs) File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/metrics/_classification.py", line 2265, in log_loss raise ValueError("y_true and y_pred contain different number of " ValueError: y_true and y_pred contain different number of classes 3, 2. Please provide the true labels explicitly through the labels argument. 
Classes found in y_true: [0 1 2] """ The above exception was the direct cause of the following exception: ValueError Traceback (most recent call last) <ipython-input-9-5feb62a6f457> in <module> 1 rfe = RFECV(estimator=lgb_rfe, min_features_to_select=2, verbose=3, n_jobs=2, cv=3, scoring=log_loss_rfe) ----> 2 rfe.fit(X=X_train, y=y_train) ~/ds_jup_venv/lib/python3.8/site-packages/sklearn/feature_selection/_rfe.py in fit(self, X, y, groups) 603 func = delayed(_rfe_single_fit) 604 --> 605 scores = parallel( 606 func(rfe, self.estimator, X, y, train, test, scorer) 607 for train, test in cv.split(X, y, groups)) ~/ds_jup_venv/lib/python3.8/site-packages/joblib/parallel.py in __call__(self, iterable) 1052 1053 with self._backend.retrieval_context(): -> 1054 self.retrieve() 1055 # Make sure that we get a last message telling us we are done 1056 elapsed_time = time.time() - self._start_time ~/ds_jup_venv/lib/python3.8/site-packages/joblib/parallel.py in retrieve(self) 931 try: 932 if getattr(self._backend, 'supports_timeout', False): --> 933 self._output.extend(job.get(timeout=self.timeout)) 934 else: 935 self._output.extend(job.get()) ~/ds_jup_venv/lib/python3.8/site-packages/joblib/_parallel_backends.py in wrap_future_result(future, timeout) 540 AsyncResults.get from multiprocessing.""" 541 try: --> 542 return future.result(timeout=timeout) 543 except CfTimeoutError as e: 544 raise TimeoutError from e 1 frames /usr/lib/python3.8/concurrent/futures/_base.py in __get_result(self) 386 def __get_result(self): 387 if self._exception: --> 388 raise self._exception 389 else: 390 return self._result ValueError: y_true and y_pred contain different number of classes 3, 2. Please provide the true labels explicitly through the labels argument. Classes found in y_true: [0 1 2] ``` How to fix this to be able to select features recursively?
2021/02/27
[ "https://Stackoverflow.com/questions/66396659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11814996/" ]
Log-loss needs the probability predictions, not the class predictions, so you should add ```py log_loss_rfe = make_scorer(score_func=log_loss, needs_proba=True, greater_is_better=False) ``` The error is because without that, the passed `y_pred` is one-dimensional (classes 0,1,2) and `sklearn` [assumes it's a binary classification](https://github.com/scikit-learn/scikit-learn/blob/95119c13af77c76e150b753485c662b7c52a41a2/sklearn/metrics/_classification.py#L2254) problem and those predictions are probability of the positive class. To deal with that, it adds on the probability of the negative class, but then there are only two columns compared to your three classes.
Consider applying *stratified* cross-validation, which will try to preserve the fraction of samples for each class. Experiment with one of these scikit-learn cross-validators: [`sklearn.model_selection.StratifiedKFold`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html?highlight=stratified#sklearn.model_selection.StratifiedKFold), [`StratifiedShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html?highlight=stratified#sklearn.model_selection.StratifiedShuffleSplit), [`RepeatedStratifiedKFold`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RepeatedStratifiedKFold.html?highlight=stratified#sklearn.model_selection.RepeatedStratifiedKFold), replacing `cv=3` in your `RFECV` with the chosen cross-validator. **Edit** I have missed the fact that `StratifiedKFold` is the default cross-validator in `RFECV`. Actually, the error is related to `log_loss_rfe`, which was defined with `needs_proba=False`. Credit to @BenReiniger!
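For reference, the explicit-cross-validator variant suggested above would look roughly like this, keeping the question's other arguments unchanged (the split count and seed are arbitrary):

```
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

# same call as in the question, but with an explicit stratified splitter
rfe = RFECV(estimator=lgb_rfe, min_features_to_select=2, verbose=3, n_jobs=2,
            cv=StratifiedKFold(n_splits=3, shuffle=True, random_state=100),
            scoring=log_loss_rfe)
rfe.fit(X=X_train, y=y_train)
```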
15,124
73,251,418
I'd like to filter a `df` by date. But I would like all the values with any date before today's date (python). For example from the table below, I'd like the rows that have a date before today's date (i.e. row 1 to row 3). | ID | date | | --- | --- | | 1 | 2022-03-25 06:00:00 | | 2 | 2022-04-25 06:00:00 | | 3 | 2022-05-25 06:00:00 | | 4 | 2022-08-25 06:00:00 | Thanks
2022/08/05
[ "https://Stackoverflow.com/questions/73251418", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18028924/" ]
You can try this? ``` from datetime import datetime df[df['date'] < datetime.now()] ``` Output: ``` ID date 0 1 2022-03-25 06:00:00 1 2 2022-04-25 06:00:00 2 3 2022-05-25 06:00:00 ```
This will work

```
from datetime import datetime

#convert column to datetime object
df['date'] = pd.to_datetime(df['date'], infer_datetime_format=True, errors='coerce')

#filter column
df.loc[df['date'] < datetime.now()]
```
15,125
40,780,004
What's the result of returning `NotImplemented` from `__eq__` special method in python 3 (well 3.5 if it matters)? The documentation isn't clear; the [only relevant text I found](https://docs.python.org/3/library/constants.html#NotImplemented) only vaguely refers to "some other fallback": > > When `NotImplemented` is returned, the interpreter will then try the reflected operation on the other type, or some other fallback, depending on the operator. If all attempted operations return `NotImplemented`, the interpreter will raise an appropriate exception. See [Implementing the arithmetic operations](https://docs.python.org/3/library/numbers.html#implementing-the-arithmetic-operations) for more details. > > > Unfortunately, the "more details" link doesn't mention `__eq__` at all. My reading of this excerpt suggests that the code below should raise an "appropriate exception", but it does not: ``` class A: def __eq__(self, other): return NotImplemented class B: def __eq__(self, other): return NotImplemented # docs seems to say these lines should raise "an appropriate exception" # but no exception is raised a = A() b = B() a == b # evaluates as unequal a == a # evaluates as equal ``` From experimenting, I think that when `NotImplemented` is returned from `__eq__`, the interpreter behaves as if `__eq__` wasn't defined in the first place (specifically, it first swaps the arguments, and if that doesn't resolve the issue, it compares using the default `__eq__` that evaluates "equal" if the two objects have the same identity). If that's the case, where in the documentation can I find the confirmation of this behavior? Edit: see [Python issue 28785](http://bugs.python.org/issue28785)
2016/11/24
[ "https://Stackoverflow.com/questions/40780004", "https://Stackoverflow.com", "https://Stackoverflow.com/users/336527/" ]
Actually the `==` and `!=` check work identical to the ordering comparison operators (`<` and similar) except that they don't raise the **appropriate exception** but fall-back to identity comparison. That's the only difference. This can be easily seen in the [CPython source code (version 3.5.10)](https://github.com/python/cpython/blob/v3.5.10/Objects/object.c#L649-L699). I will include a Python version of that source code (at least as far as it's possible): ``` _mirrored_op = {'__eq__': '__eq__', # a == b => b == a '__ne__': '__ne__', # a != b => b != a '__lt__': '__gt__', # a < b => b > a '__le__': '__ge__', # a <= b => b >= a '__ge__': '__le__', # a >= b => b <= a '__gt__': '__lt__' # a > b => b < a } def richcmp(v, w, op): checked_reverse = 0 # If the second operand is a true subclass of the first one start with # a reversed operation. if type(v) != type(w) and issubclass(type(w), type(v)) and hasattr(w, op): checked_reverse = 1 res = getattr(w, _mirrored_op[op])(v) # reversed if res is not NotImplemented: return res # Always try the not-reversed operation if hasattr(v, op): res = getattr(v, op)(w) # normal if res is not NotImplemented: return res # If we haven't already tried the reversed operation try it now! if not checked_reverse and hasattr(w, op): res = getattr(w, _mirrored_op[op])(v) # reversed if res is not NotImplemented: return res # Raise exception for ordering comparisons but use object identity in # case we compare for equality or inequality if op == '__eq__': res = v is w elif op == '__ne__': res = v is not w else: raise TypeError('some error message') return res ``` and calling `a == b` then evaluates as `richcmp(a, b, '__eq__')`. The `if op == '__eq__'` is the special case that makes your `a == b` return `False` (because they aren't identical objects) and your `a == a` return `True` (because they are). However the behavior in Python 2.x was completely different. You could have up to 4 (or even 6, I don't remember exactly) comparisons before falling back to identity comparison!
Not sure where (or if) it is in the docs, but the basic behavior is: * try the operation: `__eq__(lhs, rhs)` * if result is not `NotImplemented` return it * else try the reflected operation: `__eq__(rhs, lhs)` * if result is not `NotImplemented` return it * otherwise use appropriate fall back: eq -> same objects? -> True, else False ne -> different objects? -> True, else False many others -> raise exception The reason that `eq` and `ne` do *not* raise exceptions is: * they can always be determined (apple == orange? no)
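To make that fallback concrete, here is a minimal sketch (Python 3 behaviour; the class is purely illustrative):

```
class A(object):
    def __eq__(self, other):
        return NotImplemented

a, b = A(), A()
print(a == a)   # True  -- equality falls back to identity
print(a == b)   # False -- both sides returned NotImplemented
print(a != b)   # True  -- inequality falls back to "is not"
try:
    a < b       # ordering has no identity fallback...
except TypeError as e:
    print(e)    # ...so Python 3 raises an appropriate exception here
```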
15,126
16,480,625
A variable `AA` is in `aaa.py`. I want to use this variable in my other python file `bbb.py` How do I access this variable?
2013/05/10
[ "https://Stackoverflow.com/questions/16480625", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2351602/" ]
You're looking for [modules!](http://docs.python.org/2/tutorial/modules.html) In `aaa.py`: ``` AA = 'Foo' ``` In `bbb.py`: ``` import aaa print aaa.AA # Or print(aaa.AA) for Python 3 # Prints Foo ``` Or this works as well: ``` from aaa import AA print AA # Prints Foo ```
You can import it; this will execute the whole script though. ``` from aaa import AA ```
15,127
54,849,211
I have two lists: ``` providers = ["a", "b", "c", "d", "e"] ips = ["100.12.23.34", "199.134.3.01", "123.143.2.34", "154.234.4.66"] ``` I want the output to look like: ``` [{'provider_name':'a', 'server':'100.12.23.34'},.....] ``` How do i do this in python using for loop?
2019/02/24
[ "https://Stackoverflow.com/questions/54849211", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11082580/" ]
Here is an easy to follow solution. For more reading on the zip method if necessary [click here](https://www.programiz.com/python-programming/methods/built-in/zip). ``` new = [] for i, j in zip(providers, ips): new.append({"provider_name": i, "server": j}) ```
Use: ``` >>> providers = ["a", "b", "c", "d", "e"] >>> ips = ["100.12.23.34", "199.134.3.01", "123.143.2.34", "154.234.4.66"] >>> [{'provider_name':x, 'server':y} for x,y in zip(providers,ips)] [{'provider_name': 'a', 'server': '100.12.23.34'}, {'provider_name': 'b', 'server': '199.134.3.01'}, {'provider_name': 'c', 'server': '123.143.2.34'}, {'provider_name': 'd', 'server': '154.234.4.66'}] >>> ```
15,132
56,022,332
I have been trying to upload a Pandas dataframe to a JSON object in Cloud Storage using Cloud Function. Follwing is my code - ``` def upload_blob(bucket_name, source_file_name, destination_blob_name): """Uploads a file to the bucket.""" storage_client = storage.Client() bucket = storage_client.get_bucket(bucket_name) blob = bucket.blob(destination_blob_name) blob.upload_from_file(source_file_name) print('File {} uploaded to {}.'.format( source_file_name, destination_blob_name)) final_file = pd.concat([df, df_second], axis=0) final_file.to_json('/tmp/abc.json') with open('/tmp/abc.json', 'r') as file_obj: upload_blob('test-bucket',file_obj,'abc.json') ``` I am getting the following error in line - blob.upload\_from\_file(source\_file\_name) ``` Deployment failure: Function failed on loading user code. Error message: Code in file main.py can't be loaded. Detailed stack trace: Traceback (most recent call last): File "/env/local/lib/python3.7/site- packages/google/cloud/functions/worker.py", line 305, in check_or_load_user_function _function_handler.load_user_function() File "/env/local/lib/python3.7/site- packages/google/cloud/functions/worker.py", line 184, in load_user_function spec.loader.exec_module(main) File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/user_code/main.py", line 6, in <module> import datalab.storage as gcs File "/env/local/lib/python3.7/site-packages/datalab/storage/__init__.py", line 16, in <module> from ._bucket import Bucket, Buckets File "/env/local/lib/python3.7/site-packages/datalab/storage/_bucket.py", line 21, in <module> import datalab.context File "/env/local/lib/python3.7/site-packages/datalab/context/__init__.py", line 15, in <module> from ._context import Context File "/env/local/lib/python3.7/site-packages/datalab/context/_context.py", line 20, in <module> from . import _project File "/env/local/lib/python3.7/site-packages/datalab/context/_project.py", line 18, in <module> import datalab.utils File "/env/local/lib/python3.7/site-packages/datalab/utils/__init__.py", line 15 from ._async import async, async_function, async_method ^ SyntaxError: invalid syntax ``` What possibly is the error?
2019/05/07
[ "https://Stackoverflow.com/questions/56022332", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4853331/" ]
You are passing a string to [blob.upload\_from\_file()](https://googleapis.github.io/google-cloud-python/latest/storage/blobs.html#google.cloud.storage.blob.Blob.upload_from_file), but this method requires a file object. You probably want to use [blob.upload\_from\_filename()](https://googleapis.github.io/google-cloud-python/latest/storage/blobs.html#google.cloud.storage.blob.Blob.upload_from_filename) instead. Check the sample in [the GCP docs](https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python). Alternatively, you could get the file object, and keep using blob.upload\_from\_file(), but it's unnecessary extra lines. ```py with open('/tmp/abc.json', 'r') as file_obj: upload_blob('test-bucket', file_obj, 'abc.json') ```
Use a bucket object instead of a string, something like `upload_blob(conn.get_bucket(mybucket), '/tmp/abc.json', 'abc.json')`
15,134
39,040,250
I have scraped a website with scrapy and stored the data in a json file. Link to the json file: <https://drive.google.com/file/d/0B6JCr_BzSFMHLURsTGdORmlPX0E/view?usp=sharing> But the json isn't standard json and gives errors: ``` >>> import json >>> with open("/root/code/itjuzi/itjuzi/investorinfo.json") as file: ... data = json.load(file) ... Traceback (most recent call last): File "<stdin>", line 2, in <module> File "/root/anaconda2/lib/python2.7/json/__init__.py", line 291, in load **kw) File "/root/anaconda2/lib/python2.7/json/__init__.py", line 339, in loads return _default_decoder.decode(s) File "/root/anaconda2/lib/python2.7/json/decoder.py", line 367, in decode raise ValueError(errmsg("Extra data", s, end, len(s))) ValueError: Extra data: line 3 column 2 - line 3697 column 2 (char 45 - 3661517) ``` Then I tried this: ``` with open('/root/code/itjuzi/itjuzi/investorinfo.json','rb') as f: data = f.readlines() data = map(lambda x: x.decode('unicode_escape'), data) >>> df = pd.DataFrame(data) Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'pd' is not defined >>> import pandas as pd >>> df = pd.DataFrame(data) >>> print pd <module 'pandas' from '/root/anaconda2/lib/python2.7/site-packages/pandas/__init__.pyc'> >>> print df [3697 rows x 1 columns] ``` Why does this only return 1 column? How can I standardize the json file and read it with pandas correctly?
2016/08/19
[ "https://Stackoverflow.com/questions/39040250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2672481/" ]
try this: ``` import json with open('data.json') as data_file: data = json.load(data_file) ``` This has the advantage of dealing well with large JSON files that do not fit in memory EDIT: Your data is not valid JSON. Delete the following in the first 3 lines and it will validate: ``` [{ "website": ["\u5341\u65b9\u521b\u6295"] }] ``` EDIT2[Since you need to access nested values from json]: You can now also access single values like this: ``` data["one"][0]["id"] # will return 'value' data["two"]["id"] # will return 'value' data["three"] # will return 'value' ```
Try the following code (you are missing something):

```
>>> import json
>>> with open("/root/code/itjuzi/itjuzi/investorinfo.json") as file:
...     data = json.loads(file.read())
```
15,135
5,852,199
I'm writing a web application (<http://www.checkio.org/>) which allows users to write python code. As one feedback metric among many, I'd like to enable profiling while running checks on this code. This is to allow users to get a very rough idea of the relative efficiency of various solutions. I need the profile to be (reasonably) deterministic. I don't want other load on the web server to give a bad efficiency reading. Also, I'm worried that some profilers won't give a good measurement because these short scripts run so quickly. The timeit module shows a function being run thousands of time, but I'd like to not waste server reasources on this small features if possible. It's not clear which (if any) of the standard profilers meet this need. Ideally the profiler would give units of "interpreter bytecode ticks" which would increment one per bytecode instruction. This would be a *very* rough measure, but meets the requirements of determinism and high-precision. Which profiling system should I use?
2011/05/01
[ "https://Stackoverflow.com/questions/5852199", "https://Stackoverflow.com", "https://Stackoverflow.com/users/146821/" ]
Python's standard profiler module provides [deterministic profiling](http://docs.python.org/library/profile.html#what-is-deterministic-profiling).
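For example, a minimal run with `cProfile` — the profiled expression here is just a stand-in for the user's script:

```
import cProfile

# call counts are deterministic; the timings themselves still depend on the machine
cProfile.run('sum(i * i for i in range(1000))', sort='calls')
```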
I also suggest giving a try to yappi. (http://code.google.com/p/yappi/) In v0.62, it supports CPU time profiling and you can stop the profiler at any time you want...
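A rough sketch of typical yappi usage — the exact function names depend on the yappi version you install, so treat this as an assumption to verify against its docs:

```
import yappi

def work():
    return sum(i * i for i in range(1000))

yappi.set_clock_type("cpu")   # profile CPU time rather than wall time
yappi.start()
work()
yappi.stop()
yappi.get_func_stats().print_all()
```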
15,136
45,363,629
Let's say I have a list l1 = [a,b,c,d,e] and I want to map it to a dictionary that would contain the following {a:1, b:2, c:3, d:4, e:5} I know how to do it in a very naive way, but I would like something more 'pythonic' The naive way: ``` dic = {} j = 1 for i in list1: dic[i] = j j += 1 ```
2017/07/28
[ "https://Stackoverflow.com/questions/45363629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4369996/" ]
How about using a dictionary comprehension: ``` >>> {v: k for k, v in enumerate(l1, 1)} {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5} ```
Just to make up for the flub earlier... You can use the `dict` type constructor with `itertools.count` and `zip`: ``` >>> L1 = ['a','b','c','d'] >>> from itertools import count >>> dict(zip(L1, count(1))) {'c': 3, 'b': 2, 'a': 1, 'd': 4} ```
15,137
65,056,382
I created a script that runs perfectly fine in visual studio code but I'm now trying to automate the script, which is proving to be a little tricky. I've turned the file into a Unix executable file for the automation but when I click on my script, the code that I’ve implemented doesn’t do what I want it to. I’ve got a line of code that changes the name of all of the .csv files within a directory. This is the particular line of code that I was talking about… ``` spath='/Users/emmanuel/Documents/Selenium/' sourcefiles = os.listdir(spath) for file in sourcefiles: if file.endswith('.csv'): os.rename(os.path.join(spath,file), "CC.csv") ``` The name that I want to change the .csv file to is CC.csv and I use this coding method as the name for the .csv file has a different name every-time that it's downloaded. The issue I'm having now is that, when the script reaches this part of the code, the .csv file just disappears from the directory and is moved from the source directory but I want to prevent that from happening. Reason being that I need the file to stay in there in order to be further manipulated by the pandas module. Is there a way to rename all of the .cvs files in a directory and keep them in the source directory after being renamed? I’ve run this code in VSC and it runs perfectly fine but when I run it as a unit executable file I get this error… ``` Traceback (most recent call last): File "/Users/emmanuel/Documents/Selenium/OptionsBotCode.py", line 120, in <module> CC = pd.read_csv('/Users/emmanuel/Documents/Selenium/CC.csv') File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/io/parsers.py", line 688, in read_csv return _read(filepath_or_buffer, kwds) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/io/parsers.py", line 454, in _read parser = TextFileReader(fp_or_buf, **kwds) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/io/parsers.py", line 948, in __init__ self._make_engine(self.engine) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/io/parsers.py", line 1180, in _make_engine self._engine = CParserWrapper(self.f, **self.options) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/io/parsers.py", line 2010, in __init__ self._reader = parsers.TextReader(src, **kwds) File "pandas/_libs/parsers.pyx", line 382, in pandas._libs.parsers.TextReader.__cinit__ File "pandas/_libs/parsers.pyx", line 674, in pandas._libs.parsers.TextReader._setup_parser_source FileNotFoundError: [Errno 2] No such file or directory: '/Users/emmanuel/Documents/Selenium/CC.csv' Saving session... ...copying shared history... ...saving history...truncating history files... ...completed. ``` I’ve already tried using ``` print(os.getcwd()) ``` And I’ve been able to find where the file supposedly travels to but when I look in the directory the file doesn’t appear to be there. To put things into context, I need the ‘pandas’ module to access the file (in order to manipulate it), but I need to either change the file name according to the file type (.csv) or have pandas access the file type as opposed to the file name. This is because, every-time this .csv file is downloaded, it has a different name, and in order to use the ``` pandas.read_csv ``` function, the bot would need the file to have the same name every time so it could repeat the process. 
I won’t have any issue with changing multiple file names because, at the start of my script, there is a piece of code that clears out all existing .csv files in the directory so it will be only one file that is targeted. I now understand that rename does more than just rename. If the source and destination are both in the same folder, it will move the renamed file into a new directory. However, I want to keep the file in the same directory
2020/11/29
[ "https://Stackoverflow.com/questions/65056382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You just need to escape the backslash `\`, so it turns into two backslashes `\\` ```js console.log(JSON.parse('{"x":"Hello \\" test"}')) ```
```js let mydata = `{"x":"Hello \" test "}` let escapeJsonFunc = function(str) { return str.replace(/\\/g,'\\'); }; console.log( escapeJsonFunc(mydata) ) ```
15,138
57,728,801
I have a large (around 200Mb) single-line json file and I want to convert this to a more readable multi-line json (or txt) file. I tried to open the file with text editors like sublime text and it takes forever to open. So, I would like to make the conversion without opening the file. Therefore, I cannot use the interface suggested in [this](https://stackoverflow.com/questions/17003470/how-do-i-convert-a-single-line-json-file-into-a-human-readable-multi-line-file) SO question. I tried to `pretty-print` the json file as suggested in [this](https://stackoverflow.com/a/46301431/8889727) answer by doing the following. ``` cat myjsonfile.json | python -m json.tool > pretty.json ``` But the terminal prints the following message and I get an empty `pretty.json` file. ``` Extra data: line 1 column 34255 - line 1 column 173769197 (char 34254 - 173769196) ``` I'm thinking of installing visual basic, just to convert the file. But is there a better and efficient way to do the conversion?
2019/08/30
[ "https://Stackoverflow.com/questions/57728801", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8889727/" ]
The simplest method would be using `jq` to pretty print the json: ``` jq . myjsonfile.json > pretty.json ``` But from the python output, I suspect the json file may be ill-formed.
If you can identify a character sequence that ends a line (e.g. a curly bracket followed by a semicolon), you can use sed for it:

```
$ sed 's/};/\n/g' <<< "my};test};string"
my
test
string
```
15,139
17,507,799
I've built a little app engine app that lets users upload short recordings. Some of the recordings are done in-browser with <https://github.com/mattdiamond/Recorderjs>, which creates wav files. To save space, I'd like to convert those to ogg before writing them to the app engine datastore, so that I use less of my outgoing bandwidth when I play the audio recordings back to users. How can I do this? I googled around, and apparently there's a command line tool called oggenc that encodes to ogg -- but I'm pretty sure I can't install that (or, even if I could install it, make calls to it) on app engine. I found a similar question at [Encode audio from getUserMedia() to a .OGG in JavaScript](https://stackoverflow.com/questions/16821919/encode-audio-from-getusermedia-to-a-ogg-in-javascript) -- this links to <https://github.com/jpemartins/speex.js>, a project that looks like it might eventually be able to convert from wav to ogg in javascript (which would be great), but, as far as I can tell, does not do so at the moment. At <https://github.com/jpemartins/speex.js/issues/4> the authors mentions that WAV -> ... -> OGG is not yet possible. What else should I try? Edit: My app engine code is written in Python, so another possibility would be to do the conversion there, with a python module that can convert wav to ogg. I think <http://pymedia.org/> can do this, but I'd have to somehow install it on app engine -- is that possible?
2013/07/06
[ "https://Stackoverflow.com/questions/17507799", "https://Stackoverflow.com", "https://Stackoverflow.com/users/610668/" ]
Pymedia isn't pure python so you won't be able to use it on app engine. You probably want to build something on Compute Engine to do this.
Provided it's possible to replace Matt Diamond's recorderjs with its fork, [chris-rudmin/Recorderjs](https://github.com/chris-rudmin/Recorderjs) ([demo page](https://rawgit.com/chris-rudmin/Recorderjs/master/example.html)) in AppEngine, this should be feasible. Or first encode to WAV and use [opusenc.js](https://github.com/Rillke/opusenc.js) ([demo page](https://blog.rillke.com/opusenc.js/)), which is an Emscripten port of the Opusenc tool, to convert a temporary WAV file to Ogg-Opus client side.
15,142
20,423,599
If I write the following in python, I get a syntax error, why so? ``` a = 1 b = (a+=1) ``` I am using python version 2.7 what I get when I run it, the following: ``` >>> a = 1 >>> b = (a +=1) File "<stdin>", line 1 b = (a +=1) ^ SyntaxError: invalid syntax >>> ```
2013/12/06
[ "https://Stackoverflow.com/questions/20423599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1932405/" ]
Unlike in some other languages, assignment (including augmented assignment, like `+=`) in Python is *not* an expression. This also affects things like this: ``` (a=1) > 2 ``` which is legal in C, and several other languages. The reason generally given for this is because it helps to prevent a class of bugs like this: ``` if a = 1: # instead of == pass else: pass ``` since assignment isn't an expression, this is a SyntaxError in Python. In the equivalent C code, it is a subtle bug where the variable will be *modified* rather than checked, the check will *always* be true (in C, like in Python, a non-zero integer is always truthy), and the else block can never fire. You *can* still do chained assignment in Python, so this works: ``` >>> a = 1 >>> a = b = a+1 >>> a 2 >>> b 2 ```
`a +=1` is a statement in Python and you can't assign a statement to a variable. Though it is a valid syntax in languages like C, PHP, etc but not Python. ``` b = (a+=1) ``` An equivalent version will be: ``` >>> a = 1 >>> a += 1 >>> b = a ```
15,143
1,423,214
I have this python code for opening a .cfg file, writing to it and saving it: ``` import ConfigParser def get_lock_file(): cf = ConfigParser.ConfigParser() cf.read("svn.lock") return cf def save_lock_file(configurationParser): cf = configurationParser config_file = open('svn.lock', 'w') cf.write(config_file) config_file.close() ``` Does this seem normal or am I missing something about how to open-write-save files? Is there a more standard way to read and write config files? I ask because I have two methods that seem to do the same thing, they get the config file handle ('cf') call cf.set('blah', 'foo' bar) then use the save\_lock\_file(cf) call above. For one method it works and for the other method the write never takes place, unsure why at this point. ``` def used_like_this(): cf = get_lock_file() cf.set('some_prop_section', 'some_prop', 'some_value') save_lock_file(cf) ```
2009/09/14
[ "https://Stackoverflow.com/questions/1423214", "https://Stackoverflow.com", "https://Stackoverflow.com/users/128508/" ]
Just to note that configuration file handling is simpler with ConfigObj. To read and then write a config file: ``` from configobj import ConfigObj config = ConfigObj(filename) value = config['entry'] config['entry'] = newvalue config.write() ```
Looks good to me. If both places call `get_lock_file`, then `cf.set(...)`, and then `save_lock_file`, and no exceptions are raised, this should work. If you have different threads or processes accessing the same file you could have a race condition: 1. thread/process A reads the file 2. thread/process B reads the file 3. thread/process A updates the file 4. thread/process B updates the file Now the file only contains B's updates, not A's. Also, for safe file writing, don't forget the `with` statement (Python 2.5 and up), it'll save you a try/finally (which you should be using if you're not using `with`). From `ConfigParser`'s docs: ``` with open('example.cfg', 'wb') as configfile: config.write(configfile) ```
15,148
64,436,875
I have a Python package project 'webapi' and I want to set it up in a way so that other people can "pip install webapi". I want to put it on a private server with a specific IP: xx.xx.xx.xx, so other people with access rights don't need to git clone the project and install it locally into their virtual environment. Instead, they can simply do:

```
pip install webapi
```

and start to use it, just as they use other public Python libraries. How can I get started with this? Is there some tutorial that would help? I tried a few keywords to search for instructions, but haven't found anything useful.
2020/10/20
[ "https://Stackoverflow.com/questions/64436875", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3943868/" ]
I guess the question is a bit unclear: if you want to upload your webapi package to PyPI, you can read [this article](https://medium.com/@joel.barmettler/how-to-upload-your-python-package-to-pypi-65edc5fe9c56). But this will make your package public, and I'm not quite sure that's what you want.

If what you want is a private PyPI server, then check out the package [private-pypi](https://pypi.org/project/private-pypi/). There's a whole description of how to set up your server and use it in the documentation.
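Once a private index like that is running, consumers would point pip at it; a rough example, where the port and `/simple/` path are assumptions that depend on how the server is configured:

```
pip install --index-url http://xx.xx.xx.xx:8080/simple/ webapi
```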
It seems like you need a software repository management application such as [Pulp](https://pulpproject.org/). Take a look at [their plugin section](https://pulpproject.org/content-plugins/); their documentation is [here](https://pulp-python.readthedocs.io/en/latest/). I use it as a private Python repository for systems that for security reasons are isolated from the internet, but it can also host your own custom Python packages.
15,150
2,726,171
I am trying to change the font size using python's ImageDraw library. You can do something like [this](http://infohost.nmt.edu/tcc/help/pubs/pil/image-font.html): ``` fontPath = "/usr/share/fonts/dejavu-lgc/DejaVuLGCSansCondensed-Bold.ttf" sans16 = ImageFont.truetype ( fontPath, 16 ) im = Image.new ( "RGB", (200,50), "#ddd" ) draw = ImageDraw.Draw ( im ) draw.text ( (10,10), "Run awayyyy!", font=sans16, fill="red" ) ``` The problem is that I don't want to specify a font. I want to use the default font and just change the size of the font. This seems to me that it *should* be simple, but I can't find documentation on how to do this.
2010/04/28
[ "https://Stackoverflow.com/questions/2726171", "https://Stackoverflow.com", "https://Stackoverflow.com/users/889/" ]
Per [PIL's docs](http://www.pythonware.com/library/pil/handbook/imagedraw.htm), `ImageDraw`'s default font is a bitmap font, and therefore it cannot be scaled. For scaling, you need to select a true-type font. I hope it's not difficult to find a nice truetype font that "looks kinda like" the default font in your desired font-size!
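For example, a small sketch reusing the font path from the question — point it at whatever .ttf actually exists on your system:

```
from PIL import Image, ImageDraw, ImageFont

fontPath = "/usr/share/fonts/dejavu-lgc/DejaVuLGCSansCondensed-Bold.ttf"  # adjust to your system
sans32 = ImageFont.truetype(fontPath, 32)   # same face, larger size

im = Image.new("RGB", (300, 60), "#ddd")
draw = ImageDraw.Draw(im)
draw.text((10, 10), "Run awayyyy!", font=sans32, fill="red")
im.save("out.png")
```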
Just do this

```
import sys

from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont

def image_char(char,image_size, font_size):
    img = Image.new("RGB", (image_size, image_size), (255,255,255))
    print img.getpixel((0,0))
    draw = ImageDraw.Draw(img)
    font_path = "/Users/admin/Library/Fonts/InputSans-Regular.ttf"
    font = ImageFont.truetype(font_path, font_size)
    draw.text((5, 5), char, (0,0,0),font=font)
    img.show()

def main():
    image_char("A",36,16)

if __name__ == '__main__':
    sys.exit(main())
```
15,152
24,701,171
I want to stream an "infinite" (i.e. continuous) amount of data using HTTP Post. Basically, I want to send the POST request header and then stream the content (where the content length is unknown). I looked through <http://docs.python-requests.org/en/latest/user/advanced/> and it seems to have the facility. The one question I have is it says in the document " To stream and upload, simply provide a file-like object for your body". What does "file-like" object mean? The data I wish to stream comes from a sensor. How do I implement a "file-like" object which will read data from the sensor and pass it to the caller? Sorry about my ignorance here but I am feeling my way through python (i.e. learning as I go along. hmm.. looks like a snake. It feels slithery. Trying to avoid the business end of the critter... :-) ). Thank you in advance for your help. Ranga.
2014/07/11
[ "https://Stackoverflow.com/questions/24701171", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2932861/" ]
Just wanted to give you an answer so you could close your question: It sounds like what you're really looking for is python websockets. Internally, you make a HTTP request to upgrade the connection to a websocket, and after the handshake you are free to stream data both ways. Python makes this easy, [for example](http://www.kennethreitz.org/essays/introducing-flask-sockets): ``` from flask import Flask from flask_sockets import Sockets app = Flask(__name__) sockets = Sockets(app) @sockets.route('/echo') def echo_socket(ws): while True: message = ws.receive() ws.send(message) @app.route('/') def hello(): return 'Hello World!' ``` Websockets do support full duplex communication, but you seem only interested in the server-to-client part. In that case you can just stream data using `ws.send()`. I'm not sure if this is what you're looking for, but it should provide a solution.
A file-like object is an object with a "read" method that accepts a size and returns a binary data buffer for the next chunk of data.

One example that looks like that is, indeed, the file object, if you want to read from the filesystem. Another common case is the [StringIO](https://docs.python.org/2/library/stringio.html) class, which reads and writes to a buffer.

In your case, you would need to implement a "file-like object" by yourself, which would simply read from the sensor.

```
class Sensor(object):
    def __init__(self, sensor_thing):
        self.sensor_thing = sensor_thing

    def read(self, size):
        return self.convert_to_binary(self.sensor_thing.read_from_sensor())

    def convert_to_binary(self, sensor_data):
        ....
```
15,155
36,270,161
I want to get every value of 'Lemma' in this json: ``` {'sentences': [{'indexeddependencies': [], 'words': [ ['Cinnamomum', {'CharacterOffsetBegin': '0', 'CharacterOffsetEnd': '10', 'Lemma': 'Cinnamomum', 'PartOfSpeech': 'NNP', 'NamedEntityTag': 'O'}], ['.', {'CharacterOffsetBegin': '14', 'CharacterOffsetEnd': '15', 'Lemma': '.', 'PartOfSpeech': '.', 'NamedEntityTag': 'O'}] ], 'parsetree': [], 'text': 'Cinnamomum.', 'dependencies': [] }, {'indexeddependencies': [], 'words': [ ['specific', {'CharacterOffsetBegin': '16', 'CharacterOffsetEnd': '24', 'Lemma': 'specific', 'PartOfSpeech': 'JJ', 'NamedEntityTag': 'O'}], ['immunoglobulin', {'CharacterOffsetBegin': '25', 'CharacterOffsetEnd': '39', 'Lemma': 'immunoglobulin', 'PartOfSpeech': 'NN', 'NamedEntityTag': 'O'}], ['measurement', {'CharacterOffsetBegin': '51', 'CharacterOffsetEnd': '62', 'Lemma': 'measurement', 'PartOfSpeech': 'NN', 'NamedEntityTag': 'O'}] ], 'parsetree': [], 'text': 'specific immunoglobulin measurement', 'dependencies': [] }] } ``` How can I get every value using python? There are five Lemma keys but I can't get all of them. I've tried this, but it doesn't work: ``` for i in range(len(words)): #in this case the range of i would be 5 lemma = result["sentences"][0]["words"][i][1]["Lemma"] ```
2016/03/28
[ "https://Stackoverflow.com/questions/36270161", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6125957/" ]
I'm not sure why do you have this data structure - assuming you cannot change/reshape it to better suit your queries and use cases and that `Lemma` key would always be present: ``` >>> [word[1]['Lemma'] for sentence in data['sentences'] for word in sentence['words']] ['Cinnamomum', '.', 'specific', 'immunoglobulin', 'measurement'] ```
this simple code traverses everything and finds all Lemma values (btw. your json should have " instead of ' as string quotes, I guess: ``` import json with open('lemma.json') as f: data = json.load(f) def traverse(node): for key in node: if isinstance(node, list): traverse(key) elif isinstance(node, dict): if key == 'Lemma': print key, node[key] continue traverse(node[key]) traverse(data) ```
15,156
48,409,243
I tried to install the google cloud module on Ubuntu 16.04 for Python 3, but it shows `permission error 13`. This error is shown many times during installations for my Python environment:

`PermissionError: [Errno 13] Permission denied: /usr/lib/python3/dist-packages/httplib2-0.9.1.egg-info`
2018/01/23
[ "https://Stackoverflow.com/questions/48409243", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9177827/" ]
These are documented well enough in the common literature: [location](https://en.wikipedia.org/wiki/Location_parameter), [mu](https://en.wikipedia.org/wiki/Poisson_distribution), and the page you cited -- "well enough" is assuming that you're familiar enough with the field's vocabulary to work your way through the technical docs. * `loc` is the N-dimensional reference point of the distribution, that centroid being chosen appropriately to the function. For this application, it's simply the left end of the desired distribution (scalar). This defaults to 0, and is only changed if your application starts at something other than 0. * `mu` is the mean of the function. * `size` is the sample size. The Poisson distribution has only the one shape parameter: mu. The variance, mean, and frequency are lock-stepped to each other.
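As a tiny illustration of those parameters with SciPy (the numbers are chosen arbitrarily):

```
from scipy.stats import poisson

# 10 draws from a Poisson with mean 3; loc=0 keeps the support starting at 0
sample = poisson.rvs(mu=3, loc=0, size=10)
print(sample)
```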
UHXW is asking what these arguments mean in simple terms. Prune's answer could be simplified.

The loc is like the lowest x value of your distribution; the mu is like the middle of your distribution. Look at <https://www.datacamp.com/community/tutorials/probability-distributions-python>

The uniform function generates a uniform continuous variable over the specified interval via its loc and scale arguments. This distribution is constant between loc and loc + scale. The size argument describes the number of random variates. If you want to maintain reproducibility, include a random_state argument assigned to a number.
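A small sketch of the uniform case described above (the numbers are arbitrary):

```
from scipy.stats import uniform

# constant density between loc and loc + scale, i.e. on the interval [2, 7)
sample = uniform.rvs(loc=2, scale=5, size=4, random_state=42)
print(sample)
```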
15,160
18,478,287
The regular way of JSON-serializing custom non-serializable objects is to subclass `json.JSONEncoder` and then pass a custom encoder to `json.dumps()`. It usually looks like this: ``` class CustomEncoder(json.JSONEncoder): def default(self, obj): if isinstance(obj, Foo): return obj.to_json() return json.JSONEncoder.default(self, obj) print(json.dumps(obj, cls=CustomEncoder)) ``` What I'm trying to do, is to make something serializable with the default encoder. I looked around but couldn't find anything. My thought is that there would be some field in which the encoder looks at to determine the json encoding. Something similar to `__str__`. Perhaps a `__json__` field. Is there something like this in python? I want to make one class of a module I'm making to be JSON serializable to everyone that uses the package without them worrying about implementing their own [trivial] custom encoders.
2013/08/28
[ "https://Stackoverflow.com/questions/18478287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/845169/" ]
I suggest putting the hack into the class definition. This way, once the class is defined, it supports JSON. Example: ``` import json class MyClass( object ): def _jsonSupport( *args ): def default( self, xObject ): return { 'type': 'MyClass', 'name': xObject.name() } def objectHook( obj ): if 'type' not in obj: return obj if obj[ 'type' ] != 'MyClass': return obj return MyClass( obj[ 'name' ] ) json.JSONEncoder.default = default json._default_decoder = json.JSONDecoder( object_hook = objectHook ) _jsonSupport() def __init__( self, name ): self._name = name def name( self ): return self._name def __repr__( self ): return '<MyClass(name=%s)>' % self._name myObject = MyClass( 'Magneto' ) jsonString = json.dumps( [ myObject, 'some', { 'other': 'objects' } ] ) print "json representation:", jsonString decoded = json.loads( jsonString ) print "after decoding, our object is the first in the list", decoded[ 0 ] ```
For a production environment, prepare your own module wrapping `json` with your own custom encoder, to make it clear that you override something. Monkey-patching is not recommended, but you can monkey-patch in your test env.

For example,

```
class JSONDatetimeAndPhonesEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, (datetime.date, datetime.datetime)):
            return obj.date().isoformat()
        elif isinstance(obj, basestring):
            try:
                number = phonenumbers.parse(obj)
            except phonenumbers.NumberParseException:
                return json.JSONEncoder.default(self, obj)
            else:
                return phonenumbers.format_number(number, phonenumbers.PhoneNumberFormat.NATIONAL)
        else:
            return json.JSONEncoder.default(self, obj)
```

you want:

payload = json.dumps(your\_data, cls=JSONDatetimeAndPhonesEncoder)

or:

payload = your\_dumps(your\_data)

or:

payload = your\_json.dumps(your\_data)

However, in a testing environment, go ahead:

```
@pytest.fixture(scope='session', autouse=True)
def testenv_monkey_patching():
    json._default_encoder = JSONDatetimeAndPhonesEncoder()
```

which will apply your encoder to all `json.dumps` occurrences.
15,161
55,200,708
I am getting started on using Zappa. However, I already had installed python 3.7 on my computer while Zappa uses 3.6. I installed python 3.6.8, but when I try to use zappa in the cmd (zappa init) it uses python 3.7 by default. How can I direct zappa to use 3.6 instead?
2019/03/16
[ "https://Stackoverflow.com/questions/55200708", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11116762/" ]
As mentioned in Zappa [README](https://github.com/Miserlou/Zappa#installation-and-configuration): > > Please note that Zappa must be installed into your project's virtual environment. > > > You should use something like `virtualenv` to create a virtual environment, which makes it easy to switch Python version. If you use virtualenv, you can try create an environment by: ``` $ virtualenv -p /usr/bin/python3.6 venv $ source activate venv ``` Then `pip install zappa` in this virtual environment.
I don't know about Zappa, but if you want to use a specific version of Python you can do:

```
python3.6 my_program.py
```

and if you want to use the command *python* with a specific version permanently, on **Linux** modify the file */home/[user\_name]/.bashrc* and add the following line:

```
alias python=python3.6
```
15,171
69,654,700
So I'm trying to achieve something like this ``` from enum import Enum tabulate_formats = ['fancy_grid', 'fancy_outline', 'github', 'grid'] class TableFormat(str, Enum): for item in tabulate_formats: exec(f"{item} = '{item}'") ``` Though i get this error ``` Traceback (most recent call last): File "/app/src/main.py", line 25, in <module> class TableFormat(str, Enum): File "/app/src/main.py", line 26, in TableFormat for item in tabulate_formats: File "/usr/local/lib/python3.6/enum.py", line 92, in __setitem__ raise TypeError('Attempted to reuse key: %r' % key) TypeError: Attempted to reuse key: 'item' ``` How do I properly assign them into the class
2021/10/21
[ "https://Stackoverflow.com/questions/69654700", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3709060/" ]
What you want to do doesn't involve editing an array, only editing the property of that array. [Array.prototype.push](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/push) and [Array.prototype.splice](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice) are used for adding and removing elements to and from an array, and aren't what you want to use here. Instead, you just want to set the property on the object at the first index of your array, which you can do like this: ``` this.houseList[1].parking = this.ParkingIsTrue; ``` Of course, you might need to tweak that slightly depending on how you need to determine which element in the array needs to be edited, and what its value needs to be. But I hope that example is at least a useful guide to the syntax that you will need to use. --- The reason why you were getting the error "ERROR TypeError: this.houseList[i].push is not a function" is that `this.houseList[i]` resolves to an object like `{houseCode: '5678', street: 'Pike Street'}`. This value is not an array, and doesn't have a `push` method, so `this.houseList[i].push` resolves to `undefined`. When you then tried to call that like a function, you get an error complaining that it's not a function because, well, it wasn't a function. It was `undefined`.
You can work with append `parking` property to object as below: ```js this.houseList.splice(i, 0, { ...this.houseList[i], parking: this.ParkingIsTrue, }); ``` [Sample Solution on StackBlitz](https://stackblitz.com/edit/angular-ivy-zggwz7?file=src/app/app.component.html) --- References ---------- [JavaScript Ellipsis: Three dots ( … ) in JavaScript](https://dev.to/sagar/three-dots---in-javascript-26ci)
15,173
39,517,921
So I'm using tkinter python and I have an entry widget with Name text in it. I want to delete the text only when the widget is clicked on. This is what I have so far: ``` #Import tkinter to make gui from tkinter import * from tkinter import ttk#Sets title and creates gui root = Tk() root.title("Identity Form") #Configures column and row settings and sets padding mainframe = ttk.Frame(root, padding="3 3 12 12") mainframe.grid(column=0, row=0, sticky=(N, W, E, S)) mainframe.columnconfigure(0, weight=1) mainframe.rowconfigure(0, weight=1) name=StringVar() age=StringVar() gender=StringVar() #Widgets to put in items, quanitity, and shipping days name_entry = ttk.Entry(mainframe, width=20, textvariable=name) name_entry.grid(column=2, row=2, sticky=(W, E)) age_entry2 = ttk.Entry(mainframe, width=20, textvariable=age) age_entry2.grid(column=2, row=3, sticky=(W, E)) male = ttk.Radiobutton(mainframe, text='Male', variable=gender, value='male') female = ttk.Radiobutton(mainframe, text='Female', variable=gender, value='female') other = ttk.Radiobutton(mainframe, text='Other', variable=gender, value='other') name_entry.insert(0,'Name') ``` This creates the entry widget and has the text "Name" inside it. When it is clicked upon how I want the text to disappear.
2016/09/15
[ "https://Stackoverflow.com/questions/39517921", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6570334/" ]
If you're expecting a socket to stay open for minutes at a time, you're in for a world of hurt. That might work on Wi-Fi, but on cellular, there's a high probability of the connection glitching because of tower switching or some other random event outside your control. When that happens, the connection drops, and there's really nothing your app can do about it. This really needs to be fixed by changing the way the client requests data so that the responses can be more asynchronous. Specifically: * Make your request. * On the server side, immediately provide the client with a unique identifier for that request and close the connection. * Next, on the client side, periodically ask the server for its status. + If the connection times out, ask again. + If the server says that the results are not ready, wait a few seconds and ask again. * On the server side, when processing is completed, store the results along with the identifier in a persistent fashion (e.g. in a file or database) * When the client requests the results for that identifier, return the results if they are ready, or return a "not ready" error of some sort. * Have a periodic cron job or similar on the server side to clean up old data that has not yet been collected. With that model, it doesn't matter if the connection to the server closes, because a subsequent request will get the data successfully.
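As a rough illustration of the client side of that model (sketched in Python for brevity; the endpoint URLs and JSON field names here are hypothetical, not from any real API):

```
# Hypothetical endpoints/fields, purely to sketch the poll-until-ready loop.
import time
import requests

job = requests.post("https://example.com/api/jobs", json={"work": "..."}).json()
job_id = job["id"]  # the server answers immediately with an identifier

while True:
    try:
        resp = requests.get("https://example.com/api/jobs/%s" % job_id, timeout=10)
    except requests.exceptions.RequestException:
        time.sleep(5)   # connection dropped or timed out: just ask again
        continue
    body = resp.json()
    if body.get("status") == "ready":
        print(body["result"])
        break
    time.sleep(5)       # not ready yet; wait a few seconds and re-poll
```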
I faced this issue and spent more than a week trying to fix it. I finally solved it by changing the Wi-Fi connection.
15,174
40,009,358
I'm a 1st year CS student been struggling over the past few days on this lab task I received for python(2.7): --- Write a Python script named higher-lower.py which: first reads exactly one integer from standard input (10, in the example below), then reads exactly five more integers from standard input and for each of those five values outputs the message higher, lower or equal depending upon whether the new value is higher than, lower than or equal to the previous value. (Must have good use of a while loop) (Must not use a list) Example standard input: ``` 10 20 10 8 8 12 ``` Example standard output: ``` higher lower lower equal higher ``` (1 string per line) --- I have a working solution but when I upload it for correction I am told it's incorrect, This is my solution: ``` prev = input() output = "" s = 1 while s <= 5: curr = input() if prev < curr: output = output + "higher\n" elif curr < prev: output = output + "lower\n" else: output = output + "equal\n" s = s + 1 prev = curr print output ``` I think it's incorrect because each higher/lower/equal is printed on a single string over 5 lines, where the task wants each higher/lower/equal to be printed as an individual string on each line. Could anyone give me any hints in the right direction? I searched stackoverflow as well as google for a similar problem and couldn't find anything related to this. Any help would be appreciated!
2016/10/12
[ "https://Stackoverflow.com/questions/40009358", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7010470/" ]
Given your description, I suspect that the validation program wants to see a single result after each additional input. Have you tried that? ``` while s <= 5: curr = input() if prev < curr: print "higher" elif curr < prev: print "lower" else: print "equal" s = s + 1 prev = curr ```
``` magic_number = 3 # Your code here... while True: guess = int(input("Guess my number: ")) if guess == magic_number: print "Correct!" break elif guess > magic_number: print "Too high!" else: print "Too low!" print "Great job guessing my number!" ```
15,180
53,881,731
How can I define an XPath expression in Python (Scrapy) that accepts any number at the place indicated in the code? I have already tried to put an `*` or `any()` at that position.

```
table = response.xpath('//*[@id="olnof_**here I want to accept any value**_altlinesodd"]/tr[1]/TD[1]/A[1]')
```
2018/12/21
[ "https://Stackoverflow.com/questions/53881731", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10819357/" ]
You can do this using [regular expressions](https://doc.scrapy.org/en/latest/topics/selectors.html#regular-expressions): ``` table = response.xpath('//*[re:test(@id, "^olnof_.+_altlinesodd$")]/tr[1]/TD[1]/A[1]') ```
You can try the workaround below:

```
'//*[starts-with(@id, "olnof_") and contains(@id, "_altlinesodd")]/tr[1]/TD[1]/A[1]'
```

`ends-with(@id, "_altlinesodd")` suits this case better, but Scrapy doesn't support the `ends-with` syntax, so `contains` is used instead.
15,181
22,291,337
I know this one has been covered before, and perhaps isn't the most pythonic way of constructing a class, but I have a lot of different maya node classes with a lot @properties for retrieving/setting node data, and I want to see if procedurally building the attributes cuts down on overhead/mantinence. I need to re-implement \_\_setattr\_\_ so that the standard behavior is maintained, but for certain special attributes, the value is get/set to an outside object. I have seen examples of re-implementing \_\_setattr\_\_ on stack overflow, but I seem to be missing something. I don't think i am maintaining the default functionality of **setAttr** Here is an example: ``` externalData = {'translateX':1.0,'translateY':1.0,'translateZ':1.0} attrKeys = ['translateX','translateY','translateZ'] class Transform(object): def __getattribute__(self, name): print 'Getting --->', name if name in attrKeys: return externalData[name] else: raise AttributeError("No attribute named [%s]" %name) def __setattr__(self, name, value): print 'Setting --->', name super(Transform, self).__setattr__(name, value) if name in attrKeys: externalData[name] = value myInstance = Transform() myInstance.translateX # Result: 1.0 # myInstance.translateX = 9999 myInstance.translateX # Result: 9999 # myInstance.name = 'myName' myInstance.name # AttributeError: No attribute named [name] # ``` !
2014/03/10
[ "https://Stackoverflow.com/questions/22291337", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3395409/" ]
This worked for me: ``` class Transform(object): def __getattribute__(self, name): if name in attrKeys: return externalData[name] return super(Transform, self).__getattribute__(name) def __setattr__(self, name, value): if name in attrKeys: externalData[name] = value else: super(Transform, self).__setattr__(name, value) ``` However, I'm not sure this is a good route to go. If the external operations are time consuming (say, you're using this to disguise access to a database or a config file) you may give users of the code the wrong impression about the cost. In a case like that you should use a method so users understand that they are initiating an action, not just looking at data. OTOH if the access is quick, be careful that the encapsulation of your classes isn't broken. If you're doing this to get at maya scene data (pymel-style, or as in [this example](http://techartsurvival.blogspot.com/2014/02/rescuing-maya-gui-from-itself.html)) it's not a big deal since the time costs and stability of the data are more or less guaranteed. However you'd want to avoid the scenario in the example code you posted: it would be very easy to assume that having set 'translateX' to a given value it would stay put, where in fact there are lots of ways that the contents of the outside variables could get messed with, preventing you from being able to know your invariants while using the class. If the class is intended for throwaway use (say, its syntax sugar for a lot of fast repetitive processing inside as loop where no other operations are running) you could get away with it - but if not, internalize the data to your instances. One last issue: If you have 'a lot of classes' you will also have to do a lot of boilerplate to make this work. If you are trying to wrap Maya scene data, read up on descriptors ([here's a great 5-minute video](http://nedbatchelder.com/blog/201306/explaining_descriptors.html)). You can wrap typical transform properties, for example, like this: ``` import maya.cmds as cmds class MayaProperty(object): ''' in a real implmentation you'd want to support different value types, etc by storing flags appropriate to different commands.... ''' def __init__(self, cmd, flag): self.Command = cmd self.Flag = flag def __get__(self, obj, objtype): return self.Command(obj, **{'q':True, self.Flag:True} ) def __set__(self, obj, value): self.Command(obj, **{ self.Flag:value}) class XformWrapper(object): def __init__(self, obj): self.Object = obj def __repr__(self): return self.Object # so that the command will work on the string name of the object translation = MayaProperty(cmds.xform, 'translation') rotation = MayaProperty(cmds.xform, 'rotation') scale = MayaProperty(cmds.xform, 'scale') ``` In real code you'd need error handling and cleaner configuration but you see the idea. The example linked above talks about using metaclasses to populate classes when you have lots of property descriptors to configure, that is a good route to go if you don't want to worry about all the boilerplate (though it does have a minor startup time penalty - I think that's one of the reasons for the notorious Pymel startup crawl...)
Why not also do the same thing in `__getattribute__`? ``` def __getattribute__(self, name): print 'Getting --->', name if name in attrKeys: return externalData[name] else: # raise AttributeError("No attribute named [%s]" %name) return super(Transform, self).__getattribute__(name) ``` Test code ``` myInstance = Transform() myInstance.translateX print(externalData['translateX']) myInstance.translateX = 9999 myInstance.translateX print(externalData['translateX']) myInstance.name = 'myName' print myInstance.name print myInstance.__dict__['name'] ``` Output: ``` Getting ---> translateX 1.0 Setting ---> translateX Getting ---> translateX 9999 Setting ---> name Getting ---> name myName Getting ---> __dict__ myName ```
15,184
52,458,158
I am dealing with the dataset **titanic** from [*seaborn*]. ``` titanic = seaborn.load_dataset('titanic') ``` I cut the age column into categorical bins. ``` age = pd.cut(titanic['age'], [0, 18, 80]) ``` Then the problem comes, the groupby and pivot\_table give totally different results: ``` titanic.groupby(['sex', age, 'class'])['survived'].mean().unstack(-1) titanic.pivot_table('survived', ['sex', age], 'class') ``` [groupby and pivot\_table results](https://i.stack.imgur.com/1Ecd9.png) At first, I guess it is because the nan in **age**, then I used dataset processed by dropna to redo it. ``` titanic = titanic.dropna() age = pd.cut(titanic['age'], [0, 18, 80], right = True) titanic.groupby(['sex', age, 'class'])['survived'].mean().unstack(-1) titanic.pivot_table('survived', ['sex', age], 'class') ``` This time I even got a totally different result. [groupby and pivot\_table results after dropna](https://i.stack.imgur.com/6757L.png) My python version is:Python 3.6.5 :: Anaconda, Inc. pandas: 0.23.0 My operating system is MaxOS High Sierra 10.13.6 I tried again with python 3.7.0 and pandas 0.23.4, and no error occurs. [result under python 3..7.0](https://i.stack.imgur.com/maLu2.png) So I am wondering whether it is a bug of Anaconda?
2018/09/22
[ "https://Stackoverflow.com/questions/52458158", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10401026/" ]
It seems to work best when the object's key is a single class. You can instead do something like this: ``` class="fa" [ngClass]="{'fa-paper-plane': true, 'fa-spinner': false, 'fa-spin': false }" ``` Because the `fa` class should always apply, it's being done in a normal `class` attribute
When a expression is evaluated to true the classes passed in ngClass are added to the classList for the element and when expression is evaluated to false the classes passed in ngClass are removed from the classList for the element. Example : ``` <div> <p> <i [ngClass]="{'fa fa-spinner fa-spin': false, 'fa fa-telegram-plane': false}"> </i> Random Text </p> <!-- DOM will have <i> </i> --> <p> <i [ngClass]="{'fa fa-spinner fa-spin test test1234': true, 'fa fa-telegram test test1234' : false}"> </i> Random Text </p> <!-- DOM will have <i class="fa-spinner fa-spin"> </i> --> <p> <i [ngClass]="{'fa fa-spinner fa-spin': false, 'fa fa-telegram': true}"> </i> Random Text </p> <!-- DOM will have <i class="fa fa-telegram"> </i> --> <p> <i [ngClass]="{'fa fa-telegram test test1234': false, 'fa fa-spinner fa-spin test test1234': true}"> </i> Random Text </p> <!-- DOM will have <i class="fa fa-spinner fa-spin test test1234"> </i> --> </div> ``` Example Code : <https://stackblitz.com/edit/angular-ngclass-kmherp?file=app%2Fapp.component.html>
15,189
45,619,018
I'm trying to use OpenCV to segment a bent rod from it's background then find the bends in it and calculate the angle between each bend. The first part luckily is trivial with a enough contrast between the foreground and background. A bit of erosion/dilation takes care of reflections/highlights when segmenting. The second part is where I'm not sure how to approach it. I can easily retrieve a contour (top and bottom are very similar so either would do), but I can't seem to figure out is how to get split the contour into the straight parts and the bend rods to calculate the angles. So far I've tried simplyfying the contours, but either I get too many or too few points and it feels difficult to pin point the right settings to keep the straight parts straight and the bent parts simplified. Here is my input image(bend.png) [![bend.png](https://i.stack.imgur.com/h0iZp.png)](https://i.stack.imgur.com/h0iZp.png) And here's what I've tried so far: ``` #!/usr/bin/env python import numpy as np import cv2 threshold = 229 # erosion/dilation kernel kernel = np.ones((5,5),np.uint8) # contour simplification epsilon = 0 # slider callbacks def onThreshold(x): global threshold print "threshold = ",x threshold = x def onEpsilon(x): global epsilon epsilon = x * 0.01 print "epsilon = ",epsilon # make a window to add sliders/preview to cv2.namedWindow('processed') #make some sliders cv2.createTrackbar('threshold','processed',60,255,onThreshold) cv2.createTrackbar('epsilon','processed',1,1000,onEpsilon) # load image img = cv2.imread('bend.png',0) # continuously process for quick feedback while 1: # exit on ESC key k = cv2.waitKey(1) & 0xFF if k == 27: break # Threshold ret,processed = cv2.threshold(img,threshold,255,0) # Invert processed = (255-processed) # Dilate processed = cv2.dilate(processed,kernel) processed = cv2.erode(processed,kernel) # Canny processed = cv2.Canny(processed,100,200) contours, hierarchy = cv2.findContours(processed,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE) if len(contours) > 0: approx = cv2.approxPolyDP(contours[0],epsilon,True) # print len(approx) cv2.drawContours(processed, [approx], -1, (255,255,255), 3) demo = img.copy() cv2.drawContours(demo, [approx], -1, (192,0,0), 3) # show result cv2.imshow('processed ',processed) cv2.imshow('demo ',demo) # exit cv2.destroyAllWindows() ``` Here's what I've got so far, but I'm not convinced this is the best approach: [![contour finding](https://i.stack.imgur.com/6rllq.jpg)](https://i.stack.imgur.com/6rllq.jpg) [![simplified contour](https://i.stack.imgur.com/Knsj4.jpg)](https://i.stack.imgur.com/Knsj4.jpg) I've tried to figure this out visually and what I've aimed for is something along these lines: [![straight lines and bends segmented](https://i.stack.imgur.com/787iX.png)](https://i.stack.imgur.com/787iX.png) Because the end goal is to calculate the angle between bent parts something like this feels simpler: [![line fitting](https://i.stack.imgur.com/2ytKv.png)](https://i.stack.imgur.com/2ytKv.png) My assumption that fitting lines and compute the angles between pairs of intersecting lines could work: [![angles from line fitting intersections](https://i.stack.imgur.com/wK15o.png)](https://i.stack.imgur.com/wK15o.png) I did a quick test using the [HoughLines OpenCV Python tutorial](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html), but regardless of the parameters passed I didn't get great results: ``` #!/usr/bin/env python import numpy as np import cv2 threshold = 229 minLineLength = 
30 maxLineGap = 10 houghThresh = 15 # erosion/dilation kernel kernel = np.ones((5,5),np.uint8) # slider callbacks def onMinLineLength(x): global minLineLength minLineLength = x print "minLineLength = ",x def onMaxLineGap(x): global maxLineGap maxLineGap = x print "maxLineGap = ",x def onHoughThresh(x): global houghThresh houghThresh = x print "houghThresh = ",x # make a window to add sliders/preview to cv2.namedWindow('processed') #make some sliders cv2.createTrackbar('minLineLength','processed',1,50,onMinLineLength) cv2.createTrackbar('maxLineGap','processed',5,30,onMaxLineGap) cv2.createTrackbar('houghThresh','processed',15,50,onHoughThresh) # load image img = cv2.imread('bend.png',0) # continuously process for quick feedback while 1: # exit on ESC key k = cv2.waitKey(1) & 0xFF if k == 27: break # Threshold ret,processed = cv2.threshold(img,threshold,255,0) # Invert processed = (255-processed) # Dilate processed = cv2.dilate(processed,kernel) processed = cv2.erode(processed,kernel) # Canny processed = cv2.Canny(processed,100,200) lineBottom = np.zeros(img.shape,np.uint8) contours, hierarchy = cv2.findContours(processed,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE) if len(contours) > 0: cv2.drawContours(lineBottom, contours, 0, (255,255,255), 1) # HoughLinesP houghResult = img.copy() lines = cv2.HoughLinesP(lineBottom,1,np.pi/180,houghThresh,minLineLength,maxLineGap) try: for x in range(0, len(lines)): for x1,y1,x2,y2 in lines[x]: cv2.line(houghResult,(x1,y1),(x2,y2),(0,255,0),2) except Exception as e: print e # show result cv2.imshow('lineBottom',lineBottom) cv2.imshow('houghResult ',houghResult) # exit cv2.destroyAllWindows() ``` [![HoughLinesP result](https://i.stack.imgur.com/Nf4Ax.jpg)](https://i.stack.imgur.com/Nf4Ax.jpg) Is this a feasible approach ? If so, what's the correct way of doing line fitting in OpenCV Python ? Otherwise, that's the best way to tackle this problem ? **Update** Following Miki's advise I've tried OpenCV 3's LSD and got nicer results than with `HoughLinesP` but it looks like there's still some tweaking needed, although it doesn't look other than `cv2.createLineSegmentDetector` there aren't many options to play with: [![LSD Result](https://i.stack.imgur.com/jBtj2.jpg)](https://i.stack.imgur.com/jBtj2.jpg)
2017/08/10
[ "https://Stackoverflow.com/questions/45619018", "https://Stackoverflow.com", "https://Stackoverflow.com/users/89766/" ]
It may be convenient to use curvature to find line segments. Here is an [example](http://www.morethantechnical.com/2012/12/07/resampling-smoothing-and-interest-points-of-curves-via-css-in-opencv-w-code/) of splitting a contour at minimal-curvature points; it may be better to use maximal-curvature points in your case. You can split your curve into parts, then approximate each part with a line segment using the RANSAC method, as in the sketch below.
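A minimal sketch of the line-fitting and angle step might look like the following (illustrative only: `segments` is assumed to be the list of point runs produced by the curvature-based split, plain least-squares `cv2.fitLine` stands in for a full RANSAC fit, and the `cv2.DIST_L2` constant assumes OpenCV 3+):

```
import numpy as np
import cv2

def segment_angle(points):
    # cv2.fitLine returns a direction vector (vx, vy) and a point (x0, y0) on the line
    vx, vy, x0, y0 = cv2.fitLine(
        np.asarray(points, dtype=np.float32), cv2.DIST_L2, 0, 0.01, 0.01).flatten()
    return np.degrees(np.arctan2(vy, vx))

# angle between two consecutive straight parts of the rod:
# bend_angle = abs(segment_angle(segments[i]) - segment_angle(segments[i + 1]))
```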
Once you have the contour, you can analyze it using a method like the one proposed in this paper: <https://link.springer.com/article/10.1007/s10032-011-0175-3> Basically, the contour is tracked calculating the curvature at each point. Then you can use a curvature threshold to segment the contour into straight and curved sections.
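A rough sketch of that curvature idea (my own illustration, not the paper's exact algorithm) on a contour returned by `cv2.findContours`:

```
import numpy as np

def contour_curvature(contour):
    # contour: (N, 1, 2) point array as returned by cv2.findContours
    pts = contour[:, 0, :].astype(float)
    d1 = np.gradient(pts, axis=0)   # first derivative (dx, dy) along the contour
    d2 = np.gradient(d1, axis=0)    # second derivative
    num = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
    den = (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5 + 1e-12
    return num / den                # curvature estimate per contour point

# Points whose curvature exceeds a threshold belong to the bends; the runs of
# low-curvature points in between are the straight sections of the rod.
```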
15,190
21,889,795
I couldn't find the right search terms for this question, so please apologize if this question has already been asked before. Basically, I want to create a python function that allows you to name the columns (as a function parameter) that you will do certain kinds of analyses on. For instance see below. Obviously this code doesn't work because 'yearattribute' is taken literally after the df. I'd appreciate your help! ``` def networkpairs2(df, Year): """ An effort to generalize the networkpairs function by allowing you to choose the organization and actor parameter column names """ totaldf = df yearattribute = '%s' %Year print yearattribute yearlist = list(np.unique(df.yearattribute)) print yearlist return ```
2014/02/19
[ "https://Stackoverflow.com/questions/21889795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3314418/" ]
If I understand you, `df[yearattribute].unique()` should work. You can index into DataFrame columns like a dictionary. Aside #1: `totaldf = df` only makes `totaldf` a new name for `df`, it doesn't make a copy, and you don't use it anyway. Aside #2: you're not returning anything.
You can use [`getattr`](http://docs.python.org/3/library/functions.html#getattr) here: ``` yearlist = list(np.unique(getattr(df, yearattribute))) ``` `getattr` allows you to access an attribute via a string representation of its name. Below is a demonstration: ``` >>> class Foo: ... def __init__(self): ... self.attr = 'value' ... >>> foo = Foo() >>> getattr(foo, 'attr') 'value' >>> ```
15,192
2,492,508
Is there a Python library that supports OpenType features? Where can I get it? Please do not point me to FontForge; I live in Iran, so I cannot download anything from them.
2010/03/22
[ "https://Stackoverflow.com/questions/2492508", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275221/" ]
The [Python Imaging Library (PIL)](http://www.pythonware.com/products/pil/) supports OpenType.
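For instance, a minimal sketch (the font path is just a placeholder) of loading and rendering an OpenType `.otf` file with PIL's `ImageFont`:

```
from PIL import Image, ImageDraw, ImageFont

font = ImageFont.truetype("SomeOpenTypeFont.otf", size=36)  # placeholder .otf path
img = Image.new("RGB", (400, 120), "white")
ImageDraw.Draw(img).text((10, 40), "OpenType sample", font=font, fill="black")
img.save("opentype_sample.png")
```

Note this covers loading and rendering the font; more advanced OpenType layout features may need additional support.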
If you mean OpenType fonts, there is a Python library for FontForge: <http://fontforge.sourceforge.net/python.html>
15,193
70,422,733
Suppose you have a string: ``` text = "coding in python is a lot of fun" ``` And character positions: ``` positions = [(0,6),(10,16),(29,32)] ``` These are intervals, which cover certain words within text, i.e. coding, python and fun, respectively. Using the character positions, how could you split the text on those words, to get this output: ``` ['coding','in','python','is a lot of','fun'] ``` This is just an example, but it should work for any string and any list of character positions. I'm not looking for this: `[text[i:j] for i,j in positions]`
2021/12/20
[ "https://Stackoverflow.com/questions/70422733", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6515530/" ]
I'd flatten `positions` to be `[0,6,10,16,29,32]` and then do something like ```py positions.append(-1) prev_positions = [0] + positions words = [] for begin, end in zip(prev_positions, positions): words.append(text[begin:end]) ``` This exact code produces `['', 'coding', ' in ', 'python', ' is a lot of ', 'fun', '']`, so it needs some additional work to strip the whitespace
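For instance, one way to finish that cleanup (assuming the loop above has filled `words`) is:

```
# strip surrounding whitespace and drop the empty leading/trailing pieces
words = [w.strip() for w in words if w.strip()]
```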
The code below works as expected:

```
text = "coding in python is a lot of fun"
positions = [(0,6),(10,16),(29,32)]

textList = []
lastIndex = 0
for indexes in positions:
    s = slice(indexes[0], indexes[1])  # not actually used below
    if positions.index(indexes) > 0:
        print(lastIndex)  # debug output; safe to remove
        textList.append(text[lastIndex: indexes[0]])
    textList.append(text[indexes[0]: indexes[1]])
    lastIndex = indexes[1] + 1

print(textList)
```

Output: `['coding', 'in ', 'python', 'is a lot of ', 'fun']`

Note: if the spaces are not needed, you can trim them.
15,196
63,842,868
```
import pyautogui as py
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import keyboard

driver = webdriver.Chrome()
driver.get('https://www.youtube.com/watch?v=pcel9QTPx_g&list=RDpcel9QTPx_g&start_radio=1&t=11&ab_channel=%E5%BE%AE%E7%B3%96%E9%80%A2')
elem = driver.find_elements_by_id('video-title')

while True:
    if keyboard.is_pressed('`'):
        driver.find_element_by_xpath('//*[@id="movie_player"]/div[29]/div[2]/div[1]/button').click()
    for x in elem:
        elem.click()
        keyboard.wait('f4')
```

So I am trying to automate iterating through a playlist of songs in Selenium using Python. I am trying to make the code so that when I press `, the video will pause, and if I press f4 the code will skip to the next iteration. The f4 function is working just fine, but the code inside the while loop isn't. Is it because when the code runs, once it gets to the for loop it won't get access to the while loop anymore? If you guys have any ideas or want to simplify my code, feel free to answer!
2020/09/11
[ "https://Stackoverflow.com/questions/63842868", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13781869/" ]
``` import pyautogui as py from selenium import webdriver from selenium.webdriver.common.keys import Keys import keyboard driver = webdriver.Chrome() driver.get('https://www.youtube.com/watch?v=pcel9QTPx_g&list=RDpcel9QTPx_g&start_radio=1&t=11&ab_channel=%E5%BE%AE%E7%B3%96%E9%80%A2') elem = driver.find_elements_by_id('video-title') for x in elem: x.click() while True: if keyboard.is_pressed('f12'): driver.find_element_by_xpath('//*[@id="movie_player"]/div[29]/div[2]/div[1]/button').click() keyboard.wait('f12') driver.find_element_by_xpath('//*[@id="movie_player"]/div[29]/div[2]/div[1]/button').click() if keyboard.is_pressed('f4'): break keyboard.wait('f4') ```
I would also suggest using another key instead of f12, because it will disrupt your code when you are running it from Chrome: it will open developer mode!
15,197
39,845,636
I have the latest version of pip 8.1.1 on my ubuntu 16. But I am not able to install any modules via pip as I get this error all the time. ``` File "/usr/local/bin/pip", line 5, in <module> from pkg_resources import load_entry_point File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module> @_call_aside File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside f(*args, **kwargs) File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master ws.require(__requires__) File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require needed = self.resolve(parse_requirements(requirements)) File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'pip==7.1.0' distribution was not found and is required by the application ``` I found a similar [link](https://stackoverflow.com/questions/38587785/pkg-resources-distributionnotfound-the-pip-1-5-4-distribution-was-not-found), but not helpful.
2016/10/04
[ "https://Stackoverflow.com/questions/39845636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5474316/" ]
I repaired mine with this command:

> easy_install pip
Delete all of the pip/pip3 stuff under .local, including the packages.

```
sudo apt-get purge python-pip python3-pip
```

Now remove all pip3 files from /usr/local:

```
sudo rm -rf /usr/local/bin/pip3
```

You can check which pip is installed; otherwise execute the one below to remove them all (no worries):

```
sudo rm -rf /usr/local/bin/pip3.*
```

Using pip and/or pip3, reinstall the needed Python packages:

```
sudo apt-get install python-pip python3-pip
```
15,198
63,835,086
If I have a dataframe with the following layout: ``` ID# Response 1234 Covid-19 was a disaster for my business 3456 The way you handled this pandemic was awesome ``` I want to be able to count frequency of specific words from a list. ``` list=['covid','COVID','Covid-19','pandemic','coronavirus'] ``` In the end I want to generate a dictionary like the following ``` {covid:0,COVID:0,Covid-19:1,pandemic:1,'coronavirus':0} ``` Please help I am really stuck on how to code this in python
2020/09/10
[ "https://Stackoverflow.com/questions/63835086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13000877/" ]
For each string, find number of matches. ``` dict((s, df['response'].str.count(s).fillna(0).sum()) for s in list_of_strings) ``` Note that `Series.str.count` takes a regex input. You may want to append `(?=\b)` for positive look-ahead word-endings. `Series.str.count` returns `NA` when counting `NA`, thus, fill with 0. For each string, sum over column.
``` import pandas as pd import numpy as np df = pd.DataFrame({'sheet':['sheet1', 'sheet2', 'sheet3', 'sheet2'], 'tokenized_text':[['efcc', 'fficial', 'billiontwits', 'since', 'covid', 'landed'], ['when', 'people', 'say', 'the', 'fatality', 'rate', 'of', 'coronavirus', 'is'], ['in', 'the', 'coronavirus-induced', 'crisis', 'people', 'are', 'cyvbwx'], ['in', 'the', 'be-induced', 'crisis', 'people', 'are', 'cyvbwx']] }) print(df) words_collection = ['covid','COVID','Covid-19','pandemic','coronavirus'] # Extract the words from all lines all_words = [] for index, row in df.iterrows(): all_words.extend(row['tokenized_text']) # Create a dictionary that maps for each word from `words_collection` the counter it appears word_to_number_of_occurences = dict() # Go over the word collection and set it's counter for word in words_collection: word_to_number_of_occurences[word] = all_words.count(word) # {'covid': 1, 'COVID': 0, 'Covid-19': 0, 'pandemic': 0, 'coronavirus': 1} print(word_to_number_of_occurences) ```
15,208
9,377,801
I would like to know how practical it would be to create a program which takes handwritten characters in some form, analyzes them, and offers corrections to the user. The inspiration for this idea is to have elementary school students in other countries or University students in America learn how to write in languages such as Japanese or Chinese where there are a lot of characters and even the slightest mistake can make a big difference. I am unsure how the program will analyze the character. My current idea is to get a single pixel width line to represent the stroke, compare how far each pixel is from the corresponding pixel in the example character loaded from a database, and output which area needs the most work. Endpoints will also be useful to know. I would also like to tell the user if their character could be interpreted as another character similar to the one they wanted to write. I imagine I will need a library of some sort to complete this project in any sort of timely manner but I have been unable to locate one which meets the standards I will need for the program. I looked into OpenCV but it appears to be meant for vision than image processing. I would also appreciate the library/module to be in python or Java but I can learn a new language if absolutely necessary. Thank you for any help in this project.
2012/02/21
[ "https://Stackoverflow.com/questions/9377801", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1223327/" ]
Character recognition is usually implemented using Artificial Neural Networks ([ANNs](http://en.wikipedia.org/wiki/Artificial_neural_network)). It is not a straightforward task to implement, seeing that there are usually lots of ways in which different people write the same character. The good thing about neural networks is that they can be trained. So, to change from one language to another, all you need to change are the weights between the neurons, leaving your network intact. Neural networks are also able to generalize to a certain extent, so they are usually able to cope with minor variances of the same letter. [Tesseract](https://en.wikipedia.org/wiki/Tesseract_%28software%29) is an open-source OCR engine that was developed in the mid-1990s. You might want to read about it to gain some pointers.
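To make the train-then-recognize workflow concrete, here is a tiny sketch (my own illustration, not tied to Tesseract) using scikit-learn's bundled 8x8 handwritten-digit images; a real recognizer would need far more data and a larger network, but the fit/predict cycle is the same:

```
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1797 tiny grayscale images, 10 classes
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)         # "training" adjusts the weights between neurons
print(net.score(X_test, y_test))  # accuracy on characters the network never saw
```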
Have you seen <http://www.skritter.com>? They do this in combination with spaced-repetition scheduling. I guess you want to classify features such as curves in your strokes (<http://en.wikipedia.org/wiki/CJK_strokes>), then as a next layer identify components, then estimate the most likely character, statistically weighting the most likely character all the while. Where there are two likely matches you will want to show them as likely to be confused. You will also need to create a database of probably 3,000 to 5,000 characters, or up to 10,000 for the ambitious. See also <http://www.tegaki.org/> for an open-source program that does this.
15,214
26,743,269
From the [Python docs](https://docs.python.org/2/library/heapq.html): > > The latter two functions [heapq.nlargest and heapq.nsmallest] perform best for smaller values of n. For > larger values, it is more efficient to use the sorted() function. > Also, when n==1, it is more efficient to use the built-in min() and > max() functions. > > > If I want to retrieve the minimum element in the min-heap, why do the Python docs suggest using `min()`, which I assume runs in O(n) time, when I can instead retrieve the first element in the heap in O(1) time? (I'm assuming the first element in the heap is the minimum)
2014/11/04
[ "https://Stackoverflow.com/questions/26743269", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1032973/" ]
The `nsmallest` and `nlargest` methods available from `heapq` do not assume that the argument passed to them is already in heap format. Instead, they seek to "heapify" the argument as they traverse it, which will be more efficient than outright sorting for the top-k elements for small values of k, but for k exactly equal to one, it's even faster to avoid paying the heapify-as-you-traverse overhead, and just use `min` directly. Your statement is correct. If you are given an array that you can guarantee has been heapified, and not altered since, then accessing the first element will give you the min (respectively the max for a max-heap). Looking at the [source code for heapq](https://hg.python.org/cpython/file/2.7/Lib/heapq.py) (maybe I'm looking at old code?) it still seems quite weird to me. `nsmallest` has a special case for `n ==1` implemented like this (line 397): ``` def nsmallest(n, iterable, key=None): """Find the n smallest elements in a dataset. Equivalent to: sorted(iterable, key=key)[:n] """ # Short-cut for n==1 is to use min() when len(iterable)>0 if n == 1: it = iter(iterable) head = list(islice(it, 1)) if not head: return [] if key is None: return [min(chain(head, it))] return [min(chain(head, it), key=key)] # ... rest of function ``` Just playing with that expression in the interpreter makes it seem bizarre: ``` In [203]: foo = list(itertools.islice([1,2,3], 1)); it = iter([1,2,3]); x = itertools.chain(foo, it); In [204]: x.next() Out[204]: 1 In [205]: x.next() Out[205]: 1 In [206]: x.next() Out[206]: 2 In [207]: x.next() Out[207]: 3 In [208]: x.next() --------------------------------------------------------------------------- StopIteration Traceback (most recent call last) <ipython-input-208-e05f366da090> in <module>() ----> 1 x.next() StopIteration: ``` It seems to be building a generator (which gets turned into a `list` immediately) that only takes the 1st element (as you might expect with a min heap), but then it oddly `chain`s it with a plain old generator that's going to go over the whole array. I agree that if you start from a `list` and want to query for the minimum element, it's probably better to leave it as a `list` and use `min`. However, if you are handed a min heap, yes indeed you should just inspect the first element -- that is part of the point of heapifying it in the first place. But regardless, this source code looks quite bizarre for passing the min heap to `min` -- I would greatly welcome more explanation about what it's doing -- and maybe a pointer to some more recent C-level code for an implementation of heapq, if there is one.
If you just need to pick the single minimum element of a heapified list, just use `lst[0]`:

```
import heapq

lst = [1, -1, 100, 200]
heapq.heapify(lst)
min_value = lst[0]
```

The doc above refers to getting the n smallest numbers, and a heap is not the most efficient data structure for that if n is large.
15,216
44,773,983
I have two following files: **testcase\_module.py** ``` import boto3 ec2 = boto3.resource('ec2') def f(): return ec2.instances.all() ``` **testcase\_test.py** ``` import testcase_module import unittest.mock class MainTest(unittest.TestCase): @unittest.mock.patch('testcase_module.ec2', spec_set=['instances']) def test_f(self, ec2_mock): ec2_mock.instances.spec_set = ['all'] testcase_module.f() if __name__ == '__main__': unittest.main() ``` I added `spec_test` parameter to the patch because I would like to assert if any other function than `instances.all()` has been called, but changing string `'all'` to `'allx'` doesn't make test fail while changing `'instances'` to `'instancesx'` does. I tried the following changes (`git diff testcase_test.py` and `python testcase_test.py` results below): **Attempt 1:** ``` diff --git a/testcase_test.py b/testcase_test.py index d6d6e59..ae274c8 100644 --- a/testcase_test.py +++ b/testcase_test.py @@ -3,9 +3,8 @@ import unittest.mock class MainTest(unittest.TestCase): - @unittest.mock.patch('testcase_module.ec2', spec_set=['instances']) - def test_f(self, ec2_mock): - ec2_mock.instances.spec_set = ['all'] + @unittest.mock.patch('testcase_module.ec2', spec_set=['instances.all']) + def test_f(self, _): testcase_module.f() ``` Produces: ``` E ====================================================================== ERROR: test_f (__main__.MainTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3.5/unittest/mock.py", line 1157, in patched return func(*args, **keywargs) File "testcase_test.py", line 8, in test_f testcase_module.f() File "/path/to/project/testcase_module.py", line 8, in f return ec2.instances.all() File "/usr/lib/python3.5/unittest/mock.py", line 578, in __getattr__ raise AttributeError("Mock object has no attribute %r" % name) AttributeError: Mock object has no attribute 'instances' ---------------------------------------------------------------------- Ran 1 test in 0.001s FAILED (errors=1) ``` **Attempt 2:** ``` diff --git a/testcase_test.py b/testcase_test.py index d6d6e59..d93abd1 100644 --- a/testcase_test.py +++ b/testcase_test.py @@ -3,9 +3,8 @@ import unittest.mock class MainTest(unittest.TestCase): - @unittest.mock.patch('testcase_module.ec2', spec_set=['instances']) - def test_f(self, ec2_mock): - ec2_mock.instances.spec_set = ['all'] + @unittest.mock.patch('testcase_module.ec2.instances', spec_set=['all']) + def test_f(self): testcase_module.f() ``` Produces: ``` E ====================================================================== ERROR: test_f (__main__.MainTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3.5/unittest/mock.py", line 1149, in patched arg = patching.__enter__() File "/usr/lib/python3.5/unittest/mock.py", line 1312, in __enter__ setattr(self.target, self.attribute, new_attr) AttributeError: can't set attribute During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.5/unittest/mock.py", line 1170, in patched patching.__exit__(*exc_info) File "/usr/lib/python3.5/unittest/mock.py", line 1334, in __exit__ delattr(self.target, self.attribute) AttributeError: can't delete attribute ---------------------------------------------------------------------- Ran 1 test in 0.001s FAILED (errors=1) ``` How can I make it failing when other method than `instances.all` has been called?
2017/06/27
[ "https://Stackoverflow.com/questions/44773983", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1906088/" ]
Try using `mock_add_spec`. `ec2_mock.instances.mock_add_spec(['all'], spec_set=True)` Link: <https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.mock_add_spec>
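Applied to your test, that could look something like the sketch below (keeping your original patch decorator):

```
class MainTest(unittest.TestCase):

    @unittest.mock.patch('testcase_module.ec2', spec_set=['instances'])
    def test_f(self, ec2_mock):
        # Restrict the nested mock so that anything other than .all() raises
        ec2_mock.instances.mock_add_spec(['all'], spec_set=True)
        testcase_module.f()
```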
What about doing it like this (here `mock` refers to `unittest.mock`, and `EC2InstanceType` is a placeholder name for whatever class the boto3 resource object has):

```
@unittest.mock.patch('testcase_module.boto3.resource', autospec=True)
def test_f(self, ec2_resource_mock):
    # Stub that only exposes the `all` method
    class InstanceStub(object):
        def all(self):
            return [...]

    # EC2InstanceType stands in for the resource's class
    ec2_resource_mock.return_value = mock.create_autospec(
        EC2InstanceType, instances=InstanceStub())

    testcase_module.f()
```
15,217
71,957,239
How can I get the largest key among the keys with the largest value in a Python dictionary? In the example below you can see that 1 and 2 have the same frequency, but I want to return the larger key.

```
nums = [1,2,2,3,1]
frq = {}
for i in nums:
    if i not in frq:
        frq[i] = 1
    else:
        frq[i] += 1

frequency = max(frq, key=frq.get)
print(frequency)
```
2022/04/21
[ "https://Stackoverflow.com/questions/71957239", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11416278/" ]
Have your `key` function return a tuple of the value and the associated key. The first element of the tuple is compared first, the second will break ties. ``` >>> from collections import Counter >>> nums = [1, 2, 2, 3, 1] >>> frq = Counter(nums) >>> max(frq, key=lambda n: (frq[n], n)) 2 ``` Note that `collections.Counter` builds the `frq` dictionary automatically given `nums`.
You can use tuple comparison for the keys to compare keys based on their frequencies, and then tiebreak based on the actual value of the key only if the frequencies are the same: ```py frequency = max(frq, key=lambda x: (frq.get(x), x)) ``` With this change, this outputs: ```py 2 ```
15,218
38,506,250
I want to take input from the user, where each value of the input is on a consecutive line. This is to be implemented in Python (2.7).

```
while x=int(raw_input()): ##<= showing error at this line
    print(x)
    gollum(x) # the function gollum() has to be called if the input is present
```
2016/07/21
[ "https://Stackoverflow.com/questions/38506250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6590204/" ]
There are no other ways with HTTP (except what HTTP allows, as already mentioned). But there are many other ways to transfer data from server to server, like FTP or establishing a direct socket connection. Note that you will need to install/configure such additional channels, and maybe not only on the server (for communication through a firewall, you will also need to allow the ports used).
**Session:**

A session is stored on the server and cannot be accessed by the user (client). It is used to store information across the site, such as login state. It can be used to store information on the server side and pass it between different PHP scripts.

Note that a session creates a session cookie for identification, such as `PHPSESSID`, which may reveal the server-side scripting language used, which most people prefer not to reveal. To avoid revealing it, we can change this cookie name from PHP. Also, by stealing that cookie, someone may hijack the session. So sessions should be used properly, by verifying them. If session verification is done, you may prevent hijacking and use sessions securely.

**For Secure Usage:**

To use sessions securely, you can follow this blog, which shows how sessions can be verified and protected from being hijacked:

> <http://blog.teamtreehouse.com/how-to-create-bulletproof-sessions>
15,220
13,620,633
Given the URL <http://www.smartmoney.com/quote/FAST/?story=financials&timewindow=1&opt=YB&isFinprint=1&framework.view=smi_emptyView> , how would you capture and print the contents of an entire row of data? For example, what would it take to get an output that looked something like: "Cash & Short Term Investments 144,841 169,760 189,252 86,743 57,379"? Or something like "Property, Plant & Equipment - Gross 725,104 632,332 571,467 538,805 465,493"? I've been introduced to the basics of Xpath through sites <http://www.techchorus.net/web-scraping-lxml> . However, the Xpath syntax is still largely a mystery to me. I already have successfully done this in BeautifulSoup. I like the fact that BeautifulSoup doesn't require me to know the structure of the file - it just looks for the element containing the text I search for. Unfortunately, BeautifulSoup is too slow for a script that has to do this THOUSANDS of times. The source code for my task in BeautifulSoup is (with title\_input equal to "Cash & Short Term Investments"): ``` page = urllib2.urlopen (url_local) soup = BeautifulSoup (page) soup_line_item = soup.findAll(text=title_input)[0].parent.parent.parent list_output = soup_line_item.findAll('td') # List of elements ``` So what would the equivalent code in lxml be? EDIT 1: The URLs were concealed the first time I posted. I have now fixed that. EDIT 2: I have added my BeautifulSoup-based solution to clarify what I'm trying to do. EDIT 3: +10 to root for your solution. For the benefit of future developers with the same question, I'm posting here a quick-and-dirty script that worked for me: ``` #!/usr/bin/env python import urllib import lxml.html url = 'balancesheet.html' result = urllib.urlopen(url) html = result.read() doc = lxml.html.document_fromstring(html) x = doc.xpath(u'.//th[div[text()="Cash & Short Term Investments"]]/following-sibling::td/text()') print x ```
2012/11/29
[ "https://Stackoverflow.com/questions/13620633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/776739/" ]
``` In [18]: doc.xpath(u'.//th[div[text()="Cash & Short Term Investments"]]/following-sibling::td/text()') Out[18]: [' 144,841', ' 169,760', ' 189,252', ' 86,743', ' 57,379'] ``` or you can define a little function to get the rows by text: ``` In [19]: def func(doc,txt): ...: exp=u'.//th[div[text()="{0}"]]'\ ...: u'/following-sibling::td/text()'.format(txt) ...: return [i.strip() for i in doc.xpath(exp)] In [20]: func(doc,u'Total Accounts Receivable') Out[20]: ['338,594', '270,133', '214,169', '244,940', '236,331'] ``` or you can get all the rows to a `dict`: ``` In [21]: d={} In [22]: for i in doc.xpath(u'.//tbody/tr'): ...: if len(i.xpath(u'.//th/div/text()')): ...: d[i.xpath(u'.//th/div/text()')[0]]=\ ...: [e.strip() for e in i.xpath(u'.//td/text()')] In [23]: d.items()[:3] Out[23]: [('Accounts Receivables, Gross', ['344,241', '274,894', '218,255', '247,600', '238,596']), ('Short-Term Investments', ['27,165', '26,067', '24,400', '851', '159']), ('Cash & Short Term Investments', ['144,841', '169,760', '189,252', '86,743', '57,379'])] ```
Let `html` hold the HTML source code:

```
import lxml.html

doc = lxml.html.document_fromstring(html)
rows_element = doc.xpath('/html/body/div/div[2]/div/div[5]/div/div/table/tbody/tr')
for row in rows_element:
    print row.text_content()
```

Not tested, but it should work.

P.S. Install XPath Checker or FireFinder in Firefox to help you with XPath.
15,222
17,669,095
I'm trying to find exactly what's wrong with a larger job that I'm trying to schedule with launchd for the first time. So I made the simplest python file I could think of, `print 'running test'`, titled it `com.schedulertest.plist` and then made a plist file like so:

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.schedulertest.py.plist</string>
    <key>ProgramArguments</key>
    <array>
        <string>arch</string>
        <string>-i386</string>
        <string>/usr/bin/python2.7</string>
        <string>/Users/user/Documents/Python_Alerts_Project/schedulertest.py</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>StartInterval</key>
    <integer>60</integer>
</dict>
</plist>
```

Then I saved it in `$HOME/Library/LaunchAgents/` and ran:

```
launchctl load com.schedulertest.plist
```

I should be getting the print output from my py script every 60 seconds, right? I don't see anything though -- is there an obvious fault in my script or process?
2013/07/16
[ "https://Stackoverflow.com/questions/17669095", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1844086/" ]
So the answer was not a big deal, but it might help others to share the solution. I had simply forgotten, as we will when moving around several virtualenvs, which python I was in. If you're having trouble and your `.plist` and script seem well-formed, it won't hurt to run `which python` etc, and check the result against the path you're listing in your program arguments.
### Troubleshooting * To debug `.plist`, you can check the log for any error, e.g. ``` tail -f /var/log/system.log ``` To specify custom log, use: ``` <key>StandardOutPath</key> <string>/var/log/myjob.log</string> <key>StandardErrorPath</key> <string>/var/log/myjob.log</string> ``` * To find the latest exit status of the job, run: ``` launchctl list com.schedulertest.plist ``` * To make sure the syntax is correct, use `plutil` command. See: [Debugging `launchd` Jobs section of Creating Launch Daemons and Agents page](https://developer.apple.com/library/content/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingLaunchdJobs.html#//apple_ref/doc/uid/10000172i-SW7-BCIEDDBJ).
15,223