35,494,331
How can I get generics to work in Python.NET with CPython? I get an error when using the subscript syntax from [Python.NET Using Generics](http://pythonnet.sourceforge.net/readme.html#generics) ``` TypeError: unsubscriptable object ``` With Python 2.7.11 + pythonnet==2.1.0.dev1 ``` >python.exe Python 2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015, 20:32:19) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import clr >>> from System import EventHandler >>> from System import EventArgs >>> EventHandler[EventArgs] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unsubscriptable object ``` I've also tried pythonnet==2.0.0 and building from GitHub ca15fe8 with ReleaseWin x86 and got the same error. With IronPython-2.7.5: ``` >ipy.exe IronPython 2.7.5 (2.7.5.0) on .NET 4.0.30319.0 (32-bit) Type "help", "copyright", "credits" or "license" for more information. >>> import clr >>> from System import EventHandler >>> from System import EventArgs >>> EventHandler[EventArgs] <type 'EventHandler[EventArgs]'> ```
2016/02/18
[ "https://Stackoverflow.com/questions/35494331", "https://Stackoverflow.com", "https://Stackoverflow.com/users/742745/" ]
You can get the generic class object explicitly: ``` EventHandler = getattr(System, 'EventHandler`1') ``` The number indicates the number of generic arguments.
That doesn't work because both generic and non-generic versions of the `EventHandler` class exist in the `System` namespace; the name is overloaded, and you need to indicate that you want the generic version. I'm not sure how exactly Python.NET handles overloaded classes/functions, but it seems to have an `Overloads` property. Try something like this and see how that goes: ``` EventHandler_EventArgs = EventHandler.Overloads[EventArgs] ``` (this doesn't work) It doesn't seem like Python.NET has a way to resolve the overloaded class name; each overload is treated as a separate, distinct type (using its CLR name). It doesn't do the same thing as IronPython and lump them into a single containing type. In this particular case, the `EventArgs` name corresponds to the non-generic `EventArgs` type. You'll need to get the appropriate type directly from the System module as filmor illustrates.
10,738
30,189,013
This is the code he wants me to enter, which fails: ``` from sys import argv script, user_name = argv prompt = '> ' print "Hi %s, I'm the %s script." % (user_name, script) print "I'd like to ask you a few questions." print "Do you like me %s?" % user_name likes = raw_input(prompt) ``` This is the code I modified after seeing errors and learning that he uses Python 2; I've just been making corrections to his code as I find them online. ``` from sys import argv script, user_name = argv prompt = '> ' print ("Hi" user_name: %s, "I/'m the", %s: script.) print ("I;d like tok ask you a few questions") print ("Do you like me %s") % (user_name) likes = input(prompt) ``` --- All of `%s`, `%d`, `%r` have failed for me. Is this a Python 2 convention? Should I be using something else? For example ``` foo = bar print ("the variable foo %s is a fundamental programming issue.) ``` I have tried using tuples? as in: ``` print ("the variable foo", %s: foo, "is a fundamental programming issue.") ``` with no success
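For reference, a minimal sketch of how `%`-style formatting is meant to be used: all placeholders live inside one string, and the values follow the `%` operator in a tuple. This works the same way in Python 2 and 3 (the names below stand in for the values unpacked from `argv`):

```python
script = "ex15.py"      # stand-in for argv[0]
user_name = "Alice"     # stand-in for argv[1]

# The placeholders go inside the string; the values follow in a tuple.
greeting = "Hi %s, I'm the %s script." % (user_name, script)

# A single value does not need a tuple.
question = "Do you like me %s?" % user_name
```

The only Python 2 vs. 3 difference in the exercise is `print` as a statement vs. a function; the `%` formatting itself is identical.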
2015/05/12
[ "https://Stackoverflow.com/questions/30189013", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4891078/" ]
Here is one method: ``` select v.* from vusearch as v where v.JobId = (select max(v2.JobId) from vusearch as v2 where v2.AddressId = v.AddressId ); ```
Managed to get it fixed - I probably hadn't provided enough information as I was trying to keep my explanation simple. Many thanks for your help Gordon ((vuSearch.PDID) IN ( (SELECT Max(v2.PDID) FROM vuSearch AS v2 GROUP BY v2.PAID)))
10,739
51,947,819
I have a pandas dataframe with 5 years of daily time series data. I want to make a monthly plot from the whole dataset so that the plot shows the variation (std or something else) within the monthly data. I tried to create a figure similar to this one but did not find a way to do it: [![enter image description here](https://i.stack.imgur.com/wVxxx.png)](https://i.stack.imgur.com/wVxxx.png) for example, I have pseudo daily precipitation data: ``` date = pd.to_datetime("1st of Dec, 1999") dates = date+pd.to_timedelta(np.arange(1900), 'D') ppt = np.random.normal(loc=0.0, scale=1.0, size=1900).cumsum() df = pd.DataFrame({'pre':ppt},index=dates) ``` Manually I can do it like: ``` one = df['pre']['1999-12-01':'2000-11-29'].values two = df['pre']['2000-12-01':'2001-11-30'].values three = df['pre']['2001-12-01':'2002-11-30'].values four = df['pre']['2002-12-01':'2003-11-30'].values five = df['pre']['2003-12-01':'2004-11-29'].values df = pd.DataFrame({'2000':one,'2001':two,'2002':three,'2003':four,'2004':five}) std = df.std(axis=1) lw = df.mean(axis=1)-std up = df.mean(axis=1)+std plt.fill_between(np.arange(365), up, lw, alpha=.4) ``` I am looking for a more pythonic way to do this instead of doing it manually! Any help will be highly appreciated.
2018/08/21
[ "https://Stackoverflow.com/questions/51947819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5111380/" ]
If I'm understanding you correctly you'd like to plot your daily observations against a monthly periodic mean +/- 1 standard deviation. And that's what you get in my screenshot below. Never mind the lackluster design and color choice. We'll get to that if this is something you can use. And please notice that I've replaced your `ppt = np.random.rand(1900)` with `ppt = np.random.normal(loc=0.0, scale=1.0, size=1900).cumsum()` just to make the data look a bit more like your screenshot. [![enter image description here](https://i.stack.imgur.com/WgmOA.png)](https://i.stack.imgur.com/WgmOA.png) Here I've aggregated the daily data by month, and retrieved mean and standard deviation for each month. Then I've merged that data with the original dataframe so that you're able to plot both the source and the grouped data like this: ``` # imports import matplotlib.pyplot as plt import pandas as pd import matplotlib.dates as mdates import numpy as np # Data that matches your setup, but with a random # seed to make it reproducible np.random.seed(42) date = pd.to_datetime("1st of Dec, 1999") dates = date+pd.to_timedelta(np.arange(1900), 'D') #ppt = np.random.rand(1900) ppt = np.random.normal(loc=0.0, scale=1.0, size=1900).cumsum() df = pd.DataFrame({'ppt':ppt},index=dates) # A subset df = df.tail(200) # Add a yearmonth column df['YearMonth'] = df.index.map(lambda x: 100*x.year + x.month) # Create aggregated dataframe df2 = df.groupby('YearMonth').agg(['mean', 'std']).reset_index() df2.columns = ['YearMonth', 'mean', 'std'] # Merge original data and aggregated data df3 = pd.merge(df,df2,how='left',on=['YearMonth']) df3 = df3.set_index(df.index) df3 = df3[['ppt', 'mean', 'std']] # Function to make your plot def monthplot(): fig, ax = plt.subplots(1) ax.set_facecolor('white') # Define upper and lower bounds for shaded variation lower_bound = df3['mean'] + df3['std']*-1 upper_bound = df3['mean'] + df3['std'] # Source data and mean ax.plot(df3.index,df3['mean'], lw=0.5, color = 'red') ax.plot(df3.index, df3['ppt'], lw=0.1, color = 'blue') # Variation and shaded area ax.fill_between(df3.index, lower_bound, upper_bound, facecolor='grey', alpha=0.5) fig = ax.get_figure() # Assign months to X axis locator = mdates.MonthLocator() # every month # Specify the format - %b gives us Jan, Feb... fmt = mdates.DateFormatter('%b') X = plt.gca().xaxis X.set_major_locator(locator) X.set_major_formatter(fmt) fig.show() monthplot() ``` Check out [this post](https://stackoverflow.com/questions/46555819/months-as-axis-ticks) for more on axis formatting and [this post](https://stackoverflow.com/questions/25146121/extracting-just-month-and-year-from-pandas-datetime-column-python) on how to add a YearMonth column.
There are a few mistakes in your example, but I don't think they're important here. Do you want all years to be on the same graphic (like in your example)? If you do, this may help you: ``` df['month'] = df.index.strftime("%m-%d") df['year'] = df.index.year df.set_index(['month']).drop(['year'], axis=1).plot() ```
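Another route to the shaded band asked about, sketched here as one option: group the daily values by calendar day-of-year across all five years, which replaces the manual per-year slicing with a single `groupby` (data generated as in the question, with an added seed):

```python
import numpy as np
import pandas as pd

# Reproduce the question's pseudo data, seeded for reproducibility
np.random.seed(0)
dates = pd.to_datetime("1st of Dec, 1999") + pd.to_timedelta(np.arange(1900), "D")
df = pd.DataFrame({"pre": np.random.normal(size=1900).cumsum()}, index=dates)

# One aggregate per calendar day, pooled across all years
grouped = df.groupby(df.index.dayofyear)["pre"]
day_mean = grouped.mean()
day_std = grouped.std()
lower = day_mean - day_std
upper = day_mean + day_std
# plt.fill_between(day_mean.index, lower, upper, alpha=.4) would then draw the band
```

Grouping on `dayofyear` shifts leap days by one position in leap years; for this kind of climatology-style plot that is usually acceptable.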
10,740
67,010,037
I am not an expert in python. When I run this code, I get an error stating that the source is empty. It occurs in the statement that converts bgr to rgb from a live video feed. I also attached some of the error code below. I did try to resolve it changing some of it, but it did not work out. So, if you have any ideas, please share. ``` import cv2 import mediapipe as mp import time class handDetector(): def __init__(self, mode = False, maxHands=2, detectionCon=0.5, trackCon=0.5): self.mode = mode self.maxHands = maxHands self.detectionCon = detectionCon self.trackCon = trackCon self.mpHands = mp.solutions.hands self.hands = self.mpHands.Hands(self.mode, self.maxHands, self.detectionCon, self.trackCon) self.mpDraw = mp.solutions.drawing_utils def findhands(self, img, draw=True): imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) self.results = self.hands.process(imgRGB) if self.results.multi_hand_landmarks: for handLms in self.results.multi_hand_landmarks: if draw: self.mpDraw.draw_landmarks(img, handLms, self.mpHands.HAND_CONNECTIONS) return img # for id, lm in enumerate(handLms.landmark): # # print(id,lm) # h, w, c = img.shape # cx, cy = int(lm.x * w), int(lm.y * h) # print(id, cx, cy) # # if id==4: # cv2.circle(img, (cx, cy), 15, (255, 0, 255), cv2.FILLED) def findPosition(self, img, handNo=0, draw=True): lmList = [] if self.results.multi_hand_landmarks: myHand = self.results.multi_hand_landmarks[handNo] for id, lm in enumerate(myHand.landmark): # print(id, lm) h, w, c = img.shape cx, cy = int(lm.x * w), int(lm.y * h) # print(id, cx, cy) lmList.append([id, cx, cy]) if draw: cv2.circle(img, (cx, cy), 10, (255, 0, 0), cv2.FILLED) return lmList def main(): pTime = 0 cTime = 0 cap = cv2.VideoCapture(0) detector = handDetector() while True: success,img = cap.read() img = detector.findhands(img) lmList = detector.findPosition(img) if len(lmList) != 0: print(lmList[4]) cTime = time.time() fps = 1/(cTime-pTime) pTime = cTime cv2.putText(img, str(int(fps)), (10, 70), 
cv2.FONT_HERSHEY_PLAIN, 3, (255, 0, 255), 3) cv2.imshow("Image", img) cv2.waitKey(1) if __name__=="__main__": main() ``` Error code is : ``` cv2.cvtColor(img, cv2.COLOR_BGR2RGB) cv2.error: OpenCV(4.5.1) error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor' ``` Thank You
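A likely reading of this error: `cap.read()` failed (wrong camera index, or the device is in use), so `success` is `False` and `img` is `None`, and `cv2.cvtColor` then asserts `!_src.empty()`. A minimal guard is sketched below; the BGR→RGB conversion is done here with a pure-NumPy channel swap so the sketch runs without a camera or OpenCV installed, but in the real loop you would keep `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)`:

```python
import numpy as np

def safe_to_rgb(img):
    # cap.read() returns (False, None) when no frame is available;
    # skip those frames instead of handing an empty image to cvtColor.
    if img is None or img.size == 0:
        return None
    return img[..., ::-1]  # BGR -> RGB channel swap

# In the main loop the guard would look like:
#   success, img = cap.read()
#   if not success or img is None:
#       continue
```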
2021/04/08
[ "https://Stackoverflow.com/questions/67010037", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14020038/" ]
You can use `Series.isin`: ``` In [1998]: res = df1[df1.id_number.isin(df2.id_number) & df1.accuracy.ge(85)] In [1999]: res Out[1999]: Name Contact_number id_number accuracy 0 Eric 9786543628 AZ256hy 90 1 Jack 9786543628 AZ98kds 85 ``` **EDIT:** If you want only certain columns: ``` In [2089]: res = df1[df1.id_number.isin(df2.id_number) & df1.accuracy.ge(85)][['Name', 'Contact_number']] In [2090]: res Out[2090]: Name Contact_number 0 Eric 9786543628 1 Jack 9786543628 ```
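To make this answer reproducible, here is a tiny pair of frames; the data is invented to match the output shown above:

```python
import pandas as pd

df1 = pd.DataFrame({
    "Name": ["Eric", "Jack", "Ann"],
    "Contact_number": [9786543628, 9786543628, 111],
    "id_number": ["AZ256hy", "AZ98kds", "XX"],
    "accuracy": [90, 85, 99],
})
df2 = pd.DataFrame({"id_number": ["AZ256hy", "AZ98kds"]})

# Keep rows whose id_number appears in df2 AND whose accuracy is >= 85
res = df1[df1.id_number.isin(df2.id_number) & df1.accuracy.ge(85)]
```

`isin` is a membership test against the whole of `df2.id_number`, so it does not require the two frames to be the same length or aligned, unlike an elementwise `==` comparison.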
Edit: I made changes to the conditions. This should work ``` df = pd.read_excel(open(r'input.xlsx', 'rb'), sheet_name='sheet1') df2 = pd.read_excel(open(r'input.xlsx', 'rb'), sheet_name='sheet2') df.loc[(df['id_number'] == df2['id_number']) & (df['accuracy']>= 85),['Name','Contact_number', 'id_number']] ```
10,741
28,458,785
How can I pass a `sed` command to `popen` without using a raw string? When I pass a `sed` command to `popen` in list form, I get an error: `unterminated address regex` (see first example) ```python >>> COMMAND = ['sed', '-i', '-e', "\$amystring", '/home/map/myfile'] >>> subprocess.Popen(COMMAND).communicate(input=None) sed: -e expression #1, char 11: unterminated address regex ``` Using the raw string form, it works as expected: ```python >>> COMMAND = r"""sed -i -e "\$amystring" /home/map/myfile""" >>> subprocess.Popen(COMMAND, shell=True).communicate(input=None) ``` I'm really interested in passing `"\$amystring"` as an element of the list. Please avoid answers like ```python >>> COMMAND = r" ".join(['sed', '-i', '-e', "\$amystring", '/home/map/myfile']) >>> subprocess.Popen(COMMAND, shell=True).communicate(input=None) ```
2015/02/11
[ "https://Stackoverflow.com/questions/28458785", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1659599/" ]
The difference between the two forms is that with `shell=True` as an argument, the string gets passed as is to the shell, which then interprets it. Inside double quotes, bash treats `\$` as an escaped `$`, so what actually runs is: ``` sed -i -e $amystring /home/map/myfile ``` With the list args and the default `shell=False`, Python calls the executable directly with the arguments in the list. In this case, the literal string is passed to sed, ``` sed -i -e '\$amystring' /home/map/myfile ``` and `\$amystring` is not a valid `sed` expression: the leading `\$` opens an address regex delimited by `$` that is never closed, hence `unterminated address regex`. In this case, you'd need to call ``` >>> COMMAND = ['sed', '-i', '-e', "$amystring", '/home/map/myfile'] >>> subprocess.Popen(COMMAND).communicate(input=None) ``` since the string does not need to be escaped for the shell.
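One way to see the list-form behavior directly is to echo the argument back through a child Python process (a hypothetical probe, not part of the question's code):

```python
import subprocess
import sys

# List form, shell=False: the argument reaches the child byte-for-byte.
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", r"\$amystring"]
)
# The backslash survives, which is exactly what sed then chokes on.
```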
There is no such thing as a raw string. There are only raw string *literals*. A literal is something that you type in the Python source code. `r'\$amystring'` and `'\\$amystring'` are the same *string objects* despite being written with different string *literals*. As [@Jonathan Villemaire-Krajden said](https://stackoverflow.com/a/28460565/4279): if there is no `shell=True`, then you don't need to escape the `$` shell metacharacter. You only need to escape it if you run the command in the shell: ``` $ python -c 'import sys; print(sys.argv)' "\$amystring" ['-c', '$amystring'] ``` Note: there is no backslash in the output. Don't use `.communicate()` unless you redirect standard streams using `PIPE`; you could use `call()` or `check_call()` instead: ``` import subprocess rc = subprocess.call(['sed', '-i', '-e', '$amystring', '/home/map/myfile']) ``` To emulate the `$amystring` sed command in Python (± newlines): ``` from __future__ import print_function with open('/home/map/myfile', 'a') as file: print('mystring', file=file) ```
10,742
65,354,710
I was trying to connect to and fetch data from a BigQuery dataset in local PyCharm using PySpark. I ran the script below in PyCharm: ``` from pyspark.sql import SparkSession spark = SparkSession.builder\ .config('spark.jars', "C:/Users/PycharmProjects/pythonProject/spark-bigquery-latest.jar")\ .getOrCreate() conn = spark.read.format("bigquery")\ .option("credentialsFile", "C:/Users/PycharmProjects/pythonProject/google-bq-api.json")\ .option("parentProject", "Google-Project-ID")\ .option("project", "Dataset-Name")\ .option("table", "dataset.schema.tablename")\ .load() conn.show() ``` For this I got the below error: ``` Exception in thread "main" java.io.IOException: No FileSystem for scheme: C at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373) at org.apache.spark.deploy.DependencyUtils$.resolveGlobPath(DependencyUtils.scala:191) at org.apache.spark.deploy.DependencyUtils$.$anonfun$resolveGlobPaths$2(DependencyUtils.scala:147) at org.apache.spark.deploy.DependencyUtils$.$anonfun$resolveGlobPaths$2$adapted(DependencyUtils.scala:145) at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:245) at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36) at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33) at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38) at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245) at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242) at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108) at 
org.apache.spark.deploy.DependencyUtils$.resolveGlobPaths(DependencyUtils.scala:145) at org.apache.spark.deploy.SparkSubmit.$anonfun$prepareSubmitEnvironment$4(SparkSubmit.scala:363) at scala.Option.map(Option.scala:230) at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:363) at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:871) at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180) at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203) at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90) at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Traceback (most recent call last): File "C:\Users\naveen.chandar\PycharmProjects\pythonProject\BigQueryConnector.py", line 4, in <module> spark = SparkSession.builder.config('spark.jars', 'C:/Users/naveen.chandar/PycharmProjects/pythonProject/spark-bigquery-latest.jar').getOrCreate() File "C:\Users\naveen.chandar\AppData\Local\Programs\Python\Python39\lib\site-packages\pyspark\sql\session.py", line 186, in getOrCreate sc = SparkContext.getOrCreate(sparkConf) File "C:\Users\naveen.chandar\AppData\Local\Programs\Python\Python39\lib\site-packages\pyspark\context.py", line 376, in getOrCreate SparkContext(conf=conf or SparkConf()) File "C:\Users\naveen.chandar\AppData\Local\Programs\Python\Python39\lib\site-packages\pyspark\context.py", line 133, in __init__ SparkContext._ensure_initialized(self, gateway=gateway, conf=conf) File "C:\Users\naveen.chandar\AppData\Local\Programs\Python\Python39\lib\site-packages\pyspark\context.py", line 325, in _ensure_initialized SparkContext._gateway = gateway or launch_gateway(conf) File "C:\Users\naveen.chandar\AppData\Local\Programs\Python\Python39\lib\site-packages\pyspark\java_gateway.py", line 105, in 
launch_gateway raise Exception("Java gateway process exited before sending its port number") Exception: Java gateway process exited before sending its port number ``` So, I researched and tried it from a different directory like the D drive, and also tried to fix a static port with `set PYSPARK_SUBMIT_ARGS="--master spark://<IP_Address>:<Port>"`, but I still got the same error in PyCharm. Then I thought of trying the same script in a local command prompt under PySpark, and I got this error: ``` failed to find class org/conscrypt/CryptoUpcalls ERROR:root:Exception while sending command. Traceback (most recent call last): File "D:\spark-2.4.7-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1152, in send_command answer = smart_decode(self.stream.readline()[:-1]) File "C:\Users\naveen.chandar\AppData\Local\Programs\Python\Python37\lib\socket.py", line 589, in readinto return self._sock.recv_into(b) ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\spark-2.4.7-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 985, in send_command response = connection.send_command(command) File "D:\spark-2.4.7-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1164, in send_command "Error while receiving", e, proto.ERROR_ON_RECEIVE) py4j.protocol.Py4JNetworkError: Error while receiving Traceback (most recent call last): File "<stdin>", line 1, in <module> File "D:\spark-2.4.7-bin-hadoop2.7\python\pyspark\sql\dataframe.py", line 381, in show print(self._jdf.showString(n, 20, vertical)) File "D:\spark-2.4.7-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__ File "D:\spark-2.4.7-bin-hadoop2.7\python\pyspark\sql\utils.py", line 63, in deco return f(*a, **kw) File 
"D:\spark-2.4.7-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 336, in get_return_value py4j.protocol.Py4JError: An error occurred while calling o42.showString ``` My Python Version is 3.7.9 and Spark Version is 2.4.7 So either way I ran out of idea's and I appreciate some help on any one of the situation I facing... Thanks In Advance!!
2020/12/18
[ "https://Stackoverflow.com/questions/65354710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10030455/" ]
Start your file system references with `file:///c:/...`
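One way to read this answer: Hadoop's filesystem resolver parses `C:/...` as a URI with scheme `C`, hence `No FileSystem for scheme: C`; an explicit `file:///` scheme avoids that. A small helper (hypothetical, standard library only) shows the form the path needs:

```python
from pathlib import PureWindowsPath

def to_file_uri(win_path):
    # Turn C:/... into file:///C:/... so the drive letter "C"
    # is no longer mistaken for a URI scheme.
    return PureWindowsPath(win_path).as_uri()

# The result would then be passed to .config('spark.jars', ...)
```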
You need to replace `/` with `\` for the path to work
10,743
29,580,828
I'm starting with Docker. I have started with a Hello World script in Python 3. This is my Dockerfile: ``` FROM ubuntu:latest RUN apt-get update RUN apt-get install python3 COPY . hello.py CMD python3 hello.py ``` In the same directory, I have this python script: ``` if __name__ == "__main__": print("Hello World!"); ``` I have built the image with this command: ``` docker build -t home/ubuntu-python-hello . ``` So far, so good. But when I try to run the script with this command: ``` docker run home/ubuntu-python-hello ``` I get this error: ``` /usr/bin/python3: can't find '__main__' module in 'hello.py' ``` What am I doing wrong? Any advice or suggestion is accepted, I'm just a newbie. Thanks.
2015/04/11
[ "https://Stackoverflow.com/questions/29580828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3026283/" ]
Thanks to Gerrat, I solved it this way: ``` COPY hello.py hello.py ``` instead of ``` COPY . hello.py ```
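Likely why this works: when the `COPY` source is a directory (`.`), Docker treats the destination as a directory too, so the whole build context ended up *inside* a directory named `hello.py`, and Python then reported it couldn't find a `__main__` module in it. A sketch of an equivalent Dockerfile that avoids the ambiguity (paths as in the question; `WORKDIR` is an added assumption):

```dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get install -y python3
WORKDIR /app
# Copy the single file explicitly; "." as the source would create a directory
COPY hello.py .
CMD ["python3", "hello.py"]
```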
You need to install Python non-interactively, confirming the prompt with `-y`: ``` RUN apt-get update && apt-get install -y python3-dev ```
10,744
25,561,020
Below are the snippets of my code regarding file upload. Here is my HTML code where I will choose and upload the file: ```html <form ng-click="addImportFile()" enctype="multipart/form-data"> <label for="importfile">Import Time Events File:</label><br><br> <label for="select_import_file">SELECT FILE:</label><br> <input id="import_file" type="file" class="file btn btn-default" ng-disabled="CutOffListTemp.id== Null" data-show-preview="false"> <input class="btn btn-primary" type="submit" name="submit" value="Upload" ng-disabled="CutOffListTemp.id== Null"/><br/><br/> </form> ``` This is my controller that will link both html and my python file: ```javascript angular.module('hrisWebappApp').controller('ImportPayrollCtrl', function ($scope, $state, $stateParams, $http, ngTableParams, $modal, $filter) { $scope.addImportFile = function() { $http.post('http://127.0.0.1:5000/api/v1.0/upload_file/' + $scope.CutOffListTemp.id, {}) .success(function(data, status, headers, config) { console.log(data); if (data.success) { console.log('import success!'); } else { console.log('importing of file failed' ); } }) .error(function(data, status, headers, config) {}); }; ``` This is my python file: ```python @api.route('/upload_file/<int:id>', methods=['GET','POST']) @cross_origin(headers=['Content-Type']) def upload_file(id): print "hello" try: os.stat('UPLOAD_FOLDER') except: os.mkdir('UPLOAD_FOLDER') file = request.files['file'] print 'filename: ' + file.filename if file and allowed_file(file.filename): print 'allowing file' filename = secure_filename(file.filename) path=(os.path.join(current_app.config['UPLOAD_FOLDER'], filename)) file.save(path) #The end of the line which save the file you uploaded. 
return redirect(url_for('uploaded_file', filename=filename)) return ''' <!doctype html> <title>Upload new File</title> <h1>Upload new File</h1> <p>opsss it seems you uploaded an invalid filename please use .csv only</p> <form action="" method=post enctype=multipart/form-data> <p><input type=file name=file> <input type=submit value=Upload> </form> ''' ``` And the result in the console gave me this even if I select the correct format of file: ```html <!doctype html> <title>Upload new File</title> <h1>Upload new File</h1> <p>opsss it seems you uploaded an invalid filename please use .csv only</p> <form action="" method=post enctype=multipart/form-data> <p><input type=file name=file> <input type=submit value=Upload> </form> ``` This is not returning to my HTML and I cannot upload the file.
2014/08/29
[ "https://Stackoverflow.com/questions/25561020", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3988760/" ]
I can finally upload the file. I changed the Angular part to this: ``` $scope.addImportFile = function() { var f = document.getElementById('file').files[0]; console.log(f); var formData = new FormData(); formData.append('file', f); $http({method: 'POST', url: 'http://127.0.0.1:5000/api/v1.0/upload_file/' + $scope.CutOffListTemp.id, data: formData, headers: {'Content-Type': undefined}, transformRequest: angular.identity}) .success(function(data, status, headers, config) { console.log(data); if (data.success) { console.log('import success!'); } }) .error(function(data, status, headers, config) { }); }; ```
The first thing is about the post request. Without ng-click="addImportFile()", the browser will usually take care of serializing form data and sending it to the server. So if you try: ``` <form method="post" enctype="multipart/form-data" action="http://127.0.0.1:5000/api/v1.0/upload_file"> <label for="importfile">Import Time Events File:</label><br><br> <label for="select_import_file">SELECT FILE:</label><br> <input id="import_file" type="file" name="file" class="file btn btn-default" ng-disabled="CutOffListTemp.id== Null" data-show-preview="false"> <input class="btn btn-primary" type="submit" name="submit" value="Upload" ng-disabled="CutOffListTemp.id== Null"/><br/><br/> </form> ``` and then in python, make your request url independent of scope.CutOffListTemp.id: @api.route('/upload\_file', methods=['GET','POST']) it probably will work. Alternatively, if you want to use your custom function to send the post request, the browser will not take care of the serialization any more; you will need to do it yourself. In angular, the API for $http.post is: $http.post('/someUrl', data).success(successCallback); If we use "{}" for the data parameter, which means empty, the server will not find the data named "file" (file = request.files['file']), and thus you will see Bad Request. To fix it, we need to use FormData for the file upload, which requires that your browser support HTML5: ``` $scope.addImportFile = function() { var f = document.getElementById('file').files[0]; var fd = new FormData(); fd.append("file", f); $http.post('http://127.0.0.1:5000/api/v1.0/upload_file/' + $scope.CutOffListTemp.id, fd, {headers: {'Content-Type': undefined}}) .success...... 
``` Other than using the native JavaScript code above, there are plenty of great Angular file-upload libraries that can make file upload much easier in Angular; you may want to have a look at one of them (reference: [File Upload using AngularJS](https://stackoverflow.com/questions/18571001/file-upload-using-angularjs)): * <https://github.com/nervgh/angular-file-upload> * <https://github.com/leon/angular-upload> * ......
10,745
62,497,777
Can't start the server using Apache + Django. OS: MacOS Catalina. Apache: 2.4.43. Python: 3.8. Django: 3.0.7. Apache is installed from Brew; mod\_wsgi installed via pip. The application was created through the standard command ``` django-admin startproject project_temp ``` The application starts when I call ``` python manage.py runserver ``` Starting with mod\_wsgi-express is also OK: ``` mod_wsgi-express start-server ``` But when I start Apache, the server is not accessible (checked at "localhost:80"). Tell me, what do I need to do to start the server? Httpd settings: ``` ServerRoot "/usr/local/opt/httpd" ServerName localhost Listen 80 LoadModule mpm_prefork_module lib/httpd/modules/mod_mpm_prefork.so LoadModule authn_file_module lib/httpd/modules/mod_authn_file.so LoadModule authn_core_module lib/httpd/modules/mod_authn_core.so LoadModule authz_host_module lib/httpd/modules/mod_authz_host.so LoadModule authz_groupfile_module lib/httpd/modules/mod_authz_groupfile.so LoadModule authz_user_module lib/httpd/modules/mod_authz_user.so LoadModule authz_core_module lib/httpd/modules/mod_authz_core.so LoadModule access_compat_module lib/httpd/modules/mod_access_compat.so LoadModule auth_basic_module lib/httpd/modules/mod_auth_basic.so LoadModule reqtimeout_module lib/httpd/modules/mod_reqtimeout.so LoadModule filter_module lib/httpd/modules/mod_filter.so LoadModule mime_module lib/httpd/modules/mod_mime.so LoadModule log_config_module lib/httpd/modules/mod_log_config.so LoadModule env_module lib/httpd/modules/mod_env.so LoadModule headers_module lib/httpd/modules/mod_headers.so LoadModule setenvif_module lib/httpd/modules/mod_setenvif.so LoadModule version_module lib/httpd/modules/mod_version.so LoadModule unixd_module lib/httpd/modules/mod_unixd.so LoadModule status_module lib/httpd/modules/mod_status.so LoadModule autoindex_module lib/httpd/modules/mod_autoindex.so LoadModule alias_module lib/httpd/modules/mod_alias.so LoadModule rewrite_module 
lib/httpd/modules/mod_rewrite.so LoadModule wsgi_module /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/mod_wsgi/server/mod_wsgi-py38.cpython-38-darwin.so <Directory /> AllowOverride All </Directory> <Files ".ht*"> Require all denied </Files> <IfModule log_config_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> CustomLog "/usr/local/var/log/httpd/access_log" common </IfModule> <IfModule headers_module> RequestHeader unset Proxy early </IfModule> <IfModule mime_module> TypesConfig /usr/local/etc/httpd/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz </IfModule> <IfModule proxy_html_module> Include /usr/local/etc/httpd/extra/proxy-html.conf </IfModule> <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> WSGIScriptAlias / /Users/r/Projects/project_temp/project_temp/wsgi.py WSGIPythonHome /Library/Frameworks/Python.framework/Versions/3.8 <VirtualHost localhost:80> LogLevel warn ErrorLog /Users/r/Projects/project_temp/log/error.log CustomLog /Users/r/Projects/project_temp/log/access.log combined <Directory /Users/r/Projects/project_temp/project_temp> <Files wsgi.py> Require all granted </Files> </Directory> </VirtualHost> ```
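Not a confirmed diagnosis, but a commonly missing piece in this kind of setup is running mod_wsgi in daemon mode with the Django project on the Python path. The directives below are standard mod_wsgi ones; the paths and process name are taken from (or modeled on) the question's config:

```apache
WSGIDaemonProcess project_temp python-path=/Users/r/Projects/project_temp
WSGIProcessGroup project_temp
WSGIScriptAlias / /Users/r/Projects/project_temp/project_temp/wsgi.py
```

Checking the `ErrorLog` configured in the VirtualHost after a restart would show whether `wsgi.py` is being reached at all.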
2020/06/21
[ "https://Stackoverflow.com/questions/62497777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7060063/" ]
`&str` is an immutable slice, somewhat similar to [`std::string_view`](https://en.cppreference.com/w/cpp/string/basic_string_view), so you cannot modify it. Instead, you may use an iterator and collect a new [`String`](https://doc.rust-lang.org/std/string/struct.String.html): ```rust let removed: String = foo .chars() .take(start) .chain(foo.chars().skip(stop)) .collect(); ``` The other way would be in-place `String` modification: ```rust let mut foo: String = "hello".to_string(); // ... foo.replace_range(start..stop, ""); ``` Keep in mind, however, that the last example is semantically different, because it operates on byte indices rather than char ones. It may therefore panic on wrong usage (e.g. when the `start` offset lies in the middle of a multi-byte char).
Kitsu's solution w/o the lambda: ``` fn remove(start: usize, stop: usize, s: &str) -> String { let mut rslt = "".to_string(); for (i, c) in s.chars().enumerate() { if start > i || stop < i + 1 { rslt.push(c); } } rslt } ``` …as fast as `replace_range`, but it can handle unicode characters [Playground](https://play.rust-lang.org/?version=stable&mode=release&edition=2021&gist=bf4b8b0f1d9ad886ac0702238177b11d)
10,746
9,511,825
I have a Python script run.py that I currently run in the command line. However, I want a start.py script, either in Python (preferably) or .bat, PHP, or some other means, that allows me to make it such that once run.py finishes running, the start.py script will re-execute the run.py script indefinitely, but ONLY after run.py finishes executing and exits. Sample Steps: - Start.py is run that starts Run.py - Run.py prints "hello" 3 times after 5 seconds and exits normally or abnormally - Start.py knows Run.py finished/closed and re-executes run.py How do I accomplish this?
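A minimal sketch of start.py for the steps above; the `max_restarts` parameter is an addition here so the loop can be bounded (pass `None` to restart indefinitely, as the question asks):

```python
# start.py sketch: run run.py, wait for it to exit (normally or not),
# then start it again. max_restarts is a hypothetical addition so the
# loop can terminate; None restarts forever.
import subprocess
import sys
import time

def supervise(script="run.py", delay=1.0, max_restarts=None):
    restarts = 0
    while max_restarts is None or restarts < max_restarts:
        # subprocess.run blocks until the child exits, which gives the
        # "only after run.py finishes" behaviour the question asks for
        result = subprocess.run([sys.executable, script])
        print(f"{script} exited with code {result.returncode}; restarting")
        restarts += 1
        time.sleep(delay)  # small pause so a crash loop doesn't spin hot
    return restarts
```

Calling `supervise()` from start.py replicates the three sample steps: start run.py, wait for it to finish for any reason, then start it again.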
2012/03/01
[ "https://Stackoverflow.com/questions/9511825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/971888/" ]
Your problem is that you're using -INFINITY and +INFINITY as win/loss scores. You should have scores for win/loss that are higher/lower than any other positional evaluation score, but not equal to your infinity values. This will guarantee that a move will be chosen even in positions that are hopelessly lost.
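A toy sketch of the point above (the game tree is a hypothetical stand-in, not the asker's board): with finite WIN/LOSS bounds instead of ±infinity, a hopelessly lost position still yields a comparable score, so the least-bad move gets chosen.

```python
WIN = 10_000    # above any positional evaluation score
LOSS = -10_000  # below any positional evaluation score

def minimax(node, maximizing):
    # Leaves are plain numbers: either a terminal WIN/LOSS score or a
    # positional evaluation strictly inside (LOSS, WIN).
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Maximizer to move, every option loses material -- but the scores still
# compare, so -30 (the least bad line) is preferred over the forced LOSS.
tree = [[LOSS, -50], [-20, -30]]
```

With infinities as win/loss scores, every losing line compares equal and no meaningful move can be selected; with bounded scores the ordering survives.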
It's been a long time since I implemented minimax so I might be wrong, but it seems to me that your code, when it encounters a winning or losing move, does not update the best variable (this happens in the (board.checkEnd()) statement at the top of your method). Also, if you want your algorithm to try to win by as much as possible, or lose by as little as possible if it can't win, I suggest you update your eval function. In a win situation, it should return a large value (larger than in any non-win situation): the more you win by, the larger the value. In a lose situation, it should return a large negative value (less than in any non-lose situation): the more you lose by, the smaller the value. It seems to me (without trying it out) that if you update your eval function that way and skip the check if (board.checkEnd()) altogether, your algorithm should work fine (unless there are other problems with it). Good luck!
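A sketch of the suggested eval ordering; `WIN_BASE`, the `winner`/`margin` arguments, and the magnitudes are hypothetical stand-ins, not taken from the asker's code:

```python
WIN_BASE = 1_000_000  # dominates every non-terminal evaluation

def evaluate(winner, margin, positional_score=0):
    # Wins rank above all non-wins, and bigger wins above smaller ones;
    # losses rank below all non-losses, and bigger losses below smaller
    # ones -- exactly the ordering the answer above describes.
    if winner == "me":
        return WIN_BASE + margin
    if winner == "opponent":
        return -(WIN_BASE + margin)
    return positional_score  # assumed |positional_score| << WIN_BASE
```

Any concrete eval that preserves this ordering lets plain minimax prefer big wins and small losses without special-casing terminal nodes.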
10,747
6,539,267
I started coding an RPG engine in Python and I want it to be very scripted (buffs, events). I am experimenting with events and hooking. I would appreciate it if you could tell me some mature open-source projects (so I can inspect the code) to learn from. Not necessarily Python, but that would be ideal. Thanks in advance.
2011/06/30
[ "https://Stackoverflow.com/questions/6539267", "https://Stackoverflow.com", "https://Stackoverflow.com/users/492162/" ]
As Daenyth suggested, [pygame](http://pygame.org/) is a great place to start. There are plenty of projects linked to on their page. The other library that is quite lovely for this type of thing is [Panda3D.](http://www.panda3d.org/) Though I haven't yet used it, the library comes with samples, and it looks like there is a list of projects using it somewhere. Have fun.
You might have a look at `pygame`, it's pretty common for this sort of thing.
10,750
63,129,698
I'm using subprocess to spawn a `conda create` command and capture the resulting `stdout` for later use. I also immediately print the `stdout` to the console so the user can still see the progress of the subprocess: ``` p = subprocess.Popen('conda create -n env1 python', stdout=subprocess.PIPE, stderr=subprocess.STDOUT) for line in iter(p.stdout.readline, b''): sys.stdout.write(line.decode(sys.stdout.encoding)) ``` This works fine until half way through the execution when the `conda create` requires user input: it prompts `Proceed (n/y)?` and waits for the user to input an option. However, the code above doesn't print the prompt and instead just waits for input "on a blank screen". Once an input is received the prompt is printed *afterwards* and then execution continues as expected. I assume this is because the input somehow blocks the prompt being written to `stdout` and so the `readline` doesn't receive new data until after the input block has been lifted. Is there a way I can ensure the input prompt is printed before the subprocess waits for user input? Note I'm running on Windows.
2020/07/28
[ "https://Stackoverflow.com/questions/63129698", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1963945/" ]
Although I'm sure pexpect would have worked in this case, I decided it would be overkill. Instead I used MisterMiyagi's insight and replaced `readline` with `read`. The final code is as so: ``` p = subprocess.Popen('conda create -n env1 python', stdout=subprocess.PIPE, stderr=subprocess.STDOUT) while p.poll() is None: sys.stdout.write(p.stdout.read(1).decode(sys.stdout.encoding)) sys.stdout.flush() ``` Note the `read(1)` as just using `read()` would block the while loop until an EOF is found. Given no EOF is found before the input prompt, nothing will be printed at all! In addition, `flush` is called which ensures the text written to `sys.stdout` is actually visible on screen.
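The blocking behaviour can be reproduced with a tiny stand-in child process that, like conda, prompts without a trailing newline; this is a sketch, not the conda case itself:

```python
import subprocess
import sys

# Hypothetical child that mimics conda's "Proceed (n/y)?" prompt:
# it writes the prompt with no newline, then waits for input.
CHILD = (
    "import sys;"
    "sys.stdout.write('Proceed (n/y)?'); sys.stdout.flush();"
    "sys.stdin.readline();"
    "print('done')"
)

def run_with_prompt(answer=b"y\n"):
    p = subprocess.Popen([sys.executable, "-c", CHILD],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    seen = b""
    # Read byte-by-byte: readline()/read() would stall here, since the
    # prompt has no newline and no EOF arrives until the child exits.
    while not seen.endswith(b"?"):
        seen += p.stdout.read(1)
    p.stdin.write(answer)
    p.stdin.flush()
    rest, _ = p.communicate()
    return seen, rest
```

The byte-wise read surfaces the prompt before any input is sent, which is exactly why `read(1)` works in the accepted approach where `readline` does not.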
For this use case I recommend using [pexpect](https://pypi.org/project/pexpect/). *stdin != stdout* Example use case where it conditionally sends to *stdin* on prompts on *stdout* ``` def expectgit(alog): TMPLOG = "/tmp/pexpect.log" cmd = f''' ssh -T git@github.com ;\ echo "alldone" ; ''' with open(TMPLOG, "w") as log: ch = pexpect.spawn(f"/bin/bash -c \"{cmd}\"", encoding='utf-8', logfile=log) while True: i = ch.expect(["successfully authenticated", "Are you sure you want to continue connecting"]) if i == 0: alog.info("Git all good") break elif i == 1: alog.info("Fingerprint verification") ch.send("yes\r") ch.expect("alldone") i = ch.expect([pexpect.EOF], timeout=5) ch.close() alog.info("expect done - EOF") with open(TMPLOG) as log: for l in log.readlines(): alog.debug(l.strip()) ```
10,751
49,830,562
Let the two lists be ``` x = [0,1,2,2,5,2,1,0,1,2] y = [0,1,3,2,1,4,1,3,1,2] ``` How to find the similar elements in these two lists in python and print them. What I am doing- ``` for i, j in x, y: if x[i] == y[j]: print(x[i], y[j]) ``` I want to find elements like x[0], y[0] and x[1], y[1] etc. This does not work, I am new to python. I want to find the exact index at which the elements are common. I want to find the index 0, 1, 3, 6, 8, 9; because the elements at these indices are equal
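For reference, the index set described above (0, 1, 3, 6, 8, 9) falls out of `enumerate` over `zip`; this is a sketch, not the asker's original loop:

```python
def matching_indices(x, y):
    # pair the elements positionally and keep the indices where they agree
    return [i for i, (a, b) in enumerate(zip(x, y)) if a == b]

# the lists from the question
x = [0, 1, 2, 2, 5, 2, 1, 0, 1, 2]
y = [0, 1, 3, 2, 1, 4, 1, 3, 1, 2]
```

`zip` stops at the shorter list, which is safe here since both lists have the same length.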
2018/04/14
[ "https://Stackoverflow.com/questions/49830562", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9645192/" ]
Two problems: First, you should initialize `smallest` with the last element if you want to search from the end of the array: ``` int result = findMinAux(arr,arr.length-1,arr[arr.length - 1]); ``` Secondly, you should reassign `smallest`: ``` if(startIndex>=0) { smallest = findMinAux(arr,startIndex,smallest); } ```
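The two fixes above (seed `smallest` with an actual element, and reassign it with the recursive call's result) can be sketched compactly; Python is used here for brevity, and the recursion walks upward rather than downward, unlike the original Java:

```python
def find_min(arr, index=0, smallest=None):
    # seed with a real element, mirroring findMinAux(arr, len-1, arr[-1])
    if smallest is None:
        smallest = arr[-1]
    if index >= len(arr):
        return smallest
    # reassign via the recursive call instead of discarding its result
    return find_min(arr, index + 1, min(smallest, arr[index]))
```

Dropping either fix reproduces the bugs described: seeding with an arbitrary value can return something not in the array, and ignoring the recursive result keeps the stale minimum.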
See this code. In every iteration, the smallest value seen so far is compared with the element at the current position, and the index is advanced based on that comparison. It is also tail recursive, so it can be used on large arrays. ``` public class Q1 { public static void main(String[] args) { int[] testArr = {12, 32, 45, 435, -1, 345, 0, 564, -10, 234, 25}; System.out.println(); System.out.println(find(testArr, 0, testArr.length - 1, testArr[0])); } public static int find(int[] arr, int currPos, int lastPos, int elem) { if (currPos == lastPos) { return elem; } else { if (elem < arr[currPos]) { return find(arr, currPos + 1, lastPos, elem); } else { return find(arr, currPos + 1, lastPos, arr[currPos]); } } } } ```
10,752
73,818,926
I am trying to send 2 params to the backend through a get request that returns some query based on the params I send to the backend. I am using React.js front end and flask python backend. My get request looks like this: ``` async function getStatsData() { const req = axios.get('http://127.0.0.1:5000/stat/', { params: { user: 0, flashcard_id: 1 }, headers: {'Access-Control-Allow-Origin': '*', 'X-Requested-With': 'XMLHttpRequest'}, }) const res = await req; return res.data.results.map((statsItem, index) => { return { stat1: statsItem.stat1, stat2: statsItem.stat2, stat3: statsItem.stat3, stat4: statsItem.user, key: statsItem.key } }) } ``` and then my route in the backend is this: ``` @app.route('/stat/<user>/<flashcard_id>', methods=['GET', 'POST', 'OPTIONS']) def stats(user, flashcard_id): def get_total_percent_correct(user): correct = d.db_query('SELECT COUNT(*) FROM cards.responses WHERE guess = answer AND user_id = %s' % user)[0][0] total = d.db_query('SELECT COUNT(*) FROM cards.responses WHERE user_id = %s' % user)[0][0] try: return round(float(correct)/float(total),3)*100 except: print('0') def get_percent_correct_for_flashcard(user,flashcard_id): total = d.db_query('SELECT COUNT(*) FROM cards.responses WHERE user_id = %s AND flashcard_id = %s' % (user, flashcard_id))[0][0] correct = d.db_query('SELECT COUNT(*) FROM cards.responses WHERE flashcard_id = %s AND guess = answer AND user_id = %s' % (flashcard_id, user))[0][0] try: return round(float(correct)/float(total),3)*100 except: print('0') def get_stats_for_flashcard(user_id, flashcard_id): attempts = d.db_query('SELECT COUNT(*) FROM cards.responses WHERE user_id = %s AND flashcard_id = %s' % (user_id, flashcard_id))[0][0] correct = d.db_query('SELECT COUNT(*) FROM cards.responses WHERE flashcard_id = %s AND guess = answer AND user_id = %s' % (flashcard_id, user_id))[0][0] missed = attempts - correct return attempts, correct, missed data = [{ "stat1": get_total_percent_correct(user), "stat2": 
get_percent_correct_for_flashcard(user, flashcard_id), 'stat3': get_stats_for_flashcard(user, flashcard_id), 'user': user, 'key':999 }] return {"response_code" : 200, "results" : data} ``` When I go to <http://127.0.0.1:5000/stat/0/1> in my browser, the stats are shown correctly but the get request is not working because it says xhr.js:210 GET <http://127.0.0.1:5000/stat/?user=0&flashcard_id=1> 404 (NOT FOUND) . So clearly I'm not sending or receiving the params correctly. Does anyone know how to solve this? Thank you for your time
2022/09/22
[ "https://Stackoverflow.com/questions/73818926", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19603491/" ]
In your backend route you are expecting the values in the URL as dynamic segments, but from axios you are sending them as a [query string](https://en.wikipedia.org/wiki/Query_string). **Solution:** You can modify the axios request like this to send the values as dynamic segments: ``` const user = 0; const flashcard_id = 1; const req = axios.get(`http://127.0.0.1:5000/stat/${user}/${flashcard_id}`,{ headers: {'Access-Control-Allow-Origin': '*', 'X-Requested-With': 'XMLHttpRequest'}, }) ``` **or** you can modify the Flask route like this if you need to receive the values from query params: ``` from flask import request @app.route('/stat/', methods=['GET', 'POST', 'OPTIONS']) def stats(): user = request.args.get('user') flashcard_id = request.args.get('flashcard_id') ```
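A small stdlib sketch of the difference between the two URL shapes involved (the URLs are the ones from the question), showing why the query-string request cannot match a route that declares path segments:

```python
from urllib.parse import parse_qs, urlsplit

# What axios actually sent: parameters live in the query string,
# and the path is just /stat/
qs_url = "http://127.0.0.1:5000/stat/?user=0&flashcard_id=1"
query_params = parse_qs(urlsplit(qs_url).query)

# What the Flask route /stat/<user>/<flashcard_id> expects:
# parameters encoded as extra path segments
seg_url = "http://127.0.0.1:5000/stat/0/1"
path_segments = urlsplit(seg_url).path.strip("/").split("/")
```

The route matcher only sees the path, so `/stat/` with a query string never matches `/stat/<user>/<flashcard_id>`, which is the 404 in the question.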
Send the parameters like this: ``` const req = axios.get(`http://127.0.0.1:5000/stat/${user}/${flashcard_id}`) ``` and in Flask you receive the parameters like this: ``` @app.route('/stat/<user>/<flashcard_id>', methods=['GET', 'POST', 'OPTIONS']) def stats(user, flashcard_id): ```
10,761
52,154,682
After installing python 3.7 from python.org, running the Install Certificate.command resulted in the below error. Please, can you provide some guidance? Why does Install Certificate.command result in error? [Some background] Tried to install python via anaconda, brew and python.org, even installing version 3.6.6, hoping I could get one of them to work. Each installation resulted in ssl certification errors when I tried to install a package. See further below for example error from current python 3.7 installation. I read every page with any reference to openssl errors and followed every instruction. That last one probably has done more damage to my machine tbh. I installed, uninstalled and reinstalled from each of anaconda, brew and python.org, deleting and cleaning a bunch of folders along the way, trying to make a clean installation. Along the way, I even managed to delete pip, wheel and setuptools from site-directories folder of the apple preinstalled python version. So all in all, after a week of python installation hell, I am totally stuck. 
``` Collecting certifi Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /simple/certifi/ Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /simple/certifi/ Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /simple/certifi/ Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /simple/certifi/ Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /simple/certifi/ Could not fetch URL https://pypi.org/simple/certifi/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/certifi/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))) - skipping Could not find a version that satisfies the requirement certifi (from versions: ) No matching distribution found for certifi Traceback (most recent call last): 
File "<stdin>", line 44, in <module> File "<stdin>", line 25, in main File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 328, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['/Library/Frameworks/Python.framework/Versions/3.7/bin/python3.7', '-E', '-s', '-m', 'pip', 'install', '--upgrade', 'certifi']' returned non-zero exit status 1. logout Saving session... ...copying shared history... ...saving history...truncating history files... ...completed. [Process completed] ``` `pip3 install numpy` results in the below error ``` Collecting numpy Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /simple/numpy/ Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /simple/numpy/ Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /simple/numpy/ Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /simple/numpy/ Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in 
certificate chain (_ssl.c:1045)'))': /simple/numpy/ Could not fetch URL https://pypi.org/simple/numpy/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/numpy/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))) - skipping Could not find a version that satisfies the requirement numpy (from versions: ) No matching distribution found for numpy ``` --- EDIT post @newbie comment Based on the most upvoted answer in that post, I tried the following: ``` $ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 1604k 100 1604k 0 0 188k 0 0:00:08 0:00:08 --:--:-- 148k $ python3 get-pip.py Collecting pip Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /simple/pip/ Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /simple/pip/ Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /simple/pip/ Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in 
certificate chain (_ssl.c:1045)'))': /simple/pip/ Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /simple/pip/ Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))) - skipping Could not find a version that satisfies the requirement pip (from versions: ) No matching distribution found for pip $ pip3 search numpy Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /pypi Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /pypi Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /pypi Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /pypi Retrying (Retry(total=0, connect=None, read=None, 
redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))': /pypi Exception: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/urllib3/connectionpool.py", line 600, in urlopen chunked=chunked) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/urllib3/connectionpool.py", line 343, in _make_request self._validate_conn(conn) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/urllib3/connectionpool.py", line 849, in _validate_conn conn.connect() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/urllib3/connection.py", line 356, in connect ssl_context=context) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/urllib3/util/ssl_.py", line 359, in ssl_wrap_socket return context.wrap_socket(sock, server_hostname=server_hostname) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 412, in wrap_socket session=session File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 850, in _create self.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 1108, in do_handshake self._sslobj.do_handshake() ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/requests/adapters.py", line 445, in send timeout=timeout File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/urllib3/connectionpool.py", line 667, in urlopen **response_kw) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/urllib3/connectionpool.py", line 667, in urlopen **response_kw) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/urllib3/connectionpool.py", line 667, in urlopen **response_kw) [Previous line repeated 1 more times] File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/urllib3/connectionpool.py", line 638, in urlopen _stacktrace=sys.exc_info()[2]) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/urllib3/util/retry.py", line 398, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) pip._vendor.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /pypi (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/basecommand.py", line 141, in main status = self.run(options, args) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/commands/search.py", line 48, in run pypi_hits = self.search(query, options) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/commands/search.py", line 65, in search hits = pypi.search({'name': query, 'summary': query}, 'or') File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/xmlrpc/client.py", line 1112, in __call__ return self.__send(self.__name, args) File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/xmlrpc/client.py", line 1452, in __request verbose=self.__verbose File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/download.py", line 788, in request headers=headers, stream=True) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/requests/sessions.py", line 559, in post return self.request('POST', url, data=data, json=json, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/download.py", line 396, in request return super(PipSession, self).request(method, url, *args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/requests/sessions.py", line 512, in request resp = self.send(prep, **send_kwargs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/requests/sessions.py", line 622, in send r = adapter.send(request, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/cachecontrol/adapter.py", line 53, in send resp = super(CacheControlAdapter, self).send(request, **kw) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_vendor/requests/adapters.py", line 511, in send raise SSLError(e, request=request) pip._vendor.requests.exceptions.SSLError: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /pypi (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)'))) ``` =========== EDIT2 : searched for all locations where copies of openssl.cnf file exists. Does this seem right? 
``` $ mdfind openssl.cnf /usr/local/etc/openssl/openssl.cnf /opt/vagrant/embedded/ssl/openssl.cnf /opt/vagrant/embedded/ssl/openssl.cnf.dist /private/etc/ssl/openssl.cnf /System/Library/OpenSSL/openssl.cnf /opt/vagrant/embedded/ssl/misc/CA.pl ```
2018/09/03
[ "https://Stackoverflow.com/questions/52154682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10297557/" ]
wanted to answer my own question as I seem to have fixed most of the issues. The solution: 1. Created a pip directory and then a pip.conf file in $HOME/Library/Application Support 2. To the pip.conf added code [global] trusted-host = pypi.python.org pypi.org files.pythonhosted.org 3. Started installing using $ pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org pip 4. Then tried pip install which worked, so used that to install numpy, pandas, geopy etc. 5. Successfully ran the Install CertificateCommand file in Applications/Python 3.7 Results: 1. Installation via pip is working. 2. Spyder3 is working 3. Python 3.7 is working 4. numpy, pandas, matplotlib, geopy are working 5. Jupyter notebook is working Outstanding: Still getting GeocoderServiceError and Brew doctor says vim missing python. Will raise separate questions for these.
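For readability, the pip.conf contents from step 2 as they would be laid out in the file (one trusted host per continuation line; same values as above):

```ini
[global]
trusted-host = pypi.python.org
               pypi.org
               files.pythonhosted.org
```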
You don't need to run Install Certificate.command. You should reinstall the Xcode command line tools that contain Python. ```sh pip3 uninstall -y -r <(pip requests certifi) brew uninstall --ignore-dependencies python3 sudo rm -rf /Library/Developer/CommandLineTools xcode-select --install sudo xcode-select -r python3 -m pip install --user certifi python3 -m pip install --user requests ```
10,762
39,739,195
I have a JSON object that I'd like to transform using jq from one form to another (of course, I could use javascript or python and iterate, but jq would be preferable). The issue is that the input contains long arrays that needs to be broken into multiple smaller arrays whenever data stops repeating within the first array. I'm not really sure how to describe this problem so I'll just put an example out here which is hopefully more explanatory. The one safe assumption-- if it is of any help-- is that the input data is always pre-sorted on the first two elements (e.g. "row\_x" and "col\_y"): input: ``` { "headers": [ "col1", "col2", "col3" ], "data": [ [ "row1","col1","b","src2" ], [ "row1","col1","b","src1" ], [ "row1","col1","b","src3" ], [ "row1","col2","d","src4" ], [ "row1","col2","e","src5" ], [ "row1","col2","f","src6" ], [ "row1","col3","j","src7" ], [ "row1","col3","g","src8" ], [ "row1","col3","h","src9" ], [ "row1","col3","i","src10" ], [ "row2","col1","l","src13" ], [ "row2","col1","j","src11" ], [ "row2","col1","k","src12" ], [ "row2","col3","o","src15" ] ] } ``` desired output: ``` { "headers": [ "col1", "col2", "col3" ], "values": [ [["b","b","b"],["d","e","f"],["g","h","i","j"]], [["j","k","l"],null,["o"]] ], "sources": [ [["src1","src2","src3"],["src4","src5","src6"],["src7","src8","src9","src10"]], [["src11","src12","src13"],null,["src15"]] ] } ``` Is this doable at all in jq? UPDATE: a variant of this is to retain the original data order, so the output is as follows: ``` { "headers": [ "col1", "col2", "col3" ], "values": [ [["b","b","b"],["d","e","f"],["j","g","h","i"]], [["l","j","k"],null,["o"]] ], "sources": [ [["src2","src1","src3"],["src4","src5","src6"],["src7","src8","src9","src10"]], [["src13","src11","src12"],null,["src15"]] ] } ```
2016/09/28
[ "https://Stackoverflow.com/questions/39739195", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3160967/" ]
Is it doable? Of course! First you'll want to group the data by rows then columns. Then with the groups, build your values/sources arrays. ``` .headers as $headers | .data # make the data easier to access | map({ row: .[0], col: .[1], val: .[2], src: .[3] }) # keep it sorted so they are in expected order in the end | sort_by([.row,.col,.src]) # group by rows | group_by(.row) # create a map to each of the cols for easier access | map(group_by(.col) | reduce .[] as $col ({}; .[$col[0].col] = [$col[] | {val,src}] ) ) # build the result | { headers: $headers, values: map([.[$headers[]] | [.[]?.val]]), sources: map([.[$headers[]] | [.[]?.src]]) } ``` This will produce the following result: ``` { "headers": [ "col1", "col2", "col3" ], "values": [ [ [ "b", "b", "b" ], [ "d", "e", "f" ], [ "i", "j", "g", "h" ] ], [ [ "j", "k", "l" ], [], [ "o" ] ] ], "sources": [ [ [ "src1", "src2", "src3" ], [ "src4", "src5", "src6" ], [ "src10", "src7", "src8", "src9" ] ], [ [ "src11", "src12", "src13" ], [], [ "src15" ] ] ] } ```
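For comparison, the same regrouping can be sketched in Python; note that each cell group here is sorted by (value, source), which matches the question's row2 expectations but orders the row1/col3 sources slightly differently than the desired output shows:

```python
from collections import defaultdict

def regroup(headers, data):
    # cells[row][col] collects (value, source) pairs for one grid cell;
    # each group is then sorted by (value, source)
    cells = defaultdict(lambda: defaultdict(list))
    for row, col, value, source in data:
        cells[row][col].append((value, source))
    values, sources = [], []
    for row in sorted(cells):
        vrow, srow = [], []
        for col in headers:
            group = sorted(cells[row].get(col, []))
            # empty cells become None, as in the desired output
            vrow.append([v for v, _ in group] or None)
            srow.append([s for _, s in group] or None)
        values.append(vrow)
        sources.append(srow)
    return values, sources
```

The jq answer above does essentially the same: group by row, then by column, with missing columns represented explicitly.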
Since the primary data source here can be thought of as a two-dimensional matrix, it may be worth considering a matrix-oriented approach to the problem, especially if it is intended that empty rows in the input matrix are not simply omitted, or if the number of columns in the matrix is not initially known. To spice things up a little, let's choose to represent an m x n matrix, M, as a JSON array of the form [m, n, a] where a is an array of arrays, such that a[i][j] is the element of M in row i, column j. First, let's define some basic matrix-oriented operations: ``` def ij(i;j): .[2][i][j]; def set_ij(i;j;value): def max(a;b): if a < b then b else a end; .[0] as $m | .[1] as $n | [max(i+1;$m), max(j+1;$n), (.[2] | setpath([i,j];value)) ]; ``` The data source uses strings of the form "rowI" for row i and "colJ" for row j, so we define a matrix-update function accordingly: ``` def update_row_col( row; col; value): ((row|sub("^row";"")|tonumber) - 1) as $r | ((col|sub("^col";"")|tonumber) - 1) as $c | ij($r;$c) as $v | set_ij($r; $c; if $v == null then [value] else $v + [value] end) ; ``` Given an array of items of the form ["rowI","colJ", V, S], generate a matrix with the value {"source": S, "value": V} at row I and column J: ``` def generate: reduce .[] as $x ([0,0,null]; update_row_col( $x[0]; $x[1]; { "source": $x[3], "value": $x[2] }) ); ``` Now we turn to the desired output. The following filter will extract f from the input matrix, producing an array of arrays, replacing [] with null: ``` def extract(f): . as $m | (reduce range(0; $m[0]) as $i ([]; . + ( reduce range(0; $m[1]) as $j ([]; . + [ $m | ij($i;$j) // [] | map(f) ]) ) )) | map( if length == 0 then null else . 
end ); ``` Putting it all together (generating the headers dynamically is left as an exercise for the interested reader): ``` {headers} + (.data | generate | { "values": extract(.value), "sources": extract(.source) } ) ``` Output: `{ "headers": [ "col1", "col2", "col3" ], "values": [ [ "b", "b", "b" ], [ "d", "e", "f" ], [ "j", "g", "h", "i" ], [ "l", "j", "k" ], null, [ "o" ] ], "sources": [ [ "src2", "src1", "src3" ], [ "src4", "src5", "src6" ], [ "src7", "src8", "src9", "src10" ], [ "src13", "src11", "src12" ], null, [ "src15" ] ] }`
10,763
50,967,329
I am trying to create a script that will

* look at each word in a text document and store in a list (WordList)
* Look at a second text document and store each word in a list (RandomText)
* Print out the words that appear in both lists

I have come up with the below which stores the text in a file, however I can't seem to get the function to compare the lists and print similarities.

```
KeyWords = open("wordlist.txt").readlines()  # Opens WordList text file, Reads each line and stores in a list named "KeyWords".
RanText = open("RandomText.txt").readlines()  # Opens RandomText text file, reads each line and stores in a list named "RanText"

def Check():
    for x in KeyWords:
        if x in RanText:
            print(x)

print(KeyWords)
print(RanText)
print(Check)
```

Output:

```
C:\Scripts>python Search.py
['Word1\n', 'Word2\n', 'Word3\n', 'Word4\n', 'Word5']
['Lorem ipsum dolor sit amet, Word1 consectetur adipiscing elit. Nunc fringilla arcu congue metus aliquam mollis.\n', 'Mauris nec maximus purus. Maecenas sit amet pretium tellus. Praesent Word3 sed rhoncus eo. Duis id commodo orci.\n', 'Quisque at dignissim lacus.']
<function Check at 0x00A9B618>
```
2018/06/21
[ "https://Stackoverflow.com/questions/50967329", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7276704/" ]
Add `tools:replace="android:label"` to your `<application>` tag in AndroidManifest.xml as suggested in error logs. This error might have occurred because AndroidManifest.xml of some jar file or library might also be having the `android:label` attribute defined in its `<application>` tag which is causing merger conflict as manifest merging process could not understand which `android:label` to continue with.
add uses-SDK tools in the Manifest file ``` <uses-sdk tools:replace="android:label" /> ```
10,765
5,217,513
I am looking to start learning. People have told me it is just as capable, though I haven't really seen any good-looking games available: some decent ones on pygame, but none really stand out. I would like to know if Python really is as capable as other languages. EDIT: thanks guys, is Python good for game development?
2011/03/07
[ "https://Stackoverflow.com/questions/5217513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/647813/" ]
Short answer: Yes, it is. But this heavily depends on your problem :)
This [article](http://www.python.org/doc/essays/comparisons.html) has very good comparison of Python with other languages like C++, Java etc.
10,766
28,865,785
I am using python to take a very large string (DNA sequences) and try to make a suffix tree out of it. My program gave a memory error after a long while of making nested objects, so I thought in order to increase performance, it might be useful to create buffers from the string instead of actually slicing the string. Both versions are below, with their relevant issues described. **First (non-buffer) version** - After about a minute of processing a large string a *MemoryError* occurs for currNode.out[substring[0]] = self.Node(pos, substring[1:])

```
class SuffixTree(object):
    class Node(object):
        def __init__(self, position, suffix):
            self.position = position
            self.suffix = suffix
            self.out = {}

    def __init__(self, text):
        self.text = text
        self.max_repeat = 2
        self.repeats = {}
        self.root = self.Node(None, '')
        L = len(self.text)
        for i in xrange(L):
            substring = self.text[-1*(i+1):] + "$"
            currNode = self.root
            self.branch(currNode, substring, L-i-1, 0)
        max_repeat = max(self.repeats.iterkeys(), key=len)
        print "Max repeat is", len(max_repeat), ":", max_repeat, "at locations:", self.repeats[max_repeat]

    def branch(self, currNode, substring, pos, repeat):
        if currNode.suffix != '':
            currNode.out[currNode.suffix[0]] = self.Node(currNode.position, currNode.suffix[1:])
            currNode.suffix = ''
            currNode.position = None
        if substring[0] not in currNode.out:
            currNode.out[substring[0]] = self.Node(pos, substring[1:])
            if repeat >= self.max_repeat:
                for node in currNode.out:
                    self.repeats.setdefault(self.text[pos:pos+repeat], []).append(currNode.out[node].position)
                self.max_repeat = repeat
        else:
            newNode = currNode.out[substring[0]]
            self.branch(newNode, substring[1:], pos, repeat+1)
```

**Second version** - Thinking that the constant saving of large string slices was probably the issue, I implemented all of the slices using buffers of strings instead. However, this version almost immediately gives a *MemoryError* for substring = buffer(self.text, i-1) + "$"

```
class SuffixTree(object):
    class Node(object):
        def __init__(self, position, suffix):
            self.position = position
            self.suffix = suffix
            self.out = {}

    def __init__(self, text):
        self.text = text
        self.max_repeat = 2
        self.repeats = {}
        self.root = self.Node(None, '')
        L = len(self.text)
        for i in xrange(L,0,-1):
            substring = buffer(self.text, i-1) + "$"
            #print substring
            currNode = self.root
            self.branch(currNode, substring, i-1, 0)
        max_repeat = max(self.repeats.iterkeys(), key=len)
        print "Max repeat is", len(max_repeat), ":", max_repeat, "at locations:", self.repeats[max_repeat]
        #print self.repeats

    def branch(self, currNode, substring, pos, repeat):
        if currNode.suffix != '':
            currNode.out[currNode.suffix[0]] = self.Node(currNode.position, buffer(currNode.suffix,1))
            currNode.suffix = ''
            currNode.position = None
        if substring[0] not in currNode.out:
            currNode.out[substring[0]] = self.Node(pos, buffer(substring,1))
            if repeat >= self.max_repeat:
                for node in currNode.out:
                    self.repeats.setdefault(buffer(self.text,pos,repeat), []).append(currNode.out[node].position)
                self.max_repeat = repeat
        else:
            newNode = currNode.out[substring[0]]
            self.branch(newNode, buffer(substring,1), pos, repeat+1)
```

Is my understanding of buffers mistaken in some way? I thought using them would help the memory issue my program was having, not make it worse.
2015/03/04
[ "https://Stackoverflow.com/questions/28865785", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The result makes sense. The check inside your ifs is truthy in both cases (probably because both BL1, BN1 exist). What you need to do in your code is retrieve the selected index and then use it. Give an id to your dropdown list:

```
<select id="dropdownList">
....
```

And then use it in your code to retrieve the selected value:

```
var dropdownList = document.getElementById("dropdownList");
var price = dropdownList.options[dropdownList.selectedIndex].value;
//rest of code
```

Take a look at a similar question [here](https://stackoverflow.com/questions/1085801/get-selected-value-of-dropdownlist-using-javascript). You can also find a related tutorial [here](http://www.javascript-coder.com/javascript-form/javascript-get-select.phtml).
1st: You are defining the variable *price* multiple times. 2nd: Your code checks for 'value' and of course, it always has a value. I guess what you really want to check is whether an option is selected or not, and then use its value.

```js
function BookingFare() {
  var price = 0;
  var BN1 = document.getElementById('BN1');
  var BL1 = document.getElementById('BL1');
  if (BL1.selected) {
    price = BL1.value;
  }
  if (BN1.selected) {
    price = BN1.value;
  }
  document.getElementById('priceBox').innerHTML = price;
}
```

```html
<form>
  <select>
    <option name="Bristol " value="40.0" id="BN1">Bristol - Newcastle</option>
    <option name="London " value="35.0" id="BL1">Bristol - London</option>
  </select>
  <button type="button" onclick="BookingFare(); return false;">Submit</button>
  <br>
  <label id='priceBox'></label>
</form>
```
10,771
49,883,623
Am trying to solve project euler question:

> 2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder. What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?

I've come up with the python solution below, but is there any way to loop the number in the `if` instead of having to write all of them.

```
def smallest_m():
    start=1
    while True:
        if start%2==0 and start%3==0 and start%4==0 and start%5==0 and start%5==0 and start%6==0 and start%7==0 and start%8==0 and start%9==0 and start%10==0 and start%11==0 and start%12==0 and start%13==0 and start%14==0 and start%15==0 and start%16==0 and start%17==0 and start%18==0 and start%19==0 and start%20==0:
            print(start)
            break
        else:
            start+=1

smallest_m()
```
2018/04/17
[ "https://Stackoverflow.com/questions/49883623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5126615/" ]
You can use a generator expression and the `all` function:

```
def smallest_m():
    start = 1
    while True:
        if all(start % i == 0 for i in range(2, 20+1)):
            print(start)
            break
        else:
            start += 1

smallest_m()
```
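For reference, the number this search eventually reaches is by definition the least common multiple of 1 through 20, so it can also be computed directly instead of counting upward. A sketch (not part of the original answer) using `math.gcd`:

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    # lcm via gcd: a * b == gcd(a, b) * lcm(a, b)
    return a * b // gcd(a, b)

# Fold lcm over 2..20; every number in the range divides the result,
# and nothing smaller works, by the definition of the lcm.
smallest = reduce(lcm, range(2, 21))
print(smallest)  # 232792560
```

The `all(...)` divisibility check from the answer above can be reused to confirm the result.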
Try this

```
n = 2520
result = True
for i in range(1, 11):
    if (n % i != 0):
        result = False
print(result)
```
10,773
21,735,023
I am a python newb so please forgive me. I have searched on Google and SA but couldn't find anything. Anyway, I am using the python library [Wordpress XMLRPC](http://python-wordpress-xmlrpc.readthedocs.org/en/latest/overview.html). `myblog`, `myusername`, and `mypassword` are just placeholders to hide my real website, username and password. When I run the code I use my real data. My Code:

```
from wordpress_xmlrpc import *

wp = Client('http://www.myblog.wordpress.com/xmlrpc.php', 'myusername', 'mypassword')
```

The Error:

```
Traceback (most recent call last):
  File "C:/Python27/wordpress_bro", line 2, in <module>
    wp = Client('http://www.myblog.wordpress.com/xmlrpc.php', 'myusername', 'mypassword')
  File "build\bdist.win32\egg\wordpress_xmlrpc\base.py", line 27, in __init__
    raise ServerConnectionError(repr(e))
ServerConnectionError: <ProtocolError for www.myblog.wordpress.com/xmlrpc.php: 301 Moved Permanently>
```

When I go to `http://www.myblog.wordpress.com/xmlrpc.php` in my browser I get:

```
XML-RPC server accepts POST requests only.
```

Could somebody please help me? Thanks!
2014/02/12
[ "https://Stackoverflow.com/questions/21735023", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3302735/" ]
Variant 3 is ok (but I'd rather use a loop instead of hard-coded options). Your mistake is that you compare 'option1', 'option2' and so on when your real values are '1', '2', '3'. Also as @ElefantPhace said, don't forget about spaces before **selected**, or you'll get invalid html instead. So it would be this:

```
<select name="up_opt">
    <option value="1" <?php if ($_GET['up_opt'] == 1) { echo ' selected="selected"'; } ?>>Opt1</option>
    <option value="2" <?php if ($_GET['up_opt'] == 2) { echo ' selected="selected"'; } ?>>Opt2</option>
    <option value="3" <?php if ($_GET['up_opt'] == 3) { echo ' selected="selected"'; } ?>>Opt3</option>
</select>
```

With loop (note the comparison is against `$value`, the current option's value):

```
<?php
$options = array(
    1 => 'Opt1',
    2 => 'Opt2',
    3 => 'Opt3',
);
?>
<select name="up_opt">
    <?php foreach ($options as $value => $label): ?>
        <option value="<?php echo $value; ?>" <?php if ($_GET['up_opt'] == $value) { echo ' selected="selected"'; } ?>><?php echo $label; ?></option>
    <?php endforeach ?>
</select>
```
Here is all three of your variants, tested and working as expected. They are all basically the same, you were just using the wrong variable names, and different ones from example to example 1) ``` <?php $opt= array('1' => 'opt1', '2' => 'opt2', '3' => 'opt3') ; echo '<select name="up_opt">'; foreach ($opt as $i => $value) { echo "<option value=\"$i\""; if ($_POST['up_opt'] == $i){ echo " selected"; } echo ">$value</option>"; } echo '</select>'; ?> ``` 2) uses same array as above ``` $edu = $_POST['up_opt']; echo '<select name="up_opt">'; foreach ( $opt as $i => $value ){ echo "<option value=\"$i\"", ($i == $edu) ? ' selected' : ''; echo ">$value</option>"; } echo "</select>"; ``` 3) ``` echo '<select name="up_opt">'; echo '<option value="1"', ($_POST['up_opt'] == '1') ? 'selected':'' ,'>Opt1</option>'; echo '<option value="2"', ($_POST['up_opt'] == '2') ? 'selected':'' ,'>Opt2</option>'; echo '<option value="3"', ($_POST['up_opt'] == '3') ? 'selected':'' ,'>Opt3</option>'; echo '</select>'; ```
10,776
15,260,558
Here is my code to run the server:

```
class MyRequestHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    #....

PORT = 8089
httpd = SocketServer.TCPServer(("", PORT), MyRequestHandler)
httpd.allow_reuse_address = True
print "Serving forever at port", PORT
try:
    httpd.serve_forever()
except:
    print "Closing the server."
    httpd.server_close()
    raise
```

Yet this is what happens:

```
^CClosing the server.
Traceback (most recent call last):
  File "server.py", line 118, in <module>
    self.send_error(400, "Unimplemented GET command: %s" % (self.path,))
  File "/home/claudiu/local/lib/python2.6/SocketServer.py", line 224, in serve_forever
    r, w, e = select.select([self], [], [], poll_interval)
KeyboardInterrupt
(.virtualenv)claudiu@xxx:~/xxx$ python server.py
Traceback (most recent call last):
  File "server.py", line 122, in <module>
    httpd = SocketServer.TCPServer(("", PORT), MyRequestHandler)
  File "/home/claudiu/local/lib/python2.6/SocketServer.py", line 402, in __init__
    self.server_bind()
  File "/home/claudiu/local/lib/python2.6/SocketServer.py", line 413, in server_bind
    self.socket.bind(self.server_address)
  File "<string>", line 1, in bind
socket.error: [Errno 98] Address already in use
```

Why? I close the server and set `allow_reuse_address` to True... Using python 2.6.8.
2013/03/06
[ "https://Stackoverflow.com/questions/15260558", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15055/" ]
It is because TCP [TIME\_WAIT](http://dev.fyicenter.com/Interview-Questions/Socket-4/Explain_the_TIME_WAIT_state_.html). [Somebody discovered this exact problem.](http://brokenbad.com/2012/01/address-reuse-in-pythons-socketserver/) > > However, if I try to stop and start the server again to test any modifications, I get a random “socket.error: [Errno 98] Address > already in use” error. This happens only if a client has already > connected to the server. > > > Checking with netstat and ps, I found that although the process it > self is no longer running, the socket is still listening on the port > with status “TIME\_WAIT”. Basically the OS waits for a while to make > sure this connection has no remaining packets on the way. > > >
It is because you have to set SO\_REUSEADDRESS *before* you bind the socket. As you are creating and binding the socket all in one step and then setting it, it is already too late.
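A minimal sketch of the fix both answers point at: make `allow_reuse_address` a class attribute, so it is already `True` when `server_bind()` runs inside the constructor. (Python 3 shown; the module is `SocketServer` on Python 2.)

```python
import socketserver  # "SocketServer" on Python 2

class ReusableTCPServer(socketserver.TCPServer):
    # Class attribute: server_bind() consults this *before* the socket is
    # bound, unlike setting it on the instance after construction.
    allow_reuse_address = True

# Usage sketch: bind to an ephemeral port, close, and rebind the same port.
srv = ReusableTCPServer(("", 0), socketserver.BaseRequestHandler)
port = srv.server_address[1]
srv.server_close()
srv2 = ReusableTCPServer(("", port), socketserver.BaseRequestHandler)
srv2.server_close()
```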
10,777
5,988,617
According to the python doc, the vertical bar is used as an 'or' operator. It matches A|B, where A and B can be arbitrary REs. For example, if the regular expression is `ABC|DEF`, it matches strings like these: "ABC", "DEF". But what if I want to match strings like the following: "ABCF", "ADEF"? Perhaps what I want is something like **A(BC)|(DE)F** which means:

* match "A" first,
* then string "BC" or "DE",
* then char "F".

I know the above expression is not right since brackets have other meanings in regular expressions; it is just to express my idea. Thanks!
2011/05/13
[ "https://Stackoverflow.com/questions/5988617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/166482/" ]
These will work: ``` A(BC|DE)F A(?:BC|DE)F ``` The difference is the number of groups generated: 1 with the first, 0 with the second. Yours will match either `ABC` or `DEF`, with 2 groups, one containing nothing and the other containing the matched fragment (`BC` or `DE`).
The only difference between parentheses in Python regexps (and perl-compatible regexps in general), and parentheses in formal regular expressions, is that in Python, parens store their result. Everything matched by a regular expression inside parentheses is stored as a "submatch" or "group" that you can access using the `group` method on the match object returned by `re.match`, `re.search`, or `re.finditer`. They are also used in backreferences, a feature of Python RE/PCRE that violates normal regular expression rules, and that you probably don't care about. If you don't care about the whole submatch extraction deal, it's fine to use parens like this. If you do care, there is a non-capturing version of parens that are exactly the same as formal regular expressions: `(?:...)` instead of `(...)`. This, and more, is described in the [official docs](http://docs.python.org/library/re.html#regular-expression-syntax)
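The group behaviour described above can be checked directly (a quick sketch):

```python
import re

# Capturing parentheses: one group is stored.
m = re.match(r"A(BC|DE)F", "ADEF")
print(m.group(0))   # ADEF  (the whole match)
print(m.group(1))   # DE    (the captured alternative)

# Non-capturing parentheses: same matching, zero groups stored.
m2 = re.match(r"A(?:BC|DE)F", "ABCF")
print(m2.groups())  # ()
```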
10,783
8,340,372
> > **Note:** this question is tagged both **language-agnostic** and **python** as my primary concern is finding out the algorithm to implement the solution to the problem, but information on how to implement it *efficiently* (=executing fast!) in python are a plus. > > > **Rules of the game:** * Imagine two teams one of *A agents* (`An`) and one of *B agents* (`Bn`). * In the game space there are a certain number of available *slots* (`Sn`) that can be occupied. * At each turn each agent is given a subset of slots he/she can occupy. * One agent can occupy *only one slot at at time*, however each slot can be occupied by two different agents, provided they are *each from a different team*. **The question:** I am trying to find an ***efficient* way to compute the best possible move** for `A` agents, where "best possible move" means either maximising or minimising the chances to occupy the same slots occupied by team `B`. The moves of team `B` are not known in advance. **Example scenario:** This scenario is deliberately trivial. It is just meant to illustrate the game mechanics. ``` A1 can occupy S1, S2 A2 can occupy S2, S3 B1 can occupy S1, S2 ``` In this case the solution is obvious: `A1 → S1` and `A2 → S2` is the option that will guarantee meeting with `B1` [as `B1` cannot avoid to occupy either `S1` or `S2`], while `A2 → S3` and `A1 → random(S1, S2)` is the one that will maximise the chances to avoid `B1`. **Real-life scenarios:** In real-life scenarios, the slots can be hundreds and the agents in each team various dozens. The difficulty in the naïve implementation I tried so far is that I basically consider every single possible set of moves for the team `B`, and score each of the possible alternative set of moves for `A`. So, my computation time increases exponentially. Still, I'm not sure this is a problem that can be solved only by "brute force". And even if this is the case I wonder: 1. 
If the optimal brute force solution necessarily grows exponentially (time-wise). 2. If there is a way to compute an non-optimal, locally-best solution. Thank you!
2011/12/01
[ "https://Stackoverflow.com/questions/8340372", "https://Stackoverflow.com", "https://Stackoverflow.com/users/146792/" ]
If I understand correctly, the problem of finding an optimal strategy for A once you know the positions for B is the same as finding a [maximum matching](http://en.wikipedia.org/wiki/Matching_%28graph_theory%29) in a bipartite graph. The first set of vertices represents the A agents, the second set of vertices represents the slots occupied by the B agents, and there is an edge if an agent can choose to occupy the slot. The problem is then finding the maximum number of edges you can occupy without a vertex touching more than one edge. There are simple polynomial algorithms to solve this problem. One of the most classic is the one based on [augmenting paths](https://en.wikipedia.org/wiki/Maximum_flow_problem#Maximum_cardinality_bipartite_matching).

```
while you can find a path, augment the path

a path is a sequence of vertices a1, b1, a2, b2, ... an, bn such that
    ai -> bi is an unmatched edge
    bi -> a(i+1) is a matched edge
    a1 and bn are unmatched

to augment a path
    match all the unmatched edges (ai -> bi)
    unmatch all the matched edges (bi -> a(i+1))
(this results in one additional matched edge after the iteration)
```

A naïve implementation of this algorithm is O(V\*E) but you can probably find more efficient python implementations of bipartite matching somewhere.
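The augmenting-path pseudocode can be turned into a compact implementation (Kuhn's algorithm). The graph encoding below, adjacency lists from A-agents to B-occupied slots numbered from 0, is an assumption for illustration:

```python
def max_matching(adj, n_right):
    """adj[u] lists the right-side vertices left vertex u may be matched to."""
    match_right = [-1] * n_right  # which left vertex each right vertex is matched to

    def augment(u, seen):
        # Depth-first search for an augmenting path starting at left vertex u.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_right[v] == -1 or augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(augment(u, set()) for u in range(len(adj)))

# Example scenario from the question: A1 -> {S1, S2}, A2 -> {S2, S3},
# B occupies S1 and S2 (encoded here as right vertices 0 and 1).
print(max_matching([[0, 1], [1]], 2))  # 2: both A agents can meet a B-occupied slot
```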
This is not really a question of programming as much as it is a game theory question. What follows is a sketch of a game-theoretic analysis of the problem. We have a game of two players (A and B). A two-player game is always a zero-sum game, i.e. the gain of one player is loss for the other. Even if game payoffs are not zero-sum (i.e. there are results that give positive payoffs to both players), payoffs can be always normalized (taking their difference), so we can assume without loss of generality that the game here is also a zero-sum game. From this we deduce that if it is the goal of A to maximize [minimize] the number of meetings with agents of B, then it is the goal of B to minimize [maximize] the number of meetings with agents of A. Based on the description it is further assumed that A and B choose their moves simultaneously, i.e. A picks the slots for A's agents without knowing which slots B's agents are going to take and vice versa. Without this assumption, i.e. if B can see A's moves, it's very easy for B to "win". Let X be the set of all possible assignments of A's agents to the available slots (obeying the constraints for the current round or turn), i.e. X is a set of subsets of the slots; every subset denoting an assignment of agents to exactly those slots in the subset. Similarly, let Y be the set of all possible assignment of B's agents to the available slots (similarly obeying the constraints for B's agents). There are now four games. 
In each game A chooses an element x of X and B chooses an element y of Y, after which:

* In game I.a, A wins if x and y share a slot, otherwise B wins (in this game, A tries to force at least one meeting)
* In game I.b, A receives a positive payoff equivalent to the number of common slots in x and y (in this game, A tries to maximize the number of meetings)
* In game II.a, A wins if x and y do not share a slot, otherwise B wins (in this game, A tries to avoid any meetings)
* In game II.b, A receives a negative payoff equivalent to the number of common slots in x and y (in this game, A tries to minimize the number of meetings)

All these games can be analyzed using standard game-theoretic techniques. We focus on game I.a and leave the rest as an exercise to the reader. If there is an element y of Y available such that no x in X shares a slot with y, B chooses y and wins; so assume not, i.e. assume that every y in Y corresponds to at least one x in X such that x and y share a slot. A cannot play by any deterministic strategy, because B can counter it by choosing an y that does not share a slot with the deterministically chosen x; therefore A has to play by a mixed strategy, i.e. a randomized strategy exactly in the same way as the mixed strategy 1/3 - 1/3 - 1/3 is optimal for Roshambo (rock-paper-scissors). B will also be playing a mixed strategy in response. The probabilities of different elements of X and Y are dictated by the number of matching sets, i.e. an element x of X that has common slots with many y's is going to have a higher probability in the mixed strategy than those that have common slots with only a few y's. Calculating the stable mixed strategies (a Nash equilibrium for this game) is in theory straightforward and any basic reference on game theory can be consulted.
10,784
53,557,240
I have an input, which is a word. If my input contains `python` as a subsequence, print `True`; if not, print `False`. For example: if the input is `puytrmhqoln`, print `True` (because it contains the letters of `python` in order, even though there are other letters in between). If the input is `pythno`, print `False` (because the `o` comes after the `n`).
2018/11/30
[ "https://Stackoverflow.com/questions/53557240", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10688308/" ]
I found the answer.

```
import sys

inputs = sys.stdin.readline().strip()
word = "python"
for i in range(len(inputs)):
    if word == "":
        break
    if inputs[i] == word[0]:
        word = word[1:]

if word == "":
    print("YES")
else:
    print("NO")
```

It works for words with double letters like `hello`, as well as `python`.
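The same idea, consuming the target word as its letters are found in order, can be written as a short subsequence test with an iterator (a compact alternative sketch, not part of the original answer):

```python
def contains_in_order(word, text):
    # `in` on an iterator advances it, so each letter of `word` must be
    # found strictly after the position where the previous one matched.
    it = iter(text)
    return all(ch in it for ch in word)

print(contains_in_order("python", "puytrmhqoln"))  # True
print(contains_in_order("python", "pythno"))       # False
```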
Iterate over each character of your string, and see whether the current character is the next one expected in the string you are looking for:

```
strg = "pythrno"
lookfor = "python"

longest_substr = ""
index = 0
max_index = len(lookfor)
for c in strg:
    # bounds check first so lookfor[index] is never read out of range
    if index < max_index and c == lookfor[index]:
        longest_substr += c
        index += 1

print(longest_substr)
```

will give you

```
pytho
```

You can now simply compare `longest_substr` and `lookfor` and wrap this in a dedicated function.
10,790
52,911,232
I'm trying to install and import the Basemap library into my Jupyter Notebook, but this returns the following error:

```
KeyError: 'PROJ_LIB'
```

After some research online, I understand I'm to install Basemap on a separate environment in Anaconda. After creating a new environment and installing Basemap (as well as all other relevant libraries), I have activated the environment. But when importing Basemap I still receive the same KeyError. Here's what I did in my MacOS terminal:

```
conda create --name Py3.6 python=3.6 basemap
source activate Py3.6
conda upgrade proj4
env | grep -i proj
conda update --channel conda-forge proj4
```

Then in Jupyter Notebook I run the following:

```
from mpl_toolkits.basemap import Basemap
```

Can anyone tell me why this results in a KeyError?
2018/10/21
[ "https://Stackoverflow.com/questions/52911232", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10534668/" ]
Need to set the PROJ\_LIB environment variable either before starting your notebook or in python with `os.environ['PROJ_LIB'] = '<path_to_anaconda>/share/proj'` Ref. [Basemap import error in PyCharm —— KeyError: 'PROJ\_LIB'](https://stackoverflow.com/questions/52295117/basemap-import-error-in-pycharm-keyerror-proj-lib)
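In a conda environment the proj data directory usually sits under the interpreter prefix, so the path can be derived rather than hard-coded. A sketch (the exact layout depends on your install, so verify the directory exists before relying on it):

```python
import os
import sys

# Typical conda layout: <env prefix>/share/proj
proj_lib = os.path.join(sys.prefix, "share", "proj")
# setdefault leaves any value already exported by the shell untouched
os.environ.setdefault("PROJ_LIB", proj_lib)
print(os.environ["PROJ_LIB"])
```

Run this before `from mpl_toolkits.basemap import Basemap`, since the lookup happens at import time.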
The problem occurs as the file location of "epsg" and PROJ\_LIB has been changed for recent versions of python, but somehow they forgot to update the `__init__.py` for Basemap. If you have installed python using anaconda, this is a possible location for your epsg file: `C:\Users\(xxxx)\AppData\Local\Continuum\anaconda3\pkgs\proj4-5.1.0-hfa6e2cd_1\Library\share` So you have to add this path at the start of your code in spyder or whatever IDE you are using.

```py
import os
os.environ['PROJ_LIB'] = r'C:\Users\(xxxx)\AppData\Local\Continuum\anaconda3\pkgs\proj4-5.1.0-hfa6e2cd_1\Library\share'

from mpl_toolkits.basemap import Basemap
```
10,792
67,439,037
Code to extract sequences ``` from Bio import SeqIO def get_cds_feature_with_qualifier_value(seq_record, name, value): for feature in genome_record.features: if feature.type == "CDS" and value in feature.qualifiers.get(name, []): return feature return None genome_record = SeqIO.read("470.8208.gbk", "genbank") db_xref = ['fig|470.8208.peg.2198', 'fig|470.8208.peg.2200', 'fig|470.8208.peg.2203', 'fig|470.8208.peg.2199', 'fig|470.8208.peg.2201', 'fig|470.8208.peg.2197', 'fig|470.8208.peg.2202', 'fig|470.8208.peg.2501', 'fig|470.8208.peg.2643', 'fig|470.8208.peg.2193', 'fig|470.8208.peg.2670', 'fig|470.8208.peg.2695', 'fig|470.8208.peg.2696', 'fig|470.8208.peg.2189', 'fig|470.8208.peg.2458', 'fig|470.8208.peg.2191', 'fig|470.8208.peg.2190', 'fig|470.8208.peg.2188', 'fig|470.8208.peg.2192', 'fig|470.8208.peg.2639', 'fig|470.8208.peg.3215', 'fig|470.8208.peg.2633', 'fig|470.8208.peg.2682', 'fig|470.8208.peg.3186', 'fig|470.8208.peg.2632', 'fig|470.8208.peg.2683', 'fig|470.8208.peg.3187', 'fig|470.8208.peg.2764', 'fig|470.8208.peg.2686', 'fig|470.8208.peg.2638', 'fig|470.8208.peg.2680', 'fig|470.8208.peg.2685', 'fig|470.8208.peg.2684', 'fig|470.8208.peg.2633', 'fig|470.8208.peg.2682', 'fig|470.8208.peg.3186', 'fig|470.8208.peg.2632', 'fig|470.8208.peg.2683', 'fig|470.8208.peg.3187', 'fig|470.8208.peg.2640', 'fig|470.8208.peg.3221', 'fig|470.8208.peg.3222', 'fig|470.8208.peg.3389', 'fig|470.8208.peg.2764', 'fig|470.8208.peg.2653', 'fig|470.8208.peg.3216', 'fig|470.8208.peg.3231', 'fig|470.8208.peg.2641', 'fig|470.8208.peg.2638', 'fig|470.8208.peg.2680', 'fig|470.8208.peg.2637', 'fig|470.8208.peg.2642', 'fig|470.8208.peg.2679', 'fig|470.8208.peg.3230', 'fig|470.8208.peg.2676', 'fig|470.8208.peg.2677', 'fig|470.8208.peg.1238', 'fig|470.8208.peg.2478', 'fig|470.8208.peg.2639', 'fig|470.8208.peg.854', 'fig|470.8208.peg.382', 'fig|470.8208.peg.383'] with open("nucleotides.fasta", "w") as nt_output, open("proteins.fasta", "w") as aa_output: for xref in db_xref: print ("Looking 
at " + xref) cds_feature = get_cds_feature_with_qualifier_value (genome_record, "db_xref", xref) gene_sequence = cds_feature.extract(genome_record.seq) protein_sequence = gene_sequence.translate(table=11, cds=True) # This is asking Python to halt if the translation does not match: assert protein_sequence == cds_feature.qualifiers["translation"][0] # Output FASTA records - note \n means insert a new line. # This is a little lazy as it won't line wrap the sequence: nt_output.write(">%s\n%s\n" % (xref, gene_sequence)) aa_output.write(">%s\n%s\n" % (xref, gene_sequence)) print("Done") ``` getting following error ``` /usr/local/lib/python3.7/dist-packages/Bio/GenBank/Scanner.py:1394: BiopythonParserWarning: Truncated LOCUS line found - is this correct? :'LOCUS CP027704 3430798 bp DNA linear UNK \n' BiopythonParserWarning, Looking at fig|470.8208.peg.2198 --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-32-323ff320990a> in <module>() 15 print ("Looking at " + xref) 16 cds_feature = get_cds_feature_with_qualifier_value (genome_record, "db_xref", xref) ---> 17 gene_sequence = cds_feature.extract(genome_record.seq) 18 protein_sequence = gene_sequence.translate(table=11, cds=True) 19 AttributeError: 'NoneType' object has no attribute 'extract' ```
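The traceback shows `cds_feature` is `None`: `get_cds_feature_with_qualifier_value` returns `None` when no CDS feature carries that `db_xref` (note it also iterates the global `genome_record` instead of its `seq_record` parameter, which is worth fixing). A guard sketch, with a hypothetical `lookup` standing in for the real call so it stays self-contained:

```python
def collect_features(xrefs, lookup):
    """Split xrefs into (found, missing) instead of crashing on None."""
    found, missing = [], []
    for xref in xrefs:
        feature = lookup(xref)   # stands in for get_cds_feature_with_qualifier_value(...)
        if feature is None:      # the case that raised AttributeError above
            missing.append(xref)
            continue
        found.append((xref, feature))
    return found, missing

# Tiny demonstration with a fake lookup table:
table = {"fig|470.8208.peg.2198": "FEATURE"}
found, missing = collect_features(
    ["fig|470.8208.peg.2198", "fig|470.8208.peg.9999"], table.get)
print(found)    # [('fig|470.8208.peg.2198', 'FEATURE')]
print(missing)  # ['fig|470.8208.peg.9999']
```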
2021/05/07
[ "https://Stackoverflow.com/questions/67439037", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13965256/" ]
You can resolve the inline execution error by changing `scriptTag.innerHTML = scriptText;` to `scriptTag.src = chrome.runtime.getURL(filePath);`, no need to fetch the script. Manifest v3 seems to only allow injecting static scripts into the page context. If you want to run dynamically sourced scripts I think this can be achieved by having the static (already trusted) script fetch a remote script then eval it. UPDATE: example extension, with manifest v3, that injects a script that operates in the page context. ``` # myscript.js window.variableInMainContext = "hi" ``` ``` # manifest.json { "name": "example", "version": "1.0", "description": "example extension", "manifest_version": 3, "content_scripts": [ { "matches": ["https://*/*"], "run_at": "document_start", "js": ["inject.js"] } ], "web_accessible_resources": [ { "resources": [ "myscript.js" ], "matches": [ "https://*/*" ] } ] } ``` ``` # inject.js const nullthrows = (v) => { if (v == null) throw new Error("it's a null"); return v; } function injectCode(src) { const script = document.createElement('script'); // This is why it works! script.src = src; script.onload = function() { console.log("script injected"); this.remove(); }; // This script runs before the <head> element is created, // so we add the script to <html> instead. nullthrows(document.head || document.documentElement).appendChild(script); } injectCode(chrome.runtime.getURL('/myscript.js')); ```
Download the script files and put them in your project to make them local. It solved my content security policy problem.
10,801
20,564,010
I am calling a python script from a ruby program as:

```
sku = ["VLJAI20225", "VJLS1234"]
qty = ["3", "7"]
system "python2 /home/nish/stuff/repos/Untitled/voylla_staging_changes/app/models/ReviseItem.py #{sku} #{qtys}"
```

But I'd like to access the array elements in the python script.

```
print sys.argv[1] #gives [VLJAI20225, expected ["VLJAI20225", "VJLS1234"]
print sys.argv[2] #gives VJLS1234] expected ["3", "7"]
```

I feel that the space between the array elements is treating the array elements as separate arguments. I may be wrong. How can I pass the array correctly?
2013/12/13
[ "https://Stackoverflow.com/questions/20564010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2388940/" ]
Not a python specialist, but a quick fix would be to use json: ``` system "python2 /home/nish/stuff/repos/Untitled/voylla_staging_changes/app/models/ReviseItem.py #{sku.to_json} #{qtys.to_json}" ``` Then parse in your python script.
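On the Python side, the JSON strings decode back into lists with nothing but the standard library. A minimal sketch; the hard-coded `argv` below stands in for the real `sys.argv`:

```python
import json

# What sys.argv would contain after the Ruby call with #{sku.to_json}:
argv = ['ReviseItem.py', '["VLJAI20225", "VJLS1234"]', '["3", "7"]']

skus = json.loads(argv[1])
qtys = json.loads(argv[2])
print(skus)  # ['VLJAI20225', 'VJLS1234']
print(qtys)  # ['3', '7']
```

In the real script, replace `argv` with `sys.argv`.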
``` "python2 /home/nish/stuff/repos/Untitled/voylla_staging_changes/app/models/ReviseItem.py \'#{sku}\' \'#{qtys}\'" ``` Maybe something like this?
10,802
66,297,277
I'm trying to embed python code in a C++ application. Problem is that if I use OpenCV functions in the C++ code and also in python function I am embedding there is memory corruption. Simply commenting all the opencv functions from the code below solve the problem. One thing is that my OpenCV for c++ is 4.5.0 (dinamically linked), compiled from source while version used in python is 3.1.0 (installed using the python wheel). Looking at the backtrace (but I am not an expert on this) seems to me that resize function in python is somehow relying on the OpenCV 4.5.0 library to free memory, causing some sort of conflicts between the two version.. Python is version 3.5.2. This is a minimal example. My main cpp code: ``` #include <iostream> #include <Python.h> #include <opencv2/opencv.hpp> using namespace std; int main() { cv::Mat test = cv::imread("/home/rocco/repo/cpp_python/current.jpg"); cv::Mat test_res; cv::resize(test,test_res,cv::Size(),0.5,0.5); cv::imwrite("resized.jpg",test_res); std::string path = "/home/rocco/repo/cpp_python/current.jpg"; std::string filename = "script"; std::string function_name = "myfunction"; std::string wkd = "/home/rocco/repo/cpp_python"; auto script_name = PyUnicode_FromString(filename.c_str()); Py_Initialize(); PyObject *sys_path = PySys_GetObject("path"); PyList_Insert(sys_path, 0, PyUnicode_FromString(wkd.c_str())); PySys_SetObject("path", sys_path); PyObject *pModule; pModule = PyImport_Import(script_name); auto pFunc = PyObject_GetAttrString(pModule, function_name.c_str()); PyObject * pValue = NULL, * pArgs = NULL; pArgs = PyTuple_New(2); auto pFilename = PyUnicode_FromString(path.c_str()); PyTuple_SetItem(pArgs, 0, pFilename); pValue = PyObject_CallObject(pFunc, pArgs); Py_DECREF(pArgs); Py_DECREF(pFunc); Py_DECREF(pModule); Py_DECREF(pFilename); Py_Finalize(); return 0; } ``` My python script: ``` import cv2 def myfunction(filename): img = cv2.imread(filename, cv2.IMREAD_UNCHANGED) scale_percent = 0.5 # percent of original size img 
= cv2.resize(img, None, fx=scale_percent, fy=scale_percent) cv2.imwrite('resized2.jpg',img) myfunction('current.jpg') ``` Backtrace provided here: ``` *** Error in `/home/rocco/repo/cpp_python/cpp_python': free(): invalid next size (fast): 0x0000000002447670 *** ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x777f5)[0x7f1a8c9807f5] /lib/x86_64-linux-gnu/libc.so.6(+0x8038a)[0x7f1a8c98938a] /lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f1a8c98d58c] /home/rocco/.local/lib/python3.5/site-packages/cv2/cv2.cpython-35m-x86_64-linux-gnu.so(+0x103b8c)[0x7f1a8639bb8c] /home/rocco/dynamic_libraries/opencv_4-5/lib/libopencv_core.so.4.5(_ZN2cv3Mat10deallocateEv+0x100)[0x7f1a8d5aa140] /home/rocco/.local/lib/python3.5/site-packages/cv2/cv2.cpython-35m-x86_64-linux-gnu.so(+0x114fe8)[0x7f1a863acfe8] /home/rocco/.local/lib/python3.5/site-packages/cv2/cv2.cpython-35m-x86_64-linux-gnu.so(+0x1154d4)[0x7f1a863ad4d4] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyCFunction_Call+0xe9)[0x7f1a90686049] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalFrameEx+0x7555)[0x7f1a90792135] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalFrameEx+0x79d9)[0x7f1a907925b9] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(+0x24ac6c)[0x7f1a90822c6c] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalCodeEx+0x23)[0x7f1a90822d43] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalCode+0x1b)[0x7f1a9078a94b] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(+0x1bf5fd)[0x7f1a907975fd] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyCFunction_Call+0xc9)[0x7f1a90686029] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalFrameEx+0x8c0e)[0x7f1a907937ee] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(+0x24ac6c)[0x7f1a90822c6c] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalFrameEx+0x62d9)[0x7f1a90790eb9] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalFrameEx+0x79d9)[0x7f1a907925b9] 
/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalFrameEx+0x79d9)[0x7f1a907925b9] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalFrameEx+0x79d9)[0x7f1a907925b9] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(+0x24ac6c)[0x7f1a90822c6c] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalCodeEx+0x23)[0x7f1a90822d43] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(+0xd2ac8)[0x7f1a906aaac8] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyObject_Call+0x6e)[0x7f1a9075f46e] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(_PyObject_CallMethodIdObjArgs+0x1bf)[0x7f1a9073d95f] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyImport_ImportModuleLevelObject+0x864)[0x7f1a907c98d4] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(+0x1be638)[0x7f1a90796638] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyCFunction_Call+0xe9)[0x7f1a90686049] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyObject_Call+0x6e)[0x7f1a9075f46e] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyObject_CallFunction+0xcf)[0x7f1a9075f5cf] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyImport_Import+0xe6)[0x7f1a907c9d66] /home/rocco/repo/cpp_python/cpp_python[0x402291] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f1a8c929840] /home/rocco/repo/cpp_python/cpp_python[0x401e89] ======= Memory map: ======== 00400000-00407000 r-xp 00000000 08:04 9180205 /home/rocco/repo/cpp_python/cpp_python 00606000-00607000 r--p 00006000 08:04 9180205 /home/rocco/repo/cpp_python/cpp_python 00607000-00608000 rw-p 00007000 08:04 9180205 /home/rocco/repo/cpp_python/cpp_python 01e0d000-0251f000 rw-p 00000000 00:00 0 [heap] 7f1a64000000-7f1a64021000 rw-p 00000000 00:00 0 7f1a64021000-7f1a68000000 ---p 00000000 00:00 0 7f1a690f0000-7f1a690f1000 rw-p 00000000 00:00 0 7f1a690f1000-7f1a6a74d000 rw-p 00000000 00:00 0 7f1a6a74d000-7f1a6beeb000 rw-p 00000000 00:00 0 7f1a6beeb000-7f1a6bf7f000 r-xp 00000000 08:04 530555 
/home/rocco/.local/lib/python3.5/site-packages/numpy/random/_generator.cpython-35m-x86_64-linux-gnu.so 7f1a6bf7f000-7f1a6c17f000 ---p 00094000 08:04 530555 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_generator.cpython-35m-x86_64-linux-gnu.so 7f1a6c17f000-7f1a6c1a3000 rw-p 00094000 08:04 530555 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_generator.cpython-35m-x86_64-linux-gnu.so 7f1a6c1a3000-7f1a6c1a5000 rw-p 00000000 00:00 0 7f1a6c1a5000-7f1a6c1b0000 r-xp 00000000 08:04 530563 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_sfc64.cpython-35m-x86_64-linux-gnu.so 7f1a6c1b0000-7f1a6c3af000 ---p 0000b000 08:04 530563 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_sfc64.cpython-35m-x86_64-linux-gnu.so 7f1a6c3af000-7f1a6c3b1000 rw-p 0000a000 08:04 530563 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_sfc64.cpython-35m-x86_64-linux-gnu.so 7f1a6c3b1000-7f1a6c3bf000 r-xp 00000000 08:04 530561 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_pcg64.cpython-35m-x86_64-linux-gnu.so 7f1a6c3bf000-7f1a6c5bf000 ---p 0000e000 08:04 530561 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_pcg64.cpython-35m-x86_64-linux-gnu.so 7f1a6c5bf000-7f1a6c5c1000 rw-p 0000e000 08:04 530561 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_pcg64.cpython-35m-x86_64-linux-gnu.so 7f1a6c5c1000-7f1a6c5d2000 r-xp 00000000 08:04 530566 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_philox.cpython-35m-x86_64-linux-gnu.so 7f1a6c5d2000-7f1a6c7d1000 ---p 00011000 08:04 530566 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_philox.cpython-35m-x86_64-linux-gnu.so 7f1a6c7d1000-7f1a6c7d3000 rw-p 00010000 08:04 530566 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_philox.cpython-35m-x86_64-linux-gnu.so 7f1a6c7d3000-7f1a6c7d4000 rw-p 00000000 00:00 0 7f1a6c7d4000-7f1a6c7eb000 r-xp 00000000 08:04 530559 
/home/rocco/.local/lib/python3.5/site-packages/numpy/random/_mt19937.cpython-35m-x86_64-linux-gnu.so 7f1a6c7eb000-7f1a6c9eb000 ---p 00017000 08:04 530559 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_mt19937.cpython-35m-x86_64-linux-gnu.so 7f1a6c9eb000-7f1a6c9ed000 rw-p 00017000 08:04 530559 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_mt19937.cpython-35m-x86_64-linux-gnu.so 7f1a6c9ed000-7f1a6ca3b000 r-xp 00000000 08:04 530556 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_bounded_integers.cpython-35m-x86_64-linux-gnu.so 7f1a6ca3b000-7f1a6cc3b000 ---p 0004e000 08:04 530556 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_bounded_integers.cpython-35m-x86_64-linux-gnu.so 7f1a6cc3b000-7f1a6cc3d000 rw-p 0004e000 08:04 530556 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_bounded_integers.cpython-35m-x86_64-linux-gnu.so 7f1a6cc3d000-7f1a6ce58000 r-xp 00000000 08:04 9437776 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 7f1a6ce58000-7f1a6d057000 ---p 0021b000 08:04 9437776 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 7f1a6d057000-7f1a6d073000 r--p 0021a000 08:04 9437776 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 7f1a6d073000-7f1a6d07f000 rw-p 00236000 08:04 9437776 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 7f1a6d07f000-7f1a6d082000 rw-p 00000000 00:00 0 7f1a6d082000-7f1a6d088000 r-xp 00000000 08:04 7086087 /usr/lib/python3.5/lib-dynload/_hashlib.cpython-35m-x86_64-linux-gnu.so 7f1a6d088000-7f1a6d287000 ---p 00006000 08:04 7086087 /usr/lib/python3.5/lib-dynload/_hashlib.cpython-35m-x86_64-linux-gnu.so 7f1a6d287000-7f1a6d288000 r--p 00005000 08:04 7086087 /usr/lib/python3.5/lib-dynload/_hashlib.cpython-35m-x86_64-linux-gnu.so 7f1a6d288000-7f1a6d289000 rw-p 00006000 08:04 7086087 /usr/lib/python3.5/lib-dynload/_hashlib.cpython-35m-x86_64-linux-gnu.so 7f1a6d289000-7f1a6d2b9000 r-xp 00000000 08:04 530568 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_common.cpython-35m-x86_64-linux-gnu.so 
7f1a6d2b9000-7f1a6d4b8000 ---p 00030000 08:04 530568 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_common.cpython-35m-x86_64-linux-gnu.so 7f1a6d4b8000-7f1a6d4ba000 rw-p 0002f000 08:04 530568 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_common.cpython-35m-x86_64-linux-gnu.so 7f1a6d4ba000-7f1a6d4bb000 rw-p 00000000 00:00 0 7f1a6d4bb000-7f1a6d4e0000 r-xp 00000000 08:04 530557 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_bit_generator.cpython-35m-x86_64-linux-gnu.so 7f1a6d4e0000-7f1a6d6e0000 ---p 00025000 08:04 530557 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_bit_generator.cpython-35m-x86_64-linux-gnu.so 7f1a6d6e0000-7f1a6d6e5000 rw-p 00025000 08:04 530557 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_bit_generator.cpython-35m-x86_64-linux-gnu.so 7f1a6d6e5000-7f1a6d6e6000 rw-p 00000000 00:00 0 7f1a6d6e6000-7f1a6d75e000 r-xp 00000000 08:04 530558 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/mtrand.cpython-35m-x86_64-linux-gnu.so 7f1a6d75e000-7f1a6d95d000 ---p 00078000 08:04 530558 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/mtrand.cpython-35m-x86_64-linux-gnu.so 7f1a6d95d000-7f1a6d983000 rw-p 00077000 08:04 530558 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/mtrand.cpython-35m-x86_64-linux-gnu.so 7f1a6d983000-7f1a6d985000 rw-p 00000000 00:00 0 7f1a6d985000-7f1a6d99b000 r-xp 00000000 08:04 529217 /home/rocco/.local/lib/python3.5/site-packages/numpy/fft/_pocketfft_internal.cpython-35m-x86_64-linux-gnu.so 7f1a6d99b000-7f1a6db9a000 ---p 00016000 08:04 529217 /home/rocco/.local/lib/python3.5/site-packages/numpy/fft/_pocketfft_internal.cpython-35m-x86_64-linux-gnu.so 7f1a6db9a000-7f1a6db9b000 rw-p 00015000 08:04 529217 /home/rocco/.local/lib/python3.5/site-packages/numpy/fft/_pocketfft_internal.cpython-35m-x86_64-linux-gnu.so 7f1a6db9b000-7f1a6dbd2000 r-xp 00000000 08:04 6824744 /usr/lib/x86_64-linux-gnu/libmpdec.so.2.4.2 7f1a6dbd2000-7f1a6ddd1000 
---p 00037000 08:04 6824744 /usr/lib/x86_64-linux-gnu/libmpdec.so.2.4.2 7f1a6ddd1000-7f1a6ddd2000 r--p 00036000 08:04 6824744 /usr/lib/x86_64-linux-gnu/libmpdec.so.2.4.2 7f1a6ddd2000-7f1a6ddd3000 rw-p 00037000 08:04 6824744 /usr/lib/x86_64-linux-gnu/libmpdec.so.2.4.2 7f1a6ddd3000-7f1a6ddf7000 r-xp 00000000 08:04 7150229 /usr/lib/python3.5/lib-dynload/_decimal.cpython-35m-x86_64-linux-gnu.so 7f1a6ddf7000-7f1a6dff6000 ---p 00024000 08:04 7150229 /usr/lib/python3.5/lib-dynload/_decimal.cpython-35m-x86_64-linux-gnu.so 7f1a6dff6000-7f1a6dff7000 r--p 00023000 08:04 7150229 /usr/lib/python3.5/lib-dynload/_decimal.cpython-35m-x86_64-linux-gnu.so 7f1a6dff7000-7f1a6e000000 rw-p 00024000 08:04 7150229 /usr/lib/python3.5/lib-dynload/_decimal.cpython-35m-x86_64-linux-gnu.so 7f1a6e000000-7f1a70000000 rw-p 00000000 00:00 0 7f1a70000000-7f1a70021000 rw-p 00000000 00:00 0 7f1a70021000-7f1a74000000 ---p 00000000 00:00 0 7f1a74000000-7f1a78000000 rw-p 00000000 00:00 0 7f1a78000000-7f1a78021000 rw-p 00000000 00:00 0 7f1a78021000-7f1a7c000000 ---p 00000000 00:00 0 7f1a7c010000-7f1a7c190000 rw-p 00000000 00:00 0 7f1a7c190000-7f1a7c197000 r-xp 00000000 08:04 7136401 /usr/lib/python3.5/lib-dynload/_lzma.cpython-35m-x86_64-linux-gnu.so 7f1a7c197000-7f1a7c396000 ---p 00007000 08:04 7136401 /usr/lib/python3.5/lib-dynload/_lzma.cpython-35m-x86_64-linux-gnu.so 7f1a7c396000-7f1a7c397000 r--p 00006000 08:04 7136401 /usr/lib/python3.5/lib-dynload/_lzma.cpython-35m-x86_64-linux-gnu.so 7f1a7c397000-7f1a7c399000 rw-p 00007000 08:04 7136401 /usr/lib/python3.5/lib-dynload/_lzma.cpython-35m-x86_64-linux-gnu.so 7f1a7c399000-7f1a7c3a8000 r-xp 00000000 08:04 9437491 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7f1a7c3a8000-7f1a7c5a7000 ---p 0000f000 08:04 9437491 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7f1a7c5a7000-7f1a7c5a8000 r--p 0000e000 08:04 9437491 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7f1a7c5a8000-7f1a7c5a9000 rw-p 0000f000 08:04 9437491 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7f1a7c5a9000-7f1a7c5ad000 r-xp 
00000000 08:04 7136410 /usr/lib/python3.5/lib-dynload/_bz2.cpython-35m-x86_64-linux-gnu.so 7f1a7c5ad000-7f1a7c7ac000 ---p 00004000 08:04 7136410 /usr/lib/python3.5/lib-dynload/_bz2.cpython-35m-x86_64-linux-gnu.so 7f1a7c7ac000-7f1a7c7ad000 r--p 00003000 08:04 7136410 /usr/lib/python3.5/lib-dynload/_bz2.cpython-35m-x86_64-linux-gnu.so 7f1a7c7ad000-7f1a7c7ae000 rw-p 00004000 08:04 7136410 /usr/lib/python3.5/lib-dynload/_bz2.cpython-35m-x86_64-linux-gnu.so 7f1a7c7ae000-7f1a7c8ae000 rw-p 00000000 00:00 0 7f1a7c8ae000-7f1a7c8d9000 r-xp 00000000 08:04 530149 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/_umath_linalg.cpython-35m-x86_64-linux-gnu.so 7f1a7c8d9000-7f1a7cad9000 ---p 0002b000 08:04 530149 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/_umath_linalg.cpython-35m-x86_64-linux-gnu.so 7f1a7cad9000-7f1a7cadb000 rw-p 0002b000 08:04 530149 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/_umath_linalg.cpython-35m-x86_64-linux-gnu.so 7f1a7cadb000-7f1a7cade000 rw-p 000d4000 08:04 530149 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/_umath_linalg.cpython-35m-x86_64-linux-gnu.so 7f1a7cade000-7f1a7cae2000 r-xp 00000000 08:04 530150 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/lapack_lite.cpython-35m-x86_64-linux-gnu.so 7f1a7cae2000-7f1a7cce2000 ---p 00004000 08:04 530150 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/lapack_lite.cpython-35m-x86_64-linux-gnu.so 7f1a7cce2000-7f1a7cce3000 rw-p 00004000 08:04 530150 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/lapack_lite.cpython-35m-x86_64-linux-gnu.so 7f1a7cce3000-7f1a7cce5000 rw-p 00019000 08:04 530150 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/lapack_lite.cpython-35m-x86_64-linux-gnu.so 7f1a7cce5000-7f1a7cda5000 rw-p 00000000 00:00 0 7f1a7cda5000-7f1a7cdc7000 r-xp 00000000 08:04 7136407 /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so 7f1a7cdc7000-7f1a7cfc6000 ---p 00022000 08:04 7136407 
/usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so 7f1a7cfc6000-7f1a7cfc7000 r--p 00021000 08:04 7136407 /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so 7f1a7cfc7000-7f1a7cfcb000 rw-p 00022000 08:04 7136407 /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so 7f1a7cfcb000-7f1a7cfcc000 rw-p 00000000 00:00 0 7f1a7cfcc000-7f1a7cfeb000 r-xp 00000000 08:04 530237 /home/rocco/.local/lib/python3.5/site-packages/numpy/core/_multiarray_tests.cpython-35m-x86_64-linux-gnu.so 7f1a7cfeb000-7f1a7d1eb000 ---p 0001f000 08:04 530237 /home/rocco/.local/lib/python3.5/site-packages/numpy/core/_multiarray_tests.cpython-35m-x86_64-linux-gnu.so 7f1a7d1eb000-7f1a7d1ed000 rw-p 0001f000 08:04 530237 /home/rocco/.local/lib/python3.5/site-packages/numpy/core/_multiarray_tests.cpython-35m-x86_64-linux-gnu.so 7f1a7d1ed000-7f1a7d26d000 rw-p 00000000 00:00 0 7f1a7d26d000-7f1a7d26e000 ---p 00000000 00:00 0 7f1a7d26e000-7f1a7da6e000 rw-p 00000000 00:00 0 7f1a7da6e000-7f1a7f564000 r-xp 00000000 08:04 532531 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libopenblasp-r0-34a18dc3.3.7.so 7f1a7f564000-7f1a7f763000 ---p 01af6000 08:04 532531 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libopenblasp-r0-34a18dc3.3.7.so 7f1a7f763000-7f1a7f77c000 rw-p 01af5000 08:04 532531 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libopenblasp-r0-34a18dc3.3.7.so 7f1a7f77c000-7f1a7f787000 rw-p 00000000 00:00 0 7f1a7f787000-7f1a7f7ff000 rw-p 01be1000 08:04 532531 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libopenblasp-r0-34a18dc3.3.7.so 7f1a7f7ff000-7f1a7f800000 ---p 00000000 00:00 0 7f1a7f800000-7f1a80000000 rw-p 00000000 00:00 0 7f1a80000000-7f1a80021000 rw-p 00000000 00:00 0 7f1a80021000-7f1a84000000 ---p 00000000 00:00 0 7f1a8401d000-7f1a840dd000 rw-p 00000000 00:00 0 7f1a840fe000-7f1a841be000 rw-p 00000000 00:00 0 7f1a841be000-7f1a841bf000 ---p 00000000 00:00 0 7f1a841bf000-7f1a849bf000 rw-p 00000000 00:00 0 
7f1a849bf000-7f1a849c0000 ---p 00000000 00:00 0 7f1a849c0000-7f1a851c0000 rw-p 00000000 00:00 0 7f1a851c0000-7f1a852b0000 r-xp 00000000 08:04 532291 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libgfortran-ed201abd.so.3.0.0 7f1a852b0000-7f1a854af000 ---p 000f0000 08:04 532291 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libgfortran-ed201abd.so.3.0.0 7f1a854af000-7f1a854b1000 rw-p 000ef000 08:04 532291 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libgfortran-ed201abd.so.3.0.0 7f1a854b1000-7f1a854b2000 rw-p 00000000 00:00 0 7f1a854b2000-7f1a854ba000 rw-p 000f2000 08:04 532291 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libgfortran-ed201abd.so.3.0.0 7f1a854ba000-7f1a8585f000 r-xp 00000000 08:04 530243 /home/rocco/.local/lib/python3.5/site-packages/numpy/core/_multiarray_umath.cpython-35m-x86_64-linux-gnu.so 7f1a8585f000-7f1a85a5e000 ---p 003a5000 08:04 530243 /home/rocco/.local/lib/python3.5/site-packages/numpy/core/_multiarray_umath.cpython-35m-x86_64-linux-gnu.so 7f1a85a5e000-7f1a85a7d000 rw-p 003a4000 08:04 530243 /home/rocco/.local/lib/python3.5/site-packages/numpy/core/_multiarray_umath.cpython-35m-x86_64-linux-gnu.so 7f1a85a7d000-7f1a85a9e000 rw-p 00000000 00:00 0 7f1a85a9e000-7f1a85aa5000 rw-p 01474000 08:04 530243 /home/rocco/.local/lib/python3.5/site-packages/numpy/core/_multiarray_umath.cpython-35m-x86_64-linux-gnu.so 7f1a85aa5000-7f1a85ae5000 rw-p 00000000 00:00 0 7f1a85ae5000-7f1a85b53000 r-xp 00000000 08:04 9441868 /lib/x86_64-linux-gnu/libpcre.so.3.13.2 7f1a85b53000-7f1a85d53000 ---p 0006e000 08:04 9441868 /lib/x86_64-linux-gnu/libpcre.so.3.13.2 7f1a85d53000-7f1a85d54000 r--p 0006e000 08:04 9441868 /lib/x86_64-linux-gnu/libpcre.so.3.13.2 7f1a85d54000-7f1a85d55000 rw-p 0006f000 08:04 9441868 /lib/x86_64-linux-gnu/libpcre.so.3.13.2 7f1a85d55000-7f1a85e64000 r-xp 00000000 08:04 9437261 /lib/x86_64-linux-gnu/libglib-2.0.so.0.4800.2 7f1a85e64000-7f1a86063000 ---p 0010f000 08:04 9437261 
/lib/x86_64-linux-gnu/libglib-2.0.so.0.4800.2 7f1a86063000-7f1a86064000 r--p 0010e000 08:04 9437261 /lib/x86_64-linux-gnu/libglib-2.0.so.0.4800.2 7f1a86064000-7f1a86065000 rw-p 0010f000 08:04 9437261 /lib/x86_64-linux-gnu/libglib-2.0.so.0.4800.2 7f1a86065000-7f1a86066000 rw-p 00000000 00:00 0 7f1a86066000-7f1a86067000 r-xp 00000000 08:04 6819030 /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0.4800.2 7f1a86067000-7f1a86266000 ---p 00001000 08:04 6819030 /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0.4800.2 7f1a86266000-7f1a86267000 r--p 00000000 08:04 6819030 /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0.4800.2 7f1a86267000-7f1a86268000 rw-p 00001000 08:04 6819030 /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0.4800.2 7f1a86298000-7f1a870f2000 r-xp 00000000 08:04 528776 /home/rocco/.local/lib/python3.5/site-packages/cv2/cv2.cpython-35m-x86_64-linux-gnu.so 7f1a870f2000-7f1a872f1000 ---p 00e5a000 08:04 528776 /home/rocco/.local/lib/python3.5/site-packages/cv2/cv2.cpython-35m-x86_64-linux-gnu.so 7f1a872f1000-7f1a8733f000 rw-p 00e59000 08:04 528776 /home/rocco/.local/lib/python3.5/site-packages/cv2/cv2.cpython-35m-x86_64-linux-gnu.so 7f1a8733f000-7f1a87465000 rw-p 00000000 00:00 0 7f1a87465000-7f1a8773f000 r--p 00000000 08:04 6816127 /usr/lib/locale/locale-archive 7f1a8773f000-7f1a87740000 ---p 00000000 00:00 0 7f1a87740000-7f1a87f40000 rw-p 00000000 00:00 0 7f1a87f40000-7f1a87f41000 ---p 00000000 00:00 0 7f1a87f41000-7f1a8a336000 rw-p 00000000 00:00 0 7f1a8a336000-7f1a8a33b000 r-xp 00000000 08:04 6823807 /usr/lib/x86_64-linux-gnu/libIlmThread-2_2.so.12.0.0 7f1a8a33b000-7f1a8a53b000 ---p 00005000 08:04 6823807 /usr/lib/x86_64-linux-gnu/libIlmThread-2_2.so.12.0.0 ``` **EDIT** After some test, I tried updating python to version 3.6.13 but problem persist, in addition I realized that the problem is not related to cv2.resize function but to opencv in general, whatever opencv function I call from python, it calls my dynamic linked opencv 4.5 libray for deallocation, causing the 
problem. **EDIT** Using OpenCV statically linked in C++ everything works as expected.. is there a reason for this?
2021/02/20
[ "https://Stackoverflow.com/questions/66297277", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4267439/" ]
It seems to be a linking problem. Python uses "dlopen" to dynamically load libraries; try changing the [flags](https://manpages.debian.org/buster/manpages-dev/dlopen.3.en.html) that are passed to this function. For example, with RTLD\_DEEPBIND you can tell the loader to prefer the symbols contained in the loaded object itself (instead of the global ones with the same name). Using these two lines: ```py import os print(os.RTLD_DEEPBIND | os.RTLD_NOW) ``` you can obtain the int value of these flags, which is 10. Then you can change the flags used by dlopen by calling `sys.setdlopenflags(10)`: ```cpp void setFlags() { auto flag = PyLong_FromLong(10); auto setdl = PySys_GetObject("setdlopenflags"); auto arg = PyTuple_New(1); PyTuple_SetItem(arg, 0, flag); PyObject_CallObject(setdl, arg); } ``` It must be called before `PyImport_Import`.
I think your problem is that Python uses bindings for the C++ functions, and as you mention you are using different OpenCV versions on the C++ side and the Python side. Refer to <https://docs.opencv.org/3.4/da/d49/tutorial_py_bindings_basics.html> for more information about the Python bindings. One solution would be to use the already-bound functions, or to use the same compiled library version if you create your own bindings.
10,805
67,094,562
My dataset is a .txt file separated by colons (:). One of the columns contains a date *AND* time; the date is separated by forward slashes (/), which is fine. However, the time is separated by colons (:) just like the rest of the data, which throws off my method for cleaning the data. Example of a couple of lines of the dataset: ``` USA:Houston, Texas:05/06/2020 12:00:00 AM:car Japan:Tokyo:05/06/2020 11:05:10 PM:motorcycle USA:Houston, Texas:12/15/2020 12:00:10 PM:car Japan:Kyoto:01/04/1999 05:30:00 PM:bicycle ``` I'd like to clean the dataset before loading it into a dataframe in python using pandas. How do I separate the columns? I can't use ``` df = pandas.read_csv('example.txt', sep=':', header=None) ``` because that will separate the time data into different columns. Any ideas?
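One way to pre-clean lines like these is to split on every colon that is not squeezed between two digits, which leaves the HH:MM:SS colons alone. A sketch using only the standard library; it assumes no country or city name starts or ends with a digit:

```python
import re

# A colon is a field separator unless it has a digit on both sides
# (i.e. unless it is part of an HH:MM:SS timestamp).
FIELD_SEP = re.compile(r'(?<!\d):|:(?!\d)')

line = "USA:Houston, Texas:05/06/2020 12:00:00 AM:car"
print(FIELD_SEP.split(line))
# ['USA', 'Houston, Texas', '05/06/2020 12:00:00 AM', 'car']
```

The same pattern should also work directly as a regex separator, e.g. `pandas.read_csv('example.txt', sep=r'(?<!\d):|:(?!\d)', engine='python', header=None)`.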
2021/04/14
[ "https://Stackoverflow.com/questions/67094562", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12326879/" ]
You cannot assign to arrays. You should use [`strcpy()`](https://man7.org/linux/man-pages/man3/strcpy.3.html) to copy C-style strings. ``` strcpy(dstPath, getenv("APPDATA")); strcat(dstPath, p.filename().string().c_str()); ``` Or the concatination can be done in one line via [`snprintf()`](https://man7.org/linux/man-pages/man3/snprintf.3.html): ``` snprintf(dstPath, sizeof(dstPath), "%s%s", getenv("APPDATA"), p.filename().string().c_str()); ``` Finally, `TCHAR` and `GetModuleFileName` can refer to UNICODE version of the API, according to the compilation option. Using ANSI version (`char` and `GetModuleFileNameA`) explicitly is safer to work with `std::string` and other APIs that require strings consists of `char`.
You are trying to use strcat to concatenate two strings and store the result in another one, but it does not work that way. The call `strcat (str1, str2)` adds the content of `str2` at the end of `str1`. It also returns a pointer to `str1` but I don't normally use it. What you are trying to do should be done in three steps: * Make sure that `dstPath` contains an empty string * Concatenate to `dstPath` the value of the environment variable APPDATA * Concatenate to `dstPath` the value of filename Something like this: ``` dstPath[0] = '\0'; strcat(dstPath, getenv("APPDATA")); strcat(dstPath, p.filename().string().c_str()); ``` You should also add checks not to overflow `dstPath`...
10,806
65,974,443
When I run `graalpython -m ginstall install pandas` or `graalpython -m ginstall install numpy` I get the following error. Please advise how to fix it. Thank you. ``` line 54, in __init__ File "number.c", line 284, in array_power File "ufunc_object.c", line 4688, in ufunc_generic_call File "ufunc_object.c", line 3178, in PyUFunc_GenericFunction File "ufunc_type_resolution.c", line 180, in PyUFunc_DefaultTypeResolver File "ufunc_type_resolution.c", line 2028, in linear_search_type_resolver File "ufunc_type_resolution.c", line 1639, in ufunc_loop_matches File "convert_datatype.c", line 904, in PyArray_CanCastArrayTo java.lang.AssertionError ```
2021/01/31
[ "https://Stackoverflow.com/questions/65974443", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5329711/" ]
I'm seeing a couple steps missing. 1. You shouldn't be installing chromedriver from brew, you can use the "webdrivers" gem to handle that. The gem will install drivers in the `~/.webdrivers` directory by default. 2. When running integration tests, you'll need to set the proper driver for Capybara. <https://github.com/teamcapybara/capybara#selenium> `Capybara.current_driver = :selenium_chrome`
A silly, non-obvious issue: the gems need to be in the `test` group: ``` group :test do gem 'capybara' gem 'webdrivers' end ```
10,808
41,175,862
I'm using Connector/Python to insert many rows into a temp table in mysql. The rows are all in a list-of-lists. I perform the insertion like this: ``` cursor = connection.cursor(); batch = [[1, 'foo', 'bar'],[2, 'xyz', 'baz']] cursor.executemany('INSERT INTO temp VALUES(?, ?, ?)', batch) connection.commit() ``` I noticed that (with many more rows, of course) the performance was extremely poor. Using SHOW PROCESSLIST, I noticed that each insert was executing separately. But the documentation <https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-executemany.html> says this should be optimized into 1 insert. What's going on?
2016/12/16
[ "https://Stackoverflow.com/questions/41175862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2627992/" ]
Answering so other people won't go through the debugging I had to! I wrote the query modeling it on other queries in our code that used prepared statements and used '?' to indicate parameters. But you *can't do that* for executemany()! It *must* use '%s'. Changing to the following: ``` cursor.executemany('INSERT INTO temp VALUES(%s,%s,%s)', batch) ``` ...led to a hundredfold speed improvement and the optimized single query could be seen using SHOW PROCESSLIST. Beware the standard '?' syntax!
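The required placeholder style is advertised by every DB-API 2.0 driver as a module-level `paramstyle` attribute. A quick illustration with the stdlib `sqlite3` driver, which really does use `?` (`mysql.connector`, by contrast, reports `pyformat`, hence the `%s` placeholders):

```python
import sqlite3

# DB-API 2.0 drivers declare their placeholder style:
print(sqlite3.paramstyle)  # qmark  -> use '?'
# mysql.connector.paramstyle is 'pyformat' -> use '%s'
```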
Try turning this option on: `cursor.fast_executemany = True` Otherwise `executemany` acts just like multiple `execute` calls.
10,809
35,162,989
I am learning python and need some help with lists and printing of the same. List would end up looking list this: ``` mylist = ["a", "d", "c", "g", "g", "g", "a", "b", "n", "g", "a", "s", "t", "z", "a"] ``` I've used Counter(i think lol) ``` class item_print(Counter): def __str__(self): return '\n'.join('{}: {}'.format(k, v) for k, v in self.items()) ``` to make it look like this: ``` a:4 b:1 etc ``` Wondering if there is a way to make it look like this: ``` "a":4 "b":1 etc etc ```
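One way to get the quotes is to put them straight into the format string (or use `{!r}`, which yields single quotes instead). A minimal sketch along the lines of the class above; `ItemPrint` is just an illustrative name, and on Python 3.7+ the counts print in first-seen order:

```python
from collections import Counter

class ItemPrint(Counter):
    def __str__(self):
        # Literal double quotes around the key in the format string
        return '\n'.join('"{}": {}'.format(k, v) for k, v in self.items())

print(ItemPrint(["a", "a", "b"]))
# "a": 2
# "b": 1
```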
2016/02/02
[ "https://Stackoverflow.com/questions/35162989", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5699265/" ]
Three problems. First, this needs to be put in a directive so you are assured the element(s) exist when the code is run. Next, the event handler runs outside of Angular's context. Whenever code outside of Angular updates scope, you need to tell Angular to update the view. Last, `angular.element` doesn't accept class selectors unless jQuery is included in the page. Using a directive also solves this issue, since the element itself is exposed within the directive as a jqLite or jQuery object. ``` videoElement.on('pause',function() { vm.hideVideoCaption = false; $scope.$apply() }); ```
You are updating a scope variable outside of Angular's context. Angular doesn't run the digest cycle for that kind of update. In this case you are updating scope variables from custom events, which don't tell Angular's digest system that something has updated in the UI; as a result the digest cycle doesn't get fired. You need to kick off the digest cycle manually to update the bindings. You could either run the digest cycle via `$apply` on `$scope` **OR** use the `$timeout` function. ``` videoElement.on('canplay',function() { $timeout(function(){ vm.hidePlayButton = false; }) }); videoElement.on('pause',function() { $timeout(function(){ vm.hideVideoCaption = false; }) }); ```
10,812
55,738,296
A call to [`functools.reduce`](https://docs.python.org/3/library/functools.html#functools.reduce) returns only the final result: ``` >>> from functools import reduce >>> a = [1, 2, 3, 4, 5] >>> f = lambda x, y: x + y >>> reduce(f, a) 15 ``` Instead of writing a loop myself, does a function exist which returns the intermediary values, too? ``` [3, 6, 10, 15] ``` (This is only a simple example, I'm not trying to calculate the cumulative sum - the solution should work for arbitrary `a` and `f`.)
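For arbitrary `a` and `f`, the desired behaviour is a "scan" (a reduce that yields each partial result), which is only a few lines as a generator. A sketch; `scan` is just an illustrative name:

```python
def scan(f, iterable):
    """Like functools.reduce(), but yields every intermediate result."""
    it = iter(iterable)
    acc = next(it)  # first element seeds the accumulator
    for x in it:
        acc = f(acc, x)
        yield acc

print(list(scan(lambda x, y: x + y, [1, 2, 3, 4, 5])))  # [3, 6, 10, 15]
```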
2019/04/18
[ "https://Stackoverflow.com/questions/55738296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1621041/" ]
You can use [`itertools.accumulate()`](https://docs.python.org/3/library/itertools.html#itertools.accumulate): ``` >>> from itertools import accumulate >>> list(accumulate([1, 2, 3, 4, 5], lambda x, y: x+y))[1:] [3, 6, 10, 15] ``` Note that the order of parameters is switched relative to `functools.reduce()`. Also, the default `func` (the second parameter) is a sum (like `operator.add`), so in your case, it's technically optional: ``` >>> list(accumulate([1, 2, 3, 4, 5]))[1:] # default func: sum [3, 6, 10, 15] ``` And finally, it's worth noting that `accumulate()` will include the first term in the sequence, hence why the result is indexed from `[1:]` above. --- In your edit, you noted that... > > This is only a simple example, I'm not trying to calculate the cumulative sum - the solution should work for arbitrary a and f. > > > The nice thing about `accumulate()` is that it is flexible about the callable it will take. It only demands a callable that is a function of two parameters. For instance, builtin [`max()`](https://docs.python.org/3.5/library/functions.html#max) satisfies that: ``` >>> list(accumulate([1, 10, 4, 2, 17], max)) [1, 10, 10, 10, 17] ``` This is a longer form of using the unnecessary lambda: ``` >>> # Don't do this >>> list(accumulate([1, 10, 4, 2, 17], lambda x, y: max(x, y))) [1, 10, 10, 10, 17] ```
``` import numpy as np x=[1, 2, 3, 4, 5] y=np.cumsum(x) # gets you the cumulative sum y=list(y[1:]) # remove the first number print(y) #[3, 6, 10, 15] ```
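For arbitrary `f` (not just cumulative sums), the same scan can be written as a short generator — a sketch of what `itertools.accumulate` does, including the first element, rather than a replacement for it:

```python
def scan(f, iterable):
    # Yield the running reduction of iterable by f.
    it = iter(iterable)
    acc = next(it)
    yield acc
    for x in it:
        acc = f(acc, x)
        yield acc

print(list(scan(lambda x, y: x + y, [1, 2, 3, 4, 5])))  # [1, 3, 6, 10, 15]
print(list(scan(max, [1, 10, 4, 2, 17])))               # [1, 10, 10, 10, 17]
```

Like `accumulate`, this keeps the first term; drop it with `[1:]` if you only want the intermediate results.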
10,813
53,129,790
I'm using Django with Postgres. On a page I can show a list of featured items, let's say 10. 1. If the database has more than 10 featured items, I want to get 10 of them at random (better: rotate them). 2. If the number of featured items is lower than 10, get all featured items and fill the list up to 10 with non-featured items. Because randomizing takes more time in the database, I do the sampling in Python: ``` count = Item.objects.filter(is_featured=True).count() if count >= 10: item = random.sample(list(Item.objects.filter(is_featured=True))[:10]) else: item = list(Item.objects.all()[:10]) ``` The code above misses the case where there are fewer than 10 featured items (for example 8, so 2 non-featured should be added). I could add a new query, but I don't know if that is an efficient retrieval, using 4-5 queries for this.
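Setting Django aside, the selection rule itself can be sketched with plain lists standing in for querysets — the function name and sample data below are invented for illustration:

```python
import random

def pick_items(featured, others, n=10):
    # Enough featured items: take a random sample of n of them.
    if len(featured) >= n:
        return random.sample(featured, n)
    # Otherwise take all featured items and top up with non-featured ones.
    return featured + others[:n - len(featured)]

featured = ['f1', 'f2', 'f3']
others = ['o%d' % i for i in range(20)]
print(pick_items(featured, others))  # 3 featured items topped up with 7 others
```

With querysets this still needs only two queries in the worst case: one for the featured items and one `.exclude(is_featured=True)[:missing]` top-up.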
2018/11/03
[ "https://Stackoverflow.com/questions/53129790", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3541631/" ]
You may use the `whereHas` method in Laravel:

```
public function search(Request $request)
{
    return Product::with('categories')
        ->whereHas('categories', function ($query) use ($request) {
            $query->where('category_id', $request->category_id);
        })->get();
}
```

Here are the [docs](https://laravel.com/docs/5.7/eloquent-relationships).
You can search for it in the following way:

```
public function search(Request $request)
{
    return Product::with('categories')
        ->whereHas('categories', function ($q) use ($request) {
            $q->where('id', $request->category_id);
        })->get();
}
```
10,814
37,530,804
I'm new to MPI, but I have been trying to use it on a cluster with OpenMPI. I'm having the following problem: ``` $ python -c "from mpi4py import MPI" Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: libmpi.so.1: cannot open shared object file: No such file or directory ``` I've done some research and tried a few things including: ``` module load mpi/openmpi-x86_64 ``` which doesn't seem to change anything. My LD\_LIBRARY\_PATH seems to be set correctly, but the desired "libmpi.so.1" does not exist. Instead, there is "libmpi.so.12": ``` $ echo $LD_LIBRARY_PATH /usr/lib64/openmpi/lib:/usr/local/matio/lib:/usr/lib64/mysql:/usr/local/knitro/lib:/usr/local/gurobi/linux64/lib:/usr/local/lib: $ls /usr/lib64/openmpi/lib ... libmpi.so libmpi.so.12 libmpi.so.12.0.0 libmpi_usempi.so ... ``` I can't uninstall/re-install mpi4py because it's outside my virtual environment / I don't have the permissions to uninstall it on the general cluster. I've seen this: [Error because file libmpi.so.1 missing](https://stackoverflow.com/questions/36860315/error-because-file-libmpi-so-1-missing), but not sure how I can create a symbolic link / don't think I have permission to modify the folder. I'm somewhat out of ideas: Not sure if it's possible to have a separate install of mpi4py on my virtualenv, or whether I can specify the correct file to mpi4py temporarily? Any suggestions would be greatly appreciated! Update: I ran `find /usr -name libmpi.so.1 2>/dev/null` as suggested, which returned `usr/lib64/compat-openmpi16/lib/libmpi.so.1`. Using `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH"/usr/lib64/compat-openmpi16/lib/"`, `python -c "from mpi4py import MPI"` runs without any problems. 
However, if I try `mpiexec -np 2 python -c 'from mpi4py import MPI'`, I get the following error: ``` [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file ess_env_module.c at line 367 [[INVALID],INVALID]-[[28753,0],0] mca_oob_tcp_peer_try_connect: connect to 255.255.255.255:37376 failed: Network is unreachable (101) ```
2016/05/30
[ "https://Stackoverflow.com/questions/37530804", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6332838/" ]
Even adding to LD\_LIBRARY\_PATH did not work for me. I had to uninstall mpi4py, then install it manually with my mpicc path according to the instructions here - <https://mpi4py.readthedocs.io/en/stable/install.html> (using distutils section)
I solved this problem by:

```bash
pip uninstall mpi4py
conda install mpi4py
```
10,815
44,132,619
I used the `pyodbc` and `pypyodbc` Python packages to connect to SQL Server, trying each of these drivers: `['SQL Server', 'SQL Server Native Client 10.0', 'ODBC Driver 11 for SQL Server', 'ODBC Driver 13 for SQL Server']`.

Connection string:

```
connection = pyodbc.connect('DRIVER={SQL Server};'
                            'Server=aaa.database.windows.net;'
                            'DATABASE=DB_NAME;'
                            'UID=User_name;'
                            'PWD=password')
```

Now I am getting an error message like:

```
DatabaseError: (u'28000', u"[28000] [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user
```

But I can connect to the server through SQL Server Management Studio, using SQL Server Authentication, not Windows Authentication. Is this a Python package/driver issue or a DB issue? How do I solve it?
2017/05/23
[ "https://Stackoverflow.com/questions/44132619", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5602871/" ]
The problem is not a driver issue. The error message is `DatabaseError: Login failed for user`, which means the user is trying to log in with credentials that cannot be validated. I suspect you are actually logging in with Windows Authentication; if so, use `Trusted_Connection=yes` instead:

```
connection = pyodbc.connect('DRIVER={SQL Server};Server=aaa.database.windows.net;DATABASE=DB_NAME;Trusted_Connection=yes')
```

For more details, please refer to [my old answer](https://stackoverflow.com/questions/43815104/python-pyodbc-connection-error/43820031#43820031) about the differences between the SQL Server authentication modes.
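If it helps to keep the authentication mode explicit, the connection string can be assembled from a dict — a plain-Python sketch (the function name is invented and the server/database names are placeholders; pyodbc itself only sees the final string):

```python
def build_conn_str(server, database, trusted=False, uid=None, pwd=None):
    # Build the ODBC connection string; choose exactly one auth mode.
    parts = {'DRIVER': '{SQL Server}', 'Server': server, 'DATABASE': database}
    if trusted:
        parts['Trusted_Connection'] = 'yes'   # Windows Authentication
    else:
        parts['UID'] = uid                    # SQL Server Authentication
        parts['PWD'] = pwd
    return ';'.join('%s=%s' % item for item in parts.items())

print(build_conn_str('aaa.database.windows.net', 'DB_NAME', trusted=True))
```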
I think the problem is due to the driver definition in your connection string. You can try the following:

```
connection = pyodbc.connect('DRIVER={SQL Server Native Client 10.0}; Server=aaa.database.windows.net; DATABASE=DB_NAME; UID=User_name; PWD=password')
```
10,816
12,700,194
I've installed git-core (+svn) on my Mac from MacPorts. This has given me: ``` git-core @1.7.12.2_0+credential_osxkeychain+doc+pcre+python27+svn subversion @1.7.6_2 ``` I'm attempting to call something like the following: ``` git svn clone http://my.svn.com/svn/area/subarea/project -s ``` The output looks something like this: ``` Initialized empty Git repository in /Users/bitwise/work/svn/project/.git/ Using higher level of URL: http://my.svn.com/svn/area/subarea/project => http://my.svn.com/svn/area A folder/file.txt A folder/file2.txt [... some number of files from svn ... ] A folder44/file0.txt Temp file with moniker 'svn_delta' already in use at /opt/local/lib/perl5/site_perl/5.12.4/Git.pm line 1024. ``` I've done the usual searches but most of the threads seem to trail off without proposing a clear fix.
2012/10/03
[ "https://Stackoverflow.com/questions/12700194", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1637252/" ]
Add this setting to your `~/.subversion/servers` file: ``` [global] http-bulk-updates=on ``` I had this issue on Linux, and saw the above workaround on [this thread](http://mail-archives.apache.org/mod_mbox/subversion-users/201307.mbox/%3C51D79BDD.5020106@wandisco.com%3E). I **think** I ran into this because I forced [Alien SVN](http://search.cpan.org/~mschwern/Alien-SVN-v1.7.3.1/src/subversion/subversion/bindings/swig/perl/native/Core.pm) to build with subversion 1.8 which uses the serf library now instead of neon for https, and apparently git-svn doesn't play nicely with serf.
<http://bugs.debian.org/534763> suggests it is a bug in the libsvn-perl package; try upgrading that.
10,822
73,629,234
Simplest way to explain will be I have this code, ``` Str = 'Floor_Live_Patterened_SpanPairs_1: [[-3, 0, 0, 5.5], [-3, 5.5, 0, 9.5]]Floor_Live_Patterened_SpanPairs_2: [[-3, 0, 0, 5.5], [-3, 9.5, 0, 13.5]]Floor_Live_Patterened_SpanPairs_3: [[-3, 5.5, 0, 9.5], [-3, 9.5, 0, 13.5]]' from re import findall findall ('[^\]\]]+\]\]?', Str) ``` What I get is, ``` ['Floor_Live_Patterened_SpanPairs_1: [[-3, 0, 0, 5.5]', ', [-3, 5.5, 0, 9.5]]', 'Floor_Live_Patterened_SpanPairs_2: [[-3, 0, 0, 5.5]', ', [-3, 9.5, 0, 13.5]]', 'Floor_Live_Patterened_SpanPairs_3: [[-3, 5.5, 0, 9.5]', ', [-3, 9.5, 0, 13.5]]'] ``` I assume it's taking only single ']' instead of ']]' when splitting, I want result as below, ``` ['Floor_Live_Patterened_SpanPairs_1: [[-3, 0, 0, 5.5], [-3, 5.5, 0, 9.5]]', 'Floor_Live_Patterened_SpanPairs_2: [[-3, 0, 0, 5.5], [-3, 9.5, 0, 13.5]]', 'Floor_Live_Patterened_SpanPairs_3: [[-3, 5.5, 0, 9.5], [-3, 9.5, 0, 13.5]]'] ``` I have gone through the documentation but couldn't work out how to achieve this or what modification should be done in above using regex findall function, a similar technique was adopted in one of answers in [In Python, how do I split a string and keep the separators?](https://stackoverflow.com/questions/2136556/in-python-how-do-i-split-a-string-and-keep-the-separators)
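For what it's worth, a non-greedy match up to the first `]]` reproduces the desired grouping on this sample — a sketch that relies on `]]` occurring only at the end of each record, not a general parser:

```python
import re

Str = ('Floor_Live_Patterened_SpanPairs_1: [[-3, 0, 0, 5.5], [-3, 5.5, 0, 9.5]]'
       'Floor_Live_Patterened_SpanPairs_2: [[-3, 0, 0, 5.5], [-3, 9.5, 0, 13.5]]'
       'Floor_Live_Patterened_SpanPairs_3: [[-3, 5.5, 0, 9.5], [-3, 9.5, 0, 13.5]]')

# '.+?' is non-greedy, so each match stops at the first ']]' it reaches,
# which keeps the inner single ']' delimiters inside one match.
for chunk in re.findall(r'.+?\]\]', Str):
    print(chunk)
```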
2022/09/07
[ "https://Stackoverflow.com/questions/73629234", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16966180/" ]
You can simplify and start to make the code reusable by putting it into a function. Also, it's a good idea to make it pythonic and remove all the `;` that will only serve to confuse the reader as to what language they're looking at. ``` def login(name, age): password = input('Enter your desired password: ') print('Registration successful') tries = 5 for i in range(tries): print(' Login ') login = input('Enter password: ') if login == password: return True else: print(' Wrong password') print(f'{tries-1-i} trial(s) left ') return False name = 'Bob' age = 12 if login(name, age): print(' Welcome ', 'Account Information: ', f'Name: {name}', f'Age: {age}', sep='\n') else: print('Account locked') ``` Example Output: ``` # Success 1st try: Enter your desired password: password Registration successful Login Enter password: password Welcome Account Information: Name: Bob Age: 12 # Success 3rd try: Enter your desired password: password Registration successful Login Enter password: pass Wrong password 4 trial(s) left Login Enter password: pass Wrong password 3 trial(s) left Login Enter password: password Welcome Account Information: Name: Bob Age: 12 # Fail: Enter your desired password: password Registration successful Login Enter password: pass Wrong password 4 trial(s) left Login Enter password: pass Wrong password 3 trial(s) left Login Enter password: pass Wrong password 2 trial(s) left Login Enter password: word Wrong password 1 trial(s) left Login Enter password: pasword Wrong password 0 trial(s) left Account locked ```
I got this working. ```py def login(): counter = 1; x = 5; for i in range(5): print(' Login '); login = input('Enter password: '); if login == password: counter -= 1 print(' Welcome '); print('Account Information: '); print('Name: ',name); print('Age: ',age) break; else: x=x-1; if x==0: print('Account locked'); start(); else: print(' Wrong password'); print(x,' trial(s) left '); def start(): for i in range(1): global password password = input('Enter your desired password: '); print('Registration successful'); login(); start(); ``` [![enter image description here](https://i.stack.imgur.com/Mwyy4.png)](https://i.stack.imgur.com/Mwyy4.png)
10,825
53,810,242
I have two functions in Python:

```
class JENKINS_JOB_INFO():
    def __init__(self):
        parser = argparse.ArgumentParser(description='xxxx. e.g., script.py -j jenkins_url -u user -a api')
        parser.add_argument('-j', '--address', dest='address', default="", required=True, action="store")
        parser.add_argument('-u', '--user', dest='user', default="xxxx", required=True, action="store")
        parser.add_argument('-t', '--api_token', dest='api_token', required=True, action="store")
        parsers = parser.parse_args()
        self.base_url = parsers.address.strip()
        self.user = parsers.user.strip()
        self.api_token = parsers.api_token.strip()

    def main(self):
        logger.info("call the function")
        self.get_jobs_state()

    def get_jobs_state(self):
        get_jenkins_json_data(params)

    def get_jenkins_json_data(self, params, base_url):
        url = urljoin(self.base_url, str(params.params))
        r = requests.get(url, auth=HTTPBasicAuth(self.user, self.api_token), verify=False)
```

I have a parameter `params` defined in my function `get_jobs_state`, and I want to pass this param to my other function `get_jenkins_json_data` so that the complete `url` inside `get_jenkins_json_data` becomes `https:<jenkins>/api/json?pretty=true&tree=jobs[name,color]`. But when I run my code the `url` is not correct, and the value of `params` inside the function is `<__main__.CLASS_NAME instance at 0x7f9adf4c0ab8>`. Here `base_url` is a parameter that I am passing to my script. How can I get rid of this error?
2018/12/17
[ "https://Stackoverflow.com/questions/53810242", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5871199/" ]
Just write `params.params` instead of `params`. The way you do it is extremely confusing, because in `get_jenkins_json_data`, `self` will be the `params` and `params` will be the `base_url`. I would advise you not to do that in the future. If you want to send some parameters to a function, send the minimal amount of information that the function needs. Here, for example, you could have sent `self.params` instead of the whole `self`. That way you wouldn't encounter this error and the code would be much more readable. I suggest you rewrite the function accordingly.
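The "send the minimal information" advice can be sketched like this — the class name and base URL are invented for illustration; only the query string travels between the two methods:

```python
from urllib.parse import urljoin

class JenkinsInfo:
    def __init__(self, base_url):
        self.base_url = base_url

    def get_jobs_state(self):
        # Pass just the query string, not self or a wrapper object.
        params = 'api/json?pretty=true&tree=jobs[name,color]'
        return self.build_url(params)

    def build_url(self, params):
        return urljoin(self.base_url, params)

print(JenkinsInfo('https://jenkins.example.com/').get_jobs_state())
```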
So your solution is a bit confusing. You shouldn't pass `self` to the `get_jenkins_json_data` method; Python does that for you automatically. You should check out the data model for how [instance methods](https://docs.python.org/3/reference/datamodel.html#the-standard-type-hierarchy) work. I would refactor your code like this:

```
def get_jobs_state(self):
    params = "api/json?pretty=true&tree=jobs[name,color]"
    self.get_jenkins_json_data(params)

def get_jenkins_json_data(self, params):
    url = urljoin(self.base_url, params)
    r = requests.get(url, auth=HTTPBasicAuth(self.user, self.api_token), verify=False)
    ...
```
10,826
66,038,159
To improve performance on project of mine, i've coded a function using tf.function to replace a function witch does not use tf. The result is that plain python code runs much (100x faster) than the tf.funtion when GPU is enabled. When running on CPU, TF is still slower, but only 10x slower. Am i missing something? ``` @tf.function def test1(cond): xp = tf.constant(0) yp = tf.constant(0) stride = tf.constant(10) patches = tf.TensorArray( tf.int32, size=tf.cast((cond / stride + 1) * (cond / stride + 1), dtype=tf.int32), dynamic_size=False, clear_after_read=False) i = tf.constant(0) while tf.less_equal(yp, cond): while tf.less_equal(xp, cond): xp = tf.add(xp, stride) patches = patches.write(i, xp) i += 1 xp = tf.constant(0) yp = tf.add(yp, stride) return patches.stack() def test2(cond): xp = 0 yp = 0 stride = 10 i = 0 patches = [] while yp <= cond: while xp <= cond: xp += stride patches.append(xp) xp = 0 yp += stride return patches ``` This is specially noticeable when cond is big (like 5000 or greater) **UPDATE:** I have found [this](https://github.com/tensorflow/tensorflow/issues/46518) and [this](https://github.com/tensorflow/tensorflow/issues/19733). As i was expecting, it seems performance of TensorArray is poor, and, the solution, in my case, was to replace TensorArray and loops with other Tensor calculations (in this case, i used tf.image.extract\_patches and others). In this way, it achieved performance 3x faster than plain python code.
2021/02/04
[ "https://Stackoverflow.com/questions/66038159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15140159/" ]
When you use some tf functions with GPU enabled, it does a callback and transfers data to the GPU. In some cases, this overhead is not worth it. When running on the CPU, this overhead decreases, but it's still slower than pure python code. Tensorflow is faster when you do heavy calculations, and that's what Tensorflow was made for. Even numpy can be slower than pure python code for light calculations.
The slow part (while loop) is still in python and simple functions like this are pretty fast. The linear overhead of switching from python to tf each time is certainly bigger than anything you could ever gain on such a small function. For more complex operations, this might be very different. In this case tf is simply overkill.
10,828
13,216,520
Walking through matplotlib's animation example on my Mac OSX machine - <http://matplotlib.org/examples/animation/simple_anim.html> - I am getting this error:- ``` File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/animation.py", line 248, in _blit_clear a.figure.canvas.restore_region(bg_cache[a]) AttributeError: 'FigureCanvasMac' object has no attribute 'restore_region' ``` Does anyone who has encountered this before know how to resolve this issue? Looks like it's a known (and unresolved at this time of writing) issue - <https://github.com/matplotlib/matplotlib/issues/531>
2012/11/04
[ "https://Stackoverflow.com/questions/13216520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/482506/" ]
You can avoid the problem by switching to a different backend. Note that `matplotlib.use` must be called before `pyplot` is imported:

```
import matplotlib
matplotlib.use('TkAgg')
```
Looks like it's a known (and unresolved at this time of writing) issue - <https://github.com/matplotlib/matplotlib/issues/531>
10,829
34,347,401
I have a Pelican blog where I write the posts in Markdown. I want each article to link to the previous and next article in the sequence, and to one random article. All the articles are generated with a python script, resulting in a folder of markdown files called /content/. Here the files are like: * article-slug1.md * another-article-slug.md * more-articles-slug.md * [...] Is there a token I can add to the markdown to randomly interlink/link to next/previous? If not, how can I set this up in python? Thanks in advance
2015/12/18
[ "https://Stackoverflow.com/questions/34347401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4108771/" ]
There is the [Pelican Neighbours](https://github.com/getpelican/pelican-plugins/tree/master/neighbors) plug-in that might do what you want. You'll have to active the plug-in and update your template to get it to work. * Adding plug-ins: [Pelican-Plugins' Readme](https://github.com/getpelican/pelican-plugins/blob/master/Readme.rst) * Updating your template: [Plug-in's Readme](https://github.com/getpelican/pelican-plugins/blob/master/neighbors/Readme.rst)
I am not sure about the random article, but for next and previous, there is a Pelican plugin called [neighbor articles](https://github.com/getpelican/pelican-plugins/tree/master/neighbors).
10,835
67,455,130
I am new to Python and trying to create a small ATM-like project using classes. I want the user to be able to input the amount they want to withdraw, and if the amount exceeds the current balance, it should print that you cannot have a negative balance and ask for the input again.

```
class account:
    def __init__(self,owner,balance):
        self.owner = owner
        self.balance = balance
        # self.withdraw_amount = withdraw_amount
        # self.deposit_amount = deposit_amount
        print("{} has {} in his account".format(owner,balance))

    def withdraw(self,withdraw_amount):
        withdraw_amount = int(input("Enter amount to withdraw: "))
        self.withdraw_amount = withdraw_amount
        if self.withdraw_amount > self.balance:
            print('Balance cannot be negative')
        else:
            self.final_amount = self.balance - self.withdraw_amount
            print("Final Amount after Withdraw " + str(self.final_amount))

    def deposit(self,deposit_amount):
        deposit_amount = int(input("Enter amount to Deposit: "))
        self.deposit_amount = deposit_amount
        self.final_balance = self.balance + self.deposit_amount
        print("Final Amount after Deposit " + str(self.final_balance))

my_account = account('Samuel',500)
my_account.withdraw(withdraw_amount)
```

But I am getting the following error: `NameError: name 'withdraw_amount' is not defined`

Can someone please point out what I am doing wrong and how I can fix it? Thank you.
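One way to avoid the `NameError` is to pass the amount in as an argument rather than reading it inside the method — a minimal sketch of that restructuring, not the only possible fix:

```python
class Account:
    def __init__(self, owner, balance):
        self.owner = owner
        self.balance = balance

    def withdraw(self, amount):
        # Refuse the withdrawal instead of going negative.
        if amount > self.balance:
            print('Balance cannot be negative')
            return self.balance
        self.balance -= amount
        return self.balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance

acct = Account('Samuel', 500)
print(acct.withdraw(200))  # 300
print(acct.withdraw(600))  # refused, balance stays 300
```

The call site then does the `int(input(...))` once and hands the result to the method, so no outer `withdraw_amount` name is ever needed.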
2021/05/09
[ "https://Stackoverflow.com/questions/67455130", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11606914/" ]
Via `np.select`:

```
condlist = [
    (df.Email_x.isna()) & (~df.Email_y.isna()),  # 1st column NaN but 2nd is not
    (df.Email_y.isna()) & (~df.Email_x.isna()),  # 2nd column NaN but 1st is not
    (~df.Email_x.isna()) & (~df.Email_y.isna())  # both are not NaN
]
choicelist = [
    df.Email_y,
    df.Email_x,
    df.Email_x
]
df['Email'] = np.select(condlist, choicelist, default='')  # default value ''
```
You can use [`.mask()`](https://pandas.pydata.org/docs/reference/api/pandas.Series.mask.html), as follows: ``` df['email'] = df['Email_x'].mask(df['Email_x'].isna(), df['Email_y']) ``` It will retain the value of `df['Email_x']` if the condition is false (i.e. not `NaN`) and replace with value of `df['Email_y']` if the condition is true (i.e. `df['Email_x']` is `NaN`).
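The rule both answers implement — keep `Email_x` unless it is missing, otherwise fall back to `Email_y` — can be stated in plain Python for clarity (an illustration of the logic, not a pandas replacement):

```python
def coalesce(x, y, default=''):
    # Per position: first value if present, else second, else the default.
    return [a if a is not None else (b if b is not None else default)
            for a, b in zip(x, y)]

email_x = ['a@x.com', None, None]
email_y = [None, 'b@y.com', None]
print(coalesce(email_x, email_y))  # ['a@x.com', 'b@y.com', '']
```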
10,838
24,468,944
I am trying to write a python program that asks the user to enter an existing text file's name and then display the first 5 lines of the text file or the complete file if it is 5 lines or less. This is what I have programmed so far: ``` def main(): # Ask user for the file name they wish to view filename = input('Enter the file name that you wish to view: ') # opens the file name the user specifies for reading open_file = open(filename, 'r') # reads the file contents - first 5 lines for count in range (1,6): line = open_file.readline() # prints the contents of line print() main() ``` I am using a file that has 8 lines in it called names.txt. The contents of this text file is the following: ``` Steve Smith Kevin Applesauce Mike Hunter David Jones Cliff Martinez Juan Garcia Amy Doe John Doe ``` When I run the python program, I get no output. Where am I going wrong?
2014/06/28
[ "https://Stackoverflow.com/questions/24468944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3786320/" ]
Just `print()`, by itself, will only print a newline, nothing else. You need to pass the `line` variable to `print()`: ``` print(line) ``` The `line` string will have a newline at the end, you probably want to ask `print` not to add another: ``` print(line, end='') ``` or you can remove the newline: ``` print(line.rstrip('\n')) ```
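A quick runnable check of the trailing-newline point, with `io.StringIO` standing in for the opened file:

```python
import io

# readline() keeps the trailing newline, which is why printing the raw line
# would produce an extra blank line between names.
fake_file = io.StringIO('Steve Smith\nKevin Applesauce\n')
line = fake_file.readline()
print(repr(line))         # 'Steve Smith\n'
print(line.rstrip('\n'))  # Steve Smith
```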
In order to print the first 5 (or fewer) lines, you can try the following code:

```
filename = input('Enter the file name that you wish to view: ')

from itertools import islice
with open(filename) as myfile:
    head = list(islice(myfile, 5))
print(head)
```

Hope the above code satisfies your query. Thank you.
10,840
12,231,733
I have a python script that I want to only allow to be running once on a machine. I want it to print something like "Error, already running" if it is already running, whether its running in the background or in a different ssh session. How would I do this? Here is my script. ``` import urllib, urllib2, sys num = sys.argv[1] print 'Calling' phones = [ 'http://phone1/index.htm', 'http://phone2/index.htm', 'https://phone3/index.htm', 'https://phone4/index.htm', 'https://phone5/index.htm' ] data = urllib.urlencode({"NUMBER":num, "DIAL":"Dial", "active_line":1}) while 1: for phone in phones: try: urllib2.urlopen(phone,data) # make call urllib2.urlopen(phone+"?dialeddel=0") # clear logs except: pass ``` P.S I am using CentOS 5 if that matters.
2012/09/01
[ "https://Stackoverflow.com/questions/12231733", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1601509/" ]
1. You can implement a lock on the file. 2. Create a temp file at the start of the execution and check if that file is present before running the script. Refer to this post for answer- [Check to see if python script is running](https://stackoverflow.com/questions/788411/check-to-see-if-python-script-is-running)
You could lock an existing file using e.g. [flock](http://docs.python.org/library/fcntl.html#fcntl.flock) at start of your script. Then, if the same script is run twice, the latest started would block. See also [this question](https://stackoverflow.com/questions/3918385/flock-question).
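A minimal sketch of the `flock` approach (Unix-only; the lock-file path and function name are arbitrary choices for illustration):

```python
import fcntl

def acquire_single_instance_lock(path='/tmp/callscript.lock'):
    # Hold an exclusive, non-blocking lock on path; raises OSError
    # (EWOULDBLOCK) if another process already holds it.
    handle = open(path, 'w')
    fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
    return handle  # keep the returned handle alive for the life of the script

# At the top of the script:
# lock = acquire_single_instance_lock()
# ...a second copy started elsewhere fails here, so it can print
# "Error, already running" and exit.
```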
10,843
60,682,568
I've been trying to write code which separates the age (digits) from the name (alphabets) and then compares the name to a pre-defined list; if it doesn't match the list it prints an error. But instead of getting (for example) `alex:error`, I'm getting `a:error l:error e:error x:error` — that is, it's splitting the words into their individual letters.

Here's the code:

```
from string import digits
print("Enter name and age(seperated by a comma):")
names=input("Data:")
names1=names.strip().replace(" ","").split(',')
removed_digits=str.maketrans('','',digits)
names2=names.translate(removed_digits)
lst1=['john','cena','rey']
print(names)
print(names1)
print(names2)
for name in names2:
    if name not in lst1:
        print(f"{name}:Not Matching to our database.")
```

The output is:

```
Enter name and age(seperated by a comma):
Data:alex 12, john 13
alex 12, john 13
['alex12', 'john13']
alex , john
a:Not Matching to our database.
l:Not Matching to our database.
e:Not Matching to our database.
x:Not Matching to our database.
 :Not Matching to our database.
,:Not Matching to our database.
 :Not Matching to our database.
j:Not Matching to our database.
o:Not Matching to our database.
h:Not Matching to our database.
n:Not Matching to our database.
 :Not Matching to our database.
```

Thanks for helping me out! I'd also love it if someone could explain why my code wasn't working; I referred to pythontutor but still wasn't able to figure out the bug myself!
2020/03/14
[ "https://Stackoverflow.com/questions/60682568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12387627/" ]
You need to change your `names2` variable, as it's a string. You need to convert it to a list, appending each name after `str.translate()`. Here is the modified code:

```
names2 = [name.translate(removed_digits) for name in names1]
```

I hope this solves your problem.
You can try something like this:

```
names = input("Data:")
names1 = names.strip().split(',')  # split the full input into names
for name in names1:
    names2 = name.strip().split(' ')  # split each entry into name and digits
    if names2[0] not in lst1:
        print(f"{names2[0]}:Not Matching to our database.")
```
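Putting the two answers together on the question's sample input — digits are stripped per entry, not per character:

```python
from string import digits

lst1 = ['john', 'cena', 'rey']
names = 'alex 12, john 13'  # the question's sample input, hard-coded here
removed_digits = str.maketrans('', '', digits)
names2 = [part.replace(' ', '').translate(removed_digits)
          for part in names.split(',')]
print(names2)  # ['alex', 'john']
for name in names2:
    if name not in lst1:
        print(f"{name}:Not Matching to our database.")
```

The original bug was iterating over the translated *string* (`for name in names2` yields characters); iterating over a list of cleaned names fixes it.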
10,846
62,789,471
I'm running Python in Docker and ran across the `ModuleNotFoundError: No module named 'flask'` error message. Any thoughts on what I am missing in the Dockerfile or requirements?

```sh
FROM python:3.7.2-alpine
RUN pip install --upgrade pip
RUN apk update && \
    apk add --virtual build-deps gcc python-dev
RUN adduser -D myuser
USER myuser
WORKDIR /home/myuser
COPY --chown=myuser:myuser ./requirements.txt /home/myuser/requirements.txt
RUN pip install --no-cache-dir --user -r requirements.txt
ENV PATH="/home/myuser/.local/bin:${PATH}"
COPY --chown=myuser:myuser . .
ENV FLASK_APP=/home/myuser/app.py
CMD ["python", "app.py"]
```

In `app.py` I use this line:

```sh
from flask import Flask, jsonify
```

with requirements looking like this:

```sh
Flask==0.12.5
```
2020/07/08
[ "https://Stackoverflow.com/questions/62789471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13115677/" ]
One way using `itertools.starmap`, `islice` and `operator.sub`: ``` from operator import sub from itertools import starmap, islice l = list(range(1, 10000000)) [l[0], *starmap(sub, zip(islice(l, 1, None), l))] ``` Output: ``` [1, 1, 1, ..., 1] ``` --- Benchmark: ``` l = list(range(1, 100000000)) # OP's method %timeit [l[i] - l[i - 1] for i in range(len(l) - 1, 0, -1)] # 14.2 s ± 373 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # numpy approach by @ynotzort %timeit np.diff(l) # 8.52 s ± 301 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # zip approach by @Nick %timeit [nxt - cur for cur, nxt in zip(l, l[1:])] # 7.96 s ± 243 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # itertool and operator approach by @Chris %timeit [l[0], *starmap(sub, zip(islice(l, 1, None), l))] # 6.4 s ± 255 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ```
You could use [numpy.diff](https://numpy.org/doc/stable/reference/generated/numpy.diff.html), For example: ```py import numpy as np a = [1, 2, 3, 4, 5] npa = np.array(a) a_diff = np.diff(npa) ```
10,847
69,788,691
I am new here. I am a begginer with python so I am trying to write a code that allows me to remove the link break of a list in python. I have the following list (which is more extense), but I will share a part of it. ``` info = ['COLOMBIA Y LA \nNUEVA REVOLUCIÓN \nINDUSTRIAL\nPropuestas del Foco \nde Tecnologías Convergentes \ne Industrias 4.0\nVolumen 9\nCOLOMBIA - 2019\nArtista: Federico Uribe\n\n-----\n\n-----\nPropuestas del Foco \nde Tecnologías Convergentes\ne Industrias 4.0\nTomo 9\nCOLOMBIA\nY LA NUEVA \nREVOLUCIÓN \nINDUSTRIAL\n\n-----\n© Vicepresidencia de la República de Colombia\n© Ministerio de Ciencia, Tecnología e Innovación\n© Elías D. Niño-Ruiz, Jean Paul Allain, José Alejandro Montoya, Juan Luis Mejía Arango, Markus Eisenhauer, \nMaría del Pilar Noriega E., Mauricio Arroyave Franco, Mónica Álvarez-Láinez, Nora Cadavid Giraldo, \nOlga L. Quintero-Montoya, Orlando Ayala, Raimundo Abello, Tim Osswald \nPrimera edición, 2020\nISBN Impreso: 978-958-5135-10-9\nISBN digital: 978-958-5135-11-6\nDOI: https://doi.org/10.17230/9789585135116vdyc\nColección: Misión Internacional de Sabios 2019\nTítulo del volumen 9: Colombia y la nueva revolución industrial\nPreparación editorial\nUniversidad EAFIT \nUniversidad del Norte\nCarrera 49 No. 7 sur - 50 \nDirección de Investigación, Desarrollo e Innovación\nTel.: 261 95 23, Medellín \nKm. 5 Vía Puerto Colombia, Área Metropolitana de Barranquilla\ne-mail: publicaciones@eafit.edu.co \nTel.: 3509420\n \ne-mail: dip@uninorte.edu.co\nCorrección de textos y coordinación editorial: Cristian Suárez-Giraldo y Óscar Caicedo Alarcón\nDiseño de la colección y cubierta: leonardofernandezsuarez.com\nDiagramación: Ana Milena Gómez Correa \nMedellín, Colombia, 2020\nProhibida la reproducción total o parcial por cualquier medio sin la autorización escrita del titular de los \nderechos patrimoniales.\n__________________________________\nColombia y la nueva revolución industrial / Elías D. Niño-Ruiz…[et al]. – Medellín : \n Colombia. 
Ministerio de Ciencia Tecnología e Innovación, 2020\n 175 p. -- (Misión Internacional de Sabios 2019).\n \n ISBN: 978-958-5135-10-9 ; 978-958-5135-11-6\n1. Educación – Colombia. 2. Educación y desarrollo – Colombia. 3. Desarrollo científico y tecnológico – Colombia. I. Niño-Ruiz, Elias D. \nII. Noriega E., María del Pilar, pról. III. Mejía Arango, Juan Luis, pról. IV. Abello Llanos, Raimundo, pról. V. Tít. VI. Serie\n370.9861 cd 23 ed.\nC718\n Universidad Eafit- Centro Cultural Biblioteca Luis Echavarría Villegas\n___________________________________\n\n-----\nTomo 9\nCOLOMBIA \nY LA NUEVA \nREVOLUCIÓN \nINDUSTRIAL\n\n-----\nBiotecnología, Bioeconomía \ny Medio Ambiente\nSilvia Restrepo, coordinadora \nCristian Samper \nFederica di Palma (Reino Unido) \nElizabeth Hodson \nMabel Torres\nEsteban Manrique Reol (España) \nMichel Eddi (Francia) \nLudger Wessjohann (Alemania) \nGermán Poveda\nCiencias Básicas y del Espacio\nMoisés Wasserman Lerner, coordinador \nCarmenza Duque Beltrán \nSerge Haroche (Francia, premio Nobel) \nAna María Rey Ayala \nAntonio Julio Copete Villa\nCiencias Sociales y Desarrollo \nHumano con Equidad\nClemente Forero Pineda, coordinador \nAna María Arjona \nSara Victoria Alvarado Salgado \nWilliam Maloney (Estados Unidos) \nStanislas Dehaene (Francia) \nJohan Schot (Holanda) \nKyoo Sung Noh (Corea del Sur)\nCiencias de la Vida y la Salud\nJuan Manuel Anaya, coordinador \nNubia Muñoz \nIsabelle Magnin (Francia) \nRodolfo Llinás \nJorge Reynolds \nAlejandro Jadad\nEnergía Sostenible\nJuan Benavides, coordinador \nAngela Wilkinson (Reino Unido)\nEduardo Posada \nJosé Fernando Isaza\nIndustrias Creativas y Culturales \nEdgar Puentes, coordinador \nRamiro Osorio \nCamila Loboguerrero \nLina Paola Rodríguez Fernández \nCarlos Jacanamijoy \nAlfredo Zolezzi (Chile)\nOcéanos y Recursos Hidrobiológicos\nAndrés Franco, coordinador \nWeildler Antonio Guerra \nJorge Reynolds \nJuan Armando Sánchez \nSabrina Speich (Francia)\nTecnologías Convergentes Nano, 
\nInfo y Cogno Industrias 4.0\nMaría del Pilar Noriega, coordinadora \nJean Paul Allain \nTim Andreas Osswald \nOrlando Ayala\nCoordinador de coordinadores\nClemente Forero Pineda\nCOMISIONADOS\n\n-----\nBiotecnología, \nBioeconomía y \nMedio Ambiente\nSecretaría Técnica – \nUniversidad de los \nAndes, Vicerrectoría de \nInvestigación \nSilvia Restrepo \nMaría Fernanda Mideros\nClaudia Carolina Caballero \nLaguna\nGuy Henry \nRelator \nMartín Ramírez \nCiencias Básicas \ny del Espacio\nSecretaría Técnica – \nUniversidad Nacional de \nColombia\nJairo Alexis Rodríguez López\nHernando Guillermo \nGaitán Duarte\nLiliana Pulido Báez\nRelator\nDiego Alejandro Torres \nGalindo\nCiencias Sociales y \nDesarrollo Humano \ncon Equidad\nSecretaría Técnica – \nUniversidad del Rosario, \nEscuela de Ciencias \nHumanas \nStéphanie Lavaux \nCarlos Gustavo Patarroyo \nMaría Martínez \nRelatores\nJuliana Valdés Pereira \nEdgar Sánchez Cuevas \nPaula Juliana Guevara \nPosada\nCiencias de la \nVida y la Salud', 'El gerente de relaciones públicas de Riot Games para la región, Juan José Moreno, retrató el fenómeno de League of Legends en la industria gamer.</li>\n\n<li>Creadoras de contenido relevantes expusieron varios consejos a las mujeres que están interesadas en emprender este camino.</li>\n\n</ul></div><div class=" cid-616 aid-184215">\n\t\t\t<div class="figure cid-616 aid-184215 binary-foto_marquesina format-png"><a href="articles-184215_foto_marquesina.png" title="Ir a "><img src="articles-184215_foto_marquesina.png" alt="" width="960" height="400"></a><a href="articles-184215_foto_marquesina.png" title="Ir a "></a></div>\n<p>Con éxito finalizó el tercer y último día de Colombia 4.0, el evento que reunió a la industria creativa y TI, tanto de manera virtual como presencial durante tres días en Puerta de Oro, en Barranquilla.</p>\n<p>Durante la tercera jornada, los asistentes pudieron presenciar a diversos conferencistas expertos en temáticas como la transformación digital en los 
jóvenes, el futuro de la industria de los videojuegos, las nuevas maneras de aproximarse a las audiencias y la creación de contenido digital. Así mismo, tuvo lugar un nuevo Foro Misión TIC 2022.</p>\n<p>A continuación, los momentos más destacados del tercer día de Colombia 4.0:</p>\n<h3>El fenómeno League of Legends</h3>\n<p>Juan José Moreno, gerente de relaciones públicas de Riot Games para América Latina, aseguró, durante su participación en Colombia 4.0, que busca ser más que una empresa de videojuegos y crear una experiencia completa a sus jugadores. Así, detalló que en 2016 el juego contaba con más de 100 millones de jugadores activos mensuales y en 2019 alcanzaron 8 millones de jugadores que se conectaban diariamente a jugar partidas simultáneas en todo el mundo.</p>\n<p>Por eso, en 2019 lanzaron una colección de ropa en alianza con Louis Vuitton, además crearon KDA, un grupo virtual de pop de LoL que se compone de los personajes más populares del juego. Esto se dio como respuesta a la necesidad de los streamers de tener canciones sin copyright para sus transmisiones, por lo que lanzaron un disco con 37 canciones originales para que las personas puedan usarlas sin problema.</p>\n<h3>El análisis FODA de las industrias 4.0</h3>\n<p>Ximena Duque, presidenta de Fedesoft, e Iván Darío Castaño, director ejecutivo de Ruta N, desarrollaron un análisis de fortalezas, oportunidades, debilidades y amenazas de las industrias 4.0 en el país. Entre las oportunidades que tienen estas industrias ambos coincidieron en que el mercado entendió que es el momento de transformar sus negocios hacia lo digital. 
Castaño instó a las industrias 4.0 a que aprovechen este momento, puesto que, dijo, hoy en día se pueden hacer muchos negocios sin que la territorialidad sea un límite.</p>\n<p>Hablando de los aspectos por mejorar, Ximena Duque se refirió a que aún se debe seguir trabajando por el cierre de brechas sociales, al tiempo que Castaño afirmó que el bilingüismo es algo en lo que el país debe avanzar. Eso sí, recordó que cuando se trata de bilingüismo hoy en día es necesario pensar en idiomas diferentes al inglés.</p>\n<h3>Consejos para las creadoras de contenido digital</h3>\n<p>En el último día de Colombia 4.0, algunas de las creadoras de contenidos más influyentes del país dieron sus mejores consejos para aquellas personas que están empezando en las redes sociales y quieren triunfar en el mundo digital.</p>\n<p>La influenciadora Valentina Lizcano comentó que esos influenciadores que se dedican a mostrar vidas perfectas en las redes sociales o hacen contenido aspiracional, no aportan mucho, por eso, la honestidad y ser genuinos es lo que va a ayudar a crear comunidades realmente fieles y sólidas. "Las mujeres debemos dejar de hablarnos con mentiras. Como somos nos vemos lindas. 
Nuestra realidad tiene que estar alineada con tu contenido", afirmó la actriz.</p>\n<h3>Pódcast como nueva manera de formar comunidad</h3>\n<p>Una de las conversaciones al respecto giró en torno al pódcast, es decir aquellos contenidos de audio que se distribuyen a través de plataformas, y a cómo esta herramienta puede convertirse, incluso, en una fuente de ingresos o de fidelización de comunidades para las empresas.</p>\n<p>La conversación fue entre Mauricio Cabrera, creador de Story Baker, y Alejandro Vargas, gerente general de Podway, quienes coincidieron en que el podcasting ha logrado democratizar el audio para que haya otras ideas por fuera de lo tradicional ya que ahora cualquier ciudadano tiene las herramientas necesarias para ser un creador.</p>\n<h3>Nuevo Foro Misión TIC 2022</h3>\n<p>El tercer foro de Misión TIC contó con la presencia de destacados invitados del sector de la tecnología que despertaron el interés de los asistentes por el apasionante mundo de las tecnologías, donde se analizaron las perspectivas y el rol que juegan las mujeres en las industrias TI del país.</p>\n<p>La introducción y bienvenida estuvo a cargo de Dennis Palacios Directora (E) de Economía Digital del Ministerio TIC, quien explicó los beneficios de Misión TIC e hizo una especial invitación a todos los colombianos, <em>"Hago una cordial invitación a todos los niños jóvenes y adultos del país a que se inscriban en la última convocatoria de Misión TIC \'</em>La última tripulación\' que tiene 50 mil cupos gratis para aprender programación", expresó Palacios durante la apertura.', 'En Norte de Santander hay aproximadamente 650 empresas de base tecnológica registradas en la Cámara de Comercio de Cúcuta, estás desarrollan software, contenidos digitales, aplicaciones, marketing y comercialización virtual. 
El departamento se ha convertido en una plataforma de despegue para estas ideas de negocio y las cifras muestran que cada vez ganan más terreno.</p>\n\n<p>\n\t<strong>Según Procolombia, en una nota publicada en su página web, las exportaciones de las industrias 4.0 llegaron a 407,5 millones de dólares en 2018, un incremento del 33 % en comparación con 2017.</strong></p>\n\n<p>\n\tLa entidad encargada de la promoción de los productos y la industria nacional reportó que 337 empresas colombianas hicieron negocios en más de 60 destinos. Además, el principal comprador fue Estados Unidos con 177,7 millones de dólares.</p>\n\n<p>\n\t“El apetito de los compradores internacionales por los servicios de las Industrias 4.0 de Colombia (BPO, software, salud, audiovisuales y contenidos digitales, comunicación gráfica y editorial) sigue ampliándose con ritmo acelerado”, reseña la entidad.</p>\n\n<p>\n\t<strong>Al desagregar los productos y servicios que hacen parte de la oferta de estas empresas, resalta el liderazgo de las ventas de software, que aportaron 159,7 millones de dólares, seguido por BPO, con 103,9 millones; audiovisuales y contenidos digitales, con 82,8 millones; salud, con 57,4 millones y comunicación gráfica y editorial, con 3,5 millones de dólares.</strong></p>\n\n<p>\n\tJuliana Villegas, vicepresidente de Exportaciones de Procolombia, indicó que “las ventas del país se han ‘desconcentrado’, porque antes el 80 % de las exportaciones nacionales de estos productos provenía de Bogotá. Ahora, hay zonas que han ganado participación como Antioquia, Valle del Cauca, Santander y Norte de Santander”.</p>\n\n<p>\n\tEl año pasado, el 56,6 %, es decir 230,8 millones de dólares, de las exportaciones salieron de Bogotá. Antioquia aportó el 18,8 % y Valle del Cauca, el 12,5 %. 
Sin embargo, al analizar las ciudades se destacan los aumentos de 107 % de Pereira (5,8 millones de dólares), del 48 % de Cúcuta (420.000 dólares) y del 10 % de Barranquilla (11,5 millones de dólares).</p>\n\n<p>\n\tEn Norte de Santander cuatro empresas llevaron al extranjero sus productos de\xa0software, contenidos digitales y BPO, con Estados Unidos y México como destinos.\xa0Las exportaciones de software ascendieron a 420.346 dólares, los contenidos digitales produjeron 36.285 dólares y el BPO fue avaluado en 28.950 dólares.</p>\n\n<p>\n\t<strong>La diversificación de creadores y exportadores de este tipo de contenido benefician al país en el incremento de su oferta.</strong></p>\n\n<p>\n\tEstados Unidos es el mayor comprador de la tecnología nacional con el 43,6 % del total de las exportaciones (177,7 millones de dólares) y los productos llegan a San Francisco, Miami, Los Ángeles, Washington, Nueva York, Houston, Atlanta, Dallas y Chicago.</p>\n\n<p>\n\t<strong>Trabajo articulado regional</strong></p>\n\n<p>\n\tDesde el 2016 existía la iniciativa de la Cámara de Comercio de Cúcuta y la Universidad Francisco de Paula Santander (Ufps) de crear un clúster para las industrias de base tecnológica.</p>\n\n<p>\n\tSin embargo, sólo hasta el año pasado se logró la unión de 23 empresas para aunar esfuerzos en pro de mejorar la productividad y abrirse nuevos mercados.</p>\n\n<p>\n\t<strong>Beatriz Vélez, empresaria del sector, dijo que el sector se enfoca en software para el sector salud y de educación. “Estos son los subsectores a donde más están apuntando los desarrolladores del departamento”, indicó.</strong></p>\n\n<p>\n\tLa rentabilidad y poca inversión inicial que se necesita para crear estas empresas es uno de los beneficios más grandes del sector. 
“No se necesita una gran infraestructura, se puede trabajar desde el garaje de una casa, lo importante es tener buen equipo (computador) e internet”, explicó Vélez.</p>\n\n<p>\n\t<img alt="" height="370" src="/sites/default/files/2019/03/05/imagenes_adicionales/e5.jpg" width="640" /></p>\n\n<p>\n\t<em>23 empresas componen el clúster Nortic del departamento, con el objetivo de impulsar y fortalecer el sector.</em></p>\n\n<p>\n\tSin embargo, la empresaria destacó que el trabajo en equipo es vital para desarrollar el software que venden estas empresas. “En Cúcuta hay mano de obra semi calificada y económica, esto beneficia a las industrias 4.0 de la región”, agregó la empresaria.</p>\n\n<p>\n\t<strong>William Trillos, gerente de Gnosoft, resaltó que las industrias 4.0 hacen parte del cambio de los procesos tradicionales en las empresas colombianas. “En cualquier tipo de actividad económica se pueden aplicar los productos que genera el sector”, manifestó.</strong></p>\n\n<p>\n\tEl gerente de Gnosoft estudió ingeniería de sistemas en la Ufps, luego de trabajar en varias empresas tomó la decisión de crear la suya.\xa0</p>\n\n<p>\n\tDe esta forma, en el 2007 empezó el trabajo para que naciera Gnosoft, empresa que ofrece asesoramiento al sector educativo para la introducción efectiva de las TIC.</p>\n\n<p>\n\t<strong>“Iniciamos fracasando en la salud, luego empezamos a trabajar con la educación y nos dimos cuenta que podíamos cambiar vidas a través de la tecnología, mejorando los canales de comunicación y automatizando los procesos de este sector”, explicó Trillos.</strong></p>\n\n<p>\n\tHoy, a Gnosoft la componen 18 empleados entre ingenieros de sistemas, analistas de diseño gráfico y contadores. 
Gracias a Procolombia Trillos y su equipo recibieron un curso para desarrollar exportación, “desde Cúcuta podemos ofrecer el servicio a cualquier país del mundo en donde allá conexión a internet, eso es una ventaja”, agregó el empresario.</p>\n\n<p>\n\t<img alt="" height="370" src="/sites/default/files/2019/03/05/imagenes_adicionales/e6.jpg" width="640" /></p>\n\n<p>\n\tOtra empresa que está entrando en el sector es Insegroup. Yordan Mantilla, uno de sus líderes, señaló que actualmente se desarrollan en el mercado eléctrico y en sus proyectos de automatización requieren nuevas tecnologías de las comunicaciones.</p>\n\n<p>\n\t“Enfocamos el desarrollo del concepto de ciudades inteligentes, creando soluciones para la cuantificación de variables ambientales. Otro proyecto es (T-Cyborg) que busca integrar el análisis de los sonidos de las ciudades para entender cómo percibe e interpreta su ciudad una persona ciega”, explicó Mantilla.</p>\n\n<p>\n\tInsegroup tiene una unidad de negocio que busca desarrollar los pilotos de sus proyectos en Cúcuta, para poner en práctica su oferta de servicios y ofrecerla a mediano plazo a nivel mundial.</p>\n\n<p>\n\t<strong>“Dentro de la validación del T-Cyborg, Latinoamérica y Centroamérica mostraron tener un potencial enorme para acceder a esa tecnología, la cual es de punta e innovadora”, indicó el empresario.</strong></p>\n\n<p>\n\tCamilo Puello, cofundador de Just Sapiens, convirtió una idea de negocio digital nacida en 2013, en una empresa. Hoy, su iniciativa se ha vendido a nivel regional y nacional.</p>\n\n<p>\n\t“En un par de meses buscamos vender la idea a nivel internacional, en el departamento hemos vendido el servicio a 35 abogados y a entidades públicas. 
En el país hemos llegado a Neiva, Santa Marta y Bogotá”, explicó Puello.</p>\n\n<p>\n\tA través del clúster se están desarrollando estrategias para enfocar a las empresas en segmentos de mercado con baja ocupación, donde la región con su portafolio de servicio puede apoyar en la generación de alto valor agregado con soluciones TIC.</p>\n\n<p>\n\t<strong>Vitrina internacional</strong></p>\n\n<p>\n\tUna delegación de cerca de 30 empresas colombianas, en las que no se incluía empresas nortesantandereanas, generó en el Mobile World Congress 2019, llevado a cabo en Barcelona, ventas por cerca de 2 millones de euros y expectativas de negocio de cerca de 50 millones de euros.</p>\n\n<p>\n\tProcolombia reseñó esta información en su página web, donde se aseguró que Colombia despuntó en Barcelona siendo la delegación Latinoamericana más grande.</p>\n\n<p>\n\tLa presidenta de Procolombia, Flavia Santoro manifestó que “el Mobile World Congress fue una gran oportunidad para mostrarle al mundo el valor agregado de las industrias 4.0 del país”.'] ``` I want to remove every **'\n'** that is on the list because when I remove the special characters ("[\W]+"), some words, for example like "\nNUEVA" or "\nINDUSTRIAL", end up with an 'n' before the word ("nNUEVA" o nINDUSTRIAL"). What I need is to remove the 'break line' form each word of each text on the list, even if it is at the beggining, the end or between words. I have tried: ``` lista_1 = [] lista_2 =[] for it in info: for q in it: if q[0:2] != '\n': lista_1.append(it) else: lista_2.append(it[2:]) ``` and then: ``` new_list = lista_1 + lista_2 new_list ``` but I am getting: "memory error" Does somebody knows how can I overcome this problem? or any other code I could use? Thank you so much!
2021/10/31
[ "https://Stackoverflow.com/questions/69788691", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17293731/" ]
You can use list comprehension: ``` info = [i.replace("\n", "") for i in info] ```
You can use a generator expression and then join the entries of the list with a separator character, `sep`; I chose an empty string. ``` sep = '' # choose the separator character text = sep.join(s.replace('\n', '') for s in info) ```
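If the goal is to keep a list (one cleaned string per original entry) rather than one joined text, the comprehension approach from the answers above can be checked on a small sample (the sample strings here are illustrative, not the original data):

```python
# Small demo of stripping newlines from every string in a list,
# mirroring the list-comprehension answer above.
info = ['COLOMBIA Y LA \nNUEVA REVOLUCION \nINDUSTRIAL', 'linea\nfinal\n']

# One pass over the list; no per-character loops, so no memory blow-up.
cleaned = [s.replace('\n', '') for s in info]

assert cleaned == ['COLOMBIA Y LA NUEVA REVOLUCION INDUSTRIAL', 'lineafinal']
```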
10,849
31,972,419
I have a file whose contents are ``` {'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'} ``` When I read the file using Python, I get the string as ``` "{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'}" ``` I want the double quotes to be removed from the beginning and end of the string. From the Python docs, I came to know that Python displays a string with double quotes when the string itself contains single quotes, to avoid escaping.
2015/08/12
[ "https://Stackoverflow.com/questions/31972419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1642114/" ]
If the files as stored are intended to be JSON then they are invalid. The JSON format doesn't allow the use of single quotes to delimit strings. **Assuming you have no single quotes within the key/value strings** themselves, you can replace the single quotes with double quotes and then read in using the JSON module: ``` import json x = "{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'}" x = x.replace("'", '"') j = json.loads(x) print j ``` yields: ``` {'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'} ``` **Alternatively:** If the data is the string representation of a Python `dict`, you can read it in with `eval`. Using `eval` is dangerous (see [Ned Batchelder](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html)'s thoughts on it). That said, if you wrote the file yourself and you are confident that it contains no malicious code, you can use `eval` to read the string as Python source code: ``` x = "{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'}" eval(x, {'__builtins__': {}}) ``` yields: ``` {'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'} ``` Don't make a habit of this though! The right way to do this is to save the data to a file in a proper serialization format and then to read it from disk using a library like the `json` module.
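If the file really holds the `repr` of a Python dict (as the single quotes suggest), a safer alternative to bare `eval` — not shown above, so treat this as an additional option — is `ast.literal_eval`, which parses Python literal syntax only and rejects arbitrary code:

```python
import ast

x = "{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'}"

# literal_eval accepts strings, numbers, tuples, lists, dicts, booleans
# and None -- nothing executable -- so malicious file contents cannot run.
d = ast.literal_eval(x)

assert d['FileID'] == 'a3333.txt'
assert d['SuccessList'].split(',') == ['a', 'b', 'c', 'd', 'e']
```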
If your string actually contains double quotes (which it might not, as they could just be part of the printed representation), you could get rid of them with a slice, e.g., ``` >>> hello = '"hello more stuff things"' >>> hello '"hello more stuff things"' >>> hello[1:-1] 'hello more stuff things' ``` Note in this case that the outer single quotes are not part of the string, they are just part of the printed representation.
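When the quotes really are in the string, `str.strip` is a forgiving alternative to the slice: it removes the character only if it is present at the ends, so it is a no-op on unquoted input (a small sketch):

```python
# strip('"') removes double quotes from both ends, if present.
assert '"hello more stuff things"'.strip('"') == 'hello more stuff things'

# Unlike hello[1:-1], it does not eat characters when the quotes are absent.
assert 'no quotes here'.strip('"') == 'no quotes here'
```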
10,850
10,436,130
How should this python be expressed ``` gl_enums = ... # map from name to values for opengl enums # use with gl_enums["enum name"] ``` in clojure? This works, but is it right? ``` (def gl-enums ...) ; map from name to values for opengl enums (defn gl-enum [k] (or (gl-enums k) (throw (SomeException.)))) ; use with (gl-enum :enum-name) ``` edit: for clarification, this question is about the exception-throwing part, not the map-defining part
2012/05/03
[ "https://Stackoverflow.com/questions/10436130", "https://Stackoverflow.com", "https://Stackoverflow.com/users/322866/" ]
Your original example is OK. Two approaches you are also likely to come across: ``` ;; not very idiomatic (defn example [m] (if (contains? m :name) (:name m) (throw (IllegalArgumentException. (format "key %s is missing" :name))))) ;; idiomatic (defn example [m] (if-let [v (:name m)] v (throw (IllegalArgumentException. (format "key %s is missing" :name))))) ``` [More on clojure.core/if-let](http://clojuredocs.org/clojure_core/clojure.core/if-let "More on if-let") [More on clojure.core/contains?](http://clojuredocs.org/clojure_core/clojure.core/contains_q "More on clojure.core/contains?")
Just use a regular hashmap: ``` (def gl-enums {:name1 "value1", :name2 "value2", :name3 "value3", ...}) ``` if you don't want to provide keywords (like `:keyword`) but prefer strings, you'll need to use `(get gl-enums str)` in `gl-enum`
10,853
43,684,760
I have very basic Python knowledge. This is my code so far. When I run this code, the error `UnboundLocalError: local variable 'response' referenced before assignment on line 7` displays. I am trying to create a function that compares the response input to two lists, and if that input is found, true or false is assigned to response. I also need to assign response, which should be true or false, to another list of answers. (True or false will be assigned values, and the total of the list of answers will be calculated to match a final list calculation.) ``` response = [str(input("Would you rather eat an apple or an orange? Answer apple or orange."))] list1 =[str("apple"), str("Apple"), str("APPLE")] lsit3 = [str("an orange"), str("Orange"), str("orange"), str("an Orange"), str("ORANGE")] def listCompare(): for list1[0] in list1: for response[0] in response: if response[0] == list1[0]: response = true else: for list3[0] in list3: for response[0] in response: if response[0] == list3[0]: response = false listCompare() ``` \*\*EDIT: OK. Thanks for the roast. I'm in high school and halfway through an EXTREMELY basic class. I'm just trying to make this work to pass. I don't need "help" anymore.
2017/04/28
[ "https://Stackoverflow.com/questions/43684760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7937653/" ]
You are complicating things a lot here, but that is understandable if you are new to programming or python. To put you on the right track, this is a better way to attack the problem: ``` valid_responses = ['a', 'b'] response = input("chose a or b: ").lower().strip() if response in valid_responses: print("Valid response:", response) else: print("Invalid response:", response) ``` Look up any function you don't understand here. String declarations can also just be in single or double quotes: ``` my_strings = ['orange', "apple"] ``` Also to make global variables assignable inside a function you need to use the global keyword. ``` my_global = "hello" def my_fuction(): global my_global # Do stuff with my_global ``` For loops should assign to new local variables: ``` options = ['a', 'b', 'c'] for opt in options: print(opt) ```
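The membership check suggested above can be wrapped in a small function so it is easy to test without `input()` (the names here are illustrative):

```python
def is_valid(response, valid_responses):
    """Normalize the user's text and test membership, as suggested above."""
    return response.lower().strip() in valid_responses

valid = ['apple', 'orange']

assert is_valid('  Apple ', valid)       # case and whitespace are normalized
assert is_valid('ORANGE', valid)
assert not is_valid('pineapple', valid)  # anything else is rejected
```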
Instead of enumerating a list of exact possible answers, you could instead match against patterns of possible answers. Here is one way to do that, case insensitively: ``` import re known_fruits = ['apple', 'orange'] response = str(input("What would you like to eat? (Answer " + ' or '.join(known_fruits) + '): ')) def listCompare(): for fruit in known_fruits: pattern = '^(?:an\s)?\s*' + fruit + '\s*$' if re.search(pattern,response,re.IGNORECASE): return True if listCompare(): print("Enjoy it!") else: print("'%s' is an invalid choice" % response) ``` This would match `apple`, `Apple`, or even `ApPle`. It would also match 'an apple'. However, it would not match `pineapple`. Here is the same code, with the regular expression broken down: ``` import re known_fruits = ['apple', 'orange'] print("What would you like to eat?") response = str(input("(Answer " + ' or '.join(known_fruits) + '): ')) # Change to lowercase and remove leading/trailing whitespace response = response.lower().strip() def listCompare(): for fruit in known_fruits: pattern = ( '^' + # Beginning of string '(' + # Start of group 'an' + # word "an" '\s' + # single space character ')' + # End of group '?' + # Make group optional '\s*' + # Zero or more space characters fruit + # Word inside "fruit" variable '\s*' + # Zero or more space characters '$' # End of the string ) if re.search(pattern,response): return True if listCompare(): print("Enjoy it!") else: print("'%s' is an invalid choice" % response) ```
10,854
15,854,916
I've a problem, as the title says. I've tried to install smart_selects in my Django project, but it does not work. I followed the readme in <https://github.com/digi604/django-smart-selects>, but the error is:

> No module named admin_static
> Request Method: GET
> Request URL: http://\*\*\*.com/panel/schedevendorcomplete/add/1/Basic/
> Django Version: 1.2.7
> Exception Type: ImportError
> Exception Value: No module named admin_static
> Exception Location: /var/www/website/smart_selects/widgets.py, line 4
> Python Executable: /usr/bin/python
> Python Version: 2.7.3
> Python Path: ['/usr/lib/python2.7/dist-packages/PIL', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/var/www/vhosts/\*\*.com', '/var/www/vhosts/\*\*.com/website']
> Server time: sab, 6 Apr 2013 20:50:04 +0200
>
> Traceback (Django debug-page widgets such as the "Local vars" toggles removed):
>
> django/core/handlers/base.py in get_response
> django/core/urlresolvers.py in resolve / _get_url_patterns / _get_urlconf_module
> django/utils/importlib.py in import_module: \_\_import\_\_(name)
> /var/www/vhosts/tuttoricevimenti.com/website/urls.py: admin.autodiscover()
> django/contrib/admin/\_\_init\_\_.py in autodiscover: import_module('%s.admin' % app)
> django/contrib/auth/admin.py: admin.site.register(Group, GroupAdmin)
> django/contrib/admin/sites.py in register: validate(admin_class, model)
> django/contrib/admin/validation.py in validate: models.get_apps()
> django/db/models/loading.py in get_apps / _populate / load_app
> /var/www/vhosts/\*\*\*.com/gazzettadelpopolo/models.py: from website.smart_selects.db_fields import GroupedForeignKey
> /var/www/website/smart_selects/db_fields.py: import form_fields
> /var/www/website/smart_selects/form_fields.py: from smart_selects.widgets import ChainedSelect
> /var/www/website/smart_selects/widgets.py: from django.contrib.admin.templatetags.admin_static import static

I do not know where I'm wrong; I'm not very familiar with Django. What could be causing this? Thanks
2013/04/06
[ "https://Stackoverflow.com/questions/15854916", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2252933/" ]
Use Java's Calendar (since the relevant `Date` constructors are deprecated): [Calendar API](http://docs.oracle.com/javase/1.5.0/docs/api/java/util/Calendar.html) Try something like: ``` Calendar c = Calendar.getInstance(); c.setTimeInMillis(1365228375L * 1000); // note the L suffix: a plain int multiplication would overflow System.out.print(c.toString()); // just to demonstrate -- the API has lots of info about presentation ``` Just put whatever is being returned from your PHP to the Android app in place of the '1365228375' shown above. Cheers.
Well, conversion from seconds to milliseconds shouldn't be too difficult: ``` echo time() * 1000; ``` If you need the timestamp to be **accurate** in milliseconds, look at [`microtime()`](http://www.php.net/manual/en/function.microtime.php); however, this function does *not* return an integer, so you'll have to do some conversions. I don't know how to convert that to a readable time in Android.
10,855
27,740,044
I am installing cffi package for cryptography and Jasmin installation. I did some research before posting question, so I found following option but which is seems not working: System ------ > > Mac OSx 10.9.5 > > > python2.7 > > > Error ----- ``` c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. ``` Please guide me on following issue. Thanks Command ------- > > env DYLD\_LIBRARY\_PATH=/usr/local/opt/openssl/lib/ ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future" LDFLAGS="-L/usr/local/opt/openssl/lib" CFLAGS="-I/usr/local/opt/openssl/include" sudo -E pip install cffi > > > LOG --- ``` bhushanvaiude$ env DYLD_LIBRARY_PATH=/usr/local/opt/openssl/lib/ ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future" LDFLAGS="-L/usr/local/opt/openssl/lib" CFLAGS="-I/usr/local/opt/openssl/include" sudo -E pip install cffi Password: Downloading/unpacking cffi Downloading cffi-0.8.6.tar.gz (196kB): 196kB downloaded Running setup.py egg_info for package cffi warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. Downloading/unpacking pycparser (from cffi) Downloading pycparser-2.10.tar.gz (206kB): 206kB downloaded Running setup.py egg_info for package pycparser Installing collected packages: cffi, pycparser Running setup.py install for cffi warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. 
building '_cffi_backend' extension cc -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -I/usr/local/opt/openssl/include -Qunused-arguments -pipe -Wno-error=unused-command-line-argument-hard-error-in-future -DUSE__THREAD -I@@HOMEBREW_CELLAR@@/libffi/3.0.13/lib/libffi-3.0.13/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c c/_cffi_backend.c -o build/temp.macosx-10.9-intel-2.7/c/_cffi_backend.o warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. error: command 'cc' failed with exit status 1 Complete output from command /Users/****project path***/bin/python -c "import setuptools;__file__='/Users/****project path***/build/cffi/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/7w/8z_mn3g120n34bv0w780gnd00000gn/T/pip-e6d6Ay-record/install-record.txt --single-version-externally-managed --install-headers /Users/****project path***/include/site/python2.7: warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. 
running install running build running build_py creating build creating build/lib.macosx-10.9-intel-2.7 creating build/lib.macosx-10.9-intel-2.7/cffi copying cffi/__init__.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/api.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/backend_ctypes.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/commontypes.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/cparser.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/ffiplatform.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/gc_weakref.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/lock.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/model.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/vengine_cpy.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/vengine_gen.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/verifier.py -> build/lib.macosx-10.9-intel-2.7/cffi running build_ext building '_cffi_backend' extension creating build/temp.macosx-10.9-intel-2.7 creating build/temp.macosx-10.9-intel-2.7/c cc -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -I/usr/local/opt/openssl/include -Qunused-arguments -pipe -Wno-error=unused-command-line-argument-hard-error-in-future -DUSE__THREAD -I@@HOMEBREW_CELLAR@@/libffi/3.0.13/lib/libffi-3.0.13/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c c/_cffi_backend.c -o build/temp.macosx-10.9-intel-2.7/c/_cffi_backend.o warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. error: command 'cc' failed with exit status 1 ---------------------------------------- Cleaning up... ```
2015/01/02
[ "https://Stackoverflow.com/questions/27740044", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1694106/" ]
In your terminal, try running: ``` xcode-select --install ``` After that, try installing the package again. By default, Xcode installs itself as the IDE and does not set up the environment for use by command-line tools; for example, the `/usr/include` folder will be missing. Running the above command installs the tools necessary to run compilation from the command line and creates the required symbolic links. Since Python packages compile native code parts using the command-line interface of Xcode, this step is required to install Python packages that include native components. You only need to do this once per Xcode install/upgrade, or if you see a similar error.
Install the CLI development toolchain with ``` $ xcode-select --install ``` If you have a broken pkg-config, unlink it with the following command, as mentioned in the comments. ``` $ brew unlink pkg-config ``` Install the libffi package ``` $ brew install pkg-config libffi ``` and then install cffi ``` $ pip install cffi ``` Source: [Error installing bcrypt with pip on OS X: can't find ffi.h (libffi is installed)](https://stackoverflow.com/questions/22875270/error-installing-bcrypt-with-pip-on-os-x-cant-find-ffi-h-libffi-is-installed)
10,857
47,275,478
I'm using Azure Service Bus queues in my application, and my question is: is there a way to check whether the message queue is empty, so that I can shut down my containers and VMs to save cost? If there is a way to get that, please let me know, preferably in Python. Thanks
2017/11/13
[ "https://Stackoverflow.com/questions/47275478", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6918471/" ]
For this, you can use the [`Azure Service Bus Python SDK`](https://pypi.python.org/pypi/azure-servicebus). What you need to do is get the properties of a queue using the `get_queue` method, which returns an object of type `Queue`. This object exposes the total number of messages through its `message_count` property. Please note that this count includes active messages, dead-letter queue messages and more. Here's sample code to do so:

```
from azure.servicebus import ServiceBusService, Message, Queue

bus_service = ServiceBusService(
    service_namespace='namespacename',
    shared_access_key_name='RootManageSharedAccessKey',
    shared_access_key_value='accesskey')

queue = bus_service.get_queue('taskqueue1')
print queue.message_count
```

Source code for the Azure Service Bus SDK for Python is available on GitHub: <https://github.com/Azure/azure-sdk-for-python/tree/master/azure-servicebus/azure/servicebus>.
You could check the length of the messages returned from [peek\_messages](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-servicebus/latest/azure.servicebus.html?highlight=peek_messages#azure.servicebus.ServiceBusReceiver.peek_messages) method on the class `azure.servicebus.ServiceBusReceiver` ``` with servicebus_receiver: messages = servicebus_receiver.peek_messages(max_message_count=1) if len(messages) == 0: print('Queue empty') ```
10,860
40,130,468
I am using Alamofire for the HTTP networking in my app. My API, which is written in Python, requires a header key on every request and only responds when the key is present. Now I want to send that header key from my iOS app with Alamofire, but I'm not sure how to implement it. Below is my code without any key implementation:

```
Alamofire.request(.GET,"http://name/user_data/\(userName)@someURL.com").responseJSON { response in // 1
    print(response.request)  // original URL request
    print(response.response) // URL response
    print(response.data)     // server data
    print(response.result)
}
```

The header key in my API is "appkey" and its value is "test". If anyone can help. Thank you!
2016/10/19
[ "https://Stackoverflow.com/questions/40130468", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5088930/" ]
This should work ``` let headers = [ "appkey": "test" ] Alamofire.request(.GET, "http://name/user_data/\(userName)@someURL.com", parameters: nil, encoding: .URL, headers: headers).responseJSON { response in //handle response } ```
``` let headers: HTTPHeaders = [ "Accept": "application/json", "appkey": "test" ] Alamofire.request("http://name/user_data/\(userName)@someURL.com", headers: headers).responseJSON { response in print(response.request) // original URL request print(response.response) // URL response print(response.data) // server data print(response.result) } ```
10,861
49,997,303
Write a program using Python 3.x. Write a script that will read "input.txt" and print to stdout the first 5 lines of the file input.txt that consist of a single odd number. The file may contain lines holding numeric and non-numeric data; your script should ignore all lines that contain anything except a single odd integer. Assumption: "input.txt" is present in the same folder where the script resides. My code is as follows:

```
f = open('input.txt', mode='r')
t = f.readlines()
print(t)
lst1 = []
for i in t:
    try:
        if int(i):
            lst1.append(i)
    except ValueError:
        print ("{}is not a valid number,ignoring the same".format(i))
print ("This is a list with numeric values", lst1)
for i in lst1:
    if int(i) % 2 == 1:
        print("Odd numbers are: ", i)
```

Input.txt:

```
123a
13aa
a1
1s2
2
3
3
455
56
6
7
8
```

output:

```
yogi@fdfd:~/Python-Practice$python3 test.py
['123a\n', '13aa\n', 'a1\n', '1s2\n', '2\n', '3\n', '3\n', '455\n', '56\n', '6\n', '7\n', '8\n']
123a
is not a valid number,ignoring the same
13aa
is not a valid number,ignoring the same
a1
is not a valid number,ignoring the same
1s2
is not a valid number,ignoring the same
This is a list with numeric values ['2\n', '3\n', '3\n', '455\n', '56\n', '6\n', '7\n', '8\n']
3
3
455
7
```

Note: My code is working fine and prints the odd numbers; however, I am struggling to check whether a list element is a single integer or not. Any pointers will be helpful.
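For what it's worth, the filtering described in the question can be collapsed into one pass — `int()` already rejects lines such as `123a` or `1s2`, so no separate "is this a single integer" check is needed. A sketch (it takes an iterable of lines rather than opening the file, so it is easy to test; the filename handling from the exercise is left out):

```python
# Sketch: one-pass filter for "lines holding a single odd integer".
def first_odd_lines(lines, limit=5):
    """Return up to `limit` values from lines that contain a single odd integer."""
    found = []
    for line in lines:
        try:
            value = int(line.strip())   # rejects '123a', '1s2', 'a1', '', ...
        except ValueError:
            continue
        if value % 2 == 1:
            found.append(value)
        if len(found) == limit:
            break
    return found

# Applied to the sample input from the question:
sample = ["123a", "13aa", "a1", "1s2", "2", "3", "3", "455", "56", "6", "7", "8"]
print(first_odd_lines(sample))  # [3, 3, 455, 7]
```

Reading from the real file is then just `first_odd_lines(open("input.txt"))`.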
2018/04/24
[ "https://Stackoverflow.com/questions/49997303", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8064377/" ]
**Put your admin link like this:** since `Admin` is not your default `controller` and you have not set a route for admin, put your `admin` link like this:

```
<li class="nav-item">
  <a class="nav-link" href="<?php echo base_url('admin/index'); ?>">Admin</a>
</li>
```

But if you want to do it like this:

```
<li class="nav-item">
  <a class="nav-link" href="<?php echo base_url('admin'); ?>">Admin</a>
</li>
```

then set `route.php` like this:

```
$route['admin'] = 'admin/index';
```

And the controller:

```
<?php
class Admin extends CI_Controller{
    public function index(){
        echo "Reach here"; die; // temporary check that the route is reached
        if (!$this->session->userdata('logged_in')) {
            redirect('users/login');
        }
    }
}
?>
```
Please call the parent constructor; after that you can proceed your way.

```
public function __construct() {
    parent::__construct();
    $this->load->model(array('restaurants_m', 'categories_m', 'cities_m', 'customers_m', 'states_m', 'orders_m', 'delivery_boys_m'));
    $this->load->library(array('email'));
    $this->load->helper('url');
}
```
10,862
55,168,955
On Mac OS 10.14 (Mojave) I used: ``` pip install -U pytest ``` to install pytest. I got a permission denied error trying to install the packages to `/Users/nagen/Library/Python/2.7`. I tried ``` sudo pip install -U pytest ``` This time it installed successfully. But, despite adding the full path, the terminal doesn't recognize pytest. If I try to run `/Users/nagen/Library/Python/2.7/bin/pytest` I get a permission error. In addition, `sudo /Users/nagen/Library/Python/2.7/bin/pytest` works, but it prompts for a password, so I cannot use it in automation scripts. I tried installing python3 and then running pip3 install... same issue.
2019/03/14
[ "https://Stackoverflow.com/questions/55168955", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11204687/" ]
I think the best option might be to use a Python virtual env. <https://packaging.python.org/guides/installing-using-pip-and-virtualenv/> is a good starting point. ``` > virtualenv env > source env/bin/activate > pip install pytest > pytest ``` This will avoid path and permission issues, and any changes you make stay inside that venv, keeping the rest of your environment clean.
I would **strongly** recommend using [homebrew](https://brew.sh/). This is the **best** dev tool there is for mac users and I never install things without it. To install it run the following in terminal: ``` /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" ``` Now to install python3 you simply: ``` brew install python3 ``` brew will make sure your PATH is configured correctly and you shouldn't have any issues running `pip3 install x`. Also, if you decide to reinstall python using homebrew, you will need to follow [this](https://osxuninstaller.com/uninstall-guides/properly-uninstall-python-mac/) guide to uninstall python first. This will be the most tedious part of the process. Make sure that you **don't** uninstall python2 packages! Your mac OS uses them. If you don't have python3 installed at all you can skip the uninstall step and go straight to `brew install python3` When I first started using python, I had the same problem you are having because I tried manually installing it from python.org, then I came across homebrew and never had problems since.
10,863
15,034,151
I have a directory /a/b/c that has files and subdirectories. I need to copy the /a/b/c/\* in the /x/y/z directory. What python methods can I use? I tried `shutil.copytree("a/b/c", "/x/y/z")`, but python tries to create /x/y/z and raises an `error "Directory exists"`.
2013/02/22
[ "https://Stackoverflow.com/questions/15034151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
I found this working code, which uses part of the standard library:

```
from distutils.dir_util import copy_tree

# copy subdirectory example
from_directory = "/a/b/c"
to_directory = "/x/y/z"

copy_tree(from_directory, to_directory)
```

Reference:

* Python 2: <https://docs.python.org/2/distutils/apiref.html#distutils.dir_util.copy_tree>
* Python 3: <https://docs.python.org/3/distutils/apiref.html#distutils.dir_util.copy_tree>
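Worth noting: `distutils` is deprecated (and removed in Python 3.12). On Python 3.8+, the standard `shutil.copytree` accepts `dirs_exist_ok=True`, which removes exactly the "Directory exists" error from the question — a small sketch using throwaway directories:

```python
import os
import shutil
import tempfile

# Build a small source tree: src/sub/f.txt
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()   # destination already exists -- the case that used to raise
os.makedirs(os.path.join(src, "sub"))
with open(os.path.join(src, "sub", "f.txt"), "w") as fh:
    fh.write("hello")

# Python 3.8+: merge into an existing destination instead of raising.
shutil.copytree(src, dst, dirs_exist_ok=True)
print(os.path.exists(os.path.join(dst, "sub", "f.txt")))  # True
```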
You can also use glob2 to recursively collect all paths (using the `**` subfolders wildcard) and then copy each file with shutil.copyfile. glob2 link: <https://code.activestate.com/pypm/glob2/>
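On Python 3.5+, the standard-library `glob` understands the `**` wildcard itself (pass `recursive=True`), so the collect-then-copy idea above works without glob2 — a sketch, with directory creation added since `shutil.copyfile` does not create parent folders:

```python
import glob
import os
import shutil
import tempfile

def copy_contents(src, dst):
    """Copy everything under src into dst, recreating subdirectories."""
    for path in glob.glob(os.path.join(src, "**"), recursive=True):
        rel = os.path.relpath(path, src)
        target = os.path.join(dst, rel)
        if os.path.isdir(path):
            os.makedirs(target, exist_ok=True)
        else:
            os.makedirs(os.path.dirname(target), exist_ok=True)
            shutil.copyfile(path, target)

# Tiny demonstration on throwaway directories:
src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
os.makedirs(os.path.join(src, "b", "c"))
with open(os.path.join(src, "b", "c", "x.txt"), "w") as fh:
    fh.write("data")
copy_contents(src, dst)
print(os.path.exists(os.path.join(dst, "b", "c", "x.txt")))  # True
```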
10,864
59,296,151
I'm writing a python script that uses a model to predict a large number of values by groupID, where **efficiency is important** (N on the order of 10^8). I initialize a results matrix and am trying to sequentially update a running sum of values in the results matrix. Trying to be efficient, in my current method I use groupID as row numbers of the results matrix to avoid merging (merging is expensive, as far as I understand). My attempt: ``` import numpy as np # Initialize results matrix results = np.zeros((5,3)) # dimension: number of groups x timestep # Now I loop over batches, with batch size 4. Here's an example of one iteration: batch_groupIDs = [3,1,0,1] # Note that multiple values can be generated for same groupID batch_results = np.ones((4,3)) # My attempt at appending the results (low dimension example): results[batch_groupIDs] += batch_results print(results) ``` This outputs: ``` [[1. 1. 1.] [1. 1. 1.] [0. 0. 0.] [1. 1. 1.] [0. 0. 0.]] ``` My desired output is the following (since group 1 shows up twice, and should be appended twice): ``` [[1. 1. 1.] [2. 2. 2.] [0. 0. 0.] [1. 1. 1.] [0. 0. 0.]] ``` The actual dimensions of my problem are approximately 100 timesteps x a batch size of 1 million+ and 2000 groupIDs
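For reference, the accumulation the question attempts is exactly what numpy's unbuffered `np.add.at` provides: plain fancy-index `+=` buffers the reads, so a group ID that appears twice in the batch only gets counted once. A sketch reproducing the example from the question:

```python
import numpy as np

results = np.zeros((5, 3))
batch_groupIDs = [3, 1, 0, 1]     # group 1 appears twice in this batch
batch_results = np.ones((4, 3))

# Unbuffered in-place add: repeated indices accumulate instead of overwriting.
np.add.at(results, batch_groupIDs, batch_results)
print(results)   # row 1 is now [2. 2. 2.]
```

`np.add.at` is known to be slower than dense alternatives for very large batches; if it becomes a bottleneck, a per-column `np.bincount(batch_groupIDs, weights=batch_results[:, t], minlength=n_groups)` accumulation is a common substitute.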
2019/12/12
[ "https://Stackoverflow.com/questions/59296151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11918892/" ]
The problem is that in your language, dates use the slash separator, which makes Blazor think you're trying to access a different route. Whenever sending dates as a URL parameter, they need to be in the invariant culture and use dashes. ```cs NavigationManager.NavigateTo("routeTest/"+numberToSend+"/"+dateToSend.ToString("yyyy-MM-dd HH:mm:ss", System.Globalization.CultureInfo.InvariantCulture)); ``` For reference, see the warning in the [official documentation](https://learn.microsoft.com/en-us/aspnet/core/blazor/routing?view=aspnetcore-3.1#route-constraints)
As you've identified, the DateTime in the URL is affecting the routing due to the slashes. Send the `DateTime` in ISO8601 format `yyyy-MM-ddTHH:mm:ss`. You could use: ``` dateToSend.ToString("s", System.Globalization.CultureInfo.InvariantCulture) ``` where the format specifier `s` is known as the [Sortable date/time pattern](https://learn.microsoft.com/en-us/dotnet/standard/base-types/standard-date-and-time-format-strings#the-sortable-s-format-specifier) Or ``` dateToSend.ToString("yyyy-MM-dd HH:mm:ss", System.Globalization.CultureInfo.InvariantCulture) ``` Use `InvariantCulture` since the [Blazor routing](https://learn.microsoft.com/en-us/aspnet/core/blazor/routing?view=aspnetcore-3.1) page states: > > Route constraints that verify the URL and are converted to a CLR type > (such as `int` or `DateTime`) always use the invariant culture. These > constraints assume that the URL is non-localizable. > > >
10,870
62,061,258
I am trying to create a view that handles the email confirmation but i keep on getting an error, your help is highly appreciated My views.py ``` import re from django.contrib.auth import login, logout, authenticate from django.shortcuts import render, HttpResponseRedirect, Http404 # Create your views here. from .forms import LoginForm, RegistrationForm from . models import EmailConfirmation SHA1_RE = re.compile('^[a-f0-9]{40}s') def activation_view(request, activation_key): if SHA1_RE.search(activation_key): print('activation is real') try: instance = EmailConfirmation.objects.get(activation_key=activation_key) except EmailConfirmation.DoesNotExist: instance = None raise Http404 if instance is not None and not instance.confirmed: print('Confirmation complete') instance.confirmed = True instance.save() elif instance is not None and instance.confirmed: print('User already confirmed') else: pass context = {} return render(request, "accounts/activation_complete.html", context) else: pass ``` urls.py ``` from django.urls import path from .views import( logout_view, login_view, register_view, activation_view, ) urlpatterns = [ path('accounts/logout/', logout_view, name='auth_logout'), path('accounts/login/', login_view, name='auth_login'), path('accounts/register/', register_view, name='auth_register'), path('accounts/activate/<activation_key>/', activation_view, name='activation_view'), ] ``` models.py ``` class EmailConfirmation(models.Model): user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) activation_key = models.CharField(max_length=200) confirmed = models.BooleanField(default=False) def __str__(self): return str(self.confirmed) def activate_user_email(self): #send email here activation_url = 'http://localhost:8000/accounts/activate%s' %(self.activation_key) context = { 'activation_key': self.activation_key, 'activation_url': activation_url, 'user': self.user.username } message = 
render_to_string('accounts/activation/actiavtion_message.txt', context) subject = 'Email actiavtion key' print(message) self.email_user(subject, message, settings.DEFAULT_FROM_EMAIL) def email_user(self, subject, message, from_email=None, **kwargs): send_mail(subject, message, from_email, [self.user.email], kwargs) ``` ERROR ``` ValueError at /accounts/activate/d99e89141eaf2fa786a3b5215ab4aa986e411bcc/ The view accounts.views.activation_view didn't return an HttpResponse object. It returned None instead. Request Method: GET Request URL: http://127.0.0.1:8000/accounts/activate/d99e89141eaf2fa786a3b5215ab4aa986e411bcc/ Django Version: 3.0.6 Exception Type: ValueError Exception Value: The view accounts.views.activation_view didn't return an HttpResponse object. It returned None instead. Exception Location: C:\Users\Martin\Desktop\My Projects\Savanna\venv\lib\site-packages\django\core\handlers\base.py in _get_response, line 124 Python Executable: C:\Users\Martin\Desktop\My Projects\Savanna\venv\Scripts\python.exe Python Version: 3.8.2 Python Path: ['C:\\Users\\Martin\\Desktop\\My Projects\\Savanna', 'C:\\Users\\Martin\\Desktop\\My ' 'Projects\\Savanna\\venv\\Scripts\\python38.zip', 'c:\\program files\\python38\\DLLs', 'c:\\program files\\python38\\lib', 'c:\\program files\\python38', 'C:\\Users\\Martin\\Desktop\\My Projects\\Savanna\\venv', 'C:\\Users\\Martin\\Desktop\\My Projects\\Savanna\\venv\\lib\\site-packages'] Server time: Thu, 28 May 2020 09:05:51 +0000 ```
2020/05/28
[ "https://Stackoverflow.com/questions/62061258", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13633361/" ]
Use [`justify`](https://stackoverflow.com/a/44559180/2901002) with `DataFrame` constructor: ``` arr = justify(data_df.to_numpy(), invalid_val=np.nan,axis=0) df = pd.DataFrame(arr, columns=data_df.columns, index=data_df.index) print(df) 8 16 20 24 0 4.0 0.0 1.0 6.0 0 7.0 2.0 5.0 8.0 0 NaN 3.0 NaN NaN 0 NaN 9.0 NaN NaN 0 NaN NaN NaN NaN 0 NaN NaN NaN NaN 0 NaN NaN NaN NaN 0 NaN NaN NaN NaN 0 NaN NaN NaN NaN 0 NaN NaN NaN NaN ```
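The `justify` helper used above is not part of numpy or pandas — it comes from the linked answer. A minimal sketch of it for reference, exercised on a small toy array (the `data_df` from the answer is not shown in this snippet):

```python
import numpy as np

def justify(a, invalid_val=0, axis=1, side='left'):
    """Push the valid entries of `a` to one side along `axis` (sketch of the linked helper)."""
    if invalid_val is np.nan:
        mask = ~np.isnan(a)
    else:
        mask = a != invalid_val
    justified_mask = np.sort(mask, axis=axis)   # False sorts before True
    if side in ('up', 'left'):
        justified_mask = np.flip(justified_mask, axis=axis)
    out = np.full(a.shape, invalid_val, dtype=float)
    if axis == 1:
        out[justified_mask] = a[mask]
    else:
        # Transpose so the valid values are laid out column by column.
        out.T[justified_mask.T] = a.T[mask.T]
    return out

arr = np.array([[np.nan, 1.0],
                [2.0,    np.nan],
                [np.nan, 3.0]])
print(justify(arr, invalid_val=np.nan, axis=0, side='up'))
# [[ 2.  1.]
#  [nan  3.]
#  [nan nan]]
```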
This is not that pretty, but with the help of `numpy` you can fairly easily get a numpy array with your desired result.

```
import numpy as np

def shifted_column(values):
    none_nan_values = values[ ~np.isnan(values) ]
    nan_row = np.zeros(values.shape)
    nan_row[:] = np.nan
    nan_row[:none_nan_values.size] = none_nan_values
    return nan_row

np.apply_along_axis(shifted_column, 0, data_df.values)
```

You could convert it back to pandas as you wish.
10,872
3,387,663
Hi all, I am trying to use SWIG to export C++ code to Python. The C sample I read on the web site does work, but I have a problem with C++ code. Here are the lines I call:

```
swig -c++ -python SWIG_TEST.i
g++ -c -fPIC SWIG_TEST.cpp SWIG_TEST_wrap.cxx -I/usr/include/python2.4/
gcc --shared SWIG_TEST.o SWIG_TEST_wrap.o -o _SWIG_TEST.so -lstdc++
```

When I am finished I receive the following error message:

```
ImportError: ./_SWIG_TEST.so: undefined symbol: Py_InitModule4
```

Do you know what it is?
2010/08/02
[ "https://Stackoverflow.com/questions/3387663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245416/" ]
It looks like you aren't linking to the Python runtime library. Try something like adding `-lpython2.4` to your gcc line. (I don't have a Linux system handy at the moment.)
You might try building the shared library using `g++` ``` g++ -shared SWIG_TEST.o SWIG_TEST_wrap.o -o _SWIG_TEST.so ``` rather than using `ld` directly.
10,873
54,856,829
In TensorFlow <2 the training function for a DDPG actor could be concisely implemented using `tf.keras.backend.function` as follows: ``` critic_output = self.critic([self.actor(state_input), state_input]) actor_updates = self.optimizer_actor.get_updates(params=self.actor.trainable_weights, loss=-tf.keras.backend.mean(critic_output)) self.actor_train_on_batch = tf.keras.backend.function(inputs=[state_input], outputs=[self.actor(state_input)], updates=actor_updates) ``` Then during each training step calling `self.actor_train_on_batch([np.array(state_batch)])` would compute the gradients and perform the updates. However running that on TF 2.0 gives the following error due to eager mode being on by default: ``` actor_updates = self.optimizer_actor.get_updates(params=self.actor.trainable_weights, loss=-tf.keras.backend.mean(critic_output)) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\optimizer_v2\optimizer_v2.py", line 448, in get_updates grads = self.get_gradients(loss, params) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\optimizer_v2\optimizer_v2.py", line 361, in get_gradients grads = gradients.gradients(loss, params) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 158, in gradients unconnected_gradients) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 547, in _GradientsHelper raise RuntimeError("tf.gradients is not supported when eager execution " RuntimeError: tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead. ``` As expected, disabling eager execution via `tf.compat.v1.disable_eager_execution()` fixes the issue. However I don't want to disable eager execution for everything - I would like to use purely the 2.0 API. The exception suggests using `tf.GradientTape` instead of `tf.gradients` but that's an internal call. 
**Question:** What is the appropriate way of computing `-tf.keras.backend.mean(critic_output)` in graph mode (in TensorFlow 2.0)?
2019/02/24
[ "https://Stackoverflow.com/questions/54856829", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3552418/" ]
I found two ways to solve it: ###Using data attribute Get the max number of pages in the template, assign it to a data attribute, and access it in the scripts. Then check current page against total page numbers, and set disabled states to the load more button when it reaches the last page. PHP/HTML: ``` <ul id="ajax-content"></ul> <button type="button" id="ajax-button" data-endpoint="<?php echo get_rest_url(null, 'wp/v2/posts'); ?>" data-ppp="<?php echo get_option('posts_per_page'); ?>" data-pages="<?php echo $wp_query->max_num_pages; ?>">Show more</button> ``` JavaScripts: ``` (function($) { var loadMoreButton = $('#ajax-button'); var loadMoreContainer = $('#ajax-content'); if (loadMoreButton) { var endpoint = loadMoreButton.data('endpoint'); var ppp = loadMoreButton.data('ppp'); var pages = loadMoreButton.data('pages'); var loadPosts = function(page) { var theData, errorStatus, errorMessage; $.ajax({ url: endpoint, dataType: 'json', data: { per_page: ppp, page: page, type: 'post', orderby: 'date' }, beforeSend: function() { loadMoreButton.attr('disabled', true); }, success: function(data) { theData = []; for (i = 0; i < data.length; i++) { theData[i] = {}; theData[i].id = data[i].id; theData[i].link = data[i].link; theData[i].title = data[i].title.rendered; theData[i].content = data[i].content.rendered; } $.each(theData, function(i) { loadMoreContainer.append('<li><a href="' + theData[i].link + '">' + theData[i].title + '</a></li>'); }); loadMoreButton.attr('disabled', false); if (getPage == pages) { loadMoreButton.attr('disabled', true); } getPage++; }, error: function(jqXHR) { errorStatus = jqXHR.status + ' ' + jqXHR.statusText + '\n'; errorMessage = jqXHR.responseJSON.message; console.log(errorStatus + errorMessage); } }); }; var getPage = 2; loadMoreButton.on('click', function() { loadPosts(getPage); }); } })(jQuery); ``` ###Using jQuery `complete` event Get the total pages `x-wp-totalpages` from the HTTP response headers. 
Then change the button states when reaches last page. PHP/HTML: ``` <ul id="ajax-content"></ul> <button type="button" id="ajax-button" data-endpoint="<?php echo get_rest_url(null, 'wp/v2/posts'); ?>" data-ppp="<?php echo get_option('posts_per_page'); ?>">Show more</button> ``` JavaScripts: ``` (function($) { var loadMoreButton = $('#ajax-button'); var loadMoreContainer = $('#ajax-content'); if (loadMoreButton) { var endpoint = loadMoreButton.data('endpoint'); var ppp = loadMoreButton.data('ppp'); var pager = 0; var loadPosts = function(page) { var theData, errorStatus, errorMessage; $.ajax({ url: endpoint, dataType: 'json', data: { per_page: ppp, page: page, type: 'post', orderby: 'date' }, beforeSend: function() { loadMoreButton.attr('disabled', true); }, success: function(data) { theData = []; for (i = 0; i < data.length; i++) { theData[i] = {}; theData[i].id = data[i].id; theData[i].link = data[i].link; theData[i].title = data[i].title.rendered; theData[i].content = data[i].content.rendered; } $.each(theData, function(i) { loadMoreContainer.append('<li><a href="' + theData[i].link + '">' + theData[i].title + '</a></li>'); }); loadMoreButton.attr('disabled', false); }, error: function(jqXHR) { errorStatus = jqXHR.status + ' ' + jqXHR.statusText + '\n'; errorMessage = jqXHR.responseJSON.message; console.log(errorStatus + errorMessage); }, complete: function(jqXHR) { if (pager == 0) { pager = jqXHR.getResponseHeader('x-wp-totalpages'); } pager--; if (pager == 1) { loadMoreButton.attr('disabled', true); } } }); }; var getPage = 2; loadMoreButton.on('click', function() { loadPosts(getPage); getPage++; }); } })(jQuery); ```
The problem appears to be an invalid query to that endpoint so the `success: function()` is never being run in this circumstance. ### Add to All API Errors You could add the same functionality for all errors like this... ``` error: function(jqXHR, textStatus, errorThrown) { loadMoreButton.remove(); .... } ``` Though that may not be the desired way of handling of **all** errors. ### Test for Existing Error Message Another option could be to remove the button if you receive an error with that exact message... ``` error: function(jqXHR, textStatus, errorThrown) { if (jqXHR.statusText === 'The page number requested is larger than the number of pages available.') { loadMoreButton.remove(); } .... } ``` but this would be susceptible to breaking with any changes to that error message. ### Return Custom Error Code from API The recommended way to handle it would be to return specific error code (along with HTTP status code 400) to specify the exact situation in a more reliable format... ``` error: function(jqXHR, textStatus, errorThrown) { if (jqXHR.statusCode === '215') { loadMoreButton.remove(); } .... } ``` Here's an example on how to configure error handling in an API: [Best Practices for API Error Handling](https://nordicapis.com/best-practices-api-error-handling#gooderrorexamples) ### Return 200 HTTP Status Code The last option would be to change the way your API endpoint handles this type of "error"/situation, by returning a `200` level HTTP status code instead, which would invoke the `success:` instead of the `error:` callback instead.
10,875
43,819,092
I have installed ngram using pip install ngram. When I run the following code ``` from ngram import NGram c=NGram.compare('cereal_crop','cereals') print c ``` I get the error `ImportError: cannot import name NGram`. Screenshot for it: [![screenshot of console window](https://i.stack.imgur.com/chq5c.png)](https://i.stack.imgur.com/chq5c.png) P.S. A similar question has been asked previously ([using ngram in python](https://stackoverflow.com/questions/8390585/using-ngram-in-python/43818866?noredirect=1#comment74676421_43818866)), but that time the person getting the error had not installed ngram, so installing ngram worked. In my case I am getting the error in spite of ngram being installed.
2017/05/06
[ "https://Stackoverflow.com/questions/43819092", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4451870/" ]
Your Python script is named `ngram.py`, so it defines a module named `ngram`. When Python runs `from ngram import NGram`, Python ends up looking in your script for something named `NGram`, not in the `ngram` module you have installed. Try changing the name of your script to something else, for example `ngram_test.py`.
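The shadowing described above is just `sys.path` ordering — the directory containing the running script is searched before site-packages. A self-contained demonstration with two throwaway directories standing in for "script directory" and "installed package":

```python
import os
import sys
import tempfile

# Two directories each holding a module called `ngram`; whichever directory
# comes first on sys.path is the one `import ngram` finds -- exactly how a
# script named ngram.py shadows the installed package of the same name.
first = tempfile.mkdtemp()
second = tempfile.mkdtemp()
with open(os.path.join(first, "ngram.py"), "w") as fh:
    fh.write("WHO = 'local script'\n")
with open(os.path.join(second, "ngram.py"), "w") as fh:
    fh.write("WHO = 'installed package'\n")

sys.path.insert(0, second)
sys.path.insert(0, first)   # `first` now wins, like the script's own directory

import ngram
print(ngram.WHO)        # local script
print(ngram.__file__)   # .../ngram.py inside `first`
```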
try like this: ``` import ngram c = ngram.NGram.compare('cereal_crop','cereals') print c ```
10,876
45,006,190
ConnectionRefusedError error showing when register user, basic information added on database but password field was blank and other database fields submitted please find the following error and our class code, **Class** class ProfessionalRegistrationSerializer(serializers.HyperlinkedModelSerializer): ``` password = serializers.CharField(max_length=20, write_only=True) email = serializers.EmailField() first_name = serializers.CharField(max_length=30) last_name = serializers.CharField(max_length=30) class Meta: model = User fields = ('url', 'id', 'first_name', 'last_name', 'email', 'password') def validate_email(self, value): from validate_email_address import validate_email if User.all_objects.filter(email=value.lower()).exists(): raise serializers.ValidationError('User with this email already exists.') return value.lower() def create(self, validated_data): password = validated_data.pop('password') email = validated_data.pop('email') user = User.objects.create( username=email.lower(), email=email.lower(), role_id=1, **validated_data) user.set_password(password) user.save() return user ``` **Error** ConnectionRefusedError at /api/v1/register/professional/ [Errno 111] Connection refused Request Method: POST Request URL: <http://127.0.0.1:8000/api/v1/register/professional/> Django Version: 1.8.14 Exception Type: ConnectionRefusedError Exception Value: [Errno 111] Connection refused Exception Location: /usr/lib/python3.5/socket.py in create\_connection, line 702 Python Executable: /home/project\_backend/env/bin/python Python Version: 3.5.2 Python Path: ['/home/project\_backend', '/home/project\_backend/env/lib/python35.zip', '/home/project\_backend/env/lib/python3.5', '/home/project\_backend/env/lib/python3.5/plat-x86\_64-linux-gnu', '/home/project\_backend/env/lib/python3.5/lib-dynload', '/usr/lib/python3.5', '/usr/lib/python3.5/plat-x86\_64-linux-gnu', '/home/project\_backend/env/lib/python3.5/site-packages', 
'/home/project_backend/env/lib/python3.5/site-packages/setuptools-36.0.1-py3.5.egg']

**Traceback**

```
File "/home/project_backend/env/lib/python3.5/site-packages/django/core/handlers/base.py" in get_response
  132. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/project_backend/env/lib/python3.5/site-packages/django/views/decorators/csrf.py" in wrapped_view
  58. return view_func(*args, **kwargs)
File "/home/project_backend/env/lib/python3.5/site-packages/django/views/generic/base.py" in view
  71. return self.dispatch(request, *args, **kwargs)
File "/home/project_backend/env/lib/python3.5/site-packages/rest_framework/views.py" in dispatch
  464. response = self.handle_exception(exc)
File "/home/project_backend/env/lib/python3.5/site-packages/rest_framework/views.py" in dispatch
  461. response = handler(request, *args, **kwargs)
File "/home/project_backend/filmup/apps/registrations/views.py" in post
  53. user = serializer.save(work_status=user_type)
File "/home/project_backend/env/lib/python3.5/site-packages/rest_framework/serializers.py" in save
  175. self.instance = self.create(validated_data)
File "/home/project_backend/project/apps/registrations/serializers.py" in create
  157. **validated_data)
File "/home/project_backend/env/lib/python3.5/site-packages/django/db/models/manager.py" in manager_method
  127. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/project_backend/env/lib/python3.5/site-packages/django/db/models/query.py" in create
  348. obj.save(force_insert=True, using=self.db)
File "/home/project_backend/project/libs/accounts/models.py" in save
  217. super().save(*args, **kwargs)
File "/home/project_backend/env/lib/python3.5/site-packages/django/db/models/base.py" in save
  734. force_update=force_update, update_fields=update_fields)
File "/home/project_backend/env/lib/python3.5/site-packages/django/db/models/base.py" in save_base
  771. update_fields=update_fields, raw=raw, using=using)
File "/home/project_backend/env/lib/python3.5/site-packages/django/dispatch/dispatcher.py" in send
  189. response = receiver(signal=self, sender=sender, **named)
File "/home/project_backend/filmup/libs/accounts/signals.py" in create_user_setting
  19. create_ejabberd_user(instance)
File "/home/project_backend/project/libs/accounts/signals.py" in create_ejabberd_user
  11. EjabberdUser.objects.create(username=str(user.id), password=str(token.key))
File "/home/project_backend/project/libs/accounts/models.py" in create
  73. ctl.register(user=kwargs['username'], password=kwargs['password'])
File "/home/project_backend/project/libs/ejabberdctl.py" in register
  54. 'password': password})
File "/home/project_backend/project/libs/ejabberdctl.py" in ctl
  32. return fn(self.params, payload)
File "/usr/lib/python3.5/xmlrpc/client.py" in __call__
  1092. return self.__send(self.__name, args)
File "/usr/lib/python3.5/xmlrpc/client.py" in __request
  1432. verbose=self.__verbose
File "/usr/lib/python3.5/xmlrpc/client.py" in request
  1134. return self.single_request(host, handler, request_body, verbose)
File "/usr/lib/python3.5/xmlrpc/client.py" in single_request
  1146. http_conn = self.send_request(host, handler, request_body, verbose)
File "/usr/lib/python3.5/xmlrpc/client.py" in send_request
  1259. self.send_content(connection, request_body)
File "/usr/lib/python3.5/xmlrpc/client.py" in send_content
  1289. connection.endheaders(request_body)
File "/usr/lib/python3.5/http/client.py" in endheaders
  1102. self._send_output(message_body)
File "/usr/lib/python3.5/http/client.py" in _send_output
  934. self.send(msg)
File "/usr/lib/python3.5/http/client.py" in send
  877. self.connect()
File "/usr/lib/python3.5/http/client.py" in connect
  849. (self.host,self.port), self.timeout, self.source_address)
File "/usr/lib/python3.5/socket.py" in create_connection
  711. raise err
File "/usr/lib/python3.5/socket.py" in create_connection
  702. sock.connect(sa)
```
2017/07/10
[ "https://Stackoverflow.com/questions/45006190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7225424/" ]
I was getting the same error, and it may be due to email verification. I added the following code to my settings.py file, and now authentication works perfectly:

```
ACCOUNT_EMAIL_VERIFICATION = 'none'
ACCOUNT_AUTHENTICATION_METHOD = 'username'
ACCOUNT_EMAIL_REQUIRED = False
```
You perform a call to a remote server that you can't reach / isn't configured / isn't running. It's not an issue with Django or DRF.
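The `ConnectionRefusedError` at the bottom of the traceback means the XML-RPC call to the ejabberd control endpoint never reached a listening server. As a quick sanity check you can test reachability before blaming the Django code; this is a minimal sketch, and the host/port below are placeholders for whatever `self.params` actually points at:

```python
import socket

def endpoint_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers ConnectionRefusedError, timeouts, DNS failures, etc.
        return False

# Example: check whether anything is listening on the assumed XML-RPC port
print(endpoint_reachable("127.0.0.1", 4560))
```

If this prints `False`, fix the ejabberd/XML-RPC service (or its configured host/port) first.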
10,877
27,259,112
I want to test uploaded Python programs with the `unittest` module on a Django-based web site and give useful feedback to the student. I created some helper functions to get statistics (how many failures and errors) and messages.

```
def suite(*test_cases):
    suites = [unittest.makeSuite(case) for case in test_cases]
    return unittest.TestSuite(suites)


def testcase_statistics_and_messages(*test_cases):
    devnull = open('/dev/null', "w")
    runner = unittest.TextTestRunner(stream=devnull)
    test_suite = suite(*test_cases)
    test_result = runner.run(test_suite)
    devnull.close()
    failure_messages = [mesg for test_case, mesg in test_result.failures]
    number_of_failures = len(failure_messages)
    error_messages = [mesg for test_case, mesg in test_result.errors]
    number_of_errors = len(error_messages)
    number_of_test_cases = test_suite.countTestCases()
    number_of_successes = (number_of_test_cases - number_of_errors
                           - number_of_failures)
    return dict(
        number_of_test_cases=number_of_test_cases,
        failure_messages=failure_messages,
        error_messages=error_messages,
        number_of_successes=number_of_successes,
        number_of_errors=number_of_errors,
        number_of_failures=number_of_failures,
    )
```

I save the student's program and the unit tests into a file. I import the file, get the list of the TestCase classes from it, and run the functions above. (I can handle SyntaxError, IndentationError and runtime errors of the program uploaded by the student.) However, the messages I get from the unit tests are not very useful for the students. E.g.

```
Traceback (most recent call last):
  File "/tmp/tmpnda6x_60.py", line 39, in test_discriminant_returns_the_proper_values
    self.assertEqual(discriminant(*args), return_values)
AssertionError: -2 != 0
```

If I could get, for example, the docstring of the test method of the `TestCase`, I would be happy, but I cannot find a straightforward way to do this. If I could get the name of the `TestCase` and the method, I could retrieve the docstring with the `eval` function.
Is there an easier way than extracting the test method names from the messages, iterating over the `TestCase`s, and checking whether each one has a test method with the name found in the message? I have tried using the `msg` keyword argument of the asserts, but then it is a pain to write unit tests, and I would need e.g. regular expressions to extract the informative part of the message. I have Python 3.2 on the server I want to run this Django project on.
2014/12/02
[ "https://Stackoverflow.com/questions/27259112", "https://Stackoverflow.com", "https://Stackoverflow.com/users/813946/" ]
If you have a `TestCase` like:

```
class ExampleTestCase(unittest.TestCase):
    def test_example(self):
        """If this fails, it may not be your fault.
        Try hacking the integer cache. Evil laugh."""
        self.assertEqual(3, 4)
```

You can later, in your `testcase_statistics_and_messages` function, get the first line of your docstring using `test_case.shortDescription()`, or the full docstring using `test_case._testMethodDoc`. So, adding those to your function (and to the return dict):

```
short_docs = [test_case.shortDescription() for test_case, mesg in test_result.failures]
docs = [test_case._testMethodDoc for test_case, mesg in test_result.failures]
```

And then printing the results using:

```
for key, value in testcase_statistics_and_messages(ExampleTestCase).items():
    print(key, "==>", value)
```

Gives me:

```
short_docs ==> ['If this fails, it may not be your fault.']
number_of_test_cases ==> 1
error_messages ==> []
docs ==> ['If this fails, it may not be your fault.\n Try hacking the integer cache. Evil laugh.']
failure_messages ==> ['Traceback (most recent call last):\n File "ut.py", line 9, in test_example\n self.assertEqual(3, 4)\nAssertionError: 3 != 4\n']
number_of_failures ==> 1
number_of_errors ==> 0
number_of_successes ==> 0
```
Here is one possible solution: override the specific assertion methods that you're using (`assertEqual`, etc., hopefully not too many) and store the variables in a JSON string as the message:

```
import json
import unittest


class ExampleTestCase(unittest.TestCase):
    longMessage = False

    def assertEqual(self, first, second, msg=None):
        super(ExampleTestCase, self).assertEqual(first, second, msg=json.dumps({
            'first': first,
            'second': second,
        }))

    def test_example(self):
        self.assertEqual(3, 4)
```

By using `longMessage = False`, Python will return the assertion message to you without modifying it further. The above will give you:

```
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/unittest/case.py", line 797, in assertEqual
    assertion_func(first, second, msg=msg)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/unittest/case.py", line 790, in _baseAssertEqual
    raise self.failureException(msg)
AssertionError: {"first": 3, "second": 4}
```

The last line of this is much easier to parse using `json.loads`. It's up to you to make sure the values can be serialised and deserialised to/from JSON. Lastly, you may also need to customise running the tests to easily parse each of the tracebacks.

This all being said, if you can avoid it, it may be better to review whether your use case can be done using one of the other Python testing frameworks (nosetests, py.test, etc.). After reviewing the situation it may turn out that writing your own testing framework is the best approach to get the best error messages. For example:

```
x = 3
y = 4
try:
    assert x == y
except AssertionError:
    return ...  # your error message here (can be a string or an arbitrary object)
```
10,878
53,067,695
I'm writing a function to find the percentage change using NumPy and function calls. So far what I have is:

```
def change(a, b):
    answer = (np.subtract(a[b+1], a[b])) / a[b+1] * 100
    return answer

print(change(a, 0))
```

"a" is the array I have made and "b" is the index/position I am trying to calculate from. For example, my array is:

```
[[1,2,3,5,7]
 [1,4,5,6,7]
 [5,8,9,10,32]
 [3,5,6,13,11]]
```

How would I calculate the percentage change between 1 and 2 (=0.5), or 1 and 4 (=0.75), or 5 and 7, etc.?

**Note:** I know how to get the change mathematically; I'm just not sure how to do this in Python/NumPy.
2018/10/30
[ "https://Stackoverflow.com/questions/53067695", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10541940/" ]
The accepted answer is close but incorrect if you're trying to take the % difference from left to right.

You should get the following percent differences: `1,2,3,5,7` --> `100%, 50%, 66.66%, 40%`

check for yourself: <https://www.calculatorsoup.com/calculators/algebra/percent-change-calculator.php>

Going by what Josmoor98 said, you can use `np.diff(a) / a[:,:-1] * 100` to get the percent difference from left to right, which will give you the correct answer.

```
array([[100.        ,  50.        ,  66.66666667,  40.        ],
       [300.        ,  25.        ,  20.        ,  16.66666667],
       [ 60.        ,  12.5       ,  11.11111111, 220.        ],
       [ 66.66666667,  20.        , 116.66666667, -15.38461538]])
```
1. Combine all your arrays into one.
2. Make a DataFrame from the combined array:

```
df = pd.DataFrame(data=the_array_you_made)
```

3. Use the `pct_change()` function on the DataFrame. It will calculate the % change for all rows in the DataFrame.
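A runnable sketch of the steps above, assuming the goal is the change between consecutive columns within each row; note that `pct_change` computes `(new - old) / old`, which differs slightly from the formula in the question:

```python
import numpy as np
import pandas as pd

a = np.array([[1, 2, 3, 5, 7],
              [1, 4, 5, 6, 7],
              [5, 8, 9, 10, 32],
              [3, 5, 6, 13, 11]])

df = pd.DataFrame(a)

# axis=1 -> change from each column to the next within every row
changes = df.pct_change(axis=1)
print(changes)
```

The first column is `NaN` because there is nothing before it to compare against; for the row `[1, 2, 3, 5, 7]` the remaining values are `1.0, 0.5, 0.666..., 0.4`.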
10,879
28,323,435
I'm trying to use [`logging`'s `SMTPHandler`](https://docs.python.org/3/library/logging.handlers.html#smtphandler). From Python 3.3 on, you can specify a `timeout` keyword argument. If you add that argument in older versions, it fails. To get around this, I used the following:

```
import sys

if sys.version_info >= (3, 3):
    smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT,
                               timeout=20.0)
else:
    smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT)
```

Is there a better way of doing this?
2015/02/04
[ "https://Stackoverflow.com/questions/28323435", "https://Stackoverflow.com", "https://Stackoverflow.com/users/810870/" ]
Rather than test for the version, use exception handling:

```
try:
    smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT,
                               timeout=20.0)
except TypeError:
    # Python < 3.3, no timeout parameter
    smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT)
```

Now you can conceivably upgrade your standard library in-place with a patch or a backport module and it'll continue to work.
Here is another slightly different approach:

```
from logging.handlers import SMTPHandler
import sys

if sys.version_info >= (3, 3):
    # patch in timeout where available
    from functools import partial
    SMTPHandler = partial(SMTPHandler, timeout=20.0)
```

Now in the rest of the code you can just use:

```
smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT)
```

and know that the `timeout` argument is being used if available. This does still rely on static version checking, but means that all of the version-specific config is in one place and may reduce duplication elsewhere.
10,889
5,763,608
I'm trying to start a script on bootup on an Ubuntu Server 10.10 machine. The relevant parts of my rc.local look like this:

```
/usr/local/bin/python3.2 "/root/Advantage/main.py" >> /startuplogfile
exit 0
```

If I run ./rc.local from /etc everything works just fine and it writes the following into /startuplogfile:

```
usage: main.py [--loop][--dry]
```

For testing purposes this is exactly what needs to happen. It won't write anything to /startuplogfile when I reboot the computer. I'd venture to guess that the script is not started when rc.local is run at bootup. I verified that rc.local is started with a 'touch /rclocaltest' in the file; as expected, the file is created. I've tested rc.local with another Python script that simply creates a file in / and prints to /startuplogfile. From the terminal and after reboot, it works just fine.

My execution bits are set like this:

```
-rwxrwxrwx 1 root root 4598 2011-04-22 19:09 main.py
```

I have absolutely no idea why this happens and I tried everything that I could think of to remedy the problem. Any ideas on what could be causing this? I'm totally out of ideas.

EDIT: I forgot to mention that I only log in to this machine over ssh. I'm not sure that makes a difference, since rc.local is executed before login as far as I know.

EDIT: I noticed that this post is a little chaotic, so let me sum it up:

* I verified that rc.local is called on bootup
* If rc.local is manually called everything works as expected
* Permissions are set correctly
* A testing Python script works as expected on bootup with rc.local
* My actual Python script will only run if rc.local is manually called, not on bootup
2011/04/23
[ "https://Stackoverflow.com/questions/5763608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/317163/" ]
You can't depend on `argv[0]` containing something specific. Sometimes you'll get the full path, sometimes only the program name, sometimes something else entirely. It depends on how the code was invoked.
As you've noticed, `argv[0]` can (but doesn't always) contain the full path of your executable. If you want the file name only, one solution is:

```
#include <libgen.h>

const char *proName = basename(argv[0]);
```

---

As noted by Mat, `argv[0]` is not always reliable, although it should usually be. It depends on what exactly you're trying to accomplish. That said, there's an alternative way of obtaining the name of the executable on Mac OS X:

```
#include <mach-o/dyld.h>
#include <libgen.h>

uint32_t bufsize = 0;

// Ask _NSGetExecutablePath() to return the buffer size
// needed to hold the string containing the executable path
_NSGetExecutablePath(NULL, &bufsize);

// Allocate the string buffer and ask _NSGetExecutablePath()
// to fill it with the executable path
char exepath[bufsize];
_NSGetExecutablePath(exepath, &bufsize);

const char *proName = basename(exepath);
```
10,892
6,730,735
Last weekend (16 July 2011) our Mercurial packages auto-updated to the newest 1.9 Mercurial binaries using the mercurial-stable PPA on an Ubuntu Lucid. Now pulling from a repository over SSH no longer works. The following error is displayed:

```
remote: Traceback (most recent call last):
remote:   File "/usr/share/mercurial-server/hg-ssh", line 86, in <module>
remote:     dispatch.dispatch(['-R', repo, 'serve', '--stdio'])
remote:   File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 31, in dispatch
remote:     if req.ferr:
remote: AttributeError: 'list' object has no attribute 'ferr'
abort: no suitable response from remote hg!
```

In the Mercurial 1.9 [upgrade notes](https://www.mercurial-scm.org/wiki/UpgradeNotes#A1.9:_minor_changes.2C_drop_experimental_parentdelta_format) there is an 'interesting' note:

```
contrib/hg-ssh from older Mercurial releases will not be compatible with version 1.9, please update your copy.
```

Does somebody have an idea how to upgrade the mercurial-server package (if a new version already exists)? Or do we need to upgrade something else? (New Python scripts?) If there is no new version yet of the necessary packages, how do we downgrade to the previous 1.7.5 (Ubuntu Lucid)?

Any help is really appreciated, as our development processes are really slowed down by this fact. :S

Thanks
2011/07/18
[ "https://Stackoverflow.com/questions/6730735", "https://Stackoverflow.com", "https://Stackoverflow.com/users/619789/" ]
Ok, I found a workaround by editing a Python script.

Edit the script /usr/share/mercurial-server/hg-ssh. At the end of the script, replace the line:

```
dispatch.dispatch(['-R', repo, 'serve', '--stdio'])
```

with the line:

```
dispatch.dispatch(dispatch.request(['-R', repo, 'serve', '--stdio']))
```

Also replace:

```
dispatch.dispatch(['init', repo])
```

with the line:

```
dispatch.dispatch(dispatch.request(['init', repo]))
```

This works for us. Hopefully this saves somebody else from burning 4 hours of work googling and learning the basics of Python. :S
More recent versions of mercurial-server are updated to support the API changes, but **may require** the `refresh-auth` script to be rerun for installations being upgraded.
10,893
56,145,426
I ran `pip3 install detect-secrets`, but running `detect-secrets` then gives "Command not found". I also tried variations, for example the `--user` switch, `sudo`, and even `pip` rather than `pip3`; also with an underscore in the name. I further added all directories shown in `python3.6 -m site` to my `PATH` (Ubuntu 18.04). Retrying the installation command shows that the package was successfully installed.

`find . -name detect-secrets` (also `detect_secrets`) shows these in `./.local/bin/detect-secrets` and `./home/user/.local/lib/python3.6/site-packages/detect_secrets`.

None of these gave me access to the executable. How do I run it?
2019/05/15
[ "https://Stackoverflow.com/questions/56145426", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39242/" ]
You can use a `TreeMap` to handle the duplicates and keep the total sum corresponding to your input. Using a `TreeMap` also means your output will be sorted, if that is a requirement.

```java
public static void main(String[] args) {
    String[] names = {"a", "b", "a", "a", "c", "b"};
    Integer[] numbers = {5, 2, 3, 1, 2, 1};
    Map<String, Integer> map = new TreeMap<>();
    for (int i = 0; i < names.length; i++) {
        if (!map.containsKey(names[i])) {
            map.put(names[i], 0);
        }
        map.put(names[i], map.get(names[i]) + numbers[i]);
    }
    for (String key : map.keySet()) {
        System.out.println(key + " = " + map.get(key));
    }
}
```

The output for the above code will be (as pointed out by @vincrichaud):

```
a = 9
b = 3
c = 2
```
You can use the `Map` interface. Use your String array `names` as the source of keys. Write a for loop over it, and inside: if the map already contains the key, get the value and sum it with the new value.

PS: I couldn't write code right now; I can update it later.
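The described approach (aggregate by key, summing values) can be sketched as follows; shown in Python rather than Java, with the sample data borrowed from the other answer, since this reply did not include code:

```python
# Sample data mirroring the other answer
names = ["a", "b", "a", "a", "c", "b"]
numbers = [5, 2, 3, 1, 2, 1]

totals = {}
for name, number in zip(names, numbers):
    # If the key already exists, add to its value; otherwise start at number
    totals[name] = totals.get(name, 0) + number

print(totals)  # {'a': 9, 'b': 3, 'c': 2}
```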
10,894
32,342,761
In my code pasted below (which is Python 3 code) I expected the for loop to change the original objects (i.e. I expected NSTEPx to have been changed by the for loop). Since lists and arrays are mutable, I should have been able to edit the object by referring to it through the variable `data`. However, after this code was run and I inspected NSTEPx, it was not changed. Can someone explain why this is? I come from a C++ background, and the idea of mutable and immutable objects is something whose nuances I am only recently understanding, or so I thought. Here is the code:

```
NSTEPx = np.array(NSTEPx)
TIMEx = np.array(TIMEx)
TEMPx = np.array(TEMPx)
PRESSx = np.array(PRESSx)
Etotx = np.array(Etotx)
EKtotx = np.array(EKtotx)
EPtotx = np.array(EPtotx)
VOLUMEx = np.array(VOLUMEx)

alldata = [NSTEPx, TIMEx, TEMPx, PRESSx, Etotx, EKtotx, EPtotx]

for data in alldata:
    temp = data[1001:-1]
    data = np.insert(data, 0, temp)
    data = np.delete(data, np.s_[1001:-1])
```
2015/09/02
[ "https://Stackoverflow.com/questions/32342761", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5286344/" ]
In your loop, `data` refers to an array (some object). The object referred to is mutable. The variable `data` can be changed as well to refer to something else, but that won't change what's in `alldata` (values that refer to objects) or the variables whose contents you implicitly copied to construct `alldata`. Hence, all you change is a local variable (implicitly copied from `alldata`) to refer to a newly created array. Any other referring values are unchanged and still refer to the old array.
Python has **no** assignment! `data = value` is strictly a *binding* operation, not an assignment. This is really different from, e.g., C++.

A Python variable is like a label, or a yellow sticky note: you can put it on *something* or move it to something else; it does not (**ever**) change the *thing* (object) it is on. The `=` operator moves the label; it "binds" it. Although we usually say *assign*, it is really not the assignment of C (where it basically means writing to a memory address).

To change a value in Python, you need a method: `aLabel.do_update()` will (typically) change *self*, the object itself. Note that `aList[...] = ...` is a method call too!

So, to change your data: change it in place. Do not put another label on it, nor put the existing label on other data!

Hope this explains your question.
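Both answers come down to the same point: `data = ...` rebinds the loop variable, while an in-place operation mutates the object that every name refers to. A minimal sketch with plain lists (the same principle applies to NumPy arrays as long as the in-place update keeps the shape, e.g. `data[:] = ...`; for the size-changing `np.insert`/`np.delete` in the question you would instead assign back by index, `alldata[i] = np.insert(...)`):

```python
alldata = [[1, 2, 3], [4, 5, 6]]

# Rebinding: only the local name 'data' moves; alldata is untouched
for data in alldata:
    data = [0, 0, 0]

print(alldata)  # [[1, 2, 3], [4, 5, 6]]

# In-place mutation via slice assignment: the original objects change
for data in alldata:
    data[:] = [x * 2 for x in data]

print(alldata)  # [[2, 4, 6], [8, 10, 12]]
```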
10,902
1,381,739
First of all, thank you for taking the time to read this. I am new to developing applications for the Mac and I am having some problems. My application works fine, and that is not the focus of my question. Rather, I have a Python program which essentially does this:

```
for i in values:
    os.system(java program_and_options[i])
```

However, every time my program executes the Java program, a Java window is created in my Dock (with an annoying animation) and, most importantly, steals the focus of my mouse and keyboard. Then it goes away a second later, to be replaced by another Java instance. This means that my batch program cannot be used while I am interacting with my Mac, because I get a hiccup every second or more often and cannot get anything done.

My problem is that the act of displaying something in the Dock takes my focus, and I would like it not to. Is there a setting on OS X to never display something in the Dock (such as Java or Python)? Is there a Mac setting or term that I should use to properly describe this problem? I completely lack the vocabulary to describe it, and I hope I make sense. I appreciate any help.

I am running Mac OS X, Version 10.5.7 with a 1.66 GHz Intel Core Duo, 2 GB memory, Macintosh HD. I am running Python 2.5.1, java version "1.5.0_16", Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_16-b06-284), Java HotSpot(TM) Client VM (build 1.5.0_16-133, mixed mode, sharing).

Thanks again,

-Brian J. Stinar-
2009/09/04
[ "https://Stackoverflow.com/questions/1381739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Does running Java with headless mode = true fix it? <http://zzamboni.org/brt/2007/12/07/disable-dock-icon-for-java-programs-in-mac-osx-howto/>
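The headless flag can be added straight into the Python loop from the question. A minimal sketch; the program names below are placeholders, and `-Djava.awt.headless=true` must come before the class or `-jar` argument:

```python
def java_headless_command(program_and_options):
    """Build a java invocation that shouldn't create a Dock icon or steal focus."""
    return "java -Djava.awt.headless=true " + program_and_options

# e.g. inside the batch loop (hypothetical arguments):
for options in ["MyBatchTool input1.dat", "MyBatchTool input2.dat"]:
    command = java_headless_command(options)
    print(command)  # would be passed to os.system(command)
```

Note this only helps if the Java program never actually opens a window; AWT/Swing calls raise a HeadlessException in headless mode.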
As far as I am aware there is no way to disable the annoying double Java bounce without making your Java application a first class citizen on Mac OS X (much like NetBeans, or Eclipse). As for making certain programs not show in the dock, there are .plist modifications that can be made so that the program does not show up in the dock. See <http://www.macosxhints.com/article.php?story=20010701191518268>
10,903
47,314,905
I imported a class from a module under a name, and then imported it again without a name, and both seem to work fine and give the same class type.

```
>>> from collections import Counter as c
>>> c
<class 'collections.Counter'>
>>> from collections import Counter
>>> Counter
<class 'collections.Counter'>
```

How does that work in Python? Is it a single object that both names reference? Also, why wasn't the previously imported name overwritten or removed?

*I'm not sure about the terminology either.*
2017/11/15
[ "https://Stackoverflow.com/questions/47314905", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3950422/" ]
Using Python 2.7.13:

```
>>> from collections import Counter as c
>>> c
<class 'collections.Counter'>
>>> from collections import Counter
>>> Counter
<class 'collections.Counter'>
>>> id(c), id(Counter)
(140244739511392, 140244739511392)
>>> id(c) == id(Counter)
True
```

Yes, `c` and `Counter` are the same: two variables (names) that reference the same object.
As I remember, everything you define in Python is an object that belongs to a class. And yes, if one variable has been assigned some value and you create another variable with the same value, Python won't necessarily create a new object for the second variable; it can reuse the first variable's object (as happens with small integers in the example below).

For example:

```
>>> a = 10
>>> id(a)
2001255152
>>> b = 20
>>> id(b)
2001255472
>>> c = 10
>>> id(c)
2001255152
>>>
```

I may not explain it in the best way, but I hope my example does.
10,905
24,584,441
I'm trying to execute an operation on each file found by find with a specific file extension (wma). For example, in Python, I would simply write the following script:

```
for file in os.listdir('.'):
    if file.endswith('wma'):
        name = file[:-4]
        command = "ffmpeg -i '{0}.wma' '{0}.mp3'".format(name)
        os.system(command)
```

I know I need to execute something similar to

```
find -type f -name "*.wma" \ exec ffmpeg -i {}.wma {}.mp3;
```

But obviously this isn't working or else I wouldn't be asking this question =]
2014/07/05
[ "https://Stackoverflow.com/questions/24584441", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2014591/" ]
Try using the `setMainView` method:

```
class IndexController extends ControllerBase
{
    public function onConstruct()
    {
    }

    public function indexAction()
    {
        return $this->view->setMainView("login/login");
    }
}
```

The `setMainView` method is used to set the default view; just pass the view name as the parameter. <http://docs.phalconphp.com/en/latest/api/Phalcon_Mvc_View.html>
Remove the `return` keyword. I believe it is fetching the view you want and then returning it into the base template.
10,907
22,949,270
Is there a way to switch off (and on again later) this check at runtime? The motivation is that I need to use third-party libraries which do not care about mixing tabs and spaces, and thus running my code with the [`-t` switch](https://docs.python.org/2/using/cmdline.html#cmdoption-t "switch") issues warnings. (I hope that an analogous method can be used for the `-b` switch.)

**edit:** I forgot to note that the library already mixes tabs and spaces in one file, and that's why I see the warnings.
2014/04/08
[ "https://Stackoverflow.com/questions/22949270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/542196/" ]
Taking a straight percentage of views doesn't give an accurate representation of the item's popularity, either. Although 9 likes out of 18 is "stronger" than 9 likes out of 500, the fact that one video got 500 views and the other got only 18 is a much stronger indication of the video's popularity. A video that gets a lot of views usually means that it's very popular across a wide range of viewers. That it only gets a small percentage of likes or dislikes is usually a secondary consideration. A video that gets a small number of views and a large number of likes is usually an indication of a video that's very narrowly targeted. If you want to incorporate views in the equation, I would suggest multiplying the Bayesian average you get from the likes and dislikes by the logarithm of the number of views. That should sort things out pretty well. Unless you want to go with multi-factor ranking, where likes, dislikes, and views are each counted separately and given individual weights. The math is more involved and it takes some tweaking, but it tends to give better results. Consider, for example, that people will often "like" a video that they find mildly amusing, but they'll only "dislike" if they find it objectionable. A dislike is a much stronger indication than a like.
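The suggestion above (a Bayesian average of the like ratio, scaled by the logarithm of the view count) can be sketched as follows; the prior parameters here are illustrative assumptions, not values from the answer:

```python
import math

def popularity_score(likes, dislikes, views,
                     prior_votes=10, prior_mean=0.5):
    """Bayesian average of the like ratio, scaled by log of views.

    prior_votes/prior_mean are assumed smoothing parameters: with few
    votes the ratio is pulled toward prior_mean.
    """
    bayesian_avg = (prior_mean * prior_votes + likes) / (prior_votes + likes + dislikes)
    return bayesian_avg * math.log10(views + 1)

# 9 likes / 9 dislikes out of 18 views vs. the same ratio out of 500 views:
print(popularity_score(9, 9, 18))   # small audience
print(popularity_score(9, 9, 500))  # same ratio, far more views, higher score
```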
A simple approach would be to come up with a suitable scale factor for each average and then sum the "weights". The difficult part would be tweaking the scale factors to produce the desired ordering. From your example data, a starting point might be something like:

```
Weighted Rating = (AV * (1 / 50)) + (AL * 3) - (AD * 6)
```

Key & Explanation
-----------------

**AV** = Average views per day: *5000 is high so divide by 50 to bring the weight down to 100 in this case.*

**AL** = Average likes per day: *100 in 3 days = 33.33 is high so multiply by 3 to bring the weight up to 100 in this case.*

**AD** = Average dislikes per day: *10,000 seems an extreme value here - would agree with Jim Mischel's point that dislikes may be more significant than likes so am initially going with a negative scale factor of twice the size of the "likes" scale factor.*

This gives the following results (see [SQL Fiddle Demo](http://sqlfiddle.com/#!2/94d9a/4)):

```
ID  TITLE       SCORE
-----------------------------
3   Epic Fail   60.8
2   Silly Dog   4.166866
1   Funny Cat   1.396528
5   Trololool   -1.666766
4   Duck Song   -14950
```

[Am deliberately keeping this simple to present the idea of a starting point - but with real data you might find linear scaling isn't sufficient - in which case you could consider bandings or logarithmic scaling.]
10,910
73,668,351
I have connected an Arduino to a raspberry pi so that a specific event is triggered when I send a signal(in this case a number). When I send a number with the script and tell it just to print in serial monitor it works, when I try and just have it run the motors on start it works fine, however when combining the two: having it run a specific command if a particular number is received nothing happens. If anyone could point to the flaw here, I would be very grateful. Python Code: ```py import serial, time arduino = serial.Serial('/dev/ttyUSB0', 9600, timeout=1) cmd = '' while cmd != '0': cmd = input('Enter a cmd ') arduino.write(cmd.encode('ascii')) ``` Arduino Code: ```cpp #include <Arduino.h> const byte MOTOR_A = 3; // Motor 2 Interrupt Pin - INT 1 - Right Motor const byte MOTOR_B = 2; // Motor 1 Interrupt Pin - INT 0 - Left Motor // Constant for steps in disk const float stepcount = 20.00; // 20 Slots in disk, change if different // Constant for wheel diameter const float wheeldiameter = 66.10; // Wheel diameter in millimeters, change if different const float gear_ratio = 34; const float PPR = 12; // Integers for pulse counters volatile int counter_A = 0; volatile int counter_B = 0; // Motor A int enA = 10; int in1 = 9; int in2 = 8; // Motor B int enB = 5; int in3 = 7; int in4 = 6; // Interrupt Service Routines // Motor A pulse count ISR void ISR_countA() { counter_A++; // increment Motor A counter value } // Motor B pulse count ISR void ISR_countB() { counter_B++; // increment Motor B counter value } // Function to convert from centimeters to steps int CMtoSteps(float cm) { float circumference = (wheeldiameter * 3.14) / 10; // Calculate wheel circumference in cm return int(cm * gear_ratio * PPR / circumference); } // Function to Move Forward void MoveForward(int steps, int mspeed) { counter_A = 0; // reset counter A to zero counter_B = 0; // reset counter B to zero // Set Motor A forward digitalWrite(in1, HIGH); digitalWrite(in2, LOW); // Set Motor B forward 
digitalWrite(in3, HIGH); digitalWrite(in4, LOW); // Go forward until step value is reached while (steps > counter_A or steps > counter_B) { if (steps > counter_A) { analogWrite(enA, mspeed); } else { analogWrite(enA, 0); } if (steps > counter_B) { analogWrite(enB, mspeed); } else { analogWrite(enB, 0); } } // Stop when done analogWrite(enA, 0); analogWrite(enB, 0); counter_A = 0; // reset counter A to zero counter_B = 0; // reset counter B to zero } // Function to Move in Reverse void MoveReverse(int steps, int mspeed) { counter_A = 0; // reset counter A to zero counter_B = 0; // reset counter B to zero // Set Motor A reverse digitalWrite(in1, LOW); digitalWrite(in2, HIGH); // Set Motor B reverse digitalWrite(in3, LOW); digitalWrite(in4, HIGH); // Go in reverse until step value is reached while (steps > counter_A && steps > counter_B) { if (steps > counter_A) { analogWrite(enA, mspeed); } else { analogWrite(enA, 0); } if (steps > counter_B) { analogWrite(enB, mspeed); } else { analogWrite(enB, 0); } } // Stop when done analogWrite(enA, 0); analogWrite(enB, 0); counter_A = 0; // reset counter A to zero counter_B = 0; // reset counter B to zero } // Function to Spin Right void SpinRight(int steps, int mspeed) { counter_A = 0; // reset counter A to zero counter_B = 0; // reset counter B to zero // Set Motor A reverse digitalWrite(in1, LOW); digitalWrite(in2, HIGH); // Set Motor B forward digitalWrite(in3, HIGH); digitalWrite(in4, LOW); // Go until step value is reached while (steps > counter_A && steps > counter_B) { if (steps > counter_A) { analogWrite(enA, mspeed); } else { analogWrite(enA, 0); } if (steps > counter_B) { analogWrite(enB, mspeed); } else { analogWrite(enB, 0); } } // Stop when done analogWrite(enA, 0); analogWrite(enB, 0); counter_A = 0; // reset counter A to zero counter_B = 0; // reset counter B to zero } // Function to Spin Left void SpinLeft(int steps, int mspeed) { counter_A = 0; // reset counter A to zero counter_B = 0; // reset counter B to 
zero // Set Motor A forward digitalWrite(in1, HIGH); digitalWrite(in2, LOW); // Set Motor B reverse digitalWrite(in3, LOW); digitalWrite(in4, HIGH); // Go until step value is reached while (steps > counter_A && steps > counter_B) { if (steps > counter_A) { analogWrite(enA, mspeed); } else { analogWrite(enA, 0); } if (steps > counter_B) { analogWrite(enB, mspeed); } else { analogWrite(enB, 0); } } // Stop when done analogWrite(enA, 0); analogWrite(enB, 0); counter_A = 0; // reset counter A to zero counter_B = 0; // reset counter B to zero } void setup() { Serial.begin(9600); // Attach the Interrupts to their ISR's pinMode(MOTOR_A,INPUT); pinMode(MOTOR_B,INPUT); pinMode(in1,OUTPUT); pinMode(in2,OUTPUT); pinMode(in3,OUTPUT); pinMode(in4,OUTPUT); pinMode(enA,OUTPUT); pinMode(enB,OUTPUT); attachInterrupt(digitalPinToInterrupt (MOTOR_A), ISR_countA, RISING); // Increase counter A when speed sensor pin goes High attachInterrupt(digitalPinToInterrupt (MOTOR_B), ISR_countB, RISING); // Increase counter B when speed sensor pin goes High } void loop() { delay(100); int compareOne = 1; int compareTwo = 2; int compareThree = 3; if (Serial.available() > 0){ String stringFromSerial = Serial.readString(); if (stringFromSerial.toInt() == compareOne){ Serial.println("Forward"); MoveForward(CMtoSteps(50), 255); // Forward half a metre at 255 speed } if (stringFromSerial.toInt() == compareTwo){ Serial.println("Spin Right"); SpinRight(CMtoSteps(10), 255); // Right half a metre at 255 speed } if (stringFromSerial.toInt() == compareThree){ Serial.println("Spin Left"); SpinLeft(CMtoSteps(10), 255); // Right half a metre at 255 speed } else { Serial.println("Not equal"); } } Put whatever you want here! MoveReverse(CMtoSteps(25.4),255); // Reverse 25.4 cm at 255 speed } ``` UPDATE: I have changed the `loop` so that it compares ints instead of strings as per @GrooverFromHolland suggestion. Still, nothing happens when I input from python but it is printed in the serial monitor. 
My issue is why the motors spin when I trigger them directly in the loop for testing, but not when commanded via the serial monitor. As well as this, I have discovered that the interrupts are not working for some reason. Any help appreciated.
2022/09/09
[ "https://Stackoverflow.com/questions/73668351", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12527861/" ]
Since you're using the array's `firstIndex` method inside `func indexOfItem(_ item: Item) -> Int?`, the `Item` has to be an `Equatable` concrete type (behind the scenes, `firstIndex` compares each element of the array and returns the index of the matching element). There are two ways to do this: * First, use an `associatedtype` to keep your protocol generic ``` protocol Item: Equatable { var name: String { get } } protocol Container { associatedtype Item var items: [Item] { get } } struct MyItem: Item { var name: String } extension Container where Item == MyItem { func indexOfItem(_ item: Item) -> Int? { return items.firstIndex(of: item) } } ``` * Second, use an equatable concrete type `MyItem` instead of the protocol `Item` inside the `Container` protocol ``` protocol Item { var name: String { get } } protocol Container { var items: [MyItem] { get } } struct MyItem: Item, Equatable { var name: String } extension Container { func findIndex(of item: MyItem) -> Int? { return items.firstIndex(of: item) } } ```
Finally find simple enough solution: То make protocol generic with associated type and constraint this type to Equatable. ``` public protocol Container { associatedtype EquatableItem: Item, Equatable var items: [EquatableItem] {get} } public protocol Item { var name: String {get} } public extension Container { func indexOfItem(_ item: EquatableItem) -> Int? { items.firstIndex(of: item) } } ``` This compiles and now if I have some types ``` struct SomeContainer { var items: [SomeItem] } struct SomeItem: Item, Equatable { var name: String } ``` I only need to resolve associatedtype to provide protocol conformance for SomeContainer type: ``` extension SomeContainer: Container { typealias EquatableItem = SomeItem } ```
10,916
59,796,680
I recently moved to a place with terrible internet connection. Ever since then I have been having huge issues getting my programming environments set up with all the tools I need - you don't realize how many things you need to download until each one of those things takes over a day. For this post I would like to try to figure out how to deal with this in pip. **The Problem** Almost every time I `pip install` something it ends out timing out somewhere in the middle. It takes many tries until I get lucky enough to have it complete without a time out. This happens with many different things I have tried, big or small. Every time an install fails the next time starts all over again from 0%, no matter how far I got before. I get something along the lines of ``` pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. ``` **What I want to happen** Ideally I would like to either extend the definition of time pip uses before it declares a timeout or be able to disable the option of a timeout all together. I am not sure either of these are possible, so if anyone has any other solution for me that would be greatly appreciated as well. **Other Information** Not sure this helps any but what I found is that the only reliable way for me to download anything here is using torrents, as they do not restart a download once they lose connection, rather they always continue where they left off. If there is a way to use this fact in any way that would be nice too.
2020/01/18
[ "https://Stackoverflow.com/questions/59796680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6036156/" ]
Use the option `--timeout <sec>` to set the socket timeout. Also, as @Iain Shelvington mentioned, `timeout = <sec>` in the [pip configuration](https://pip.pypa.io/en/stable/user_guide/#configuration) will also work. *TIP: Whenever you want to know something (such as an option) about a command, before googling, check its manual page with `man <command>`, use `<command> --help`, or read the command's docs online (often better than Google).*
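As a small illustration of the configuration-file route mentioned in the answers, here is a hedged sketch that generates such a file programmatically. The helper name `write_pip_config` is hypothetical, and the file is written to a temporary directory purely for demonstration — the real location pip reads from depends on your platform (e.g. `~\AppData\Roaming\pip\pip.ini` on Windows):

```python
import configparser
import os
import tempfile

def write_pip_config(directory, timeout_seconds):
    """Write a minimal pip config file setting a global socket timeout.

    Returns the path of the written file. 'pip.conf' is the Unix
    convention; on Windows the file is called 'pip.ini'.
    """
    config = configparser.ConfigParser()
    config["global"] = {"timeout": str(timeout_seconds)}
    name = "pip.ini" if os.name == "nt" else "pip.conf"
    path = os.path.join(directory, name)
    with open(path, "w") as fh:
        config.write(fh)
    return path

# Demo in a temporary directory so nothing real is touched.
demo_dir = tempfile.mkdtemp()
config_path = write_pip_config(demo_dir, 120)
with open(config_path) as fh:
    config_text = fh.read()
```

The resulting file contains a `[global]` section with the `timeout` key, matching the hand-written example in the answer above.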
To set the `timeout` to 30 seconds, for example, the easiest way is executing `pip config set global.timeout 30`, or editing the pip configuration file ***pip.ini***, located in the directory ***~\AppData\Roaming\pip*** on Windows. If the file does not exist there, create it and write: ``` [global] timeout = 30 ```
10,917
45,934,259
I am working on a simple project on PhpStorm and installed GAE plugin and SDK. Running a server and show the project works, but when I try to deploy my application I get this kind of error: (This is a PHP project) ``` C:\Python27\python.exe "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py" update . 10:08 AM Application: gtmdocx; version: None 10:08 AM Host: appengine.google.com Traceback (most recent call last): File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 133, in <module> run_file(__file__, globals()) File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 129, in run_file execfile(_PATHS.script_file(script_name), globals_) File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5518, in <module> main(sys.argv) File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5509, in main result = AppCfgApp(argv).Run() File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 2969, in Run self.action(self) File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5165, in __call__ return method() File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3897, in Update self._UpdateWithParsedAppYaml(appyaml, self.basepath) File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3918, in _UpdateWithParsedAppYaml updatecheck.CheckForUpdates() File "C:\Users\asim\AppData\Local\Google\Cloud 
SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\sdk_update_checker.py", line 245, in CheckForUpdates runtime=runtime)) File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appengine_rpc_httplib2.py", line 246, in Send url, method=method, body=payload, headers=headers) File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1626, in request (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey) File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1368, in _request (response, content) = self._conn_request(conn, request_uri, method, body, headers) File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1288, in _conn_request conn.connect() File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1082, in connect raise SSLHandshakeError(e) httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581) Process finished with exit code 1 ``` I've tried to uninstall and upgrade Python, now I'm using 2.7.9 but still this error wont remove. I tried also removing `cacerts.txt` but still no luck still this problem ``` ttplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581) ``` I hope anyone has encountered this problem before and can help me with this. Here is my App.yaml file: ``` runtime: php55 api_version: 1 threadsafe: true service: default application: gtmdocx handlers: - url: .* script: main.php login: admin ```
2017/08/29
[ "https://Stackoverflow.com/questions/45934259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6428568/" ]
The traceback indicates the failure happens when trying to check for SDK updates, so you *should* be able to work around it by using `appcfg.py`'s `--skip_sdk_update_check` option. I'm not using the PHP SDK, but I found a similar failure in the SDK upgrade check for the python development server, my solution for that could be applicable in your case as well. See [Google App Engine SSL Certificate Error](https://stackoverflow.com/questions/43221963/google-app-engine-ssl-certificate-error/43233424#43233424).
If it is really an SSL handshake error, check whether the machine you are using is behind a firewall. If it is, you might have to ask your network team to open up the network. Alternatively, you can try to get onto a network that is not behind a firewall. I might be wrong, but I have been in this situation.
10,918
42,081,376
I have exactly opposite issue described [here](https://stackoverflow.com/q/11489330/2215679). In my case I have: logging.py ``` import logging log = logging.getLogger(..) ``` I got this error: ``` AttributeError: 'module' object has no attribute 'getLogger' ``` This happens only on project with python 2.7 run under Pyramid framework. When I run it in another project, python 3.6 without any framework it works perfect. PS. there is a [similar issue](https://stackoverflow.com/questions/5299199/python-importing-a-global-site-packages-module-rather-than-the-file-of-the-sam), but it is different case, in my case it is global package that is not present in any `sys.path` folder. So none of solutions from that question worked for me. Please don't mark this issue as duplicated.
2017/02/07
[ "https://Stackoverflow.com/questions/42081376", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2215679/" ]
I found solution, just putting: ``` from __future__ import absolute_import ``` on top of the file will resolve the issue. source: [https://docs.python.org/2/library/**future**.html](https://docs.python.org/2/library/__future__.html) As you may see, in python 3>= absolute import is by default
> It is better to rename your local file so it does not share a name with a builtin module.
10,926
44,395,941
OK, so I am currently messing around coding hangman in python and was wondering if I can clear what it says in the python shell as I don't just wan't the person to read the word. ``` import time keyword = input(" Please enter the word you want the person to guess") lives = int(input("How many lives would you like to have?")) print ("There are ", len(keyword), "letters in the word") time.sleep(2) guess = input("please enter your guess") ``` I would like to remove all the text in the shell.
2017/06/06
[ "https://Stackoverflow.com/questions/44395941", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8020756/" ]
if you are a windows user use this: ``` import os os.system("cls") ``` Mac/linux then : ``` import os os.system("clear") ```
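Combining both platform cases, a small cross-platform helper can pick the right command at runtime. This is only a sketch; the `dry_run` flag is a hypothetical addition so the logic can be exercised without actually clearing a terminal:

```python
import os
import subprocess

def clear_command():
    """Return the console-clearing command for the current platform."""
    return "cls" if os.name == "nt" else "clear"

def clear_screen(dry_run=False):
    """Clear the terminal; with dry_run=True just report the command used."""
    cmd = clear_command()
    if dry_run:
        return cmd
    # shell=True is needed because 'cls' is a cmd.exe builtin on Windows.
    return subprocess.call(cmd, shell=True)
```

Calling `clear_screen()` from a script run in a real terminal then clears it on either platform.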
Try this: ``` import subprocess import time tmp=subprocess.call('clear', shell=True) # 'cls' in windows keyword = input(" Please enter the word you want the person to guess") lives = int(input("How many lives would you like to have?")) print ("There are ", len(keyword), "letters in the word") time.sleep(2) ``` Save the code in a python file. Then execute it from shell.
10,927
67,611,765
``` i = SomeIndex() while mylist[i] is not None: if mylist[i] == name: return foo() i+=1 ``` I want foo() to always run on 1st iteration of loop, if mylist[i] isn't 'name', but never run if its any iteration but the first. I know I could the following, but I don't know if it's the most efficient and prettiest python code: ``` i = SomeIndex() FirstIter = True while mylist[i] is not None: if mylist[i] == name: return if FirstIter: foo() FirstIter = False i+=1 ```
2021/05/19
[ "https://Stackoverflow.com/questions/67611765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14725111/" ]
Let's "pythonize" your example, step by step. **1. Remove the `first_index` flag:** ``` start_idx = SomeIndex() i = start_idx while mylist[i] is not None: if mylist[i] == name: return if i == start_idx: foo() i += 1 ``` **2. Convert to `while True`:** ``` start_idx = SomeIndex() i = start_idx while True: if mylist[i] is not None: break if mylist[i] == name: return if i == start_idx: foo() i += 1 ``` **3. Convert to `for i in range` loop:** ``` start_idx = SomeIndex() for i in range(start_idx, len(mylist)): if mylist[i] is not None: break if mylist[i] == name: return if i == start_idx: foo() ``` Note that this also fixes a bug by bounding `i < len(mylist)`. --- Variations ---------- These are not equivalent to your code sample, but they might be relevant patterns for you to use. **Variation 1:** Find the index where `mylist[i] == name`: ``` index = next( ( i for i in range(start_idx, len(mylist)) if mylist[i] == name ), None, # If no index is found, set to None. ) ```
You are trying to emulate a do-while, take a look at [this question](https://stackoverflow.com/questions/743164/how-to-emulate-a-do-while-loop) if you want. Since there is no do-while equivalent in Python, the simple idea is to move the first iteration out of the loop ``` i = SomeIndex() foo() while mylist[i] is not None: if mylist[i] == name: return foo() i+=1 ``` Also, the way you iterate over the list is unnecessarily complex, you can simplify it like: ``` i = SomeIndex() if myList[i] != name: foo() for element in myList[i+1:]: if element == name: return foo() ```
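For completeness, a self-contained sketch of the first-iteration pattern from the question, with a bounds check added alongside the `None` sentinel. The `foo` here is a hypothetical stand-in that just records its calls so the behaviour can be verified:

```python
calls = []

def foo():
    # Stand-in for the real foo(); records that it ran.
    calls.append("foo")

def scan(mylist, start, name):
    """Scan from `start` until a None sentinel (or end of list); call foo()
    only on the first iteration, and stop early if `name` is found."""
    first = True
    i = start
    while i < len(mylist) and mylist[i] is not None:
        if mylist[i] == name:
            return i
        if first:
            foo()
            first = False
        i += 1
    return None

found = scan(["a", "b", "target", None], 0, "target")
missed = scan(["a", "b", None], 0, "target")
```

Each call to `scan` runs `foo()` at most once, regardless of how many elements are visited.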
10,930
827,557
I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days. I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL. I'm trying to use the re python module to validate off of the RFC 3986 Reg-ex (<http://www.ietf.org/rfc/rfc3986.txt>) Below is a snipped which *should* work? ``` p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?') m = p.match(url) if m: self.url = url return url ```
2009/05/06
[ "https://Stackoverflow.com/questions/827557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/91163/" ]
`urlparse` quite happily takes invalid URLs, it is more a string string-splitting library than any kind of validator. For example: ``` from urlparse import urlparse urlparse('http://----') # returns: ParseResult(scheme='http', netloc='----', path='', params='', query='', fragment='') ``` Depending on the situation, this might be fine.. If you mostly trust the data, and just want to verify the protocol is HTTP, then `urlparse` is perfect. If you want to make the URL is actually a legal URL, use [the ridiculous regex](https://stackoverflow.com/questions/827557/how-do-you-validate-a-url-with-a-regular-expression-in-python/835527#835527) If you want to make sure it's a real web address, ``` import urllib try: urllib.urlopen(url) except IOError: print "Not a real URL" ```
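The answer above was written against Python 2's `urlparse` module. As a hedged sketch of the same split-then-inspect idea on Python 3 (where the function lives in `urllib.parse`), checking that the scheme is HTTP(S) and a network location is present:

```python
from urllib.parse import urlparse

def looks_like_http_url(url):
    """Loose check: scheme is http(s) and a network location is present.

    This does not validate against RFC 3986; it only rejects strings
    that urlparse cannot split into a scheme and host at all.
    """
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

good = looks_like_http_url("http://www.ietf.org/rfc/rfc3986.txt")
bad = looks_like_http_url("not a url")
```

As the answer notes, this still accepts syntactically odd hosts like `http://----`; use the full regex or an actual request if you need stricter validation.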
The regex provided should match any URL of the form <http://www.ietf.org/rfc/rfc3986.txt>, and it does when tested in the Python interpreter. What format did the URLs you've been having trouble parsing have?
10,934
55,381,039
I am trying to get a dynamic text displayed in the system tray (this will be 2 numbers (from 1 to 100) changing every 2 minutes). I found this [script](http://code.activestate.com/recipes/475155-dynamic-system-tray-icon-wxpython/) as a starting point (but I am not commited to it!). But I get this error : ``` TypeError: Image.SetData(): arguments did not match any overloaded call: overload 1: argument 1 has unexpected type 'str' overload 2: argument 1 has unexpected type 'str' OnInit returned false, exiting... ``` The relevant part of the code is: ``` def Get(self,l,r): s=""+self.s_line for i in range(5): if i<(5-l): sl = self.sl_off else: sl = self.sl_on if i<(5-r): sr = self.sr_off else: sr = self.sr_on s+=self.s_border+sl+self.s_point+sr+self.s_point s+=self.s_border+sl+self.s_point+sr+self.s_point s+=self.s_line image = wx.EmptyImage(16,16) image.SetData(s) bmp = image.ConvertToBitmap() bmp.SetMask(wx.Mask(bmp, wx.WHITE)) #sets the transparency colour to white icon = wx.EmptyIcon() icon.CopyFromBitmap(bmp) return icon ``` I add to update the script by adding `import wx.adv` and by replacing the 2 `wx.TaskBarIcon` by `wx.adv.TaskBarIcon`. I am on Windows 10 with Python 3.6
2019/03/27
[ "https://Stackoverflow.com/questions/55381039", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3154274/" ]
I think this issue was occurring due to using the OpenJDK and not the OracleJDK. I am no longer having this issue since changing the project SDK to the OracleJDK, so if anyone else ever has this issue in the future... that may be the fix.
* Be sure to see also the Swing/Seesaw section [from the Clojure Cookbook](https://github.com/clojure-cookbook/clojure-cookbook/blob/master/04_local-io/4-25_seesaw/4-25_making-a-window.asciidoc) * [The newer fn/fx lib](https://github.com/fn-fx/fn-fx) for using JavaFX from Clojure.
10,944
50,431,371
I am trying to create a python program that uses user input in an equation. When I run the program, it gives this error code, "answer = ((((A\*10**A)\*\*2)**(B\*C))\*D\*\*E) TypeError: unsupported operand type(s) for \*\* or pow(): 'int' and 'str'". My code is: ``` import cmath A = input("Enter a number for A: ") B = input("Enter a number for B: ") C = input("Enter a number for C: ") D = input("Enter a number for D: ") E = input("Enter a number for E: ") answer = ((((A*10**A)**2)**(B*C))*D**E) print(answer)` ```
2018/05/20
[ "https://Stackoverflow.com/questions/50431371", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6754577/" ]
The [`input()`](https://docs.python.org/3/library/functions.html#input) function returns a string value: you need to convert to a number using `Decimal`: ``` from decimal import Decimal A = Decimal(input("Enter a number for A: ")) # ... etc ``` But your user might enter something that isn't a decimal number, so you might want to do some checking: ``` from decimal import Decimal, InvalidOperation def get_decimal_input(variableName): x = None while x is None: try: x = Decimal(input('Enter a number for ' + variableName + ': ')) except InvalidOperation: print("That's not a number") return x A = get_decimal_input('A') B = get_decimal_input('B') C = get_decimal_input('C') D = get_decimal_input('D') E = get_decimal_input('E') print((((A * 10 ** A) ** 2) ** (B * C)) * D ** E) ```
The compiler thinks your inputs are of string type. You can wrap each of A, B, C, D, E with float() to cast the input into float type, provided you're actually inputting numbers at the terminal. This way, you're taking powers of float numbers instead of strings, which python doesn't know how to handle. ``` A = float(input("Enter a number for A: ")) B = float(input("Enter a number for B: ")) C = float(input("Enter a number for C: ")) D = float(input("Enter a number for D: ")) E = float(input("Enter a number for E: ")) ```
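Building on both answers, separating the parsing step from `input()` makes the conversion logic testable without a terminal; a real program would loop re-prompting while the parse fails. This is only a sketch — `parse_number` is an illustrative name, not part of either answer:

```python
from decimal import Decimal, InvalidOperation

def parse_number(text):
    """Return a Decimal for `text`, or None if it isn't a valid number."""
    try:
        return Decimal(text)
    except InvalidOperation:
        return None

ok = parse_number("3.5")
bad = parse_number("abc")
```

A prompting wrapper would then be `while (x := parse_number(input("Enter a number: "))) is None: print("That's not a number")`.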
10,946
14,228,659
I can add the XML node using the ElementTree, but this returns the output in one single line instead of a tree structure look alike when I open the xml file in text format. I also tried using the minidom.toprettyxml but I do not know how to add the output to original XML. Since I would like the script to be reproducible in other environments, I prefer not using external libraries such as lxml. Can someone please help how I can pretty print the output? - python 2.7 The Sample XML. This is how it looks both in text format and Explorer. ``` <?xml version="1.0" encoding="utf-8"?> <default_locators > <locator_ref> <name>cherry</name> <display_name>cherrycherry</display_name> <workspace_properties> <factory_progid>Workspace</factory_progid> <path>InstallDir</path> </workspace_properties> </locator_ref> </default_locators> ``` Expected Output in both text format and Explorer. ``` <?xml version="1.0" encoding="utf-8"?> <default_locators > <locator_ref> <name>cherry</name> <display_name>cherrycherry</display_name> <workspace_properties> <factory_progid>Workspace</factory_progid> <path>InstallDir</path> </workspace_properties> </locator_ref> <locator_ref> <name>berry</name> <display_name>berryberry</display_name> <workspace_properties> <factory_progid>Workspace</factory_progid> <path>C:\temp\temp</path> </workspace_properties> </locator_ref> </default_locators> ``` My script ``` #coding: cp932 import xml.etree.ElementTree as ET tree = ET.parse(r"C:\DefaultLocators.xml") root = tree.getroot() locator_ref = ET.SubElement(root, "locator_ref") name = ET.SubElement(locator_ref, "name") name.text = " berry" display_name = ET.SubElement(locator_ref, "display_name") display_name.text = "berryberry" workspace_properties = ET.SubElement(locator_ref, "workspace_properties") factory_progid = ET.SubElement(workspace_properties,"factory_progid") factory_progid.text = "Workspace" path = ET.SubElement(workspace_properties, "path") path.text = r"c:\temp\temp" 
tree.write(r"C:\DefaultLocators.xml", encoding='utf-8') ``` Returned output. After running my script, new nodes are added to my sample.xml file, but it returns output in one single line, with all newlines and indents removed from the original sample.xml file. At least thats how it looks when I open the sample.xml file in text format. However, When I open the sample.xml file in Explorer, it looks fine. I still see the newlines and indents as they were before. How can I keep the original tree structure in text format even after running the script? ``` <default_locators> <locator_ref> <name>cherry</name> <display_name>cherrycherry</display_name> <workspace_properties> <factory_progid>Workspace</factory_progid> <path>InstallDir</path> </workspace_properties> </locator_ref> <locator_ref><name> berry</name><display_name>berryberry</display_name><workspace_properties><factory_progid>Workspace</factory_progid><path>c:\temp\temp</path></workspace_properties></locator_ref></default_locators> ```
2013/01/09
[ "https://Stackoverflow.com/questions/14228659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1027101/" ]
When creating each element, you can set `element.tail = '\n'`; the tail text is written after that element's closing tag, so each element then ends up on its own line in the output.
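To make the tail trick concrete, here is a minimal sketch using a fragment of the question's document (the `encoding="unicode"` form is Python 3; on Python 2 `ET.tostring(root)` returns a byte string):

```python
import xml.etree.ElementTree as ET

root = ET.Element("default_locators")
locator = ET.SubElement(root, "locator_ref")
name = ET.SubElement(locator, "name")
name.text = "berry"

# .tail controls what is emitted *after* each element's closing tag,
# which is exactly where the missing newlines go.
name.tail = "\n"
locator.tail = "\n"

serialized = ET.tostring(root, encoding="unicode")
```

New subelements appended later just need their `.tail` set the same way before `tree.write(...)` to keep the text-format output from collapsing onto one line.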
write your xml in elementTree as: ``` import xml.etree.ElementTree as ET def serialize_xml(write, elem, encoding, qnames, namespaces): tag = elem.tag text = elem.text if tag is ET.Comment: write("<!--%s-->" % _encode(text, encoding)) elif tag is ET.ProcessingInstruction: write("<?%s?>" % _encode(text, encoding)) else: tag = qnames[tag] if tag is None: if text: write(_escape_cdata(text, encoding)) for e in elem: serialize_xml(write, e, encoding, qnames, None) else: write("\n<" + tag) ## '\n' added by namit items = elem.items() if items or namespaces: if namespaces: for v, k in sorted(namespaces.items(), key=lambda x: x[1]): # sort on prefix if k: k = ":" + k write(" xmlns%s=\"%s\"" % ( k.encode(encoding), _escape_attrib(v, encoding) )) for k, v in sorted(items): # lexical order if isinstance(k, QName): k = k.text if isinstance(v, QName): v = qnames[v.text] else: v = _escape_attrib(v, encoding) write(" %s=\"%s\"" % (qnames[k], v)) if text or len(elem): write(">") if text: write(ET._escape_cdata(text, encoding)) for e in elem: serialize_xml(write, e, encoding, qnames, None) write("</" + tag + ">") else: write(" />") if elem.tail: write(ET._escape_cdata(elem.tail, encoding)) ET._serialize_xml=serialize_xml tree = ET.parse(r"samplexml.xml") root = tree.getroot() locator_ref = ET.SubElement(root, "locator_ref") name = ET.SubElement(locator_ref, "name") name.text = " berry" display_name = ET.SubElement(locator_ref, "display_name") display_name.text = "berryberry" workspace_properties = ET.SubElement(locator_ref, "workspace_properties") factory_progid = ET.SubElement(workspace_properties,"factory_progid") factory_progid.text = "WorkspaceFactory" path = ET.SubElement(workspace_properties, "path") ins_out=open("samplexml_1.xml",'wb',1000) ET.ElementTree(locator_ref).write(ins_out,encoding="ASCII") ins_out.close() ``` this will write complete file in single line; without adding white space in xml tail.
10,950
2,565,415
What is the right way to forming in-memory table in python with direct lookups for rows and columns. I thought of using dict of dicts this way, ``` class Table(dict): def __getitem__(self, key): if key not in self: self[key]={} return dict.__getitem__(self, key) table = Table() table['row1']['column1'] = 'value11' table['row1']['column2'] = 'value12' table['row2']['column1'] = 'value21' table['row2']['column2'] = 'value22' >>>table {'row1':{'column1':'value11','column2':'value12'},'row2':{'column1':'value21','column2':'value22'}} ``` I had difficulty in looking up for values in columns. ``` >>>'row1' in table True >>>'value11' in table['row1'].values() True ``` Now how do I do lookup if `'column1'` has `'value11'` Is this method of forming tables wrong? Is there a better way to implement such tables with easier lookups?.
2010/04/02
[ "https://Stackoverflow.com/questions/2565415", "https://Stackoverflow.com", "https://Stackoverflow.com/users/231295/" ]
I'd use an [in-memory database](http://www.sqlite.org/inmemorydb.html) with [SQLite](http://docs.python.org/library/sqlite3.html) for this. The sqlite module is even in the standard library since Python 2.5, which means this doesn't even add much to your requirements.
A nested list should be able to do the job here. I would only use nested dictionaries if elements are spread thin across the grid. ``` grid = [] for row in xrange(height): grid.append([]) for cell in xrange(width): grid[-1].append(value) ``` Checking rows is easy: ``` def valueInRow(value, row): return value in grid[row] ``` Checking columns takes a little more work, because the grid is a list of rows, not a list of columns: ``` def columnIterator(column): height = len(grid) for row in xrange(height): yield grid[row][column] def valueInColumn(value, column): return value in columnIterator(column) ```
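A runnable Python 3 version of the nested-list idea, rolled into helper functions (the names are illustrative, not from the question):

```python
def make_grid(height, width, value=None):
    """Build a height x width grid as a list of row lists."""
    return [[value for _ in range(width)] for _ in range(height)]

def value_in_row(grid, value, row):
    return value in grid[row]

def value_in_column(grid, value, column):
    # The grid is a list of rows, so walk one cell per row.
    return any(grid[row][column] == value for row in range(len(grid)))

grid = make_grid(3, 4)
grid[1][2] = "x"
```

Row lookup is a plain membership test; column lookup visits one cell per row, so both stay O(n) in the grid's dimensions.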
10,952
6,184,079
Similar questions have been asked, but I have not come across an easy-to-do-it way We have some application logs of various kinds which fill up the space and we face other unwanted issues. How do I write a monitoring script(zipping files of particular size, moving them, watching them, etc..) for this maintenance? I am looking for a simple solution(as in what to use?), if possible in python or maybe just a shell script. Thanks.
2011/05/31
[ "https://Stackoverflow.com/questions/6184079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/294714/" ]
The "standard" way of doing this (atleast on most Gnu/Linux distros) is to use [logrotate](http://www.linuxcommand.org/man_pages/logrotate8.html). I see a `/etc/logrotate.conf` on my Debian machine which has details on which files to rotate and at what frequency. It's triggered by a daily cron entry. This is what I'd recommend. If you want your application itself to do this (which is a pain really since it's not it's job), you could consider writing a custom [log handler](http://docs.python.org/library/logging.handlers.html#module-logging.handlers). A RotatingFileHandler (or TimedRotatingFileHandler) might work but you can write a custom one. Most systems are by default set up to automatically rotate log files which are emitted by syslog. You might want to consider using the SysLogHandler and logging to syslog (from all your apps regardless of language) so that the system infrastructure automatically takes care of things for you.
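If you do decide to let the application rotate its own logs (against the advice above), the stdlib handler mentioned can be exercised like this — a sketch with deliberately tiny size limits so the rollover is easy to trigger; real applications would use megabytes, and the temp-directory paths are just for the demo:

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "app.log")

# Rotate after ~200 bytes, keeping at most 2 old files
# (app.log.1 and app.log.2).
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=200, backupCount=2)
logger = logging.getLogger("rotation_demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

for i in range(50):
    logger.info("message %d fills up the log file", i)

handler.close()
rotated = sorted(os.listdir(log_dir))
```

After the loop, the directory holds the active `app.log` plus the rotated backups, and older rollovers beyond `backupCount` have been discarded automatically.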
Use [logrotate](http://linuxcommand.org/man_pages/logrotate8.html) to do the work for you. Remember that there are few cases where it **may not work properly**, for example if the logging application keeps the log file always open and is not able to resume it if the file is removed and recreated. Over the years I encountered few applications like that, but even for them you could configure logrotate to restart them when it rotates the logs.
10,957
54,446,492
I have a requirement where I have to trigger a dataset in a blob to my python code where processing will happen and then store the processed dataset to the blob? Where should I do it? Any notebooks? Azure functions dont have an option to write a Python code. Any help would be appreciated.
2019/01/30
[ "https://Stackoverflow.com/questions/54446492", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9668890/" ]
The difference here is *really* subtle, and can only *easily* be appreciated in IL: ``` class MyBuilder1 { private MySynchronizer m_synchronizer = new MySynchronizer(); public MyBuilder1() { } } ``` gives us the constructor: ``` .method public hidebysig specialname rtspecialname instance void .ctor () cil managed { // Method begins at RVA 0x2050 // Code size 18 (0x12) .maxstack 8 IL_0000: ldarg.0 IL_0001: newobj instance void MySynchronizer::.ctor() IL_0006: stfld class MySynchronizer MyBuilder1::m_synchronizer IL_000b: ldarg.0 IL_000c: call instance void [mscorlib]System.Object::.ctor() IL_0011: ret } // end of method MyBuilder1::.ctor ``` where-as this: ``` class MyBuilder2 { private MySynchronizer m_synchronizer; public MyBuilder2() { m_synchronizer = new MySynchronizer(); } } ``` gives us: ``` // Methods .method public hidebysig specialname rtspecialname instance void .ctor () cil managed { // Method begins at RVA 0x2063 // Code size 18 (0x12) .maxstack 8 IL_0000: ldarg.0 IL_0001: call instance void [mscorlib]System.Object::.ctor() IL_0006: ldarg.0 IL_0007: newobj instance void MySynchronizer::.ctor() IL_000c: stfld class MySynchronizer MyBuilder2::m_synchronizer IL_0011: ret } // end of method MyBuilder2::.ctor ``` The difference is simply one of ordering: * field initializers (`MyBuilder1`) happen *before* the base-type constructor call (`object` is the base here; `call instance void [mscorlib]System.Object::.ctor()` is the base-constructor call) * constructors happen *after* the base-type constructor call In most cases, **this won't matter**. Unless your base-constructor invokes a virtual method that the derived type overrides: then whether or not the field has a value in the overridden method will be different between the two.
I almost always choose the second option (initializing inside the constructor). In my view it keeps your code more readable, and the control logic lives inside the constructor, which gives more flexibility to add logic in the future. But again, it is only my personal opinion.
10,958
60,538,059
I am trying to download MNIST data in PyTorch using the following code: ``` train_loader = torch.utils.data.DataLoader( datasets.MNIST('data', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=128, shuffle=True) ``` and it gives the following error. ```py Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to data/MNIST/raw/train-images-idx3-ubyte.gz 0it [00:00, ?it/s] --------------------------------------------------------------------------- HTTPError Traceback (most recent call last) <ipython-input-2-2fee284dabb8> in <module>() 5 transform=transforms.Compose([ 6 transforms.ToTensor(), ----> 7 transforms.Normalize((0.1307,), (0.3081,)) 8 ])), 9 batch_size=128, shuffle=True) 11 frames /usr/lib/python3.6/urllib/request.py in http_error_default(self, req, fp, code, msg, hdrs) 648 class HTTPDefaultErrorHandler(BaseHandler): 649 def http_error_default(self, req, fp, code, msg, hdrs): --> 650 raise HTTPError(req.full_url, code, msg, hdrs, fp) 651 652 class HTTPRedirectHandler(BaseHandler): HTTPError: HTTP Error 403: Forbidden ``` How do I solve this? The notebook was working before, I'm trying to rerun it but I got this error.
2020/03/05
[ "https://Stackoverflow.com/questions/60538059", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4848812/" ]
This is a new bug, reported here: <https://github.com/pytorch/vision/issues/1938> See that thread for some potential workarounds until the issue is fixed in pytorch itself.
My workaround is: run a simple program on your local machine to download the MNIST dataset via the `torchvision.datasets` module, save a copy on your machine with `pickle`, and upload it to your Google Drive. It's not a proper fix, but it is a viable and affordable workaround; hope it helps somehow
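A minimal sketch of that save/load step. The `torchvision` download is shown only as a comment (it is an assumption about your local setup), and a plain dict stands in for the downloaded dataset object:

```python
import os
import pickle
import tempfile

# On your local machine you would first download the data, e.g.
# (requires torchvision; shown here only as a comment):
#     from torchvision import datasets
#     train_set = datasets.MNIST('data', train=True, download=True)
# A plain dict stands in for the downloaded dataset object.
train_set = {"images": [[0] * 784], "labels": [5]}

path = os.path.join(tempfile.gettempdir(), "mnist_copy.pkl")

# Save a copy that can then be uploaded to Google Drive...
with open(path, "wb") as f:
    pickle.dump(train_set, f)

# ...and later loaded back in the notebook instead of re-downloading.
with open(path, "rb") as f:
    restored = pickle.load(f)
```

In the Colab notebook you would then mount the Drive and point the load step at the uploaded file instead of calling the downloader.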
10,961
23,080,960
Here I'm trying to create a pie chart using the **matplotlib** Python library. But the dates overlap if the value "0.0" occurs multiple times. My question is how I can display them separately. Thanks. ![enter image description here](https://i.stack.imgur.com/mBL5o.png) This is what I tried: ``` from pylab import * labels = [ "05-02-2014", "23-02-2014","07-02-2014","08-02-2014"] values = [0, 0, 2, 10] fig = plt.figure(figsize=(9.0, 6.10)) plt.pie(values, labels=labels, autopct='%1.1f%%', shadow=True) plt.axis('equal') show() ```
2014/04/15
[ "https://Stackoverflow.com/questions/23080960", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3270800/" ]
You can adjust the label positions manually, although that results in a bit more code than you would want for such a simple request. You can detect groups of duplicate labels by examining the positions at which they are placed. Here is an example with some random data replicating the occurrence of overlapping labels: ``` import matplotlib.pyplot as plt import numpy as np from collections import Counter import datetime # number slices of pie num = 10 # generate some labels dates = [datetime.datetime(2014,1,1) + datetime.timedelta(days=np.random.randint(1,20)) for i in range(num)] labels = [d.strftime('%d-%m-%Y') for d in dates] # generate some values values = np.random.randint(2,10, num) # force half of them to be zero mask = np.random.choice(num, num // 2, replace=False) values[mask] = 0 # pick some colors colors = plt.cm.Blues(np.linspace(0,1,num)) fig, ax = plt.subplots(figsize=(9.0, 6.10), subplot_kw={'aspect': 1}) wedges, labels, pcts = ax.pie(values, colors=colors, labels=labels, autopct='%1.1f%%') # find duplicate labels and the amount of duplicates c = Counter([l.get_position() for l in labels]) dups = {key: val for key, val in c.items() if val > 1} # degrees of spacing between duplicate labels offset = np.deg2rad(3.) 
# loop over any duplicate 'position' for pos, n in dups.items(): # select all labels with that position dup_labels = [l for l in labels if l.get_position() == pos] # calculate the angle with respect to the center of the pie theta = np.arctan2(pos[1], pos[0]) # get the offsets offsets = np.linspace(-(n-1) * offset, (n-1) * offset, n) # loop over the duplicate labels for l, off in zip(dup_labels, offsets): lbl_radius = 1.3 # calculate the new label positions newx = lbl_radius * np.cos(theta + off) newy = lbl_radius * np.sin(theta + off) l.set_position((newx, newy)) # rotate the label rot = np.rad2deg(theta + off) # adjust the rotation so its # never upside-down if rot > 90: rot += 180 elif rot < -90: rot += 180 # rotate and highlight the adjusted labels l.set_rotation(rot) l.set_ha('center') l.set_color('#aa0000') ``` I purposely only modified the overlapping labels to highlight the effect, but you could alter all labels in a similar way to create a uniform styling. The rotation makes it easier to automatically space them, but you could try alternate ways of placement. Note that it only detect truly equal placements, if you would have values of `[0, 0.00001, 2, 10]`, they would probably still overlap. ![enter image description here](https://i.stack.imgur.com/tQqs2.png)
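The duplicate-detection and spreading logic at the heart of this answer can be illustrated with the standard library alone. The positions below are hypothetical stand-ins for the label coordinates matplotlib would report:

```python
import math
from collections import Counter

# Hypothetical label positions; the two zero-value labels share a spot.
positions = [(1.0, 0.0), (0.0, 1.0), (0.0, 1.0), (-1.0, 0.0)]

# Count how many labels landed on each position.
c = Counter(positions)
dups = {pos: n for pos, n in c.items() if n > 1}

offset = math.radians(3.0)  # angular spacing between duplicate labels
new_angles = {}
for pos, n in dups.items():
    # Angle of the shared position with respect to the pie's center.
    theta = math.atan2(pos[1], pos[0])
    # Same spread as np.linspace(-(n-1)*offset, (n-1)*offset, n).
    offs = [(-(n - 1) + 2 * i) * offset for i in range(n)]
    new_angles[pos] = [theta + off for off in offs]
```

Each group of `n` coinciding labels ends up fanned out symmetrically around its original angle, which is exactly what the matplotlib loop above then converts back into label coordinates.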
I am not sure if there is a way to adjust "labeldistance" for every element, but I could solve this in a tricky way: I added `explode=(0, 0.1, 0, 0)` ``` from pylab import * labels = ["05-02-2014", "23-02-2014", "07-02-2014", "08-02-2014"] values = [0, 0, 2, 10] explode = (0, 0.1, 0, 0) fig = plt.figure(figsize=(9.0, 6.10)) patches, texts = plt.pie(values, explode=explode, labels=labels, startangle=90, radius=0.5) plt.axis('equal') plt.show() ``` **UPDATE** This works for me; you should update pylab
10,962
58,841,308
I need a domain validator and an email validator, i.e., to validate that both exist. The company I'm servicing has a website that validates this for them, ensuring they won't send email to a nonexistent mailbox. It would be for an email marketing action anyway. They have something basic in Excel, but they want a service running directly against a list of information (batched or transactional) so that it checks by lot, speeding up the process. It is work very similar to what this [site](https://tools.verifyemailaddress.io) does. I would like to develop something similar in Python instead. I would like to know if such work is feasible and, if so, whether anyone could give me some references.
2019/11/13
[ "https://Stackoverflow.com/questions/58841308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9403338/" ]
There is a [documented](https://learn.microsoft.com/graph/api/channel-get-filesfolder?view=graph-rest-1.0&tabs=http) navigational property of the Channel resource called `filesFolder`. From the Graph v1.0 endpoint: ```xml <EntityType Name="channel" BaseType="microsoft.graph.entity"> <Property Name="displayName" Type="Edm.String"/> <Property Name="description" Type="Edm.String"/> <Property Name="isFavoriteByDefault" Type="Edm.Boolean"/> <Property Name="email" Type="Edm.String"/> <Property Name="webUrl" Type="Edm.String"/> <Property Name="membershipType" Type="microsoft.graph.channelMembershipType"/> <NavigationProperty Name="messages" Type="Collection(microsoft.graph.chatMessage)" ContainsTarget="true"/> <NavigationProperty Name="chatThreads" Type="Collection(microsoft.graph.chatThread)" ContainsTarget="true"/> <NavigationProperty Name="tabs" Type="Collection(microsoft.graph.teamsTab)" ContainsTarget="true"/> <NavigationProperty Name="members" Type="Collection(microsoft.graph.conversationMember)" ContainsTarget="true"/> <NavigationProperty Name="filesFolder" Type="microsoft.graph.driveItem" ContainsTarget="true"/> </EntityType> ``` You can call this using this template: ``` /v1.0/teams/{teamId}/channels/{channelId}/filesFolder ``` This will return the Drive associated with a Private Channel: ```json { "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#teams('{teamsId}')/channels('{channelId}')/filesFolder/$entity", "id": "{id}", "createdDateTime": "0001-01-01T00:00:00Z", "lastModifiedDateTime": "2019-11-13T16:49:13Z", "name": "Private", "webUrl": "https://{tenant}.sharepoint.com/sites/{team}-Private/Shared%20Documents/{channel}", "size": 0, "parentReference": { "driveId": "{driveId}", "driveType": "documentLibrary" }, "fileSystemInfo": { "createdDateTime": "2019-11-13T16:49:13Z", "lastModifiedDateTime": "2019-11-13T16:49:13Z" }, "folder": { "childCount": 0 } } ```
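For illustration, building that call from Python with only the standard library might look like the sketch below. The team/channel IDs and the bearer token are placeholders (real values come from your tenant and an OAuth flow), and the request is only constructed here, not sent:

```python
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def files_folder_request(team_id, channel_id, token):
    """Build (but do not send) a GET request for a channel's filesFolder."""
    url = f"{GRAPH_BASE}/teams/{team_id}/channels/{channel_id}/filesFolder"
    # Graph expects an OAuth bearer token in the Authorization header.
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )

# Placeholder IDs/token for demonstration only.
req = files_folder_request("TEAM_ID", "CHANNEL_ID", "TOKEN")
```

Sending it would just be `urllib.request.urlopen(req)` (or any HTTP client of your choice) once a valid token is in place.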
Currently /filesFolder for Private Channels returns BadGateway
10,963
27,554,484
I'm trying to use theano but I get an error when I import it. I've installed cuda\_6.5.14\_linux\_64.run, and passed all the recommended test in Chapter 6 of [this](http://developer.download.nvidia.com/compute/cuda/6_5/rel/docs/CUDA_Getting_Started_Linux.pdf) NVIDIA PDF. Ultimately I want to be able to install pylearn2, but I get the exact same error as below when I try to compile it. EDIT1: My theanorc looks like: ``` [cuda] root = /usr/local/cuda-6.5 [global] device = gpu floatX=float32 ``` If I replace gpu with cpu, the command import theano succeeds. ``` Python 2.7.8 |Anaconda 1.9.0 (64-bit)| (default, Aug 21 2014, 18:22:21) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2 Type "help", "copyright", "credits" or "license" for more information. Anaconda is brought to you by Continuum Analytics. Please check out: http://continuum.io/thanks and https://binstar.org Imported NumPy 1.9.1, SciPy 0.14.0, Matplotlib 1.3.1 Type "scientific" for more details. >>> import theano Using gpu device 0: GeForce GTX 750 Ti Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/g/anaconda/lib/python2.7/site-packages/theano/__init__.py", line 92, in <module> theano.sandbox.cuda.tests.test_driver.test_nvidia_driver1() File "/home/g/anaconda/lib/python2.7/site-packages/theano/sandbox/cuda/tests/test_driver.py", line 28, in test_nvidia_driver1 profile=False) File "/home/g/anaconda/lib/python2.7/site-packages/theano/compile/function.py", line 223, in function profile=profile) File "/home/g/anaconda/lib/python2.7/site-packages/theano/compile/pfunc.py", line 512, in pfunc on_unused_input=on_unused_input) File "/home/g/anaconda/lib/python2.7/site-packages/theano/compile/function_module.py", line 1312, in orig_function defaults) File "/home/g/anaconda/lib/python2.7/site-packages/theano/compile/function_module.py", line 1181, in create _fn, _i, _o = self.linker.make_thunk(input_storage=input_storage_lists) File 
"/home/g/anaconda/lib/python2.7/site-packages/theano/gof/link.py", line 434, in make_thunk output_storage=output_storage)[:3] File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/vm.py", line 847, in make_all no_recycling)) File "/home/g/anaconda/lib/python2.7/site-packages/theano/sandbox/cuda/__init__.py", line 237, in make_thunk compute_map, no_recycling) File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/op.py", line 606, in make_thunk output_storage=node_output_storage) File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/cc.py", line 948, in make_thunk keep_lock=keep_lock) File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/cc.py", line 891, in __compile__ keep_lock=keep_lock) File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/cc.py", line 1322, in cthunk_factory key=key, fn=self.compile_cmodule_by_step, keep_lock=keep_lock) File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/cmodule.py", line 996, in module_from_key module = next(compile_steps) File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/cc.py", line 1237, in compile_cmodule_by_step preargs=preargs) File "/home/g/anaconda/lib/python2.7/site-packages/theano/sandbox/cuda/nvcc_compiler.py", line 444, in compile_str return dlimport(lib_filename) File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/cmodule.py", line 284, in dlimport rval = __import__(module_name, {}, {}, [module_name]) ImportError: ('The following error happened while compiling the node', GpuCAReduce{add}{1}(<CudaNdarrayType(float32, vector)>), '\n', '/home/g/.theano/compiledir_Linux-3.11.0-26-generic-x86_64-with-debian-wheezy-sid-x86_64-2.7.8-64/tmpWYqQw5/7173b40d34b57da0645a57198c96dbcc.so: undefined symbol: __fatbinwrap_66_tmpxft_00004bf1_00000000_12_cuda_device_runtime_compute_50_cpp1_ii_5f6993ef', '[GpuCAReduce{add}{1}(<CudaNdarrayType(float32, vector)>)]') ```
2014/12/18
[ "https://Stackoverflow.com/questions/27554484", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2423116/" ]
I encountered exactly the same problem. My solution was to replace cuda-6.5 with cuda-5.5, and everything works fine.
We also saw this error. We found that putting /usr/local/cuda-6.5/bin in $PATH seemed to fix it (even with the root = ... line in .theanorc).
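A sketch of that `$PATH` change; the CUDA 6.5 paths match the question, so adjust them to your own install:

```shell
# Make the CUDA toolkit's binaries (nvcc etc.) visible to Theano's
# compiler step; adjust the version/path to your install.
export PATH=/usr/local/cuda-6.5/bin:$PATH
# The runtime libraries are usually needed as well.
export LD_LIBRARY_PATH=/usr/local/cuda-6.5/lib64:${LD_LIBRARY_PATH:-}

# nvcc should now resolve (prints nothing if CUDA is not installed).
command -v nvcc || true
```

Putting these lines in `~/.bashrc` (or the environment of whatever launches Python) makes the fix persistent across sessions.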
10,966
61,264,563
When I import numpy and pandas in Jupyter it gives an error; the same happens in Spyder, but in Spyder it works after starting a new kernel. ``` import numpy as np ``` --- ``` NameError Traceback (most recent call last) <ipython-input-1-0aa0b027fcb6> in <module> ----> 1 import numpy as np ~\numpy.py in <module> 1 from numpy import* 2 ----> 3 arr = array([1,2,3,4]) NameError: name 'array' is not defined ```
2020/04/17
[ "https://Stackoverflow.com/questions/61264563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13287554/" ]
This shows a `NameError`, which is caused by the line `arr = array([1,2,3,4])`. You should try something like `arr = np.array([1,2,3,4])` instead.
Try this: ``` arr=np.array([1,2,3,4]) ```
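Note that the traceback in the question runs through `~\numpy.py`, i.e. a local file that shadows the installed package, which is why importing numpy executes that line at all. The shadowing mechanism can be reproduced with the standard library alone (the module name `shadow_demo` here is made up for the demonstration):

```python
import os
import sys
import tempfile

d = tempfile.mkdtemp()
# A local file whose name collides with a module we want to import.
with open(os.path.join(d, "shadow_demo.py"), "w") as f:
    f.write("VALUE = 'local file wins'\n")

# Mimics the script's own directory coming first on the import path.
sys.path.insert(0, d)
import shadow_demo

print(shadow_demo.VALUE)  # the local file is what actually got imported
```

The same effect makes `import numpy` run a user's own `numpy.py`, so renaming that file (and deleting its `.pyc`) also resolves the error at its root.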
10,967
73,230,522
Hi I am new to python and I have a simple question, I have a list consisting of some user info and I want to know how can I write a program to find and update some of that info. ``` user_list = [ {'name': 'Alizom_12', 'gender': 'f', 'age': 34, 'active_day': 170}, {'name': 'Xzt4f', 'gender': None, 'age': None, 'active_day': 1152}, {'name': 'TomZ', 'gender': 'm', 'age': 24, 'active_day': 15}, {'name': 'Zxd975', 'gender': None, 'age': 44, 'active_day': 752}, ] ``` what I did for finding a user is the following but I want to change it to display the info of a user rather than just printing the user exists: ``` def find_user(user_name): for items in user_list: if items['name'] == user_name: return f'{user_name} exists. ' return f'{user_name} does not exists' ``` Also for updating the user info: ``` def update_user_info(user_name, **kw): user_list.update({'name': name, 'gender': gender, 'age': age, 'active_day': active_day}) return user_list ``` ``` print(find_user('Alizom_12')) update_user_info('Alizom_12', **{'age': 29}) print(find_user('Alizom_12')) ```
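A possible corrected sketch of the two helpers, keeping the question's data layout: `find_user` returns the whole info dict instead of a message, and `update_user_info` applies only the keyword fields that are actually passed:

```python
user_list = [
    {'name': 'Alizom_12', 'gender': 'f', 'age': 34, 'active_day': 170},
    {'name': 'Xzt4f', 'gender': None, 'age': None, 'active_day': 1152},
]

def find_user(user_name):
    """Return the user's info dict, or None if no such user exists."""
    for user in user_list:
        if user['name'] == user_name:
            return user
    return None

def update_user_info(user_name, **kw):
    """Update only the fields supplied as keyword arguments."""
    user = find_user(user_name)
    if user is not None:
        user.update(kw)  # dict.update merges just the given keys
    return user

update_user_info('Alizom_12', age=29)
print(find_user('Alizom_12'))
```

Because `find_user` returns a reference to the dict inside `user_list`, mutating it with `update` changes the list entry in place, which is what the question's `update_user_info` was aiming for.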
2022/08/04
[ "https://Stackoverflow.com/questions/73230522", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19686631/" ]
Even if you accepted the remote version, you still created a merge commit, which basically records that the changes you made are integrated into the branch. The merge commit will have two parents: the commit you pulled and your local one. This new commit needs pushing. You'll see the commit when you inspect the log using `git log` or your preferred visual tool for inspecting the commit history.
If you haven't set `rebase = true` in `.gitconfig`, please set it up like this: ``` [pull] rebase = true ``` When you have conflicts you should resolve them and then force-push: ``` git push -f ```
10,972
16,536,101
I read this on Python tutorial: (<http://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-files>) > > Python on Windows makes a distinction between text and binary files; > the end-of-line characters in text files are automatically altered slightly > when data is read or written. This behind-the-scenes modification to file > data is fine for ASCII text files, but it’ll corrupt binary data like that > in JPEG or EXE files. Be very careful to use binary mode when reading and > writing such files. > > > I don't quite understand how 'end-of-line characters in text files are altered' will 'corrupt binary data'. Because I feel binary data don't have such things like end-of-line. Can somebody explain more of this paragraph for me? It's making me feel like Python doesn't welcome binary files.
2013/05/14
[ "https://Stackoverflow.com/questions/16536101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1769958/" ]
You just have to take care to open files on Windows as binary (`open(filename, "rb")`) and not as text files. After that there is no problem using the data. In particular, the end-of-line on Windows is `'\r\n'`. If you read a binary file as a text file and write it back out, then single `'\n'` characters are transformed into `'\r\n'` sequences. If you open the files as binary (for reading and for writing) there is **no** such problem. Python is capable of dealing with binary data, and you would have to take this kind of care in any language on Windows systems, not just in Python (but the developers of Python are friendly enough to warn you of possible OS problems). On systems like Linux, where the end-of-line is a single character, this distinction exists as well, but it is less likely to cause a problem when reading/writing binary data as text (i.e. without the `b` option when opening files).
> > I feel binary data don't have such things like end-of-line. > > > Binary files can have ANY POSSIBLE character in them, including the character \n. You do not want python implicitly converting any characters in a binary file to something else. Python has no idea it is reading a binary file unless you tell it so. And when python reads a text file it automatically converts any \n character to the OS's newline character, which on Windows is \r\n. That is the way things work in all computer programming languages. Another way to think about it is: a file is just a long series of bytes (8 bits). A byte is just an integer. And a byte can be any integer. If a byte happens to be the integer 10, that is also the ascii code for the character \n. If the bytes in the file represent binary data, you don't want Python to read in 10 and convert it to two bytes: 13 and 10. Usually when you read binary data, you want to read, say, the first 2 bytes which represents a number, then the next 4 bytes which represent another number, etc.. Obviously, if python suddenly converts one of the bytes to two bytes, that will cause two problems: 1) It alters the data, 2) All your data boundaries will be messed up. An example: suppose the first byte of a file is supposed to represent a dog's weight, and the byte's value is 10. Then the next byte is supposed to represent the dog's age, and its value is 1. If Python converts the 10, which is the ascii code for \n, to two bytes: 10 and 13, then the data python hands you will look like: 10 13 1 And when you extract the second byte for the dog's age, you get 13--not 1. We often say a file contains 'characters' but that is patently false. Computers cannot store characters; they can only store numbers. So a file is just a long series of numbers. If you tell python to treat those numbers as ascii codes, which represent characters, then python will give you text.
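The dog example can be reproduced in a few lines. This sketch uses Python 3, where the Windows-style translation can be forced on any OS via the `newline` argument of `open`:

```python
import os
import tempfile

data = bytes([10, 1])  # dog's weight 10 (also the code for '\n'), age 1
path = os.path.join(tempfile.gettempdir(), "dog.bin")

# Binary mode: the two bytes are written verbatim.
with open(path, "wb") as f:
    f.write(data)
size_binary = os.path.getsize(path)  # 2 bytes

# Text mode with Windows-style translation forced: '\n' -> '\r\n'.
with open(path, "w", newline="\r\n", encoding="ascii") as f:
    f.write("\n\x01")
size_text = os.path.getsize(path)  # 3 bytes: the single byte 10 became 13, 10
```

The extra byte is exactly the corruption described above: the "weight" byte was rewritten as two bytes, and every later field shifted by one position.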
10,975
60,882,099
I have a redhat server with docker installed I want to create a docker image in which I want to run django with MySQL but the problem is django is unable to connect to MySQL server(remote server). I'm getting following error: ``` Plugin caching_sha2_password could not be loaded: /usr/lib/x86_64-linux-gnu/mariadb19/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory ``` I googled it and found that libraries does not support 'caching\_sha2\_password'. Can anyone suggest me which distro have libraries that support 'caching\_sha2\_password'? Thanks in advance. P.S. I don't have access to MySQL server so any change in server side is not in my hand. UPDATED: Dockerfile: ``` FROM python:3.7.4-stretch COPY code/ /code/ WORKDIR /code RUN apt-get update RUN apt-get -y upgrade RUN pip install -r requirements.txt EXPOSE 8000 RUN python manage.py migrate CMD python manage.py runserver ``` Error: ``` Step 8/9 : RUN python manage.py migrate ---> Running in a907f2d6dce6 Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 220, in ensure_connection self.connect() File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner return func(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 197, in connect self.connection = self.get_new_connection(conn_params) File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner return func(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/django/db/backends/mysql/base.py", line 233, in get_new_connection return Database.connect(**conn_params) File "/usr/local/lib/python3.7/site-packages/MySQLdb/__init__.py", line 84, in Connect return Connection(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/MySQLdb/connections.py", line 179, in __init__ super(Connection, self).__init__(*args, **kwargs2) 
MySQLdb._exceptions.OperationalError: (2059, "Authentication plugin 'caching_sha2_password' cannot be loaded: /usr/lib/x86_64-linux-gnu/mariadb18/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory") The above exception was the direct cause of the following exception: Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line utility.execute() File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 395, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 328, in run_from_argv self.execute(*args, **cmd_options) File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 366, in execute self.check() File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 395, in check include_deployment_checks=include_deployment_checks, File "/usr/local/lib/python3.7/site-packages/django/core/management/commands/migrate.py", line 63, in _run_checks issues = run_checks(tags=[Tags.database]) File "/usr/local/lib/python3.7/site-packages/django/core/checks/registry.py", line 72, in run_checks new_errors = check(app_configs=app_configs) File "/usr/local/lib/python3.7/site-packages/django/core/checks/database.py", line 10, in check_database_backends issues.extend(conn.validation.check(**kwargs)) File "/usr/local/lib/python3.7/site-packages/django/db/backends/mysql/validation.py", line 9, in check issues.extend(self._check_sql_mode(**kwargs)) File "/usr/local/lib/python3.7/site-packages/django/db/backends/mysql/validation.py", line 13, in _check_sql_mode with self.connection.cursor() as cursor: File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner return func(*args, 
**kwargs) File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 260, in cursor return self._cursor() File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 236, in _cursor self.ensure_connection() File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner return func(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 220, in ensure_connection self.connect() File "/usr/local/lib/python3.7/site-packages/django/db/utils.py", line 90, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 220, in ensure_connection self.connect() File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner return func(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 197, in connect self.connection = self.get_new_connection(conn_params) File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner return func(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/django/db/backends/mysql/base.py", line 233, in get_new_connection return Database.connect(**conn_params) File "/usr/local/lib/python3.7/site-packages/MySQLdb/__init__.py", line 84, in Connect return Connection(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/MySQLdb/connections.py", line 179, in __init__ super(Connection, self).__init__(*args, **kwargs2) django.db.utils.OperationalError: (2059, "Authentication plugin 'caching_sha2_password' cannot be loaded: /usr/lib/x86_64-linux-gnu/mariadb18/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory") The command '/bin/sh -c python manage.py migrate' returned a non-zero code: 1 ``` Requirement.txt: ``` django==3.0.4 django-environ==0.4.5 bcrypt==3.1.7 mysqlclient==1.4.6 
psycopg2==2.8.4 PyMySQL==0.9.3 ```
2020/03/27
[ "https://Stackoverflow.com/questions/60882099", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10386411/" ]
The primary reason is simplicity. The existing rule is easy to understand (you clearly understand it) and easy to implement. The data-flow analysis required (to distinguish between acceptable and unacceptable uses in general) is complex and not normally necessary for a compiler, so it was thought a bad idea to require it of compilers. Another consideration is Ada's compilation rules. If `Proc` passes `X` to another subprogram declared in another package, the data-flow analysis would require the body of that subprogram, but Ada requires that it be possible to compile `Proc` without the body of the other package. Finally, the only time\* you'll ever need access-to-object types is if you need to declare a large object that won't fit on the stack, and in that case you won't need `access all` or `'access`, so you won't have to deal with this. \*True as a 1st-order approximation (probably true at 2nd- and 3rd-order, too)
In Ada, when you try to think about accessibility, you have to do it in terms of access types instead of variables. There's no lifetime analysis of variables (contrarily to what Rust does, I think). So, what's the worst that could happen? If your pointer type level is less than the target variable level, accessibility checks will fail because the pointer *might* outlive the target. I'm not sure what goes on with anonymous access types, but that's a whole different mess from what I pick here and there. Some people recommend not using them at all for variables.
10,978
11,450,649
I'm having a really tough time with getting the results page of this url with python's urllib2: ``` http://www.google.com/search?tbs=sbi:AMhZZitAaz7goe6AsfVSmFw1sbwsmX0uIjeVnzKHjEXMck70H3j32Q-6FApxrhxdSyMo0OedyWkxk3-qYbyf0q1OqNspjLu8DlyNnWVbNjiKGo87QUjQHf2_1idZ1q_1vvm5gzOCMpChYiKsKYdMywOLjJzqmzYoJNOU2UsTs_1zZGWjU-LsjdFXt_1D5bDkuyRK0YbsaLVcx4eEk_1KMkcJpWlfFEfPMutxTLGf1zxD-9DFZDzNOODs0oj2j_1KG8FRCaMFnTzAfTdl7JfgaDf_1t5Vti8FnbeG9i7qt9wF6P-QK9mdvC15hZ5UR29eQdYbcD1e4woaOQCmg8Q1VLVPf4-kf8dAI7p3jM_1MkBBwaxdt_1TsM4FLwh0oHAYKOS5qBRI28Vs0aw5_1C5-WR4dC902Eqm5eAkLiQyAM9J2bioR66g3tMWe-j9Hyh1ID40R1NyXEJDHcGxp7xOn_16XxfW_1Cq5ArdSNzxFvABb1UcXCn5s4_1LpXZxhZbauwaO8cg3CKGLUvl_1wySDB7QIkMIF2ZInEPS4K-eyErVKqOdY9caYUD8X7oOf6sDKFjT7pNHwlkXiuYbKBRYjlvRHPlcPN1WHWCJWdSNyXdZhwDI3VRaKwmi4YNvkryeNMMbhGytfvlNaaelKcOzWbvzCtSNaP2lJziN1x3btcIAplPcoZxEpb0cDlQwId3A5FDhczxpVbdRnOB-Xeq_1AiUTt_1iI6bSgUAinWXQFYWveTOttdSNCgK-VTxV4OCtlrCrZerk27RBLAzT0ol9NOfYmYhiabzhUczWk4NuiVhKN-M4eo76cAsi74PY4V_1lWjvOpI35V_1YLJQrm0fxVcD34wxFYCIllT2gYW09fj3cuBDMNbsaJqPVQ04OOGlwmcmJeAnK96xd_1aMUd6FsVLOSDS7RfS5MNUSyd1jnXvRU_1MF_1Dj8oC8sm7PfVdjm3firiMcaKM28j9kGWbY0heIGLtO_1m6ad-iKfxYEzSux2b5w62LQlP57yS7vX8RFoyKzHA0RrFIEbPBQdNMA3Vpw0G_1LvEjCAPSCV1HH1pDp0l4EnNCvUIAppVXzNMyWT_1gKITj1NLqAn-Z1tH323JwZSc77OftDSreyHJ-BPxn3n7JMkNZFcQx6S7tfBxeqJ1NuDlpax11pw0_1Oi_1nF3vyEP0NbGKSVgNvBv_1tv8ahxvrHn9UnP78FleiOpzUBfdfRPZiT20VEq5-oXtV_1XwIzrd-5_15-cf2yoL7ohyPuv3WKGUGr4YCsYje7_1D8VslqMPsvbwMg9haj3TrBKH7go70ZfPjUv3h1K7lplnnCdV0hrYVQkSLUY1eEor3L--Vu5PlewS60ZH5YEn4qTnDxniV95h8q0Y3RWXJ6gIXitR5y6CofVg ``` I use the following headers, and this should be simple I would think: ``` headers = {'Host':'www.google.com','User-Agent':user_agent,'Accept-Language':'en-us,en;q=0.5','Accept-Encoding':'gzip, deflate','Accept-Charset':'ISO-8859-1,utf-8;q=0.7,*;q=0.7','Connection':'keep-alive','Referer':'http://www.google.co.in/imghp?hl=en&tab=ii','Cookie':'PREF=ID=1d7bc4ff2a5d8bc6:U=1d37ba5a518b9be1:FF=4:LD=en:TM=1300950025:LM=1302071720:S=rkk0IbbhxUIgpTyA; 
NID=51=uNq6mZ385WlV1UTfXsiWkSgnsa6PdjH4l9ph-vSQRszBHRcKW3VRJclZLd2XUEdZtxiCtl5hpbJiS3SpEV7670w_x738h75akcO6Viw47MUlpCZfy4KZ2vLT4tcleeiW; SID=DQAAAMEAAACoYm-3B2aiLKf0cRU8spJuiNjiXEQRyxsUZqKf8UXZXS55movrnTmfEcM6FYn-gALmyMPNRIwLDBojINzkv8doX69rUQ9-'} ``` When I do the following, I get a result that doesn't contain what any ordinary web browser returns: ``` request=urllib2.Request(url,,None,headers) response=urllib2.urlopen(request) html=response.read() ``` Similarly, this bit of code returns a bunch of hex junk I can't read: ``` request=urllib2.Request(url,headers=headers) response=urllib2.urlopen(request) html=response.read() ``` Please help, as I am quite sure this is simple enough, and I must just be missing something. I was able to get this link in a similar way, but also uploading an image to images.google.com using the following code: ``` import httplib, mimetypes, android, sys, urllib2, urllib, simplejson def post_multipart(host, selector, fields, files): """ Post fields and files to an http host as multipart/form-data. fields is a sequence of (name, value) elements for regular form fields. files is a sequence of (name, filename, value) elements for data to be uploaded as files Return the server's response page. """ content_type, body = encode_multipart_formdata(fields, files) h = httplib.HTTP(host) h.putrequest('POST', selector) h.putheader('content-type', content_type) h.putheader('content-length', str(len(body))) h.endheaders() h.send(body) errcode, errmsg, headers = h.getreply() return h.file.read() def encode_multipart_formdata(fields, files): """ fields is a sequence of (name, value) elements for regular form fields. 
files is a sequence of (name, filename, value) elements for data to be uploaded as files Return (content_type, body) ready for httplib.HTTP instance """ BOUNDARY = '----------ThIs_Is_tHe_bouNdaRY_$' CRLF = '\r\n' L = [] for (key, value) in fields: L.append('--' + BOUNDARY) L.append('Content-Disposition: form-data; name="%s"' % key) L.append('') L.append(value) for (key, filename, value) in files: L.append('--' + BOUNDARY) L.append('Content-Disposition: form-data; name="%s"; filename="%s"' % (key, filename)) L.append('Content-Type: %s' % get_content_type(filename)) L.append('') L.append(value) L.append('--' + BOUNDARY + '--') L.append('') body = CRLF.join(L) content_type = 'multipart/form-data; boundary=%s' % BOUNDARY return content_type, body def get_content_type(filename): return mimetypes.guess_type(filename)[0] or 'application/octet-stream' host = 'www.google.co.in' selector = '/searchbyimage/upload' fields = [('user-agent','Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2'),('connection','keep-alive'),('referer','')] with open('jpeg.jpg', 'rb') as jpeg: files = [('encoded_image', 'jpeg.jpg', jpeg.read())] response = post_multipart(host, selector, fields, files) #added: response = responseLen=(len(response)-1) x=22 if response[(x-21):(x+1)]!='EF=\"http://www.google': x+=1 x+=145 link='' while response[(x+1):(x+7)]!='amp;us': #>here< link=link+response[x] x+=1 print(link) ``` The above code returned not the page a browser would return, but instead html with a "link that has moved", which is the 'url' I posted first in this message. If I can do the upload of my image and return a results page, why can't I get the resulting links html page? It's severely frustrating:( Please help, I've been burning out my brain for over a month on this problem. 
Yes I am a newbee, but I thought this would be straightforward:( Please help me to return the results page of this one little url: ``` http://www.google.com/search?tbs=sbi:AMhZZitAaz7goe6AsfVSmFw1sbwsmX0uIjeVnzKHjEXMck70H3j32Q-6FApxrhxdSyMo0OedyWkxk3-qYbyf0q1OqNspjLu8DlyNnWVbNjiKGo87QUjQHf2_1idZ1q_1vvm5gzOCMpChYiKsKYdMywOLjJzqmzYoJNOU2UsTs_1zZGWjU-LsjdFXt_1D5bDkuyRK0YbsaLVcx4eEk_1KMkcJpWlfFEfPMutxTLGf1zxD-9DFZDzNOODs0oj2j_1KG8FRCaMFnTzAfTdl7JfgaDf_1t5Vti8FnbeG9i7qt9wF6P-QK9mdvC15hZ5UR29eQdYbcD1e4woaOQCmg8Q1VLVPf4-kf8dAI7p3jM_1MkBBwaxdt_1TsM4FLwh0oHAYKOS5qBRI28Vs0aw5_1C5-WR4dC902Eqm5eAkLiQyAM9J2bioR66g3tMWe-j9Hyh1ID40R1NyXEJDHcGxp7xOn_16XxfW_1Cq5ArdSNzxFvABb1UcXCn5s4_1LpXZxhZbauwaO8cg3CKGLUvl_1wySDB7QIkMIF2ZInEPS4K-eyErVKqOdY9caYUD8X7oOf6sDKFjT7pNHwlkXiuYbKBRYjlvRHPlcPN1WHWCJWdSNyXdZhwDI3VRaKwmi4YNvkryeNMMbhGytfvlNaaelKcOzWbvzCtSNaP2lJziN1x3btcIAplPcoZxEpb0cDlQwId3A5FDhczxpVbdRnOB-Xeq_1AiUTt_1iI6bSgUAinWXQFYWveTOttdSNCgK-VTxV4OCtlrCrZerk27RBLAzT0ol9NOfYmYhiabzhUczWk4NuiVhKN-M4eo76cAsi74PY4V_1lWjvOpI35V_1YLJQrm0fxVcD34wxFYCIllT2gYW09fj3cuBDMNbsaJqPVQ04OOGlwmcmJeAnK96xd_1aMUd6FsVLOSDS7RfS5MNUSyd1jnXvRU_1MF_1Dj8oC8sm7PfVdjm3firiMcaKM28j9kGWbY0heIGLtO_1m6ad-iKfxYEzSux2b5w62LQlP57yS7vX8RFoyKzHA0RrFIEbPBQdNMA3Vpw0G_1LvEjCAPSCV1HH1pDp0l4EnNCvUIAppVXzNMyWT_1gKITj1NLqAn-Z1tH323JwZSc77OftDSreyHJ-BPxn3n7JMkNZFcQx6S7tfBxeqJ1NuDlpax11pw0_1Oi_1nF3vyEP0NbGKSVgNvBv_1tv8ahxvrHn9UnP78FleiOpzUBfdfRPZiT20VEq5-oXtV_1XwIzrd-5_15-cf2yoL7ohyPuv3WKGUGr4YCsYje7_1D8VslqMPsvbwMg9haj3TrBKH7go70ZfPjUv3h1K7lplnnCdV0hrYVQkSLUY1eEor3L--Vu5PlewS60ZH5YEn4qTnDxniV95h8q0Y3RWXJ6gIXitR5y6CofVg ``` Dave
2012/07/12
[ "https://Stackoverflow.com/questions/11450649", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1488252/" ]
Your user-agent is not defined! Take this one:

```
#!/usr/bin/python
import urllib2

url = "http://www.google.com/search?q=mysearch"
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
print opener.open(url).read()
raw_input()
```

If you'd like to find another user-agent, you can type `about:config` into Firefox and search for "user-agent":

> Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.8) Gecko/20050511
>
> Googlebot/2.1 (+http://www.google.com/bot.html)
>
> Opera/7.23 (Windows 98; U) [en]
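For readers on Python 3, where `urllib2` was merged into `urllib.request`, a minimal equivalent sketch of setting the User-Agent header; the URL and agent string are just the ones from the snippet above:

```python
import urllib.request

# Build a request with an explicit User-Agent header. urllib stores
# header names capitalized internally, so it is kept as "User-agent".
url = "http://www.google.com/search?q=mysearch"
req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})

# Inspect the header before sending; constructing the Request does not
# touch the network, only urlopen(req) would.
print(req.get_header("User-agent"))  # Mozilla/5.0
```

To actually fetch the page you would pass `req` to `urllib.request.urlopen`.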
Google has several anti-scraping techniques in place, since they don't want users to get to the results without the APIs or a real browser. If you are serious about scraping these kinds of pages, I suggest you look into [Selenium](http://seleniumhq.org/) or [Spynner](http://code.google.com/p/spynner/). Another advantage is that both execute javascript.
10,979
66,488,745
**PROBLEM ENCOUNTERED:**

> E/AndroidRuntime: FATAL EXCEPTION: main
> Process: org.tensorflow.lite.examples.detection, PID: 14719
> java.lang.AssertionError: Error occurred when initializing ObjectDetector: Mobile SSD models are expected to have exactly 4 outputs, found 8

**Problem Description**

* Android Application Source: TensorFlow Lite Object Detection Example from Google
* Error shown when starting the Example Application

**Model Description**

* Custom Model Used? **YES**
* Pre-trained Model Used: ssd\_mobilenet\_v2\_fpnlite\_640x640\_coco17\_tpu-8
* Inference type: FLOAT
* Number of classes: 4

**System Information**

* OS Platform and Distribution: Linux Ubuntu 20.14
* TensorFlow Version: 2.4.1
* TensorFlow installed from: Pip

**Saved Model conversion commands used:**

**1. Saved\_Model.pb export:**

> python ./exporter\_main\_v2.py
> --input\_type image\_tensor
> --pipeline\_config\_path ./models/ssd\_mobilenet\_v2\_fpnlite\_640x640\_coco17\_tpu-8/pipeline.config
> --trained\_checkpoint\_dir ./models/ssd\_mobilenet\_v2\_fpnlite\_640x640\_coco17\_tpu-8
> --output\_directory exported\_models/tflite

**2. Convert saved model (.pb) to tflite**

> toco
> --saved\_model\_dir ./exported-models/tflite/saved\_model
> --emit-select-tf-ops true
> --allow\_custom\_ops
> --graph\_def\_file ./exported-models/tflite/saved\_model/saved\_model.pb
> --output\_file ./exported-models/tflite/tflite/detect.tflite
> --input\_shapes 1,300,300,3
> --input\_arrays normalized\_input\_image\_tensor
> --output\_arrays 'TFLite\_Detection\_PostProcess','TFLite\_Detection\_PostProcess:1','TFLite\_Detection\_PostProcess:2','TFLite\_Detection\_PostProcess:3'
> --inference\_type=FLOAT
> --allow\_custom\_ops

**Remarks**

I am trying to use a trained custom model with the TensorFlow Lite example provided by Google. Every time I open the application, it returns the error: Mobile SSD models are expected to have exactly 4 outputs, found 8.
The model is trained to identify 4 classes, all stated in the labelmap.txt and pipeline config. **Does anybody have any clue about this error?**
2021/03/05
[ "https://Stackoverflow.com/questions/66488745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15334979/" ]
After further study, I believe the issue arises because the model has 8 output tensors, while the Android application written in Java can only support 4 output tensors (at least the example provided by Google only supports 4).

I am not certain about the number of output tensors on different models. As far as I understood from experimenting with different models, models with a fixed\_shape\_resizer of 640 x 640 are likely to produce more than 4 output tensors (usually 8), which is not compatible with the Java Android application.

For any amateur users like me, here are the prerequisites to use your custom model in the [Android application](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md).

Suggested setup (assuming you are using TensorFlow version >= 2.3):

* TensorFlow model: **SSD model with fixed\_shape\_resizer of 320 x 320** (in my case, SSD MobileNet v2 320x320 works perfectly fine; the model has to have exactly 4 output tensors).
* **Colab** (perfect for model training and conversion). I tried to perform the training and conversion on both Linux and Windows on my local machine, but the incompatibility of the different tools and packages gave me a headache. I ended up using Colab, which is much more powerful and has great compatibility with the training tools and scripts.
* The [**metadata writer library**](https://stackoverflow.com/questions/64097085/issue-in-creating-tflite-model-populated-with-metadata-for-object-detection/64493506#64493506) written by [@lu-wang-g](https://github.com/lu-wang-g). In my case, after converting the trained model to .tflite, directly migrating the .tflite model to the Android application made the application report tons of problems regarding the model's config. Assuming you trained and converted the model correctly, all you need is the metadata writer library above: it automatically configures the metadata for you according to the .tflite model, and then you can migrate the model directly to the application.

For details, please see my GitHub issue: <https://github.com/tensorflow/tensorflow/issues/47595>
For those who stumble on this problem/question later: the limitations on the number of output tensors are part of the TensorFlow Lite Object Detection API specification described [here](https://www.tensorflow.org/lite/inference_with_metadata/task_library/object_detector#model_compatibility_requirements).

I don't know how to make a model compatible with these requirements yet, but will append to my answer if/when I figure that out.

**UPDATE**

[Here](https://colab.research.google.com/github/tensorflow/models/blob/master/research/object_detection/colab_tutorials/convert_odt_model_to_TFLite.ipynb) is the official Google Colab with an example of a model conversion. The interesting part is this script call:

```
python models/research/object_detection/export_tflite_graph_tf2.py \
    --trained_checkpoint_dir {'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/checkpoint'} \
    --output_directory {'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/tflite'} \
    --pipeline_config_path {'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/pipeline.config'}
```

The script doesn't convert your model, but it makes the model compatible with TFLite in terms of the operations used and the input/output format. A comment inside the script claims that only SSD meta-architectures are supported (also stated [here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md)). In the same directory of the repo there are other scripts that seem to do similar things, though no clear description is given.
10,980
27,239,348
I am using photologue to create a photo gallery site with django. I installed django-tagging into my virtualenv, not knowing it was no longer supported by photologue. Now, after having performed migrations, whenever I try to add a photo or view the photo, I get FieldError at /admin/photologue/photo/upload\_zip/ Cannot resolve keyword 'items' into field. Choices are: id, name. I uninstalled and reinstalled django, photologue, the SQLite file, and removed django-tagging, but the problem persists. I also tried running a different project that uses photologue and shares a virtualenv, and I am prompted to perform the same (assumedly destructive) migration. I can't figure out what could have possibly changed on my system if the problem spans multiple projects and all the dependencies have been freshly installed. Exception Location: /home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/sql/query.py in raise\_field\_error, line 1389 Traceback: ``` Environment: Request Method: POST Request URL: http://localhost:8000/admin/photologue/photo/add/ Django Version: 1.7.1 Python Version: 2.7.6 Installed Applications: ('django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.sites', 'sortedm2m', 'photologue', 'photologue_custom', 'pornsite') Installed Middleware: ('django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware') Traceback: File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response 111. 
response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/contrib/admin/options.py" in wrapper 584. return self.admin_site.admin_view(view)(*args, **kwargs) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/utils/decorators.py" in _wrapped_view 105. response = view_func(request, *args, **kwargs) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/views/decorators/cache.py" in _wrapped_view_func 52. response = view_func(request, *args, **kwargs) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/contrib/admin/sites.py" in inner 204. return view(request, *args, **kwargs) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/contrib/admin/options.py" in add_view 1454. return self.changeform_view(request, None, form_url, extra_context) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/utils/decorators.py" in _wrapper 29. return bound_func(*args, **kwargs) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/utils/decorators.py" in _wrapped_view 105. response = view_func(request, *args, **kwargs) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/utils/decorators.py" in bound_func 25. return func.__get__(self, type(self))(*args2, **kwargs2) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/transaction.py" in inner 394. return func(*args, **kwargs) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/contrib/admin/options.py" in changeform_view 1405. self.save_model(request, new_object, form, not add) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/contrib/admin/options.py" in save_model 1046. obj.save() File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/photologue/models.py" in save 540. 
super(Photo, self).save(*args, **kwargs) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/photologue/models.py" in save 491. super(ImageModel, self).save(*args, **kwargs) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/base.py" in save 591. force_update=force_update, update_fields=update_fields) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/base.py" in save_base 628. update_fields=update_fields, raw=raw, using=using) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/dispatch/dispatcher.py" in send 198. response = receiver(signal=self, sender=sender, **named) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/tagging/fields.py" in _save 81. Tag.objects.update_tags(kwargs['instance'], tags) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/tagging/models.py" in update_tags 34. items__object_id=obj.pk)) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/manager.py" in manager_method 92. return getattr(self.get_queryset(), name)(*args, **kwargs) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/query.py" in filter 691. return self._filter_or_exclude(False, *args, **kwargs) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/query.py" in _filter_or_exclude 709. clone.query.add_q(Q(*args, **kwargs)) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in add_q 1287. clause, require_inner = self._add_q(where_part, self.used_aliases) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in _add_q 1314. current_negated=current_negated, connector=connector) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in build_filter 1138. 
lookups, parts, reffed_aggregate = self.solve_lookup_type(arg) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in solve_lookup_type 1076. _, field, _, lookup_parts = self.names_to_path(lookup_splitted, self.get_meta()) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in names_to_path 1383. self.raise_field_error(opts, name) File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in raise_field_error 1389. "Choices are: %s" % (name, ", ".join(available))) Exception Type: FieldError at /admin/photologue/photo/add/ Exception Value: Cannot resolve keyword 'items' into field. Choices are: id, name ```
2014/12/01
[ "https://Stackoverflow.com/questions/27239348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4043633/" ]
The problem seems to arise from the fact that django-tagging was somehow still present in the virtualenv. In your traceback, after photologue saves a model, django-tagging reacts to the sent signal and tries to update any related tags:

```
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/dispatch/dispatcher.py" in send
  198. response = receiver(signal=self, sender=sender, **named)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/tagging/fields.py" in _save
  81. Tag.objects.update_tags(kwargs['instance'], tags)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/tagging/models.py" in update_tags
  34. items__object_id=obj.pk))
```

There it tries to use the field `items` (which apparently no longer exists), and that's where the error occurs.

I guess the way you uninstalled django-tagging from the venv didn't really work. Did you uninstall it with `pip uninstall django-tagging`?

For reference, here again are my steps to recreate the venv. If your venv is somehow corrupted, the easiest option could be to recreate it from scratch:

1. On your venv do: `pip freeze > orig_requirements.txt`. Check `orig_requirements.txt` and delete everything you don't need.
2. Make a new venv with no site-packages and reinstall the requirements: `mkvirtualenv --no-site-packages photoenv`, then `pip install -r orig_requirements.txt`.
3. Now double check that you're on this venv when running your Django project: `workon photoenv`.
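As a small addition to step 1, here is a sketch of filtering django-tagging out of the frozen requirements before reinstalling, so it can't sneak back into the fresh venv. The package list and version pins below are purely illustrative, not taken from the actual `orig_requirements.txt`:

```python
# Drop the django-tagging pin from a frozen requirements list before
# reinstalling into the new virtualenv. Shown on an in-memory list for
# clarity; in practice the lines would be read from orig_requirements.txt.
frozen = [
    "Django==1.7.1",
    "django-tagging==0.3.4",   # the package we want gone (version illustrative)
    "django-photologue==3.1",  # version illustrative
]
kept = [line for line in frozen if not line.lower().startswith("django-tagging")]
print(kept)  # ['Django==1.7.1', 'django-photologue==3.1']
```

The same effect can of course be had by simply deleting the line in an editor; the point is only that the pin must not be reinstalled.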
Well the error is simple -- in that you are requesting a field in the database that does not exist. Since you haven't posted code it is hard to be more specific than that. Was one of your templates built, referencing a field named 'items' that is no longer there? Please edit your question to include a FULL traceback as well as some code where you think the problem could be. The full traceback will give you a better idea of where the problem is.
10,981