| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | string (length) | 54 | 37.8k |
| date | string (length) | 10 | 10 |
| metadata | list (length) | 3 | 3 |
| response_j | string (length) | 17 | 26k |
| response_k | string (length) | 26 | 26k |
8,997,431
A common pattern in Python is to catch an error in an upstream module and re-raise it as something more useful. ``` try: config_file = open('config.ini', 'r') except IOError: raise ConfigError('Give me my config, user!') ``` This will generate a stack trace of the form ``` Traceback (most recent call last): File "<stdin>", line 4, in <module> __main__.ConfigError: Give me my config, user! ``` Is there any way to access the wrapped exception in order to generate a stack trace more like this? ``` Traceback (most recent call last): File "<stdin>", line 2, in <module> __builtin__.IOError: File Does not exist. Exception wrapped by: File "<stdin>", line 4, in <module> __main__.ConfigError: Give me my config, user! ``` EDIT: ===== The problem I'm trying to solve is that some 3rd-party code can wrap exceptions up to 3 times, and I want a generic way to inspect the exception chain and determine the root cause without having to add any extra code to the 3rd-party modules.
2012/01/25
[ "https://Stackoverflow.com/questions/8997431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/234254/" ]
This is known as *Exception Chaining* and is supported in Python 3. PEP 3134: <http://www.python.org/dev/peps/pep-3134/> In Python 2, the old exception is lost when you raise a new one, unless you save it in the `except` block.
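For illustration, a minimal Python 3 sketch of explicit chaining with `raise ... from` (the `ConfigError` class is assumed from the question):

```python
# Python 3 only: "raise ... from" records the original error on __cause__ (PEP 3134).
class ConfigError(Exception):
    pass

try:
    config_file = open('config.ini', 'r')
except IOError as e:
    raise ConfigError('Give me my config, user!') from e
# The printed traceback now shows the IOError first, followed by
# "The above exception was the direct cause of the following exception:".
```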
Use the `traceback` [module](http://docs.python.org/library/traceback.html). It will allow you to access the most recent traceback and store it in a string. For example, ``` import traceback try: config_file = open('config.ini', 'r') except IOError: tb = traceback.format_exc() raise ConfigError('Give me my config, user!', tb) ``` The "nested" traceback will be stored in `tb` and passed to `ConfigError`, where you can work with it however you want.
8,997,431
A common pattern in Python is to catch an error in an upstream module and re-raise it as something more useful. ``` try: config_file = open('config.ini', 'r') except IOError: raise ConfigError('Give me my config, user!') ``` This will generate a stack trace of the form ``` Traceback (most recent call last): File "<stdin>", line 4, in <module> __main__.ConfigError: Give me my config, user! ``` Is there any way to access the wrapped exception in order to generate a stack trace more like this? ``` Traceback (most recent call last): File "<stdin>", line 2, in <module> __builtin__.IOError: File Does not exist. Exception wrapped by: File "<stdin>", line 4, in <module> __main__.ConfigError: Give me my config, user! ``` EDIT: ===== The problem I'm trying to solve is that some 3rd-party code can wrap exceptions up to 3 times, and I want a generic way to inspect the exception chain and determine the root cause without having to add any extra code to the 3rd-party modules.
2012/01/25
[ "https://Stackoverflow.com/questions/8997431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/234254/" ]
Use the `traceback` [module](http://docs.python.org/library/traceback.html). It will allow you to access the most recent traceback and store it in a string. For example, ``` import traceback try: config_file = open('config.ini', 'r') except IOError: tb = traceback.format_exc() raise ConfigError('Give me my config, user!', tb) ``` The "nested" traceback will be stored in `tb` and passed to `ConfigError`, where you can work with it however you want.
Here is an example of how to unwind [PEP-3134](http://www.python.org/dev/peps/pep-3134/) exception chains. Note that for legacy reasons some Python frameworks may not use exception chaining, but instead wrap exceptions in their own way. For example, [SQLAlchemy's DBAPIError uses an `orig` attribute](https://docs.sqlalchemy.org/en/14/core/exceptions.html#sqlalchemy.exc.DBAPIError). ```py class Foobar(Exception): pass class Dummy(Exception): pass def func1(): raise Foobar("func1() argh") def func2(): try: func1() except Exception as e: raise Dummy("func2 vyaaarrg!") from e try: func2() except Exception as e: print(f"Current {e.__class__}: {e}") print(f"Nested {e.__cause__.__class__}: {e.__cause__}") ``` Prints ``` Current <class '__main__.Dummy'>: func2 vyaaarrg! Nested <class '__main__.Foobar'>: func1() argh ```
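The OP's EDIT asks for a generic way to find the root cause of a chain wrapped several times; below is a minimal sketch that walks `__cause__`/`__context__` down to the deepest exception (it assumes the `func2` defined in the example above):

```python
def root_cause(exc):
    """Follow a PEP 3134 exception chain to its deepest wrapped exception."""
    while exc.__cause__ is not None or exc.__context__ is not None:
        # __cause__ is set by "raise ... from", __context__ by implicit chaining
        exc = exc.__cause__ if exc.__cause__ is not None else exc.__context__
    return exc

try:
    func2()
except Exception as e:
    print(root_cause(e))  # func1() argh
```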
8,997,431
A common pattern in Python is to catch an error in an upstream module and re-raise it as something more useful. ``` try: config_file = open('config.ini', 'r') except IOError: raise ConfigError('Give me my config, user!') ``` This will generate a stack trace of the form ``` Traceback (most recent call last): File "<stdin>", line 4, in <module> __main__.ConfigError: Give me my config, user! ``` Is there any way to access the wrapped exception in order to generate a stack trace more like this? ``` Traceback (most recent call last): File "<stdin>", line 2, in <module> __builtin__.IOError: File Does not exist. Exception wrapped by: File "<stdin>", line 4, in <module> __main__.ConfigError: Give me my config, user! ``` EDIT: ===== The problem I'm trying to solve is that some 3rd-party code can wrap exceptions up to 3 times, and I want a generic way to inspect the exception chain and determine the root cause without having to add any extra code to the 3rd-party modules.
2012/01/25
[ "https://Stackoverflow.com/questions/8997431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/234254/" ]
This is known as *Exception Chaining* and is supported in Python 3. PEP 3134: <http://www.python.org/dev/peps/pep-3134/> In Python 2, the old exception is lost when you raise a new one, unless you save it in the `except` block.
Here is an example of how to unwind [PEP-3134](http://www.python.org/dev/peps/pep-3134/) exception chains. Note that for legacy reasons some Python frameworks may not use exception chaining, but instead wrap exceptions in their own way. For example, [SQLAlchemy's DBAPIError uses an `orig` attribute](https://docs.sqlalchemy.org/en/14/core/exceptions.html#sqlalchemy.exc.DBAPIError). ```py class Foobar(Exception): pass class Dummy(Exception): pass def func1(): raise Foobar("func1() argh") def func2(): try: func1() except Exception as e: raise Dummy("func2 vyaaarrg!") from e try: func2() except Exception as e: print(f"Current {e.__class__}: {e}") print(f"Nested {e.__cause__.__class__}: {e.__cause__}") ``` Prints ``` Current <class '__main__.Dummy'>: func2 vyaaarrg! Nested <class '__main__.Foobar'>: func1() argh ```
51,247,566
It looks like I can only change the value of mutable variables from inside a function, but is it possible to change immutable ones? ### Code ``` def f(a, b): a += 1 b.append('hi') x = 1 y = ['hello'] f(x, y) print(x, y) #x didn't change, but y did ``` ### Result ``` 1 ['hello', 'hi'] ``` So, my question is: is it possible to modify immutable variables using functions? If not, why? What's the reason that Python prevents people from doing that?
2018/07/09
[ "https://Stackoverflow.com/questions/51247566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9273687/" ]
In Python, arguments are passed by **object reference**. Everything in Python is an object: when you rebind an immutable argument inside a function, only the local name changes, but when you mutate a list, the local name still refers to the caller's list object, so the change is visible outside.\ You can refer to the [link](https://robertheaton.com/2014/02/09/pythons-pass-by-object-reference-as-explained-by-philip-k-dick/). You can check the following example for clarification. ``` def fun1(b): for i in range(0, len(b)): b[i] += 4 arr = [1, 2, 3, 4] print("Before Passing", arr) fun1(arr) print("After Passing", arr) #output #Before Passing [1, 2, 3, 4] #After Passing [5, 6, 7, 8] ``` If you do not want any function to change values accidentally, you can use an immutable object such as a tuple. **Edit:** (Copy example) We can check it by printing the id of both objects. ``` def fun(a): a=5 print(hex(id(a))) a=3 print(hex(id(a))) fun(a) # Output: # 0x555eb8890cc0 # 0x555eb8890d00 ``` But if we do it with a **list** object: ``` def fun(a): a.append(5) print(hex(id(a))) a=[1,2,3] print(hex(id(a))) fun(a) # Output: # 0x7f97e1589308 # 0x7f97e1589308 ```
`y` is not a value, it's just a name bound to an object in memory. When you pass it to a function, a reference to that object is passed (call by object reference). On the other hand, `x` holds an immutable value, and when you pass it to a function, a new local variable is created with the same value. (At the assembly level, function parameters are passed via the stack: the value of `x` and the address of `y`'s object are pushed onto it.)
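A common idiom, sketched below: since a function cannot rebind the caller's name, return the new value and let the caller reassign it.

```python
def f(a):
    a += 1      # rebinds only the local name 'a'
    return a    # hand the new value back

x = 1
x = f(x)        # the caller reassigns explicitly
print(x)        # 2
```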
8,933,380
I'm developing a web application in Python in which each user request makes an API call to an external service and takes about **20 seconds** to receive a response. As a result, in the event of several concurrent requests being made, the CPU load goes crazy (>95%) with several idle processes. The server consists of a 1.6 GHz dual-core Atom 330 with 2GB RAM. The web app is developed in Python and served through Apache with mod\_wsgi > > My question is the following. Will a non-blocking webserver such as Tornado improve CPU load and thus handle more concurrent users (I'm also interested why)? Can you suggest any other scalable solution? > > >
2012/01/19
[ "https://Stackoverflow.com/questions/8933380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1159455/" ]
You can't create an object like that - the "key" in an object literal must be a constant, not a variable or expression. If the key is a variable you need the array-like syntax instead: ``` myArray[key] = value; ``` Hence you need: ``` var data = {}; // empty object data[$(this).attr('id')] = $(this).val(); ``` However, as all of your fields are actually plain `HTMLInputElement` or `HTMLTextAreaElement` objects, you should really use `this` directly and avoid those expensive jQuery calls: ``` var data = {}; // empty object data[this.id] = this.value; ``` I'd also question why you're creating an *array of objects* - as the keys should all be unique, I would normally expect to return a single object: ``` function formObjectBuild($group) { var obj = {}; $group.find('input[type=text],textarea').each(function () { obj[this.id] = this.value; }); return obj; } ```
You can't build property names dynamically like that. ``` function ArrayPush($group) { var arr = new Array(); $group.find('input[type=text],textarea').each(function () { var data = {}; data[$(this).attr('id')] = $(this).val(); arr.push(data); }); return arr; } ```
8,933,380
I'm developing a web application in Python in which each user request makes an API call to an external service and takes about **20 seconds** to receive a response. As a result, in the event of several concurrent requests being made, the CPU load goes crazy (>95%) with several idle processes. The server consists of a 1.6 GHz dual-core Atom 330 with 2GB RAM. The web app is developed in Python and served through Apache with mod\_wsgi > > My question is the following. Will a non-blocking webserver such as Tornado improve CPU load and thus handle more concurrent users (I'm also interested why)? Can you suggest any other scalable solution? > > >
2012/01/19
[ "https://Stackoverflow.com/questions/8933380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1159455/" ]
You can't create an object like that - the "key" in an object literal must be a constant, not a variable or expression. If the key is a variable you need the array-like syntax instead: ``` myArray[key] = value; ``` Hence you need: ``` var data = {}; // empty object data[$(this).attr('id')] = $(this).val(); ``` However, as all of your fields are actually plain `HTMLInputElement` or `HTMLTextAreaElement` objects, you should really use `this` directly and avoid those expensive jQuery calls: ``` var data = {}; // empty object data[this.id] = this.value; ``` I'd also question why you're creating an *array of objects* - as the keys should all be unique, I would normally expect to return a single object: ``` function formObjectBuild($group) { var obj = {}; $group.find('input[type=text],textarea').each(function () { obj[this.id] = this.value; }); return obj; } ```
``` var data = {}; data[$(this).attr('id')] = $(this).val(); ``` Use that instead. Otherwise you're trying to do some kind of eval...
8,933,380
I'm developing a web application in Python in which each user request makes an API call to an external service and takes about **20 seconds** to receive a response. As a result, in the event of several concurrent requests being made, the CPU load goes crazy (>95%) with several idle processes. The server consists of a 1.6 GHz dual-core Atom 330 with 2GB RAM. The web app is developed in Python and served through Apache with mod\_wsgi > > My question is the following. Will a non-blocking webserver such as Tornado improve CPU load and thus handle more concurrent users (I'm also interested why)? Can you suggest any other scalable solution? > > >
2012/01/19
[ "https://Stackoverflow.com/questions/8933380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1159455/" ]
You can't create an object like that - the "key" in an object literal must be a constant, not a variable or expression. If the key is a variable you need the array-like syntax instead: ``` myArray[key] = value; ``` Hence you need: ``` var data = {}; // empty object data[$(this).attr('id')] = $(this).val(); ``` However, as all of your fields are actually plain `HTMLInputElement` or `HTMLTextAreaElement` objects, you should really use `this` directly and avoid those expensive jQuery calls: ``` var data = {}; // empty object data[this.id] = this.value; ``` I'd also question why you're creating an *array of objects* - as the keys should all be unique, I would normally expect to return a single object: ``` function formObjectBuild($group) { var obj = {}; $group.find('input[type=text],textarea').each(function () { obj[this.id] = this.value; }); return obj; } ```
Try changing that to. ``` var data = {}; data[$(this).attr('id')] = $(this).val(); ```
8,933,380
I'm developing a web application in Python in which each user request makes an API call to an external service and takes about **20 seconds** to receive a response. As a result, in the event of several concurrent requests being made, the CPU load goes crazy (>95%) with several idle processes. The server consists of a 1.6 GHz dual-core Atom 330 with 2GB RAM. The web app is developed in Python and served through Apache with mod\_wsgi > > My question is the following. Will a non-blocking webserver such as Tornado improve CPU load and thus handle more concurrent users (I'm also interested why)? Can you suggest any other scalable solution? > > >
2012/01/19
[ "https://Stackoverflow.com/questions/8933380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1159455/" ]
You can't create an object like that - the "key" in an object literal must be a constant, not a variable or expression. If the key is a variable you need the array-like syntax instead: ``` myArray[key] = value; ``` Hence you need: ``` var data = {}; // empty object data[$(this).attr('id')] = $(this).val(); ``` However, as all of your fields are actually plain `HTMLInputElement` or `HTMLTextAreaElement` objects, you should really use `this` directly and avoid those expensive jQuery calls: ``` var data = {}; // empty object data[this.id] = this.value; ``` I'd also question why you're creating an *array of objects* - as the keys should all be unique, I would normally expect to return a single object: ``` function formObjectBuild($group) { var obj = {}; $group.find('input[type=text],textarea').each(function () { obj[this.id] = this.value; }); return obj; } ```
Try this: ``` function ArrayPush($group) { var arr = new Array(); $group.find('input[type=text],textarea').each(function () { arr[$(this).attr('id')] = $(this).val(); }); return arr; } ```
8,933,380
I'm developing a web application in Python in which each user request makes an API call to an external service and takes about **20 seconds** to receive a response. As a result, in the event of several concurrent requests being made, the CPU load goes crazy (>95%) with several idle processes. The server consists of a 1.6 GHz dual-core Atom 330 with 2GB RAM. The web app is developed in Python and served through Apache with mod\_wsgi > > My question is the following. Will a non-blocking webserver such as Tornado improve CPU load and thus handle more concurrent users (I'm also interested why)? Can you suggest any other scalable solution? > > >
2012/01/19
[ "https://Stackoverflow.com/questions/8933380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1159455/" ]
You can't build property names dynamically like that. ``` function ArrayPush($group) { var arr = new Array(); $group.find('input[type=text],textarea').each(function () { var data = {}; data[$(this).attr('id')] = $(this).val(); arr.push(data); }); return arr; } ```
Try this: ``` function ArrayPush($group) { var arr = new Array(); $group.find('input[type=text],textarea').each(function () { arr[$(this).attr('id')] = $(this).val(); }); return arr; } ```
8,933,380
I'm developing a web application in Python in which each user request makes an API call to an external service and takes about **20 seconds** to receive a response. As a result, in the event of several concurrent requests being made, the CPU load goes crazy (>95%) with several idle processes. The server consists of a 1.6 GHz dual-core Atom 330 with 2GB RAM. The web app is developed in Python and served through Apache with mod\_wsgi > > My question is the following. Will a non-blocking webserver such as Tornado improve CPU load and thus handle more concurrent users (I'm also interested why)? Can you suggest any other scalable solution? > > >
2012/01/19
[ "https://Stackoverflow.com/questions/8933380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1159455/" ]
``` var data = {}; data[$(this).attr('id')] = $(this).val(); ``` Use that instead. Otherwise you're trying to do some kind of eval...
Try this: ``` function ArrayPush($group) { var arr = new Array(); $group.find('input[type=text],textarea').each(function () { arr[$(this).attr('id')] = $(this).val(); }); return arr; } ```
8,933,380
I'm developing a web application in Python in which each user request makes an API call to an external service and takes about **20 seconds** to receive a response. As a result, in the event of several concurrent requests being made, the CPU load goes crazy (>95%) with several idle processes. The server consists of a 1.6 GHz dual-core Atom 330 with 2GB RAM. The web app is developed in Python and served through Apache with mod\_wsgi > > My question is the following. Will a non-blocking webserver such as Tornado improve CPU load and thus handle more concurrent users (I'm also interested why)? Can you suggest any other scalable solution? > > >
2012/01/19
[ "https://Stackoverflow.com/questions/8933380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1159455/" ]
Try changing that to. ``` var data = {}; data[$(this).attr('id')] = $(this).val(); ```
Try this: ``` function ArrayPush($group) { var arr = new Array(); $group.find('input[type=text],textarea').each(function () { arr[$(this).attr('id')] = $(this).val(); }); return arr; } ```
61,388,487
I was trying to learn itertools from the Python docs. I was going through the `count` function and replicated the example given. However, I did not get any output to view. ``` def count(start=0, step=1): n = start while True: yield n n += step ``` The output was: ``` >>> print(count(2.5,0.5)) <generator object count at 0x00000254BF3FEA48> ```
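For reference, a generator only produces values when iterated; a short sketch using the question's `count()` (assumed defined as above):

```python
from itertools import islice

gen = count(2.5, 0.5)
print(list(islice(gen, 5)))  # [2.5, 3.0, 3.5, 4.0, 4.5]

# or step it manually:
gen = count(2.5, 0.5)
print(next(gen), next(gen))  # 2.5 3.0
```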
2020/04/23
[ "https://Stackoverflow.com/questions/61388487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13146477/" ]
If you want to make the icon size small, you can do it like this: ``` Tab( text: "Category List", icon: Icon(Icons.home,size: 15,), ), Tab( text: "Product List", icon: Icon(Icons.view_list,size: 15,), ), Tab( text: "Contact Us", icon: Icon(Icons.contacts,size: 15,), ), Tab( text: "Darshan Timing", icon: Icon(Icons.access_time,size: 15,), ) ``` Here `size: 15,` sets the icon size to whatever you want.
You gotta make custom TabBar for customization. Something like this. **CustomTabbar** ``` import 'package:flutter/material.dart'; class ChangeTextSizeTabbar extends StatefulWidget { @override ChangeTextSizeTabbarState createState() { return new ChangeTextSizeTabbarState(); } } class ChangeTextSizeTabbarState extends State<ChangeTextSizeTabbar> { @override Widget build(BuildContext context) { return DefaultTabController( length: 3, child: Scaffold( appBar: AppBar( title: Text("Change Text Size Tabbar Example"), bottom: TabBar( tabs: <Tab>[ Tab( child: Image.asset( 'assets/icons/project/proj_001.png', height : 100, width : 100, ), ), Tab( icon: Image.asset( 'assets/icons/project/all.png', height : 100, width : 100, ), ), Tab( icon: Image.asset( 'assets/icons/project/proj_009.png', height : 100, width : 100, ), ), ] ), ), body: TabBarView( children: <Widget>[ Container(), Container(), Container(), ], ), ), ); } } ``` **main.dart** ``` import 'package:flutter/material.dart'; import 'change_text_size_tabbar_task-3.dart'; void main() => runApp(MyApp()); class MyApp extends StatelessWidget { // This widget is the root of your application. @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter Demo', debugShowCheckedModeBanner: false, theme: ThemeData( // This is the theme of your application. // // Try running your application with "flutter run". You'll see the // application has a blue toolbar. Then, without quitting the app, try // changing the primarySwatch below to Colors.green and then invoke // "hot reload" (press "r" in the console where you ran "flutter run", // or simply save your changes to "hot reload" in a Flutter IDE). // Notice that the counter didn't reset back to zero; the application // is not restarted. primarySwatch: Colors.blue, ), //home: MyHomePage(title: 'Flutter Demo Home Page'), home: ChangeTextSizeTabbar(), ); } } ```
31,715,184
I want to calculate the date 6 months before a given date in Python. Is there any problem with dates such as 31 August? Can we solve this using the timedelta() function, i.e. can we pass months instead of the days argument, as in *date = now - timedelta(days=days)*?
2015/07/30
[ "https://Stackoverflow.com/questions/31715184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4947801/" ]
`timedelta` does not support months, but you can try using [`dateutil.relativedelta`](http://dateutil.readthedocs.org/en/latest/relativedelta.html#dateutil.relativedelta.relativedelta) for your calculations, which does support months. Example - ``` >>> from dateutil import relativedelta >>> from datetime import datetime >>> n = datetime.now() >>> n - relativedelta.relativedelta(months=6) datetime.datetime(2015, 1, 30, 10, 5, 32, 491815) >>> n - relativedelta.relativedelta(months=8) datetime.datetime(2014, 11, 30, 10, 5, 32, 491815) ```
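Regarding the 31 August edge case raised in the question: `relativedelta` clamps the day to the last valid day of the target month, for example:

```python
from datetime import date
from dateutil.relativedelta import relativedelta

# 31 August minus 6 months: February has no day 31, so it clamps to the 28th
print(date(2015, 8, 31) - relativedelta(months=6))  # 2015-02-28
```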
If you are only interested in what the month was 6 months ago, then try this: ``` import datetime month = datetime.datetime.now().month - 6 if month < 1: month = 12 + month # At this point month is 0 or a negative number, so we add 12 to wrap around ```
31,715,184
I want to calculate the date 6 months before a given date in Python. Is there any problem with dates such as 31 August? Can we solve this using the timedelta() function, i.e. can we pass months instead of the days argument, as in *date = now - timedelta(days=days)*?
2015/07/30
[ "https://Stackoverflow.com/questions/31715184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4947801/" ]
If you are only interested in what the month was 6 months ago, then try this: ``` import datetime month = datetime.datetime.now().month - 6 if month < 1: month = 12 + month # At this point month is 0 or a negative number, so we add 12 to wrap around ```
The following function should work fine for both adding and subtracting months. ``` import datetime import calendar def add_months(sourcedate, months): month = sourcedate.month - 1 + months year = sourcedate.year + month // 12 month = month % 12 + 1 day = min(sourcedate.day, calendar.monthrange(year, month)[1]) return datetime.date(year, month, day) #Example: Get today dateToday = datetime.date.today() #Subtract 6 months from today print(add_months(dateToday, -6)) ```
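As a quick check of the month-end handling (the `min(..., calendar.monthrange(...))` clamp), assuming the `add_months` function above:

```python
import datetime

# 31 August minus 6 months clamps to the last day of February
print(add_months(datetime.date(2015, 8, 31), -6))  # 2015-02-28
```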
31,715,184
I want to calculate the date 6 months before a given date in Python. Is there any problem with dates such as 31 August? Can we solve this using the timedelta() function, i.e. can we pass months instead of the days argument, as in *date = now - timedelta(days=days)*?
2015/07/30
[ "https://Stackoverflow.com/questions/31715184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4947801/" ]
`timedelta` does not support months, but you can try using [`dateutil.relativedelta`](http://dateutil.readthedocs.org/en/latest/relativedelta.html#dateutil.relativedelta.relativedelta) for your calculations, which does support months. Example - ``` >>> from dateutil import relativedelta >>> from datetime import datetime >>> n = datetime.now() >>> n - relativedelta.relativedelta(months=6) datetime.datetime(2015, 1, 30, 10, 5, 32, 491815) >>> n - relativedelta.relativedelta(months=8) datetime.datetime(2014, 11, 30, 10, 5, 32, 491815) ```
The following function should work fine for both adding and subtracting months. ``` import datetime import calendar def add_months(sourcedate, months): month = sourcedate.month - 1 + months year = sourcedate.year + month // 12 month = month % 12 + 1 day = min(sourcedate.day, calendar.monthrange(year, month)[1]) return datetime.date(year, month, day) #Example: Get today dateToday = datetime.date.today() #Subtract 6 months from today print(add_months(dateToday, -6)) ```
52,805,041
Say I have scraper_1.py, scraper_2.py, scraper_3.py. The way I run them now is from PyCharm, executing each one separately; this way I can see the 3 python.exe processes in Task Manager. Now I'm trying to write a master script, say scraper_runner.py, that imports these scrapers as modules and runs them all in parallel, not sequentially. I tried examples with subprocess, multiprocessing, even os.system from various SO posts ... but without any luck ... from the logs they all run in sequence, and in Task Manager I only see one python.exe execution. Is this the right pattern for this kind of process ? **EDIT:1** (trying with concurrent.futures ProcessPoolExecutor) it runs sequentially. ``` from concurrent.futures import ProcessPoolExecutor import scrapers.scraper_1 as scraper_1 import scrapers.scraper_2 as scraper_2 import scrapers.scraper_3 as scraper_3 ## Calling method runner on each scraper_x to kick off processes runners_list = [scraper_1.runner(), scraper_2.runner(), scraper_3.runner()] if __name__ == "__main__": with ProcessPoolExecutor(max_workers=10) as executor: for runner in runners_list: future = executor.submit(runner) print(future.result()) ```
2018/10/14
[ "https://Stackoverflow.com/questions/52805041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/662409/" ]
A subprocess in Python may or may not show up as a separate process, depending on your OS and your task manager. `htop` in Linux, for example, will display subprocesses under the parent process in tree-view. I recommend taking a look at this in-depth tutorial on the `multiprocessing` module in Python: <https://pymotw.com/2/multiprocessing/basics.html> However, if Python's built-in methods of multiprocessing/threading don't work or make sense to you, you can achieve your desired result by using bash to call your Python scripts. The following bash script results in the attached screenshot. ``` #!/bin/sh ./py1.py & ./py2.py & ./py3.py & ``` [![parallel python scripts](https://i.stack.imgur.com/VKxPW.png)](https://i.stack.imgur.com/VKxPW.png) Explanation: The `&` at the end of each call tells bash to run each call as a background process.
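For completeness, a minimal pure-Python sketch with `multiprocessing.Process` (the scraper module names and their `runner()` functions are assumptions carried over from the question):

```python
from multiprocessing import Process

import scrapers.scraper_1 as scraper_1
import scrapers.scraper_2 as scraper_2
import scrapers.scraper_3 as scraper_3

if __name__ == "__main__":
    # pass the function itself (no parentheses), so each runs in its own process
    procs = [Process(target=mod.runner)
             for mod in (scraper_1, scraper_2, scraper_3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()  # wait for all scrapers to finish
```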
Your problem is in how you set up the processes. You are not running the processes in parallel, even though you think you are. You actually run them when you add them to the `runners_list`, and then you run the result of each runner in parallel as multiprocesses. What you want to do is add the functions to the `runners_list` without executing them, then have them executed in your multiprocessing `pool`. The way to achieve this is to add the function references, i.e. the names of the functions. To do this, you should not include the parentheses, since that is the syntax for calling functions, not just naming them. In addition, to have the futures execute asynchronously, it is not possible to call `future.result()` directly, as that will force the code to execute sequentially, to ensure that the results are available in the same sequence as the functions are called. This means that the solution to your problem is ``` from concurrent.futures import ProcessPoolExecutor import scrapers.scraper_1 as scraper_1 import scrapers.scraper_2 as scraper_2 import scrapers.scraper_3 as scraper_3 ## NOT calling method runner on each scraper_x to kick off processes ## Instead add them to the list of functions to be run in the pool runners_list = [scraper_1.runner, scraper_2.runner, scraper_3.runner] # Adding a callback function to call when a future is done. # If the result is not printed in the callback, the future.result() call will # serialize the call sequence to ensure results arrive in order def print_result(future): print(future.result()) if __name__ == "__main__": with ProcessPoolExecutor(max_workers=10) as executor: for runner in runners_list: future = executor.submit(runner) future.add_done_callback(print_result) ``` As you can see, here the invocation of the runners does not happen when the list is created, but later, when each `runner` is submitted to the executor. And, when the results are ready, the callback is called to print the result to screen.
52,902,158
I have the following list: ``` o_dict_list = [(OrderedDict([('StreetNamePreType', 'ROAD'), ('StreetName', 'Coffee')]), 'Ambiguous'), (OrderedDict([('StreetNamePreType', 'AVENUE'), ('StreetName', 'Washington')]), 'Ambiguous'), (OrderedDict([('StreetNamePreType', 'ROAD'), ('StreetName', 'Quartz')]), 'Ambiguous')] ``` And like the title says, I am trying to take this list and create a pandas dataframe where the columns are: `'StreetNamePreType'` and `'StreetName'` and the rows contain the corresponding values for each key in the OrderedDict. I have done some searching on StackOverflow to get some guidance on how to create a dataframe, see [here](https://stackoverflow.com/questions/44365209/generate-a-pandas-dataframe-from-ordereddict), but I am getting an error when I run this code (I am trying to replicate what is going on in that response). ``` from collections import Counter, OrderedDict import pandas as pd col = Counter() for k in o_dict_list: col.update(k) df = pd.DataFrame([k.values() for k in o_dict_list], columns = col.keys()) ``` When I run this code, the error I get is: `TypeError: unhashable type: 'OrderedDict'` I looked up this error, [here](https://stackoverflow.com/questions/15880765/python-unhashable-type-ordereddict); I get that there is a problem with the datatypes, but, unfortunately, I don't know enough about the inner workings of Python/Pandas to resolve this problem on my own. I suspect that my list of OrderedDict is not exactly the same as in [here](https://stackoverflow.com/questions/44365209/generate-a-pandas-dataframe-from-ordereddict), which is why I am not getting my code to work. More specifically, I believe I have a list of tuples, and each tuple contains an OrderedDict. The example that I have linked to [here](https://stackoverflow.com/questions/44365209/generate-a-pandas-dataframe-from-ordereddict) seems to be a true list of OrderedDicts. Again, I don't know enough about the inner workings of Python/Pandas to resolve this problem on my own and am looking for help.
2018/10/20
[ "https://Stackoverflow.com/questions/52902158", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6930132/" ]
I would use a list comprehension to do this as follows. ``` pd.DataFrame([o_dict_list[i][0] for i, j in enumerate(o_dict_list)]) ``` See the output below. ``` StreetNamePreType StreetName 0 ROAD Coffee 1 AVENUE Washington 2 ROAD Quartz ```
extracting the `OrderedDict` objects from your list and then use `pd.Dataframe` should work ``` values= [] for i in range(len(o_dict_list)): values.append(o_dict_list[i][0]) pd.DataFrame(values) StreetNamePreType StreetName 0 ROAD Coffee 1 AVENUE Washington 2 ROAD Quartz ```
52,902,158
I have the following list: ``` o_dict_list = [(OrderedDict([('StreetNamePreType', 'ROAD'), ('StreetName', 'Coffee')]), 'Ambiguous'), (OrderedDict([('StreetNamePreType', 'AVENUE'), ('StreetName', 'Washington')]), 'Ambiguous'), (OrderedDict([('StreetNamePreType', 'ROAD'), ('StreetName', 'Quartz')]), 'Ambiguous')] ``` And like the title says, I am trying to take this list and create a pandas dataframe where the columns are: `'StreetNamePreType'` and `'StreetName'` and the rows contain the corresponding values for each key in the OrderedDict. I have done some searching on StackOverflow to get some guidance on how to create a dataframe, see [here](https://stackoverflow.com/questions/44365209/generate-a-pandas-dataframe-from-ordereddict), but I am getting an error when I run this code (I am trying to replicate what is going on in that response). ``` from collections import Counter, OrderedDict import pandas as pd col = Counter() for k in o_dict_list: col.update(k) df = pd.DataFrame([k.values() for k in o_dict_list], columns = col.keys()) ``` When I run this code, the error I get is: `TypeError: unhashable type: 'OrderedDict'` I looked up this error, [here](https://stackoverflow.com/questions/15880765/python-unhashable-type-ordereddict); I get that there is a problem with the datatypes, but, unfortunately, I don't know enough about the inner workings of Python/Pandas to resolve this problem on my own. I suspect that my list of OrderedDict is not exactly the same as in [here](https://stackoverflow.com/questions/44365209/generate-a-pandas-dataframe-from-ordereddict), which is why I am not getting my code to work. More specifically, I believe I have a list of tuples, and each tuple contains an OrderedDict. The example that I have linked to [here](https://stackoverflow.com/questions/44365209/generate-a-pandas-dataframe-from-ordereddict) seems to be a true list of OrderedDicts. Again, I don't know enough about the inner workings of Python/Pandas to resolve this problem on my own and am looking for help.
2018/10/20
[ "https://Stackoverflow.com/questions/52902158", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6930132/" ]
I would use a list comprehension to do this as follows. ``` pd.DataFrame([o_dict_list[i][0] for i, j in enumerate(o_dict_list)]) ``` See the output below. ``` StreetNamePreType StreetName 0 ROAD Coffee 1 AVENUE Washington 2 ROAD Quartz ```
``` d = [{'points': 50, 'time': '5:00', 'year': 2010}, {'points': 25, 'time': '6:00', 'month': "february"}, {'points':90, 'time': '9:00', 'month': 'january'}, {'points_h1':20, 'month': 'june'}] pd.DataFrame(d) ```
5,980,863
In my django app I call `location.reload();` in an Ajax routine in some rare circumstances. That works well with Chrome, but with Firefox 4 I get an `error: [Errno 32] Broken pipe` twice on my development server (Django 1.2.5, Python 2.7), which takes about 10 sec. And the error seems to eat the message I'm trying to display using the django messages framework. Now I replaced this line with ``` var uri = location.href; location.href = uri; ``` Now the reload still takes some 10 sec, but Firefox displays the message. So far, it works. But to me this looks like a dirty hack. So my questions are: 1. Can anybody explain (or guess) what the error is in the first place? 2. Do you see any problems where this 'solution' could bite me in the future? (Note: I'm not the first [to](https://stackoverflow.com/questions/2868059/django-webkit-broken-pipe) [experience](https://stackoverflow.com/questions/5847580/django-is-sooo-slow-errno-32-broken-pipe-dcramer-django-sentry-static-folder) [that](https://stackoverflow.com/questions/4029297/error-errno-32-broken-pipe-when-paypal-calls-back-to-python-django-app) [problem](http://code.djangoproject.com/ticket/4444)).
2011/05/12
[ "https://Stackoverflow.com/questions/5980863", "https://Stackoverflow.com", "https://Stackoverflow.com/users/526169/" ]
First of all, that's an issue with some specific browsers (and, probably, long processing on the server side), not a problem in Django. From the [bug report](http://code.djangoproject.com/ticket/4444#comment:14) on Django: > > This is a common error which happens whenever your browser closes the connection while the dev server is still busy sending data. The best we could do is to have a more explicit error message. > > > It actually can happen on other systems, e.g. from [cherrypy](http://www.cherrypy.org/wiki/CherryPyBrokenPipe) > > There is nothing to worry about as this just means that the client closed the connection before the server. After this traceback, your CherryPy server will still keep running normally. > > > So that's an introduction to your first question: 1. Can anybody explain (or guess) what the error is in the first place? Well, it's simply the browser closing the connection - kind of a client-side timeout. This [Django + WebKit = Broken pipe](https://stackoverflow.com/questions/2868059/django-webkit-broken-pipe/2868082#2868082) answer does answer that question. Why does it work by changing `location.href` and not using `location.reload()`? Well I would guess, but that's ONLY a guess, that Firefox behaves slightly differently and a reload will time out differently. I think the message is consumed because the request is already being sent when the browser pulls the trigger and shuts the connection. The dev server is single-threaded, and that might also be a factor in the issue. I usually do my development on a real (local) server (nginx+apache+mod\_wsgi, nothing fancy) - that avoids running into silly issues that would never happen in production. 2. Do you see any problems where this 'solution' could bite me in the future? Well, it might not work on a browser that would check if `href` has changed before reloading. Or it might hit the cache instead of doing a real request (you can force avoiding the cache with reload()). And behaviour might not be consistent on all browsers. But again, you are already hitting a browser quirk, so I wouldn't worry about it too much by itself. By the way, you could simply do: ``` location.href = location.href ``` I would rather worry that the processing takes 10s! That really should not happen. **edit** So it looks like it's the browser itself provoking the long processing time AND the broken pipe error. Sounds like (bad) parallel requests on the single-threaded Django server to me. **endedit** Test on a real webserver, optimize your code; if that's not enough, launch the long tasks in a background process with celery+rabbitmq; in any case don't lose time on an issue which is not really an issue! You will probably be able to live with `location.reload()` and a little tweaking OR maybe just a real test environment!
The broken pipe error can also be down to lack of support for certain functionality in the Django debug server - one notable issue is Django's lack of support for [`Range` HTTP requests](https://www.rfc-editor.org/rfc/rfc7233) (see here for details: [Byte Ranges in Django](https://stackoverflow.com/questions/14324250/byte-ranges-in-django)) which are commonly used when delivering [streaming] media content. It's probably worth investigating the actual HTTP interchange using a packet capture program such as Wireshark so you can see where and when the problem is occurring.
50,014,265
I'm trying to write a python script that can echo whatever a user types when running the script Right now, the code I have is (version\_msg and usage\_msg don't matter right now) ``` from optparse import OptionParser version_msg = "" usage_msg = "" parser = OptionParser(version=version_msg, usage=usage_msg) parser.add_option("-e", "--echo", action="append", dest="input_lines", default=[]) ``` But if I try to run the script (python options.py -e hello world), it echoes just ['hello']. How would I go about fixing this so it outputs ['hello', 'world']?
2018/04/25
[ "https://Stackoverflow.com/questions/50014265", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6642021/" ]
A slightly hacky way of doing it: ``` from optparse import OptionParser version_msg = "" usage_msg = "" parser = OptionParser(version=version_msg, usage=usage_msg) parser.add_option("-e", "--echo", action="append", dest="input_lines", default=[]) options, arguments = parser.parse_args() print(options.input_lines + arguments) ``` I then run ``` python myscript.py -e hello world how are you ``` Output: ``` ['hello', 'world', 'how', 'are', 'you'] ```
I think this is best accomplished by quoting the argument, i.e. hello world becomes 'hello world'; this ensures that the -e option consumes the entire string. If you really need the string to be broken up into pieces, i.e. ['hello', 'world'] instead of ['hello world'], you could easily call split() on options.input_lines[0]: ``` strings = options.input_lines[0].split() ``` For a more complex method, you can use a callback; the link below points to a relevant example for you. <https://docs.python.org/3/library/optparse.html#callback-example-6-variable-arguments>
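A simplified sketch of the variable-arguments callback from the linked optparse docs (the full version there also special-cases negative numbers), wired into the question's parser:

```python
def vararg_callback(option, opt_str, value, parser):
    # consume all following arguments up to the next option flag
    args = []
    for arg in parser.rargs:
        if arg.startswith('-') and len(arg) > 1:
            break
        args.append(arg)
    del parser.rargs[:len(args)]
    setattr(parser.values, option.dest, args)

parser.add_option("-e", "--echo", dest="input_lines",
                  action="callback", callback=vararg_callback)
# python options.py -e hello world  ->  options.input_lines == ['hello', 'world']
```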
50,014,265
I'm trying to write a python script that can echo whatever a user types when running the script Right now, the code I have is (version\_msg and usage\_msg don't matter right now) ``` from optparse import OptionParser version_msg = "" usage_msg = "" parser = OptionParser(version=version_msg, usage=usage_msg) parser.add_option("-e", "--echo", action="append", dest="input_lines", default=[]) ``` But if I try to run the script (python options.py -e hello world), it echoes just ['hello']. How would I go about fixing this so it outputs ['hello', 'world']?
2018/04/25
[ "https://Stackoverflow.com/questions/50014265", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6642021/" ]
In `argparse` this is quite easy, with its `nargs` parameter: ``` In [245]: parser = argparse.ArgumentParser() In [246]: parser.add_argument('-e','--echo', nargs='+'); In [247]: parser.parse_args(['-e','hello','world']) Out[247]: Namespace(echo=['hello', 'world']) ``` `nargs` is used to specify how many strings the Action takes. '+' means one or more. The results are collected in a list. It models the `nargs` values on the regex usage (e.g. '?' and '\*' also work). ``` In [248]: parser.print_help() usage: ipython3 [-h] [-e ECHO [ECHO ...]] optional arguments: -h, --help show this help message and exit -e ECHO [ECHO ...], --echo ECHO [ECHO ...] ``` --- Looking at the `optparse` docs, I see a `nargs` parameter, but it must be a number. For a variable number, we have to use a `callback` as described in: <https://docs.python.org/2/library/optparse.html#callback-example-6-variable-arguments> Using the function defined in this section: ``` In [266]: parser = optparse.OptionParser() In [267]: parser.add_option('-e','--echo', dest='echo', action='callback', callback=vararg_callback); In [268]: parser.parse_args(['-e','hello','world']) Out[268]: (<Values at 0x7f0ff208a5c0: {'echo': ['hello', 'world']}>, []) ``` In `argparse`, `nargs='+'` collects values up to the next `--` or `-`, but that allocation is done topdown, by the main parsing routine, not a callback defined for the `option` itself.
I think this is best accomplished by quoting the argument, i.e. hello world becomes 'hello world'; this ensures that the -e option consumes the entire string. If you really need the string to be broken up into pieces, i.e. ['hello', 'world'] instead of ['hello world'], you could easily call split() on options.input_lines[0]: ``` strings = options.input_lines[0].split() ``` For a more complex method, you can use a callback; the link below points to a relevant example for you. <https://docs.python.org/3/library/optparse.html#callback-example-6-variable-arguments>
14,404,744
I'm trying to read a CSV file using `numpy.recfromcsv(...)` where some of the fields have commas in them. The fields that have commas in them are surrounded by quotes, i.e., `"value1, value2"`. NumPy sees the quoted field as two different fields and it doesn't work very well. The command I'm using right now is ``` data = numpy.recfromcsv(dataFilename, delimiter=',', autostrip=True) ``` I found this question > > [Read CSV file with comma within fields in Python](https://stackoverflow.com/questions/8311900/python-read-csv-file-with-comma-within-fields) > > > But it doesn't use `numpy`, which I'd really love to use. So I'm hoping there is at least one of a few options here: 1. What are some options to `numpy.recfromcsv(...)` that will allow me to read a quoted field as one field instead of multiple comma-separated fields? 2. Should I format my CSV file differently? 3. (alternatively, but not ideally) Read the CSV as in the quoted question, with extra steps to create the `numpy` array. Please advise.
2013/01/18
[ "https://Stackoverflow.com/questions/14404744", "https://Stackoverflow.com", "https://Stackoverflow.com/users/633318/" ]
It is possible to do this with [pandas](http://pandas.pydata.org/): ``` np_array = pandas.io.parsers.read_csv("file_with_comma_fields_quoted.csv").as_matrix() ```
If you consider using the native Python csv reader, with the Python docs [here](http://docs.python.org/2/library/csv.html#csv-fmt-params): the Python csv reader defines an optional `Dialect.quotechar`, which defaults to `'"'`. In the csv format standard, the quote character wraps fields that may themselves contain the delimiter (a comma in your case). Rules for the quoting character in csv format are clear in the first section of [this page](http://golang.org/pkg/encoding/csv/). So, with the default quoting character `"`, the native Python csv reader manages your problem in its default mode. If you want to stick to Python, why not clean your csv file first, using a regexp to identify quoted fields and change the delimiter from comma to `\t`, for instance. But there you are actually parsing the csv format by yourself.
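A small sketch of that default behaviour (standard library only):

```python
import csv
import io

# The default dialect uses '"' as quotechar, so the embedded comma
# stays inside a single field:
row = next(csv.reader(io.StringIO('a,"value1, value2",b')))
print(row)  # ['a', 'value1, value2', 'b']
```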
14,404,744
I'm trying to read a CSV file using `numpy.recfromcsv(...)` where some of the fields have commas in them. The fields that have commas in them are surrounded by quotes i.e., `"value1, value2"`. Numpy see's the quoted field as two different fields and it doesn't work very well. The command I'm using right now is ``` data = numpy.recfromcsv(dataFilename, delimiter=',', autstrip=True) ``` I found this question > > [Read CSV file with comma within fields in Python](https://stackoverflow.com/questions/8311900/python-read-csv-file-with-comma-within-fields) > > > But it doesn't use `numpy`, which I'd really love to use. So I'm hoping there are at least one of a few options here: 1. What are some options to `numpy.recfromcsv(...)` that will allow me to read a quoted field as one field instead of multiple comma separated fields? 2. Should I format my CSV file differently? 3. (alternatively, but not ideally) Read CSV as in quoted question, with extra steps to create `numpy` array. Please advise.
2013/01/18
[ "https://Stackoverflow.com/questions/14404744", "https://Stackoverflow.com", "https://Stackoverflow.com/users/633318/" ]
It is possible to do this with [pandas](http://pandas.pydata.org/): ``` np_array = pandas.io.parsers.read_csv("file_with_comma_fields_quoted.csv").as_matrix() ```
It turns out the easiest way to do this is to use the standard library module `csv` to read the file into a list of rows, then use that list as input to a numpy array. I wish I could just read it in with numpy directly, but that doesn't seem to work.
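A minimal sketch of that approach (the file name and dtype handling are assumptions):

```python
import csv
import numpy as np

with open('data.csv', newline='') as f:
    rows = list(csv.reader(f))  # csv honours quoted fields by default

arr = np.array(rows)  # string dtype; convert numeric columns as needed
print(arr.shape)
```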
25,426,447
I was trying to understand how non-blocking sockets work, so I wrote this simple server in Python. ``` import socket s=socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind(('127.0.0.1',1000)) s.listen(5) s.setblocking(0) while True: try: conn, addr = s.accept() print ('connection from',addr) data=conn.recv(100) print ('received: ',data,len(data)) except: pass ``` Then I tried to connect to this server from multiple instances of this client: ``` import socket s=socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(('127.0.0.1',1000)) while True: continue ``` But for some reason setting blocking to 0 or 1 does not seem to have an effect, and the server's recv method always blocks execution. So, does creating a non-blocking socket in Python require more than just setting the blocking flag to 0?
2014/08/21
[ "https://Stackoverflow.com/questions/25426447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2789669/" ]
`setblocking` only affects the socket you use it on. So you have to add `conn.setblocking(0)` to see an effect: The `recv` will then return immediately if there is no data available.
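A minimal Python 3 sketch of the server loop with both sockets non-blocking (`BlockingIOError` is the Python 3 exception for "would block"; a real server would use the `selectors` module instead of busy-waiting):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 1000))
s.listen(5)
s.setblocking(False)

conns = []
while True:
    try:
        conn, addr = s.accept()
        conn.setblocking(False)   # make the *connected* socket non-blocking too
        conns.append(conn)
    except BlockingIOError:
        pass                      # no pending connection right now
    for c in conns:
        try:
            data = c.recv(100)
            if data:
                print('received:', data)
        except BlockingIOError:
            pass                  # no data available on this connection yet
```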
You just need to call `setblocking(0)` on the *connected* socket, i.e. `conn`. ``` import socket s = socket.socket() s.bind(('127.0.0.1', 12345)) s.listen(5) s.setblocking(0) >>> conn, addr = s.accept() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib64/python2.7/socket.py", line 202, in accept sock, addr = self._sock.accept() socket.error: [Errno 11] Resource temporarily unavailable # start your client... >>> conn, addr = s.accept() >>> conn.recv() # this will hang until the client sends some data.... 'hi there\n' >>> conn.setblocking(0) # set non-blocking on the connected socket "conn" >>> conn.recv() Traceback (most recent call last): File "<stdin>", line 1, in <module> socket.error: [Errno 11] Resource temporarily unavailable ```
25,426,447
I was trying to understand how non-blocking sockets work, so I wrote this simple server in Python. ``` import socket s=socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind(('127.0.0.1',1000)) s.listen(5) s.setblocking(0) while True: try: conn, addr = s.accept() print ('connection from',addr) data=conn.recv(100) print ('received: ',data,len(data)) except: pass ``` Then I tried to connect to this server from multiple instances of this client: ``` import socket s=socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(('127.0.0.1',1000)) while True: continue ``` But for some reason setting blocking to 0 or 1 does not seem to have an effect, and the server's recv method always blocks execution. So, does creating a non-blocking socket in Python require more than just setting the blocking flag to 0?
2014/08/21
[ "https://Stackoverflow.com/questions/25426447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2789669/" ]
`setblocking` only affects the socket you use it on. So you have to add `conn.setblocking(0)` to see an effect: The `recv` will then return immediately if there is no data available.
<https://docs.python.org/3/library/socket.html#socket.setdefaulttimeout> You can use the module-level `socket.setdefaulttimeout(1.0)` to apply a default timeout to all newly created sockets.
25,426,447
I was trying to understand how non-blocking sockets work, so I wrote this simple server in Python. ``` import socket s=socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind(('127.0.0.1',1000)) s.listen(5) s.setblocking(0) while True: try: conn, addr = s.accept() print ('connection from',addr) data=conn.recv(100) print ('received: ',data,len(data)) except: pass ``` Then I tried to connect to this server from multiple instances of this client: ``` import socket s=socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(('127.0.0.1',1000)) while True: continue ``` But for some reason setting blocking to 0 or 1 does not seem to have an effect, and the server's recv method always blocks execution. So, does creating a non-blocking socket in Python require more than just setting the blocking flag to 0?
2014/08/21
[ "https://Stackoverflow.com/questions/25426447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2789669/" ]
You just need to call `setblocking(0)` on the *connected* socket, i.e. `conn`. ``` import socket s = socket.socket() s.bind(('127.0.0.1', 12345)) s.listen(5) s.setblocking(0) >>> conn, addr = s.accept() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib64/python2.7/socket.py", line 202, in accept sock, addr = self._sock.accept() socket.error: [Errno 11] Resource temporarily unavailable # start your client... >>> conn, addr = s.accept() >>> conn.recv() # this will hang until the client sends some data.... 'hi there\n' >>> conn.setblocking(0) # set non-blocking on the connected socket "conn" >>> conn.recv() Traceback (most recent call last): File "<stdin>", line 1, in <module> socket.error: [Errno 11] Resource temporarily unavailable ```
<https://docs.python.org/3/library/socket.html#socket.setdefaulttimeout> You can use the module-level `socket.setdefaulttimeout(1.0)` to apply a default timeout to all newly created sockets.
23,785,259
I'm new to Python 3.4 and I'll be using it for my internship next month. My instructor gave me a task to practice before I start: he gave me a set of data and asked me to figure out how to load it. However, it keeps showing me this: ``` Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> raindata = loadtxt('slz_chuva.txt', comments='#', delimiter=',') File "/usr/lib/python3/dist-packages/numpy/lib/npyio.py", line 848, in loadtxt items = [conv(val) for (conv, val) in zip(converters, vals)] File "/usr/lib/python3/dist-packages/numpy/lib/npyio.py", line 848, in <listcomp> items = [conv(val) for (conv, val) in zip(converters, vals)] ValueError: could not convert string to float: b'A203' ``` and this is my code: ``` from scipy import loadtxt raindata = loadtxt('slz_chuva.txt', comments='#', delimiter= ',') ``` and this is my data: codigo\_estacao,data,hora,temp\_inst,temp\_max,temp\_min,umid\_inst,umid\_max,umid\_min,pto\_orvalh#o\_inst,pto\_orvalho\_max,pto\_orvalho\_min,pressao,pressao\_max,pressao\_min,vento\_direcao,vento\_vel,vento\_rajada,radiacao,precipitacao =============================================================================================================================================================================================================================================== A203,09/05,2014,00,24.8,24.8,24.5,95,95,94,23.9,24.0,23.7,1006.3,1006.3,1005.7,0.3,24,1.8,-3.08,0.0 A203,09/05/2014,01,24.5,24.8,24.5,95,95,95,23.7,24.0,23.7,1006.9,1006.9,1006.3,0.0,30,1.7,-2.78,0.0 A203,09/05/2014,02,24.6,24.6,24.4,96,96,95,23.8,23.8,23.7,1006.6,1006.9,1006.6,0.3,42,1.7,-2.86,0.0 A203,09/05/2014,03,24.8,25.0,24.5,96,96,95,24.1,24.2,23.8,1006.2,1006.6,1006.2,0.0,51,1.8,-1.70,0.0 Could someone help me out? Thanks.
2014/05/21
[ "https://Stackoverflow.com/questions/23785259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3661013/" ]
On the SelectionChanged event you can do this: ``` private void dataGridView1_SelectionChanged(object sender, EventArgs e) { if (dataGridView1.SelectedCells.Count > 2) { dataGridView1.SelectedCells[0].Selected = false; } } ``` This will prevent/undo selecting any more cells after selecting two. For whole rows: ``` private void dataGridView1_SelectionChanged(object sender, EventArgs e) { if (dataGridView1.SelectedRows.Count > 2) { dataGridView1.SelectedRows[0].Selected = false; } } ```
You could try overriding SetSelectedRowCore, calling the base while adding your new limitation to the selected condition. ``` protected override void SetSelectedRowCore(int rowIndex, bool selected) { base.SetSelectedRowCore(rowIndex, selected && currentSelection < allowedSelectionCount); } ``` [SetSelectedRowCore](http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridview.setselectedrowcore%28v=vs.110%29.aspx)
23,785,259
I'm new to Python 3.4 and I'll be using it for my internship next month. My instructor gave me a task to practice before I start: he gave me a set of data and asked me to figure out how to load it. However, it keeps showing me this: ``` Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> raindata = loadtxt('slz_chuva.txt', comments='#', delimiter=',') File "/usr/lib/python3/dist-packages/numpy/lib/npyio.py", line 848, in loadtxt items = [conv(val) for (conv, val) in zip(converters, vals)] File "/usr/lib/python3/dist-packages/numpy/lib/npyio.py", line 848, in <listcomp> items = [conv(val) for (conv, val) in zip(converters, vals)] ValueError: could not convert string to float: b'A203' ``` and this is my code: ``` from scipy import loadtxt raindata = loadtxt('slz_chuva.txt', comments='#', delimiter= ',') ``` and this is my data: codigo\_estacao,data,hora,temp\_inst,temp\_max,temp\_min,umid\_inst,umid\_max,umid\_min,pto\_orvalh#o\_inst,pto\_orvalho\_max,pto\_orvalho\_min,pressao,pressao\_max,pressao\_min,vento\_direcao,vento\_vel,vento\_rajada,radiacao,precipitacao =============================================================================================================================================================================================================================================== A203,09/05,2014,00,24.8,24.8,24.5,95,95,94,23.9,24.0,23.7,1006.3,1006.3,1005.7,0.3,24,1.8,-3.08,0.0 A203,09/05/2014,01,24.5,24.8,24.5,95,95,95,23.7,24.0,23.7,1006.9,1006.9,1006.3,0.0,30,1.7,-2.78,0.0 A203,09/05/2014,02,24.6,24.6,24.4,96,96,95,23.8,23.8,23.7,1006.6,1006.9,1006.6,0.3,42,1.7,-2.86,0.0 A203,09/05/2014,03,24.8,25.0,24.5,96,96,95,24.1,24.2,23.8,1006.2,1006.6,1006.2,0.0,51,1.8,-1.70,0.0 Could someone help me out? Thanks.
2014/05/21
[ "https://Stackoverflow.com/questions/23785259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3661013/" ]
This always leaves the last 2 selected rows selected:

```
private void dataGridView1_SelectionChanged(object sender, EventArgs e)
{
    if (dataGridView1.SelectedRows.Count > 2)
    {
        for (int i = 2; i < dataGridView1.SelectedRows.Count; i++)
        {
            dataGridView1.SelectedRows[i].Selected = false;
        }
    }
}
```
You could try overriding SetSelectedRowCore, calling the base implementation with your new limitation added to the selected condition:

```
// currentSelection and allowedSelectionCount are fields you maintain yourself
protected override void SetSelectedRowCore(int rowIndex, bool selected)
{
    base.SetSelectedRowCore(rowIndex, selected && currentSelection < allowedSelectionCount);
}
```

[SetSelectedRowCore](http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridview.setselectedrowcore%28v=vs.110%29.aspx)
38,994,265
When we input: 10, output: 01 02 03 04 05 06 07 08 09 10. When we input: 103, output: 001 002 003 ... 010 011 012 013 ... 100 101 102 103. How to create this sequence in Ruby or Python?
2016/08/17
[ "https://Stackoverflow.com/questions/38994265", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Ruby implementation (note the `chomp`, so the trailing newline from `gets` doesn't inflate the padding width):

```
n = gets.chomp
p (1..n.to_i).map { |i| i.to_s.rjust(n.length, "0") }.join(" ")
```

Here `rjust` will add leading zeros.
A very basic Python implementation. Note that it's a generator, so it returns one value at a time.

```
def get_range(n):
    len_n = len(str(n))
    for num in range(1, n + 1):
        output = str(num)
        while len(output) < len_n:
            output = '0' + output
        yield output

for i in get_range(100):
    print(i)
```

Output:

```
001
002
...
009
010
011
...
099
100
```
38,994,265
When we input: 10, output: 01 02 03 04 05 06 07 08 09 10. When we input: 103, output: 001 002 003 ... 010 011 012 013 ... 100 101 102 103. How to create this sequence in Ruby or Python?
2016/08/17
[ "https://Stackoverflow.com/questions/38994265", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Another one in Ruby:

```
n = gets.chomp
'1'.rjust(n.size, '0').upto(n) { |s| puts s }
```

[`String#upto`](http://ruby-doc.org/core-2.3.1/String.html#method-i-upto) handles numeric strings in a special way:

```
'01'.upto('10').to_a
#=> ["01", "02", "03", "04", "05", "06", "07", "08", "09", "10"]
```
A very basic Python implementation. Note that it's a generator, so it returns one value at a time.

```
def get_range(n):
    len_n = len(str(n))
    for num in range(1, n + 1):
        output = str(num)
        while len(output) < len_n:
            output = '0' + output
        yield output

for i in get_range(100):
    print(i)
```

Output:

```
001
002
...
009
010
011
...
099
100
```
38,994,265
When we input: 10, output: 01 02 03 04 05 06 07 08 09 10. When we input: 103, output: 001 002 003 ... 010 011 012 013 ... 100 101 102 103. How to create this sequence in Ruby or Python?
2016/08/17
[ "https://Stackoverflow.com/questions/38994265", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Ruby implementation (note the `chomp`, so the trailing newline from `gets` doesn't inflate the padding width):

```
n = gets.chomp
p (1..n.to_i).map { |i| i.to_s.rjust(n.length, "0") }.join(" ")
```

Here `rjust` will add leading zeros.
Using `zfill` you can add leading zeroes:

```
num = input()
for i in range(1, int(num) + 1):
    print(str(i).zfill(len(num)))
```
38,994,265
When we input: 10, output: 01 02 03 04 05 06 07 08 09 10. When we input: 103, output: 001 002 003 ... 010 011 012 013 ... 100 101 102 103. How to create this sequence in Ruby or Python?
2016/08/17
[ "https://Stackoverflow.com/questions/38994265", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Another one in Ruby:

```
n = gets.chomp
'1'.rjust(n.size, '0').upto(n) { |s| puts s }
```

[`String#upto`](http://ruby-doc.org/core-2.3.1/String.html#method-i-upto) handles numeric strings in a special way:

```
'01'.upto('10').to_a
#=> ["01", "02", "03", "04", "05", "06", "07", "08", "09", "10"]
```
Using `zfill` you can add leading zeroes:

```
num = input()
for i in range(1, int(num) + 1):
    print(str(i).zfill(len(num)))
```
38,994,265
When we input: 10, output: 01 02 03 04 05 06 07 08 09 10. When we input: 103, output: 001 002 003 ... 010 011 012 013 ... 100 101 102 103. How to create this sequence in Ruby or Python?
2016/08/17
[ "https://Stackoverflow.com/questions/38994265", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Ruby implementation (note the `chomp`, so the trailing newline from `gets` doesn't inflate the padding width):

```
n = gets.chomp
p (1..n.to_i).map { |i| i.to_s.rjust(n.length, "0") }.join(" ")
```

Here `rjust` will add leading zeros.
Another one in Ruby:

```
n = gets.chomp
'1'.rjust(n.size, '0').upto(n) { |s| puts s }
```

[`String#upto`](http://ruby-doc.org/core-2.3.1/String.html#method-i-upto) handles numeric strings in a special way:

```
'01'.upto('10').to_a
#=> ["01", "02", "03", "04", "05", "06", "07", "08", "09", "10"]
```
51,325,955
I am trying to scrape a website using a `Selenium Firefox` (headless) driver in `python`. I read all the anchors in the webpage and go through them one by one. But I want the browser to wait for the `Ajax` calls on the page to be over before moving to another page. My code is the following:

```
import time
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

caps = DesiredCapabilities().FIREFOX
caps["pageLoadStrategy"] = "eager"  # complete

options = Options()
options.add_argument("--headless")

url = "http://localhost:3000/"

# Using Selenium's webdriver to open the page
driver = webdriver.Firefox(desired_capabilities=caps, firefox_options=options)
driver.get(url)

urls = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.TAG_NAME, "a")))

links = []
for url in urls:
    links.append(url.get_attribute("href"))

for link in links:
    print 'navigating to: ' + link
    driver.get(link)
    body = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.TAG_NAME, "p")))
    driver.execute_script("window.scrollTo(0,1000);")
    print(body)
    driver.back()

driver.quit()
```

The line `print(body)` was added for testing purposes, and it returned incomprehensible text instead of the actual HTML of the page. Here's a part of the printed text:

```
[<selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="e7dfa6b2-1ddf-438d-b562-1e2ac8416e07")>,
 <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="6fe1ffb0-17a8-4b64-9166-691478a0bbd4")>,
 <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="1f510a00-a587-4ae8-9ecf-dd4c90081a5a")>,
 <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="c1bfb1cd-5ccf-42b6-ad4c-c1a70486cc98")>,
 <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="be44db09-3948-48f1-8505-937db509a157")>,
 <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="68f3c9f2-80b0-493e-a47f-ad69caceaa06")>,
```

What is causing this? Everything (content related) in the pages I'm scraping is static.
2018/07/13
[ "https://Stackoverflow.com/questions/51325955", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2378622/" ]
You should add a `0` in your conversion specification to indicate that you want zero-padding:

```
$test = sprintf('%06d', rand(1, 1000000));
//               ^-- here
```

The conversion specifications are documented on [the `sprintf` manual page](http://php.net/manual/en/function.sprintf.php).
You can just replace the space characters with 0:

```
$test = str_replace(" ", "0", sprintf('%6d', rand(1, 1000000)));
```
51,325,955
I am trying to scrape a website using a `Selenium Firefox` (headless) driver in `python`. I read all the anchors in the webpage and go through them one by one. But I want the browser to wait for the `Ajax` calls on the page to be over before moving to another page. My code is the following:

```
import time
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

caps = DesiredCapabilities().FIREFOX
caps["pageLoadStrategy"] = "eager"  # complete

options = Options()
options.add_argument("--headless")

url = "http://localhost:3000/"

# Using Selenium's webdriver to open the page
driver = webdriver.Firefox(desired_capabilities=caps, firefox_options=options)
driver.get(url)

urls = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.TAG_NAME, "a")))

links = []
for url in urls:
    links.append(url.get_attribute("href"))

for link in links:
    print 'navigating to: ' + link
    driver.get(link)
    body = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.TAG_NAME, "p")))
    driver.execute_script("window.scrollTo(0,1000);")
    print(body)
    driver.back()

driver.quit()
```

The line `print(body)` was added for testing purposes, and it returned incomprehensible text instead of the actual HTML of the page. Here's a part of the printed text:

```
[<selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="e7dfa6b2-1ddf-438d-b562-1e2ac8416e07")>,
 <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="6fe1ffb0-17a8-4b64-9166-691478a0bbd4")>,
 <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="1f510a00-a587-4ae8-9ecf-dd4c90081a5a")>,
 <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="c1bfb1cd-5ccf-42b6-ad4c-c1a70486cc98")>,
 <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="be44db09-3948-48f1-8505-937db509a157")>,
 <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="68f3c9f2-80b0-493e-a47f-ad69caceaa06")>,
```

What is causing this? Everything (content related) in the pages I'm scraping is static.
2018/07/13
[ "https://Stackoverflow.com/questions/51325955", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2378622/" ]
You should add a `0` in your conversion specification to indicate that you want zero-padding:

```
$test = sprintf('%06d', rand(1, 1000000));
//               ^-- here
```

The conversion specifications are documented on [the `sprintf` manual page](http://php.net/manual/en/function.sprintf.php).
If you don't want to use `sprintf` (some don't!), an alternative way to do it would be:

```
$test = str_pad(mt_rand(1, 999999), 6, 0, STR_PAD_LEFT);
```

Example output:

```none
736523
024132
003145
```

Using `mt_rand` here because it's a better random number function (not perfect, but better than just `rand`). Also adjusted to 999999, since 1000000 could possibly produce a 7 digit number.

---

Doing a benchmark of 10000 iterations on the three answers provided (Sean, Mine, Aslan), these are the results in speed:

```none
Sean's Method: 0.005
My Method: 0.006
Aslan's Method: 0.009
```

So you would be better off going with Sean's method.
39,948,588
How can I read the contents of a binary or a text file in a non-blocking mode?

For binary files: when I `open(filename, mode='rb')`, I get an instance of `io.BufferedReader`. The documentation for `io.BufferedReader.read` [says](https://docs.python.org/3.5/library/io.html#io.BufferedReader.read):

> Read and return size bytes, or if size is not given or negative, until EOF or if the read call would block in non-blocking mode.

Obviously a straightforward `open(filename, 'rb').read()` is in a blocking mode. To my surprise, I could not find an explanation anywhere in the `io` docs of how to choose the non-blocking mode.

For text files: when I `open(filename, mode='rt')`, I get `io.TextIOWrapper`. I assume the relevant docs are those for `read` in its base class, `io.TextIOBase`; and [according to those docs](https://docs.python.org/3.5/library/io.html#io.TextIOBase.read), there seems to be no way to do a non-blocking read at all:

> Read and return at most size characters from the stream as a single str. If size is negative or None, reads until EOF.
2016/10/09
[ "https://Stackoverflow.com/questions/39948588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/336527/" ]
File operations are blocking. There is no non-blocking mode. But you can create a thread which reads the file in the background. In Python 3, the [`concurrent.futures` module](https://docs.python.org/3.4/library/concurrent.futures.html#module-concurrent.futures) can be useful here.

```
from concurrent.futures import ThreadPoolExecutor

def read_file(filename):
    with open(filename, 'rb') as f:
        return f.read()

executor = ThreadPoolExecutor(max_workers=1)
future_file = executor.submit(read_file, 'C:\\Temp\\mocky.py')

# continue with other work

# later:
if future_file.done():
    file_contents = future_file.result()
```

Or, if you need a callback to be called when the operation is done:

```
def on_file_reading_finished(future_file):
    print(future_file.result())

future_file = executor.submit(read_file, 'C:\\Temp\\mocky.py')
future_file.add_done_callback(on_file_reading_finished)

# continue with other code while the file is loading...
```
I suggest using [**aiofiles**](https://github.com/Tinche/aiofiles) - a library for handling local disk files in asyncio applications.

```
import aiofiles

async def read_without_blocking():
    f = await aiofiles.open('filename', mode='r')
    try:
        contents = await f.read()
    finally:
        await f.close()
```
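If it helps, here is a minimal sketch of driving that coroutine from synchronous code (assuming Python 3.7+ for `asyncio.run`, and using the async context manager form so the file is closed automatically):

```
import asyncio
import aiofiles

async def read_without_blocking(path):
    # the async context manager closes the file for us
    async with aiofiles.open(path, mode='r') as f:
        return await f.read()

contents = asyncio.run(read_without_blocking('filename'))
```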
39,948,588
How can I read the contents of a binary or a text file in a non-blocking mode?

For binary files: when I `open(filename, mode='rb')`, I get an instance of `io.BufferedReader`. The documentation for `io.BufferedReader.read` [says](https://docs.python.org/3.5/library/io.html#io.BufferedReader.read):

> Read and return size bytes, or if size is not given or negative, until EOF or if the read call would block in non-blocking mode.

Obviously a straightforward `open(filename, 'rb').read()` is in a blocking mode. To my surprise, I could not find an explanation anywhere in the `io` docs of how to choose the non-blocking mode.

For text files: when I `open(filename, mode='rt')`, I get `io.TextIOWrapper`. I assume the relevant docs are those for `read` in its base class, `io.TextIOBase`; and [according to those docs](https://docs.python.org/3.5/library/io.html#io.TextIOBase.read), there seems to be no way to do a non-blocking read at all:

> Read and return at most size characters from the stream as a single str. If size is negative or None, reads until EOF.
2016/10/09
[ "https://Stackoverflow.com/questions/39948588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/336527/" ]
File operations are blocking. There is no non-blocking mode. But you can create a thread which reads the file in the background. In Python 3, the [`concurrent.futures` module](https://docs.python.org/3.4/library/concurrent.futures.html#module-concurrent.futures) can be useful here.

```
from concurrent.futures import ThreadPoolExecutor

def read_file(filename):
    with open(filename, 'rb') as f:
        return f.read()

executor = ThreadPoolExecutor(max_workers=1)
future_file = executor.submit(read_file, 'C:\\Temp\\mocky.py')

# continue with other work

# later:
if future_file.done():
    file_contents = future_file.result()
```

Or, if you need a callback to be called when the operation is done:

```
def on_file_reading_finished(future_file):
    print(future_file.result())

future_file = executor.submit(read_file, 'C:\\Temp\\mocky.py')
future_file.add_done_callback(on_file_reading_finished)

# continue with other code while the file is loading...
```
Python does support non-blocking reads, at least on Unix type systems, by setting the [`O_NONBLOCK` flag](https://docs.python.org/3/library/os.html#os.O_NONBLOCK). In Python 3.5+, there is the [`os.set_blocking()` function](https://docs.python.org/3/library/os.html#os.set_blocking) which makes this easier:

```
import os

f = open(filename, 'rb')
os.set_blocking(f.fileno(), False)
f.read()  # This will be non-blocking.
```

However, as [zvone's answer](https://stackoverflow.com/a/39948796) notes, this doesn't necessarily work on actual disk files. This isn't a Python thing though, but an OS limitation. As the Linux [open(2) man page](https://manpages.debian.org/buster/manpages-dev/open.2.en.html) states:

> Note that this flag has no effect for regular files and block devices; that is, I/O operations will (briefly) block when device activity is required, regardless of whether O_NONBLOCK is set.

But it does suggest this may be implemented in the future:

> Since O_NONBLOCK semantics might eventually be implemented, applications should not depend upon blocking behavior when specifying this flag for regular files and block devices.
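One practical detail, for what it's worth: on file types where `O_NONBLOCK` actually applies (pipes, FIFOs, sockets), opening unbuffered gives the documented raw-I/O semantics, where `read()` returns `None` when no bytes are available instead of blocking. A small sketch (the FIFO path is made up for illustration):

```
import os

# assume 'myfifo' is a named pipe created elsewhere, e.g. with os.mkfifo()
f = open('myfifo', 'rb', buffering=0)  # unbuffered, so raw read() semantics apply
os.set_blocking(f.fileno(), False)

data = f.read()
if data is None:
    # no bytes available right now; try again later instead of blocking
    pass
```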
48,146,921
I have a script that builds llvm/clang 3.4.2 from source (with configure+make). **It runs smoothly on ubuntu 14.04.5 LTS**. When I upgraded to **ubuntu 17.04**, the build fails. Here is the building script:

```
svn co https://llvm.org/svn/llvm-project/llvm/tags/RELEASE_342/final llvm
svn co https://llvm.org/svn/llvm-project/cfe/tags/RELEASE_342/final llvm/tools/clang
svn co https://llvm.org/svn/llvm-project/compiler-rt/tags/RELEASE_342/final llvm/projects/compiler-rt
svn co https://llvm.org/svn/llvm-project/libcxx/tags/RELEASE_342/final llvm/projects/libcxx

rm -rf llvm/.svn
rm -rf llvm/tools/clang/.svn
rm -rf llvm/projects/compiler-rt/.svn
rm -rf llvm/projects/libcxx/.svn

cd llvm

./configure \
    --enable-optimized \
    --disable-assertions \
    --enable-targets=host \
    --with-python="/usr/bin/python2"

make -j `nproc`
```

Here are the errors I get (TLDR: problems with definitions of **malloc**, **calloc**, **realloc** and **free**):

```
/usr/include/malloc.h:38:14: error: declaration conflicts with target of using declaration already in scope
extern void *malloc (size_t __size) __THROW __attribute_malloc__ __wur;
             ^
/usr/include/stdlib.h:427:14: note: target of using declaration
extern void *malloc (size_t __size) __THROW __attribute_malloc__ __wur;
             ^
/usr/lib/gcc/x86_64-linux-gnu/6.3.0/../../../../include/c++/6.3.0/stdlib.h:65:12: note: using declaration
using std::malloc;
           ^
In file included from /home/oren/GIT/LatestKlee/llvm/projects/compiler-rt/lib/tsan/rtl/tsan_platform_linux.cc:47:
/usr/include/malloc.h:41:14: error: declaration conflicts with target of using declaration already in scope
extern void *calloc (size_t __nmemb, size_t __size)
             ^
/usr/include/stdlib.h:429:14: note: target of using declaration
extern void *calloc (size_t __nmemb, size_t __size)
             ^
/usr/lib/gcc/x86_64-linux-gnu/6.3.0/../../../../include/c++/6.3.0/stdlib.h:59:12: note: using declaration
using std::calloc;
           ^
In file included from /home/oren/GIT/LatestKlee/llvm/projects/compiler-rt/lib/tsan/rtl/tsan_platform_linux.cc:47:
/usr/include/malloc.h:49:14: error: declaration conflicts with target of using declaration already in scope
extern void *realloc (void *__ptr, size_t __size)
             ^
/usr/include/stdlib.h:441:14: note: target of using declaration
extern void *realloc (void *__ptr, size_t __size)
             ^
/usr/lib/gcc/x86_64-linux-gnu/6.3.0/../../../../include/c++/6.3.0/stdlib.h:73:12: note: using declaration
using std::realloc;
           ^
In file included from /home/oren/GIT/LatestKlee/llvm/projects/compiler-rt/lib/tsan/rtl/tsan_platform_linux.cc:47:
/usr/include/malloc.h:53:13: error: declaration conflicts with target of using declaration already in scope
extern void free (void *__ptr) __THROW;
            ^
/usr/include/stdlib.h:444:13: note: target of using declaration
extern void free (void *__ptr) __THROW;
            ^
/usr/lib/gcc/x86_64-linux-gnu/6.3.0/../../../../include/c++/6.3.0/stdlib.h:61:12: note: using declaration
using std::free;
           ^
COMPILE: clang_linux/tsan-x86_64/x86_64: /home/oren/GIT/LatestKlee/llvm/projects/compiler-rt/lib/tsan/rtl/tsan_rtl_mutex.cc
4 errors generated.
Makefile:267: recipe for target '/home/oren/GIT/LatestKlee/llvm/tools/clang/runtime/compiler-rt/clang_linux/tsan-x86_64/x86_64/SubDir.lib__tsan__rtl/tsan_platform_linux.o' failed
make[5]: *** [/home/oren/GIT/LatestKlee/llvm/tools/clang/runtime/compiler-rt/clang_linux/tsan-x86_64/x86_64/SubDir.lib__tsan__rtl/tsan_platform_linux.o] Error 1
```

The default gcc version shipped with ubuntu 17.04 is 6.3. Maybe this is an issue with the default C++ dialect used by gcc 6.3?
Any help is very much appreciated, thanks!
2018/01/08
[ "https://Stackoverflow.com/questions/48146921", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3357352/" ]
That seems to be an issue with LLVM 3.4.2's `tsan` (Thread Sanitizer) failing to build with GCC 6.x, as previously reported here: <https://aur.archlinux.org/packages/clang34-analyzer-split>

It seems the inclusion of `stdlib.h` and `malloc.h` is conflicting, since both define `malloc` and friends.

It's possible that this issue only manifests in `tsan`, so if `tsan` is not instrumental to your LLVM build (which is very likely), and you wish to stick with the system gcc for building LLVM, you may consider disabling `tsan` completely.

If you're running a CMake build (as in [here](https://stackoverflow.com/questions/48188796/clang-4-0-fails-to-build-clang-3-42-on-ubuntu-17-04/48226872#48226872)), you can do so by commenting line 29 of `llvm/projects/compiler-rt/lib/CMakeLists.txt`:

```
if (CMAKE_SYSTEM_NAME MATCHES "Linux" AND NOT ANDROID)
  add_subdirectory(tsan)   # comment out this line
```

If you're forced to stick to the `configure` build, my best guess would be removing the `tsan-x86_64` target in `llvm/projects/compiler-rt/make/clang_linux.mk`, line 63:

```
Configs += full-x86_64 profile-x86_64 san-x86_64 asan-x86_64 --> tsan-x86_64 <--
```
I faced the same problem on my Ubuntu 16.10, whose default gcc is 6.2. You need to instruct the LLVM build system to use gcc 4.9. Also, I suggest you remove GCC 6 completely:

```
$ sudo apt-get remove g++-6 gcc-6 cpp
$ sudo apt-get install gcc-4.9 g++-4.9
$ export CC=/usr/bin/gcc-4.9
$ export CXX=/usr/bin/g++-4.9
$ export CPP=/usr/bin/cpp-4.9
$ ./configure
$ make
```

And maybe you will need:

```
$ sudo ln -s /usr/bin/cpp-4.9 /usr/bin/cpp
```
35,245,401
I work with conda environments and need some pip packages as well, e.g. pre-compiled wheels from [~gohlke](http://www.lfd.uci.edu/~gohlke/pythonlibs/). At the moment I have two files: `environment.yml` for conda with:

```
# run: conda env create --file environment.yml
name: test-env
dependencies:
  - python>=3.5
  - anaconda
```

and `requirements.txt` for pip, which can be used after activating the above conda environment:

```
# run: pip install -r requirements.txt
docx
gooey
http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl
```

Is there a possibility to combine them in one file (for conda)?
2016/02/06
[ "https://Stackoverflow.com/questions/35245401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5276734/" ]
Pip dependencies can be included in the `environment.yml` file like this ([docs](https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually)):

```
# run: conda env create --file environment.yml
name: test-env
dependencies:
  - python>=3.5
  - anaconda
  - pip
  - numpy=1.13.3   # pin version for conda
  - pip:
      # works for regular pip packages
      - docx
      - gooey
      - matplotlib==2.0.0   # pin version for pip
      # and for wheels
      - http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl
```

It also works for `.whl` files in the same directory (see [Dengar's answer](https://stackoverflow.com/a/41454032/5276734)) as well as with common pip packages.
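If the environment already exists, the same file can be applied in place with `conda env update` — a quick sketch (flag names as documented in conda's CLI help):

```
# create the environment the first time
conda env create --file environment.yml

# later, sync an existing environment to the file
conda env update --file environment.yml --prune
```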
Just want to add that adding a wheel in the directory also works. I was getting this error when using the entire URL:

```
HTTP error 404 while getting http://www.lfd.uci.edu/~gohlke/pythonlibs/f9r7rmd8/opencv_python-3.1.0-cp35-none-win_amd64.whl
```

Ended up downloading the wheel and saving it into the same directory as the yml file.

```
name: test-env
dependencies:
  - python>=3.5
  - anaconda
  - pip
  - pip:
      - opencv_python-3.1.0-cp35-none-win_amd64.whl
```
35,245,401
I work with conda environments and need some pip packages as well, e.g. pre-compiled wheels from [~gohlke](http://www.lfd.uci.edu/~gohlke/pythonlibs/). At the moment I have two files: `environment.yml` for conda with:

```
# run: conda env create --file environment.yml
name: test-env
dependencies:
  - python>=3.5
  - anaconda
```

and `requirements.txt` for pip, which can be used after activating the above conda environment:

```
# run: pip install -r requirements.txt
docx
gooey
http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl
```

Is there a possibility to combine them in one file (for conda)?
2016/02/06
[ "https://Stackoverflow.com/questions/35245401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5276734/" ]
Pip dependencies can be included in the `environment.yml` file like this ([docs](https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually)):

```
# run: conda env create --file environment.yml
name: test-env
dependencies:
  - python>=3.5
  - anaconda
  - pip
  - numpy=1.13.3   # pin version for conda
  - pip:
      # works for regular pip packages
      - docx
      - gooey
      - matplotlib==2.0.0   # pin version for pip
      # and for wheels
      - http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl
```

It also works for `.whl` files in the same directory (see [Dengar's answer](https://stackoverflow.com/a/41454032/5276734)) as well as with common pip packages.
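If the environment already exists, the same file can be applied in place with `conda env update` — a quick sketch (flag names as documented in conda's CLI help):

```
# create the environment the first time
conda env create --file environment.yml

# later, sync an existing environment to the file
conda env update --file environment.yml --prune
```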
One can also use the `requirements.txt` directly in the YAML. For example,

```
name: test-env
dependencies:
  - python>=3.5
  - anaconda
  - pip
  - pip:
      - -r requirements.txt
```

Basically, [any option you can run with `pip install`](https://pip.pypa.io/en/stable/reference/pip_install/) you can run in a YAML. See [the Advanced Pip Example](https://github.com/conda/conda/tree/master/tests/conda_env/support/advanced-pip) for a showcase of other capabilities.

---

### Important Note

A previous version of this answer (and Conda's Advanced Pip Example) used a substandard `file` URI syntax:

```yaml
- -r file:requirements.txt
```

Pip v21.2.1 introduced stricter behavior for URI parsing and no longer supports this. See [this answer for details](https://stackoverflow.com/a/68586065/570918).
35,245,401
I work with conda environments and need some pip packages as well, e.g. pre-compiled wheels from [~gohlke](http://www.lfd.uci.edu/~gohlke/pythonlibs/). At the moment I have two files: `environment.yml` for conda with:

```
# run: conda env create --file environment.yml
name: test-env
dependencies:
  - python>=3.5
  - anaconda
```

and `requirements.txt` for pip, which can be used after activating the above conda environment:

```
# run: pip install -r requirements.txt
docx
gooey
http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl
```

Is there a possibility to combine them in one file (for conda)?
2016/02/06
[ "https://Stackoverflow.com/questions/35245401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5276734/" ]
Pip dependencies can be included in the `environment.yml` file like this ([docs](https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually)):

```
# run: conda env create --file environment.yml
name: test-env
dependencies:
  - python>=3.5
  - anaconda
  - pip
  - numpy=1.13.3   # pin version for conda
  - pip:
      # works for regular pip packages
      - docx
      - gooey
      - matplotlib==2.0.0   # pin version for pip
      # and for wheels
      - http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl
```

It also works for `.whl` files in the same directory (see [Dengar's answer](https://stackoverflow.com/a/41454032/5276734)) as well as with common pip packages.
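If the environment already exists, the same file can be applied in place with `conda env update` — a quick sketch (flag names as documented in conda's CLI help):

```
# create the environment the first time
conda env create --file environment.yml

# later, sync an existing environment to the file
conda env update --file environment.yml --prune
```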
If you want to do it automatically it seems that if you do: ``` conda env export > environment.yml` ``` already has the pip things you need. No need to run `pip freeze > requirements4pip.txt` separately for me or include it as an ``` - pip: - -r file:requirements.txt ``` as another answer mentioned. See my yml file: ``` $ cat environment.yml name: myenv channels: - pytorch - dglteam - defaults - conda-forge dependencies: - _libgcc_mutex=0.1=main - absl-py=0.12.0=py38h06a4308_0 - aiohttp=3.7.4=py38h27cfd23_1 - async-timeout=3.0.1=py38h06a4308_0 - attrs=20.3.0=pyhd3eb1b0_0 - beautifulsoup4=4.9.3=pyha847dfd_0 - blas=1.0=mkl - blinker=1.4=py38h06a4308_0 - brotlipy=0.7.0=py38h27cfd23_1003 - bzip2=1.0.8=h7b6447c_0 - c-ares=1.17.1=h27cfd23_0 - ca-certificates=2021.4.13=h06a4308_1 - cachetools=4.2.1=pyhd3eb1b0_0 - cairo=1.14.12=h8948797_3 - certifi=2020.12.5=py38h06a4308_0 - cffi=1.14.0=py38h2e261b9_0 - chardet=3.0.4=py38h06a4308_1003 - click=7.1.2=pyhd3eb1b0_0 - conda=4.10.1=py38h06a4308_1 - conda-build=3.21.4=py38h06a4308_0 - conda-package-handling=1.7.3=py38h27cfd23_1 - coverage=5.5=py38h27cfd23_2 - cryptography=3.4.7=py38hd23ed53_0 - cudatoolkit=11.0.221=h6bb024c_0 - cycler=0.10.0=py38_0 - cython=0.29.23=py38h2531618_0 - dbus=1.13.18=hb2f20db_0 - decorator=4.4.2=pyhd3eb1b0_0 - dgl-cuda11.0=0.6.1=py38_0 - dill=0.3.3=pyhd3eb1b0_0 - expat=2.3.0=h2531618_2 - filelock=3.0.12=pyhd3eb1b0_1 - fontconfig=2.13.1=h6c09931_0 - freetype=2.10.4=h7ca028e_0 - fribidi=1.0.10=h7b6447c_0 - gettext=0.21.0=hf68c758_0 - glib=2.66.3=h58526e2_0 - glob2=0.7=pyhd3eb1b0_0 - google-auth=1.29.0=pyhd3eb1b0_0 - google-auth-oauthlib=0.4.4=pyhd3eb1b0_0 - graphite2=1.3.14=h23475e2_0 - graphviz=2.40.1=h21bd128_2 - grpcio=1.36.1=py38h2157cd5_1 - gst-plugins-base=1.14.0=h8213a91_2 - gstreamer=1.14.0=h28cd5cc_2 - harfbuzz=1.8.8=hffaf4a1_0 - icu=58.2=he6710b0_3 - idna=2.10=pyhd3eb1b0_0 - importlib-metadata=3.10.0=py38h06a4308_0 - intel-openmp=2021.2.0=h06a4308_610 - jinja2=2.11.3=pyhd3eb1b0_0 - joblib=1.0.1=pyhd3eb1b0_0 - jpeg=9b=h024ee3a_2 - kiwisolver=1.3.1=py38h2531618_0 - lcms2=2.12=h3be6417_0 - ld_impl_linux-64=2.33.1=h53a641e_7 - libarchive=3.4.2=h62408e4_0 - libffi=3.2.1=hf484d3e_1007 - libgcc-ng=9.1.0=hdf63c60_0 - libgfortran-ng=7.3.0=hdf63c60_0 - libglib=2.66.3=hbe7bbb4_0 - libiconv=1.16=h516909a_0 - liblief=0.10.1=he6710b0_0 - libpng=1.6.37=h21135ba_2 - libprotobuf=3.14.0=h8c45485_0 - libstdcxx-ng=9.1.0=hdf63c60_0 - libtiff=4.1.0=h2733197_1 - libuuid=1.0.3=h1bed415_2 - libuv=1.40.0=h7b6447c_0 - libxcb=1.14=h7b6447c_0 - libxml2=2.9.10=hb55368b_3 - lz4-c=1.9.2=he1b5a44_3 - markdown=3.3.4=py38h06a4308_0 - markupsafe=1.1.1=py38h7b6447c_0 - matplotlib=3.3.4=py38h06a4308_0 - matplotlib-base=3.3.4=py38h62a2d02_0 - mkl=2020.2=256 - mkl-service=2.3.0=py38h1e0a361_2 - mkl_fft=1.3.0=py38h54f3939_0 - mkl_random=1.2.0=py38hc5bc63f_1 - multidict=5.1.0=py38h27cfd23_2 - ncurses=6.2=he6710b0_1 - networkx=2.5.1=pyhd3eb1b0_0 - ninja=1.10.2=hff7bd54_1 - numpy=1.19.2=py38h54aff64_0 - numpy-base=1.19.2=py38hfa32c7d_0 - oauthlib=3.1.0=py_0 - olefile=0.46=pyh9f0ad1d_1 - openssl=1.1.1k=h27cfd23_0 - pandas=1.2.4=py38h2531618_0 - pango=1.42.4=h049681c_0 - patchelf=0.12=h2531618_1 - pcre=8.44=he6710b0_0 - pillow=8.2.0=py38he98fc37_0 - pip=21.0.1=py38h06a4308_0 - pixman=0.40.0=h7b6447c_0 - pkginfo=1.7.0=py38h06a4308_0 - protobuf=3.14.0=py38h2531618_1 - psutil=5.8.0=py38h27cfd23_1 - py-lief=0.10.1=py38h403a769_0 - pyasn1=0.4.8=py_0 - pyasn1-modules=0.2.8=py_0 - pycosat=0.6.3=py38h7b6447c_1 - pycparser=2.20=py_2 - pyjwt=2.0.1=pyhd8ed1ab_1 - 
pyopenssl=20.0.1=pyhd3eb1b0_1 - pyparsing=2.4.7=pyhd3eb1b0_0 - pyqt=5.9.2=py38h05f1152_4 - pysocks=1.7.1=py38h06a4308_0 - python=3.8.2=hcf32534_0 - python-dateutil=2.8.1=pyhd3eb1b0_0 - python-libarchive-c=2.9=pyhd3eb1b0_1 - python_abi=3.8=1_cp38 - pytorch=1.7.1=py3.8_cuda11.0.221_cudnn8.0.5_0 - pytz=2021.1=pyhd3eb1b0_0 - pyyaml=5.4.1=py38h27cfd23_1 - qt=5.9.7=h5867ecd_1 - readline=8.1=h27cfd23_0 - requests=2.25.1=pyhd3eb1b0_0 - requests-oauthlib=1.3.0=py_0 - ripgrep=12.1.1=0 - rsa=4.7.2=pyhd3eb1b0_1 - ruamel_yaml=0.15.100=py38h27cfd23_0 - scikit-learn=0.24.1=py38ha9443f7_0 - scipy=1.6.2=py38h91f5cce_0 - setuptools=52.0.0=py38h06a4308_0 - sip=4.19.13=py38he6710b0_0 - six=1.15.0=pyh9f0ad1d_0 - soupsieve=2.2.1=pyhd3eb1b0_0 - sqlite=3.35.4=hdfb4753_0 - tensorboard=2.4.0=pyhc547734_0 - tensorboard-plugin-wit=1.6.0=py_0 - threadpoolctl=2.1.0=pyh5ca1d4c_0 - tk=8.6.10=hbc83047_0 - torchaudio=0.7.2=py38 - torchtext=0.8.1=py38 - torchvision=0.8.2=py38_cu110 - tornado=6.1=py38h27cfd23_0 - typing-extensions=3.7.4.3=0 - typing_extensions=3.7.4.3=py_0 - urllib3=1.26.4=pyhd3eb1b0_0 - werkzeug=1.0.1=pyhd3eb1b0_0 - wheel=0.36.2=pyhd3eb1b0_0 - xz=5.2.5=h7b6447c_0 - yaml=0.2.5=h7b6447c_0 - yarl=1.6.3=py38h27cfd23_0 - zipp=3.4.1=pyhd3eb1b0_0 - zlib=1.2.11=h7b6447c_3 - zstd=1.4.5=h9ceee32_0 - pip: - aioconsole==0.3.1 - lark-parser==0.6.5 - lmdb==0.94 - pexpect==4.6.0 - progressbar2==3.39.3 - ptyprocess==0.7.0 - pycapnp==1.0.0 - python-utils==2.5.6 - sexpdata==0.0.3 - tqdm==4.56.0 prefix: /home/miranda9/miniconda3/envs/myenv ``` note that at the time of this writing doing `conda env create --file environment.yml` to create the yml env results in an error: ``` $ conda env create --file environment.yml CondaValueError: prefix already exists: /home/miranda9/miniconda3/envs/myenv ```
35,245,401
I work with conda environments and need some pip packages as well, e.g. pre-compiled wheels from [~gohlke](http://www.lfd.uci.edu/~gohlke/pythonlibs/). At the moment I have two files: `environment.yml` for conda with:

```
# run: conda env create --file environment.yml
name: test-env
dependencies:
  - python>=3.5
  - anaconda
```

and `requirements.txt` for pip, which can be used after activating the above conda environment:

```
# run: pip install -r requirements.txt
docx
gooey
http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl
```

Is there a possibility to combine them in one file (for conda)?
2016/02/06
[ "https://Stackoverflow.com/questions/35245401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5276734/" ]
One can also use the `requirements.txt` directly in the YAML. For example,

```
name: test-env
dependencies:
  - python>=3.5
  - anaconda
  - pip
  - pip:
      - -r requirements.txt
```

Basically, [any option you can run with `pip install`](https://pip.pypa.io/en/stable/reference/pip_install/) you can run in a YAML. See [the Advanced Pip Example](https://github.com/conda/conda/tree/master/tests/conda_env/support/advanced-pip) for a showcase of other capabilities.

---

### Important Note

A previous version of this answer (and Conda's Advanced Pip Example) used a substandard `file` URI syntax:

```yaml
- -r file:requirements.txt
```

Pip v21.2.1 introduced stricter behavior for URI parsing and no longer supports this. See [this answer for details](https://stackoverflow.com/a/68586065/570918).
Just want to add that adding a wheel in the directory also works. I was getting this error when using the entire URL:

```
HTTP error 404 while getting http://www.lfd.uci.edu/~gohlke/pythonlibs/f9r7rmd8/opencv_python-3.1.0-cp35-none-win_amd64.whl
```

Ended up downloading the wheel and saving it into the same directory as the yml file.

```
name: test-env
dependencies:
  - python>=3.5
  - anaconda
  - pip
  - pip:
      - opencv_python-3.1.0-cp35-none-win_amd64.whl
```
35,245,401
I work with conda environments and need some pip packages as well, e.g. pre-compiled wheels from [~gohlke](http://www.lfd.uci.edu/~gohlke/pythonlibs/). At the moment I have two files: `environment.yml` for conda with:

```
# run: conda env create --file environment.yml
name: test-env
dependencies:
  - python>=3.5
  - anaconda
```

and `requirements.txt` for pip, which can be used after activating the above conda environment:

```
# run: pip install -r requirements.txt
docx
gooey
http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl
```

Is there a possibility to combine them in one file (for conda)?
2016/02/06
[ "https://Stackoverflow.com/questions/35245401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5276734/" ]
Just want to add that adding a wheel in the directory also works. I was getting this error when using the entire URL:

```
HTTP error 404 while getting http://www.lfd.uci.edu/~gohlke/pythonlibs/f9r7rmd8/opencv_python-3.1.0-cp35-none-win_amd64.whl
```

Ended up downloading the wheel and saving it into the same directory as the yml file.

```
name: test-env
dependencies:
  - python>=3.5
  - anaconda
  - pip
  - pip:
      - opencv_python-3.1.0-cp35-none-win_amd64.whl
```
If you want to do it automatically it seems that if you do: ``` conda env export > environment.yml` ``` already has the pip things you need. No need to run `pip freeze > requirements4pip.txt` separately for me or include it as an ``` - pip: - -r file:requirements.txt ``` as another answer mentioned. See my yml file: ``` $ cat environment.yml name: myenv channels: - pytorch - dglteam - defaults - conda-forge dependencies: - _libgcc_mutex=0.1=main - absl-py=0.12.0=py38h06a4308_0 - aiohttp=3.7.4=py38h27cfd23_1 - async-timeout=3.0.1=py38h06a4308_0 - attrs=20.3.0=pyhd3eb1b0_0 - beautifulsoup4=4.9.3=pyha847dfd_0 - blas=1.0=mkl - blinker=1.4=py38h06a4308_0 - brotlipy=0.7.0=py38h27cfd23_1003 - bzip2=1.0.8=h7b6447c_0 - c-ares=1.17.1=h27cfd23_0 - ca-certificates=2021.4.13=h06a4308_1 - cachetools=4.2.1=pyhd3eb1b0_0 - cairo=1.14.12=h8948797_3 - certifi=2020.12.5=py38h06a4308_0 - cffi=1.14.0=py38h2e261b9_0 - chardet=3.0.4=py38h06a4308_1003 - click=7.1.2=pyhd3eb1b0_0 - conda=4.10.1=py38h06a4308_1 - conda-build=3.21.4=py38h06a4308_0 - conda-package-handling=1.7.3=py38h27cfd23_1 - coverage=5.5=py38h27cfd23_2 - cryptography=3.4.7=py38hd23ed53_0 - cudatoolkit=11.0.221=h6bb024c_0 - cycler=0.10.0=py38_0 - cython=0.29.23=py38h2531618_0 - dbus=1.13.18=hb2f20db_0 - decorator=4.4.2=pyhd3eb1b0_0 - dgl-cuda11.0=0.6.1=py38_0 - dill=0.3.3=pyhd3eb1b0_0 - expat=2.3.0=h2531618_2 - filelock=3.0.12=pyhd3eb1b0_1 - fontconfig=2.13.1=h6c09931_0 - freetype=2.10.4=h7ca028e_0 - fribidi=1.0.10=h7b6447c_0 - gettext=0.21.0=hf68c758_0 - glib=2.66.3=h58526e2_0 - glob2=0.7=pyhd3eb1b0_0 - google-auth=1.29.0=pyhd3eb1b0_0 - google-auth-oauthlib=0.4.4=pyhd3eb1b0_0 - graphite2=1.3.14=h23475e2_0 - graphviz=2.40.1=h21bd128_2 - grpcio=1.36.1=py38h2157cd5_1 - gst-plugins-base=1.14.0=h8213a91_2 - gstreamer=1.14.0=h28cd5cc_2 - harfbuzz=1.8.8=hffaf4a1_0 - icu=58.2=he6710b0_3 - idna=2.10=pyhd3eb1b0_0 - importlib-metadata=3.10.0=py38h06a4308_0 - intel-openmp=2021.2.0=h06a4308_610 - jinja2=2.11.3=pyhd3eb1b0_0 - joblib=1.0.1=pyhd3eb1b0_0 - jpeg=9b=h024ee3a_2 - kiwisolver=1.3.1=py38h2531618_0 - lcms2=2.12=h3be6417_0 - ld_impl_linux-64=2.33.1=h53a641e_7 - libarchive=3.4.2=h62408e4_0 - libffi=3.2.1=hf484d3e_1007 - libgcc-ng=9.1.0=hdf63c60_0 - libgfortran-ng=7.3.0=hdf63c60_0 - libglib=2.66.3=hbe7bbb4_0 - libiconv=1.16=h516909a_0 - liblief=0.10.1=he6710b0_0 - libpng=1.6.37=h21135ba_2 - libprotobuf=3.14.0=h8c45485_0 - libstdcxx-ng=9.1.0=hdf63c60_0 - libtiff=4.1.0=h2733197_1 - libuuid=1.0.3=h1bed415_2 - libuv=1.40.0=h7b6447c_0 - libxcb=1.14=h7b6447c_0 - libxml2=2.9.10=hb55368b_3 - lz4-c=1.9.2=he1b5a44_3 - markdown=3.3.4=py38h06a4308_0 - markupsafe=1.1.1=py38h7b6447c_0 - matplotlib=3.3.4=py38h06a4308_0 - matplotlib-base=3.3.4=py38h62a2d02_0 - mkl=2020.2=256 - mkl-service=2.3.0=py38h1e0a361_2 - mkl_fft=1.3.0=py38h54f3939_0 - mkl_random=1.2.0=py38hc5bc63f_1 - multidict=5.1.0=py38h27cfd23_2 - ncurses=6.2=he6710b0_1 - networkx=2.5.1=pyhd3eb1b0_0 - ninja=1.10.2=hff7bd54_1 - numpy=1.19.2=py38h54aff64_0 - numpy-base=1.19.2=py38hfa32c7d_0 - oauthlib=3.1.0=py_0 - olefile=0.46=pyh9f0ad1d_1 - openssl=1.1.1k=h27cfd23_0 - pandas=1.2.4=py38h2531618_0 - pango=1.42.4=h049681c_0 - patchelf=0.12=h2531618_1 - pcre=8.44=he6710b0_0 - pillow=8.2.0=py38he98fc37_0 - pip=21.0.1=py38h06a4308_0 - pixman=0.40.0=h7b6447c_0 - pkginfo=1.7.0=py38h06a4308_0 - protobuf=3.14.0=py38h2531618_1 - psutil=5.8.0=py38h27cfd23_1 - py-lief=0.10.1=py38h403a769_0 - pyasn1=0.4.8=py_0 - pyasn1-modules=0.2.8=py_0 - pycosat=0.6.3=py38h7b6447c_1 - pycparser=2.20=py_2 - pyjwt=2.0.1=pyhd8ed1ab_1 - 
pyopenssl=20.0.1=pyhd3eb1b0_1 - pyparsing=2.4.7=pyhd3eb1b0_0 - pyqt=5.9.2=py38h05f1152_4 - pysocks=1.7.1=py38h06a4308_0 - python=3.8.2=hcf32534_0 - python-dateutil=2.8.1=pyhd3eb1b0_0 - python-libarchive-c=2.9=pyhd3eb1b0_1 - python_abi=3.8=1_cp38 - pytorch=1.7.1=py3.8_cuda11.0.221_cudnn8.0.5_0 - pytz=2021.1=pyhd3eb1b0_0 - pyyaml=5.4.1=py38h27cfd23_1 - qt=5.9.7=h5867ecd_1 - readline=8.1=h27cfd23_0 - requests=2.25.1=pyhd3eb1b0_0 - requests-oauthlib=1.3.0=py_0 - ripgrep=12.1.1=0 - rsa=4.7.2=pyhd3eb1b0_1 - ruamel_yaml=0.15.100=py38h27cfd23_0 - scikit-learn=0.24.1=py38ha9443f7_0 - scipy=1.6.2=py38h91f5cce_0 - setuptools=52.0.0=py38h06a4308_0 - sip=4.19.13=py38he6710b0_0 - six=1.15.0=pyh9f0ad1d_0 - soupsieve=2.2.1=pyhd3eb1b0_0 - sqlite=3.35.4=hdfb4753_0 - tensorboard=2.4.0=pyhc547734_0 - tensorboard-plugin-wit=1.6.0=py_0 - threadpoolctl=2.1.0=pyh5ca1d4c_0 - tk=8.6.10=hbc83047_0 - torchaudio=0.7.2=py38 - torchtext=0.8.1=py38 - torchvision=0.8.2=py38_cu110 - tornado=6.1=py38h27cfd23_0 - typing-extensions=3.7.4.3=0 - typing_extensions=3.7.4.3=py_0 - urllib3=1.26.4=pyhd3eb1b0_0 - werkzeug=1.0.1=pyhd3eb1b0_0 - wheel=0.36.2=pyhd3eb1b0_0 - xz=5.2.5=h7b6447c_0 - yaml=0.2.5=h7b6447c_0 - yarl=1.6.3=py38h27cfd23_0 - zipp=3.4.1=pyhd3eb1b0_0 - zlib=1.2.11=h7b6447c_3 - zstd=1.4.5=h9ceee32_0 - pip: - aioconsole==0.3.1 - lark-parser==0.6.5 - lmdb==0.94 - pexpect==4.6.0 - progressbar2==3.39.3 - ptyprocess==0.7.0 - pycapnp==1.0.0 - python-utils==2.5.6 - sexpdata==0.0.3 - tqdm==4.56.0 prefix: /home/miranda9/miniconda3/envs/myenv ``` note that at the time of this writing doing `conda env create --file environment.yml` to create the yml env results in an error: ``` $ conda env create --file environment.yml CondaValueError: prefix already exists: /home/miranda9/miniconda3/envs/myenv ```
35,245,401
I work with conda environments and need some pip packages as well, e.g. pre-compiled wheels from [~gohlke](http://www.lfd.uci.edu/~gohlke/pythonlibs/). At the moment I have two files: `environment.yml` for conda with:

```
# run: conda env create --file environment.yml
name: test-env
dependencies:
  - python>=3.5
  - anaconda
```

and `requirements.txt` for pip, which can be used after activating the above conda environment:

```
# run: pip install -r requirements.txt
docx
gooey
http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl
```

Is there a possibility to combine them in one file (for conda)?
2016/02/06
[ "https://Stackoverflow.com/questions/35245401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5276734/" ]
One can also use the `requirements.txt` directly in the YAML. For example,

```
name: test-env
dependencies:
  - python>=3.5
  - anaconda
  - pip
  - pip:
      - -r requirements.txt
```

Basically, [any option you can run with `pip install`](https://pip.pypa.io/en/stable/reference/pip_install/) you can run in a YAML. See [the Advanced Pip Example](https://github.com/conda/conda/tree/master/tests/conda_env/support/advanced-pip) for a showcase of other capabilities.

---

### Important Note

A previous version of this answer (and Conda's Advanced Pip Example) used a substandard `file` URI syntax:

```yaml
- -r file:requirements.txt
```

Pip v21.2.1 introduced stricter behavior for URI parsing and no longer supports this. See [this answer for details](https://stackoverflow.com/a/68586065/570918).
If you want to do it automatically it seems that if you do: ``` conda env export > environment.yml` ``` already has the pip things you need. No need to run `pip freeze > requirements4pip.txt` separately for me or include it as an ``` - pip: - -r file:requirements.txt ``` as another answer mentioned. See my yml file: ``` $ cat environment.yml name: myenv channels: - pytorch - dglteam - defaults - conda-forge dependencies: - _libgcc_mutex=0.1=main - absl-py=0.12.0=py38h06a4308_0 - aiohttp=3.7.4=py38h27cfd23_1 - async-timeout=3.0.1=py38h06a4308_0 - attrs=20.3.0=pyhd3eb1b0_0 - beautifulsoup4=4.9.3=pyha847dfd_0 - blas=1.0=mkl - blinker=1.4=py38h06a4308_0 - brotlipy=0.7.0=py38h27cfd23_1003 - bzip2=1.0.8=h7b6447c_0 - c-ares=1.17.1=h27cfd23_0 - ca-certificates=2021.4.13=h06a4308_1 - cachetools=4.2.1=pyhd3eb1b0_0 - cairo=1.14.12=h8948797_3 - certifi=2020.12.5=py38h06a4308_0 - cffi=1.14.0=py38h2e261b9_0 - chardet=3.0.4=py38h06a4308_1003 - click=7.1.2=pyhd3eb1b0_0 - conda=4.10.1=py38h06a4308_1 - conda-build=3.21.4=py38h06a4308_0 - conda-package-handling=1.7.3=py38h27cfd23_1 - coverage=5.5=py38h27cfd23_2 - cryptography=3.4.7=py38hd23ed53_0 - cudatoolkit=11.0.221=h6bb024c_0 - cycler=0.10.0=py38_0 - cython=0.29.23=py38h2531618_0 - dbus=1.13.18=hb2f20db_0 - decorator=4.4.2=pyhd3eb1b0_0 - dgl-cuda11.0=0.6.1=py38_0 - dill=0.3.3=pyhd3eb1b0_0 - expat=2.3.0=h2531618_2 - filelock=3.0.12=pyhd3eb1b0_1 - fontconfig=2.13.1=h6c09931_0 - freetype=2.10.4=h7ca028e_0 - fribidi=1.0.10=h7b6447c_0 - gettext=0.21.0=hf68c758_0 - glib=2.66.3=h58526e2_0 - glob2=0.7=pyhd3eb1b0_0 - google-auth=1.29.0=pyhd3eb1b0_0 - google-auth-oauthlib=0.4.4=pyhd3eb1b0_0 - graphite2=1.3.14=h23475e2_0 - graphviz=2.40.1=h21bd128_2 - grpcio=1.36.1=py38h2157cd5_1 - gst-plugins-base=1.14.0=h8213a91_2 - gstreamer=1.14.0=h28cd5cc_2 - harfbuzz=1.8.8=hffaf4a1_0 - icu=58.2=he6710b0_3 - idna=2.10=pyhd3eb1b0_0 - importlib-metadata=3.10.0=py38h06a4308_0 - intel-openmp=2021.2.0=h06a4308_610 - jinja2=2.11.3=pyhd3eb1b0_0 - joblib=1.0.1=pyhd3eb1b0_0 - jpeg=9b=h024ee3a_2 - kiwisolver=1.3.1=py38h2531618_0 - lcms2=2.12=h3be6417_0 - ld_impl_linux-64=2.33.1=h53a641e_7 - libarchive=3.4.2=h62408e4_0 - libffi=3.2.1=hf484d3e_1007 - libgcc-ng=9.1.0=hdf63c60_0 - libgfortran-ng=7.3.0=hdf63c60_0 - libglib=2.66.3=hbe7bbb4_0 - libiconv=1.16=h516909a_0 - liblief=0.10.1=he6710b0_0 - libpng=1.6.37=h21135ba_2 - libprotobuf=3.14.0=h8c45485_0 - libstdcxx-ng=9.1.0=hdf63c60_0 - libtiff=4.1.0=h2733197_1 - libuuid=1.0.3=h1bed415_2 - libuv=1.40.0=h7b6447c_0 - libxcb=1.14=h7b6447c_0 - libxml2=2.9.10=hb55368b_3 - lz4-c=1.9.2=he1b5a44_3 - markdown=3.3.4=py38h06a4308_0 - markupsafe=1.1.1=py38h7b6447c_0 - matplotlib=3.3.4=py38h06a4308_0 - matplotlib-base=3.3.4=py38h62a2d02_0 - mkl=2020.2=256 - mkl-service=2.3.0=py38h1e0a361_2 - mkl_fft=1.3.0=py38h54f3939_0 - mkl_random=1.2.0=py38hc5bc63f_1 - multidict=5.1.0=py38h27cfd23_2 - ncurses=6.2=he6710b0_1 - networkx=2.5.1=pyhd3eb1b0_0 - ninja=1.10.2=hff7bd54_1 - numpy=1.19.2=py38h54aff64_0 - numpy-base=1.19.2=py38hfa32c7d_0 - oauthlib=3.1.0=py_0 - olefile=0.46=pyh9f0ad1d_1 - openssl=1.1.1k=h27cfd23_0 - pandas=1.2.4=py38h2531618_0 - pango=1.42.4=h049681c_0 - patchelf=0.12=h2531618_1 - pcre=8.44=he6710b0_0 - pillow=8.2.0=py38he98fc37_0 - pip=21.0.1=py38h06a4308_0 - pixman=0.40.0=h7b6447c_0 - pkginfo=1.7.0=py38h06a4308_0 - protobuf=3.14.0=py38h2531618_1 - psutil=5.8.0=py38h27cfd23_1 - py-lief=0.10.1=py38h403a769_0 - pyasn1=0.4.8=py_0 - pyasn1-modules=0.2.8=py_0 - pycosat=0.6.3=py38h7b6447c_1 - pycparser=2.20=py_2 - pyjwt=2.0.1=pyhd8ed1ab_1 - 
pyopenssl=20.0.1=pyhd3eb1b0_1 - pyparsing=2.4.7=pyhd3eb1b0_0 - pyqt=5.9.2=py38h05f1152_4 - pysocks=1.7.1=py38h06a4308_0 - python=3.8.2=hcf32534_0 - python-dateutil=2.8.1=pyhd3eb1b0_0 - python-libarchive-c=2.9=pyhd3eb1b0_1 - python_abi=3.8=1_cp38 - pytorch=1.7.1=py3.8_cuda11.0.221_cudnn8.0.5_0 - pytz=2021.1=pyhd3eb1b0_0 - pyyaml=5.4.1=py38h27cfd23_1 - qt=5.9.7=h5867ecd_1 - readline=8.1=h27cfd23_0 - requests=2.25.1=pyhd3eb1b0_0 - requests-oauthlib=1.3.0=py_0 - ripgrep=12.1.1=0 - rsa=4.7.2=pyhd3eb1b0_1 - ruamel_yaml=0.15.100=py38h27cfd23_0 - scikit-learn=0.24.1=py38ha9443f7_0 - scipy=1.6.2=py38h91f5cce_0 - setuptools=52.0.0=py38h06a4308_0 - sip=4.19.13=py38he6710b0_0 - six=1.15.0=pyh9f0ad1d_0 - soupsieve=2.2.1=pyhd3eb1b0_0 - sqlite=3.35.4=hdfb4753_0 - tensorboard=2.4.0=pyhc547734_0 - tensorboard-plugin-wit=1.6.0=py_0 - threadpoolctl=2.1.0=pyh5ca1d4c_0 - tk=8.6.10=hbc83047_0 - torchaudio=0.7.2=py38 - torchtext=0.8.1=py38 - torchvision=0.8.2=py38_cu110 - tornado=6.1=py38h27cfd23_0 - typing-extensions=3.7.4.3=0 - typing_extensions=3.7.4.3=py_0 - urllib3=1.26.4=pyhd3eb1b0_0 - werkzeug=1.0.1=pyhd3eb1b0_0 - wheel=0.36.2=pyhd3eb1b0_0 - xz=5.2.5=h7b6447c_0 - yaml=0.2.5=h7b6447c_0 - yarl=1.6.3=py38h27cfd23_0 - zipp=3.4.1=pyhd3eb1b0_0 - zlib=1.2.11=h7b6447c_3 - zstd=1.4.5=h9ceee32_0 - pip: - aioconsole==0.3.1 - lark-parser==0.6.5 - lmdb==0.94 - pexpect==4.6.0 - progressbar2==3.39.3 - ptyprocess==0.7.0 - pycapnp==1.0.0 - python-utils==2.5.6 - sexpdata==0.0.3 - tqdm==4.56.0 prefix: /home/miranda9/miniconda3/envs/myenv ``` note that at the time of this writing doing `conda env create --file environment.yml` to create the yml env results in an error: ``` $ conda env create --file environment.yml CondaValueError: prefix already exists: /home/miranda9/miniconda3/envs/myenv ```
55,746,170
I am trying to implement a neural network for an NLP task with a convolutional layer followed by an LSTM layer. I am currently experimenting with the new Tensorflow 2.0 to do this. However, when building the model, I've encountered an error that I could not understand.

```
# Input shape of training and validation set
(1000, 1, 512), (500, 1, 512)
```

**The model**

```
model = keras.Sequential()
model.add(keras.layers.InputLayer(input_shape=(None, 512)))
model.add(keras.layers.Conv1D(128, 1, activation="relu"))
model.add(keras.layers.MaxPooling1D((2)))
model.add(keras.layers.LSTM(64, activation="tanh"))
model.add(keras.layers.Dense(6))
model.add(keras.layers.Activation("softmax"))
```

**The error**

```
InvalidArgumentError: Tried to stack elements of an empty list with non-fully-defined element_shape: [?,64]
     [[{{node unified_lstm_16/TensorArrayV2Stack/TensorListStack}}]] [Op:__inference_keras_scratch_graph_26641]
```

At first, I tried to check if there are any issues regarding implementing a `Conv1D` layer with an `LSTM` layer. I found [this post](https://github.com/keras-team/keras/issues/129), which suggested that I reshape the output between the convolutional layer and the lstm layer. But that still did not work and I got a different error instead. [This post](https://stackoverflow.com/questions/55431081/how-to-connect-convlolutional-layer-with-lstm-in-tensorflow-keras) seems similar, but it does not use Tensorflow 2.0 and has no answer so far. I also found a post with the same intention of stacking convolutional and lstm layers, but it uses `Conv2D` instead of `Conv1D`. [This post](https://stackoverflow.com/questions/35254138/python-keras-how-to-change-the-size-of-input-after-convolution-layer-into-lstm-l) also suggests reshaping the output of the convolutional layer with a built-in layer called `Reshape`. Yet, I still got the same error. I also tried to specify the `input_shape` in the LSTM layer:

```
model = keras.Sequential()
model.add(keras.layers.InputLayer(input_shape=(None, 512)))
model.add(keras.layers.Conv1D(128, 1, activation="relu"))
model.add(keras.layers.MaxPooling1D((2)))
model.add(keras.layers.LSTM(64, activation="tanh", input_shape=(None, 64)))
model.add(keras.layers.Dense(6))
model.add(keras.layers.Activation("softmax"))
```

And I still got the same error in the end. I am not sure if I understand how to stack a 1-dimensional convolutional layer and an lstm layer. I know that TF2.0 is still an Alpha, but can someone point out what I was missing? Thanks in advance.
2019/04/18
[ "https://Stackoverflow.com/questions/55746170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7274157/" ]
The issue is a dimensionality one. Your feature is of shape `[..., 1, 512]`; therefore, the `MaxPooling1D` `pool_size` of 2 is bigger than the sequence length of 1, which causes the error. Adding `padding="same"` will solve it:

```
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(None, 512)))
model.add(tf.keras.layers.Conv1D(128, 1, activation="relu"))
model.add(tf.keras.layers.MaxPooling1D(2, padding="same"))
model.add(tf.keras.layers.LSTM(64, activation="tanh"))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(6))
model.add(tf.keras.layers.Activation("softmax"))
```
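A quick way to sanity-check the fix, if useful — feed a dummy batch shaped like the training data and confirm the output shape (a sketch; assumes TF 2.x eager mode):

```
import numpy as np
import tensorflow as tf

# dummy batch shaped like the (1000, 1, 512) training data
x = np.random.rand(8, 1, 512).astype("float32")

print(model(x).shape)  # expect (8, 6): one softmax over 6 classes per sample
model.summary()        # also shows per-layer output shapes
```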
**padding="same"** should solve your issue. Change below line: `model.add(tf.keras.layers.MaxPooling1D(2, padding="same"))`
23,793,774
```
omnia@ubuntu:~$ psql --version
psql (PostgreSQL) 9.3.4
omnia@ubuntu:~$ pg_dump --version
pg_dump (PostgreSQL) 9.2.8
omnia@ubuntu:~$ dpkg -l | grep pg
ii  gnupg                      1.4.11-3ubuntu2.5  GNU privacy guard - a free PGP replacement
ii  gpgv                       1.4.11-3ubuntu2.5  GNU privacy guard - signature verification tool
ii  libgpg-error0              1.10-2ubuntu1      library for common error values and messages in GnuPG components
ii  libpq5                     9.3.4-1.pgdg60+1   PostgreSQL C client library
ii  pgdg-keyring               2013.2             keyring for apt.postgresql.org
ii  postgresql-9.2             9.2.8-1.pgdg60+1   object-relational SQL database, version 9.2 server
ii  postgresql-9.3             9.3.4-1.pgdg60+1   object-relational SQL database, version 9.3 server
ii  postgresql-client-9.2      9.2.8-1.pgdg60+1   front-end programs for PostgreSQL 9.2
ii  postgresql-client-9.3      9.3.4-1.pgdg60+1   front-end programs for PostgreSQL 9.3
ii  postgresql-client-common   154.pgdg60+1       manager for multiple PostgreSQL client versions
ii  postgresql-common          154.pgdg60+1       PostgreSQL database-cluster manager
ii  python-gnupginterface      0.3.2-9.1ubuntu3   Python interface to GnuPG (GPG)
ii  unattended-upgrades        0.76ubuntu1        automatic installation of security upgrades
ii  update-manager-core        1:0.156.14.13      manage release upgrades
omnia@ubuntu:~$
```

It seems I have both installed, but pg_dump is stuck on an older version? Weird, since both are linked to the same "wrapper":

```
omnia@ubuntu:~$ readlink /usr/bin/psql
../share/postgresql-common/pg_wrapper
omnia@ubuntu:~$ readlink /usr/bin/pg_dump
../share/postgresql-common/pg_wrapper
```

What am I doing wrong?
2014/05/21
[ "https://Stackoverflow.com/questions/23793774", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7595/" ]
```
sudo rm /usr/bin/pg_dump
sudo ln -s /usr/lib/postgresql/9.3/bin/pg_dump /usr/bin/pg_dump
```
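A quick check afterwards, for what it's worth — the path above assumes the stock Debian/Ubuntu layout under `/usr/lib/postgresql/<version>/bin`:

```
# confirm the binary exists before linking, then verify the result
ls /usr/lib/postgresql/9.3/bin/pg_dump
pg_dump --version   # should now report 9.3.x
```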
The `pgdg60` package suffix leads me to believe these packages are not from the official Ubuntu repository. Try looking into `/etc/apt/sources.list` or `/etc/apt/sources.list.d` and see if you have any third party PPAs or repositories specified.

Try getting the Postgresql packages either from your Ubuntu repo (although these may be a bit out-of-date depending on your Ubuntu version), or from the official postgres repo (they provide an apt server for Ubuntu/Debian): <https://wiki.postgresql.org/wiki/Apt>
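To save some digging, a one-liner sketch for listing every configured source (standard apt file locations assumed):

```
# show all active 'deb' lines across the main list and any PPA drop-ins
grep -rE '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/
```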
23,793,774
```
omnia@ubuntu:~$ psql --version
psql (PostgreSQL) 9.3.4
omnia@ubuntu:~$ pg_dump --version
pg_dump (PostgreSQL) 9.2.8
omnia@ubuntu:~$ dpkg -l | grep pg
ii  gnupg                      1.4.11-3ubuntu2.5  GNU privacy guard - a free PGP replacement
ii  gpgv                       1.4.11-3ubuntu2.5  GNU privacy guard - signature verification tool
ii  libgpg-error0              1.10-2ubuntu1      library for common error values and messages in GnuPG components
ii  libpq5                     9.3.4-1.pgdg60+1   PostgreSQL C client library
ii  pgdg-keyring               2013.2             keyring for apt.postgresql.org
ii  postgresql-9.2             9.2.8-1.pgdg60+1   object-relational SQL database, version 9.2 server
ii  postgresql-9.3             9.3.4-1.pgdg60+1   object-relational SQL database, version 9.3 server
ii  postgresql-client-9.2      9.2.8-1.pgdg60+1   front-end programs for PostgreSQL 9.2
ii  postgresql-client-9.3      9.3.4-1.pgdg60+1   front-end programs for PostgreSQL 9.3
ii  postgresql-client-common   154.pgdg60+1       manager for multiple PostgreSQL client versions
ii  postgresql-common          154.pgdg60+1       PostgreSQL database-cluster manager
ii  python-gnupginterface      0.3.2-9.1ubuntu3   Python interface to GnuPG (GPG)
ii  unattended-upgrades        0.76ubuntu1        automatic installation of security upgrades
ii  update-manager-core        1:0.156.14.13      manage release upgrades
omnia@ubuntu:~$
```

It seems I have both installed, but pg_dump is stuck on an older version? Weird, since both are linked to the same "wrapper":

```
omnia@ubuntu:~$ readlink /usr/bin/psql
../share/postgresql-common/pg_wrapper
omnia@ubuntu:~$ readlink /usr/bin/pg_dump
../share/postgresql-common/pg_wrapper
```

What am I doing wrong?
2014/05/21
[ "https://Stackoverflow.com/questions/23793774", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7595/" ]
If your *pg\_dump* is sym-linked to *pg\_wrapper*, then the best fix is to tell *pg\_wrapper* which version to use. Append

```
* * 9.6 localhost:5432 *
```

to `/etc/postgresql-common/user_clusters` (assuming your postmaster is listening on localhost:5432, of course). This then fixes the problem for all *pg\_* commands, doesn't involve breaking anything, and scales nicely for future versions which you may wish to install.

See `man pg_wrapper` and `man postgresqlrc` for details and other options.

NB: This answer is specifically for Debian/Ubuntu, and is most likely applicable when there are two versions of pg installed, e.g. after an upgrade.
The `pgdg60` package suffix leads me to believe these packages are not from the official Ubuntu repository. Try looking into `/etc/apt/sources.list` or `/etc/apt/sources.list.d` and see if you have any third-party PPAs or repositories specified. Try getting the PostgreSQL packages either from your Ubuntu repo (although these may be a bit out of date depending on your Ubuntu version), or from the official PostgreSQL repo (they provide an apt server for Ubuntu/Debian): <https://wiki.postgresql.org/wiki/Apt>
23,793,774
``` omnia@ubuntu:~$ psql --version psql (PostgreSQL) 9.3.4 omnia@ubuntu:~$ pg_dump --version pg_dump (PostgreSQL) 9.2.8 omnia@ubuntu:~$ dpkg -l | grep pg ii gnupg 1.4.11-3ubuntu2.5 GNU privacy guard - a free PGP replacement ii gpgv 1.4.11-3ubuntu2.5 GNU privacy guard - signature verification tool ii libgpg-error0 1.10-2ubuntu1 library for common error values and messages in GnuPG components ii libpq5 9.3.4-1.pgdg60+1 PostgreSQL C client library ii pgdg-keyring 2013.2 keyring for apt.postgresql.org ii postgresql-9.2 9.2.8-1.pgdg60+1 object-relational SQL database, version 9.2 server ii postgresql-9.3 9.3.4-1.pgdg60+1 object-relational SQL database, version 9.3 server ii postgresql-client-9.2 9.2.8-1.pgdg60+1 front-end programs for PostgreSQL 9.2 ii postgresql-client-9.3 9.3.4-1.pgdg60+1 front-end programs for PostgreSQL 9.3 ii postgresql-client-common 154.pgdg60+1 manager for multiple PostgreSQL client versions ii postgresql-common 154.pgdg60+1 PostgreSQL database-cluster manager ii python-gnupginterface 0.3.2-9.1ubuntu3 Python interface to GnuPG (GPG) ii unattended-upgrades 0.76ubuntu1 automatic installation of security upgrades ii update-manager-core 1:0.156.14.13 manage release upgrades omnia@ubuntu:~$ ``` Seems I have both installed but pg\_dump is stuck in an older version? Weird since both are linked to the same "wrapper": ``` omnia@ubuntu:~$ readlink /usr/bin/psql ../share/postgresql-common/pg_wrapper omnia@ubuntu:~$ readlink /usr/bin/pg_dump ../share/postgresql-common/pg_wrapper ``` What am I doing wrong?
2014/05/21
[ "https://Stackoverflow.com/questions/23793774", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7595/" ]
Remove the wrapper symlink and point `pg_dump` directly at the 9.3 binary:

```
sudo rm /usr/bin/pg_dump
sudo ln -s /usr/lib/postgresql/9.3/bin/pg_dump /usr/bin/pg_dump
```
If your *pg\_dump* is sym-linked to *pg\_wrapper*, then the best fix is to tell *pg\_wrapper* which version to use. Append

```
* * 9.6 localhost:5432 *
```

to `/etc/postgresql-common/user_clusters` (assuming your postmaster is listening on localhost:5432, of course). This then fixes the problem for all *pg\_* commands, doesn't involve breaking anything, and scales nicely for future versions which you may wish to install.

See `man pg_wrapper` and `man postgresqlrc` for details and other options.

NB This answer is specifically for Debian/Ubuntu, and is most likely applicable when there are two versions of pg installed, e.g. after an upgrade.
64,808,992
I am stuck with the code below. Either I cannot find a simple answer to my problem because my searches are not narrow enough, or I am just too blind to see it. Anyway, I am looking to put the "+" and "-" buttons to use. They are supposed to literally do what their assigned symbols suggest. With my level of Python knowledge I can only achieve that by creating a single function for each button, which is a lot of code. I wonder if it is possible to create a loop which could save tons of code and still be able to update the label called "stock" in the same row as the pressed button. At the moment I have assigned random numbers to that label, but in a bigger scope that label will be populated by integers taken from a db. I will be very grateful if anyone can point me in the right direction.

```
import tkinter as tk
from tkinter import Tk
import random

root = tk.Tk()

my_list = dict(AAA=["aa1", "aa2", "aa3"],
               BBB=["ab1", "ab2", "ab3", "ab4", "ab5"],
               CCC=["ac1", "ac2", "ac3", "ac4", "ac5", "ac6"],
               DDD=["ad1", "ad2", "ad3", "ad4", "ad5", "ad6"],
               EEE=["ae1", "ae2", "ae3", "ae4", "ae5", "ae6"],
               FFF=["af1", "af2", "af3", "af4", "af5", "af6"],
               GGG=["ag1", "ag2", "ag3", "ag4", "ag5", "ag6"],
               HHH=["ah1", "ah2", "ah3", "ah4", "ah5", "ah6"])

for x, y in enumerate(my_list):
    xyz = x * 4
    tk.Label(root, text=y, width=25, bd=3, relief=tk.GROOVE).grid(row=0, column=xyz, columnspan=4, padx=(0, 10))
    for xing, ying in enumerate(my_list[y]):
        tk.Label(root, text=ying, width=10, relief=tk.SUNKEN).grid(row=xing+1, column=xyz)
        stock = tk.Label(root, text=random.randint(0, 9), width=5, relief=tk.SUNKEN)
        stock.grid(row=xing+1, column=xyz+1)
        tk.Button(root, text="+", width=3).grid(row=xing+1, column=xyz+2)
        tk.Button(root, text="-", width=3).grid(row=xing+1, column=xyz+3, padx=(0, 10))

root.mainloop()
```
2020/11/12
[ "https://Stackoverflow.com/questions/64808992", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11618118/" ]
Currently, there is one promising solution: **Real-World Super-Resolution via Kernel Estimation and Noise Injection**. The authors propose a degradation framework, RealSR, which provides realistic images for super-resolution learning. It is a promising method for super-resolving images affected by shakiness or motion blur.

The method is divided into two stages. The first stage, *Realistic Degradation for Super-Resolution*,

> is to estimate the degradation from real data and generate realistic LR images.

The second stage, *Super-Resolution Model*,

> is to train the SR model based on the constructed data.

You can look at the GitHub repository: <https://github.com/jixiaozhong/RealSR>
I've also been working in this super-resolution field and have found some promising results, though I haven't tried them yet. The [first paper](https://doi.org/10.1016/j.heliyon.2021.e08341) (license-plate-based text) implements image enhancement first and then does the super-resolution in a later stage. The [second paper](https://arxiv.org/abs/2106.15368) and its [github](https://github.com/mjq11302010044/TPGSR) use a text prior to guide the super-resolution network.
69,280,273
So, I have a list of dicts in python that looks like this: ``` lis = [ {'action': 'Notify', 'type': 'Something', 'Genre': 10, 'date': '2021-05-07 01:59:37'}, {'action': 'Notify', 'type': 'Something Else', 'Genre': 20, 'date': '2021-05-07 01:59:37'} ... ] ``` Now I want `lis` to be in a way, such that **each individual dict** is ordered using the `mapping` for the keys that I will provide. For example, if ``` mapping = {1:'date', 2:'Genre', 3:'action', 4:'type'} ``` Then, I want to make my original list of dicts look like this: ``` lis = [ {'date': '2021-05-07 01:59:37', 'Genre': 10, 'action': 'Notify', 'type': 'Something'}, {'date': '2021-05-07 01:59:37', 'Genre': 20, 'action': 'Notify', 'type': 'Something Else'} ... ] ``` How do I implement this?
2021/09/22
[ "https://Stackoverflow.com/questions/69280273", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11903403/" ]
You might harness [`collections.OrderedDict`](https://docs.python.org/3/library/collections.html#collections.OrderedDict) for this task as follows:

```
import collections

order = ['date', 'Genre', 'action', 'type']
dct1 = {'action': 'Notify', 'type': 'Something', 'Genre': 10, 'date': '2021-05-07 01:59:37'}
dct2 = {'action': 'Notify', 'type': 'Something Else', 'Genre': 20, 'date': '2021-05-07 01:59:37'}

odct1 = collections.OrderedDict.fromkeys(order)
odct1.update(dct1)
odct2 = collections.OrderedDict.fromkeys(order)
odct2.update(dct2)

print(odct1)
print(odct2)
```

output:

```
OrderedDict([('date', '2021-05-07 01:59:37'), ('Genre', 10), ('action', 'Notify'), ('type', 'Something')])
OrderedDict([('date', '2021-05-07 01:59:37'), ('Genre', 20), ('action', 'Notify'), ('type', 'Something Else')])
```

Disclaimer: this assumes every dict you want to process has exactly the keys from `order`. This solution works with any Python version which has `collections.OrderedDict`; if you will be using Python 3.7 or newer exclusively, you might use a plain `dict` as follows:

```
order = ['date', 'Genre', 'action', 'type']
dct1 = dict.fromkeys(order)
dct1.update({'action': 'Notify', 'type': 'Something', 'Genre': 10, 'date': '2021-05-07 01:59:37'})
print(dct1)
```

output:

```
{'date': '2021-05-07 01:59:37', 'Genre': 10, 'action': 'Notify', 'type': 'Something'}
```

The same disclaimer still holds.
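A minimal sketch applying the same idea to a whole list (same disclaimer about the keys; `dct1` and `dct2` are the dicts defined above):

```
import collections

order = ['date', 'Genre', 'action', 'type']

def reorder(d):
    # start from the desired key order, then fill in the actual values
    od = collections.OrderedDict.fromkeys(order)
    od.update(d)
    return od

ordered_lis = [reorder(d) for d in [dct1, dct2]]
```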
Try this:

```
def sort_dct(li, mapping):
    # iterate the mapping's values in their defined order
    return {v: li[v] for v in mapping.values()}

out = []
mapping = {1:'date', 2:'Genre', 3:'action', 4:'type'}
for li in lis:
    out.append(sort_dct(li, mapping))
print(out)
```

Output:

```
[{'date': '2021-05-07 01:59:37', 'Genre': 10, 'action': 'Notify', 'type': 'Something'}, {'date': '2021-05-07 01:59:37', 'Genre': 20, 'action': 'Notify', 'type': 'Something Else'}]
```
69,280,273
So, I have a list of dicts in python that looks like this: ``` lis = [ {'action': 'Notify', 'type': 'Something', 'Genre': 10, 'date': '2021-05-07 01:59:37'}, {'action': 'Notify', 'type': 'Something Else', 'Genre': 20, 'date': '2021-05-07 01:59:37'} ... ] ``` Now I want `lis` to be in a way, such that **each individual dict** is ordered using the `mapping` for the keys that I will provide. For example, if ``` mapping = {1:'date', 2:'Genre', 3:'action', 4:'type'} ``` Then, I want to make my original list of dicts look like this: ``` lis = [ {'date': '2021-05-07 01:59:37', 'Genre': 10, 'action': 'Notify', 'type': 'Something'}, {'date': '2021-05-07 01:59:37', 'Genre': 20, 'action': 'Notify', 'type': 'Something Else'} ... ] ``` How do I implement this?
2021/09/22
[ "https://Stackoverflow.com/questions/69280273", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11903403/" ]
With a list comprehension: ```py lis = [ {'action': 'Notify', 'type': 'Something', 'Genre': 10, 'date': '2021-05-07 01:59:37'}, {'action': 'Notify', 'type': 'Something Else', 'Genre': 20, 'date': '2021-05-07 01:59:37'} ] mapping = {1:'date', 2:'Genre', 3:'action', 4:'type'} sorted_lis = [ {field: record[field] for field in mapping.values()} for record in lis ] print(sorted_lis) ```
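If the mapping's integer keys should dictate the order even when the mapping's own insertion order differs, a small variant sorts the keys first (a sketch reusing the names above):

```
# derive the field order from the sorted integer keys of the mapping
ordered_fields = [mapping[k] for k in sorted(mapping)]
sorted_lis = [
    {field: record[field] for field in ordered_fields}
    for record in lis
]
```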
Try this:

```
def sort_dct(li, mapping):
    # iterate the mapping's values in their defined order
    return {v: li[v] for v in mapping.values()}

out = []
mapping = {1:'date', 2:'Genre', 3:'action', 4:'type'}
for li in lis:
    out.append(sort_dct(li, mapping))
print(out)
```

Output:

```
[{'date': '2021-05-07 01:59:37', 'Genre': 10, 'action': 'Notify', 'type': 'Something'}, {'date': '2021-05-07 01:59:37', 'Genre': 20, 'action': 'Notify', 'type': 'Something Else'}]
```
36,534,186
I'm having a problem with a Python script I'm writing that calls an exe file (`subprocess.Popen`). I'm redirecting stdout and stderr to PIPE, but I can't read (`subprocess.Popen.stdout.readline()`) any output. I did try to run the exe file in the Windows CLI and redirect both stdout and stderr... and nothing happens. So I reckon there is no stdout and stderr in this Qt app. Is there any way I can get to the data that this exe prints on screen (by the way, the application is photivo.exe)?
2016/04/10
[ "https://Stackoverflow.com/questions/36534186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6185152/" ]
Does this work? Set the alpha of the extra lines to 0 (so they become transparent), using `geom_line`, since `geom_density` uses alpha for fill only. (System problems prevent testing.)

```
ggplotly(
  ggplot(diamonds, aes(depth, colour = cut)) + 
    geom_density() + 
    geom_line(aes(text = paste("Clarity: ", clarity)), stat="density", alpha=0) + 
    xlim(55, 70)
)
```
I realize that this is an old question, but the main problem here is that you're trying to do something that's logically impossible. `clarity` and `cut` are two separate dimensions, so you can't simply put the `clarity` in a tooltip on the line that's grouped by `cut`, because that line represents diamonds of all different clarities grouped together.

Once you add `clarity` into the mix (via the `text` aesthetic), ggplot rightly separates the various clarities out, so that it has a `clarity` to refer to. You could force it back to grouping just by `cut` by adding `group=cut` to the `aes`, but you'll lose the `clarity` tooltip, because there's no meaningful value of `clarity` when you're grouping just by `cut` - again, each point is all clarities at once.

Richard's solution simply displays both graphs at once, but makes the `clarity`-grouped ones invisible. I'm not sure what the original goal was here, but that doesn't accomplish anything useful, because it just lets you mouse over invisible peaks in addition to the properly grouped `cut` bands.

I'm not sure what your original data was, but you simply can't display two dimensions and group by only one of them. You'd either have to use the multiple curves, which accurately represent the second dimension, or flatten the second dimension by doing some sort of summarization - in the case of `clarity`, there's not really any sensible summarization you can do, but if it were, say, price, you could display an average.
54,900,964
Hi I have a question with regards to python programming for my assignment The task is to replace the occurrence of a number in a given value in a recursive manner, and the final output must be in integer i.e. digit\_swap(521, 1, 3) --> 523 where 1 is swapped out for 3 Below is my code and it works well for s = 0 - 9 if the final answer is outputted as string ``` def digit_swap(n, d, s): result = "" if len(str(n)) == 1: if str(n) == str(d): return str(s) else: return str(n) elif str(n)[0] == str(d): result = result + str(s) + str(digit_swap(str(n)[1:], d, s)) return result else: result = result + str(n)[0] + str(digit_swap(str(n)[1:], d, s)) return result ``` However, I have trouble making the final output as Integer The code breaks down when s = 0 i.e. digit\_swap(65132, 1, 0) --> 6532 instead of 65032 Is there any fix to my code? ``` def digit_swap(n, d, s): result = "" if len(str(n)) == 1: if str(n) == str(d): return str(s) else: return str(n) elif str(n)[0] == str(d): result = result + str(s) + str(digit_swap(str(n)[1:], d, s)) return int(result) # Changes else: result = result + str(n)[0] + str(digit_swap(str(n)[1:], d, s)) return int(result) # Changes ```
2019/02/27
[ "https://Stackoverflow.com/questions/54900964", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9369481/" ]
Conversion to string is unnecessary; this can be implemented much more simply with arithmetic:

```
def digit_swap(n, d, s):
    if n == 0:
        return 0
    # swap the last digit if it equals d, otherwise keep it
    lower_n = (s if (n % 10) == d else (n % 10))
    # recurse on the remaining digits and shift them back into place
    higher_n = digit_swap(n // 10, d, s) * 10
    return higher_n + lower_n


assert digit_swap(521, 1, 3) == 523
assert digit_swap(65132, 1, 0) == 65032
```
For example, `int('00')` is cast to `0`, so a zero is discarded. I suggest not casting; instead, leave the value as a string. If you have to give back an `int`, you should not cast until you return the final number - however, you would still discard 0s at the beginning. So all in all, I would suggest just returning strings instead of ints:

```
return str(result) # instead of return int(result)
```

And call it:

```
int(digit_swap(n,d,s))
```
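A quick usage sketch of that suggestion, with the example from the question:

```
result = digit_swap(65132, 1, 0)  # the string-returning version gives '65032'
print(int(result))                # 65032 - the inner zero survives the single final cast
```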
54,900,964
Hi I have a question with regards to python programming for my assignment The task is to replace the occurrence of a number in a given value in a recursive manner, and the final output must be in integer i.e. digit\_swap(521, 1, 3) --> 523 where 1 is swapped out for 3 Below is my code and it works well for s = 0 - 9 if the final answer is outputted as string ``` def digit_swap(n, d, s): result = "" if len(str(n)) == 1: if str(n) == str(d): return str(s) else: return str(n) elif str(n)[0] == str(d): result = result + str(s) + str(digit_swap(str(n)[1:], d, s)) return result else: result = result + str(n)[0] + str(digit_swap(str(n)[1:], d, s)) return result ``` However, I have trouble making the final output as Integer The code breaks down when s = 0 i.e. digit\_swap(65132, 1, 0) --> 6532 instead of 65032 Is there any fix to my code? ``` def digit_swap(n, d, s): result = "" if len(str(n)) == 1: if str(n) == str(d): return str(s) else: return str(n) elif str(n)[0] == str(d): result = result + str(s) + str(digit_swap(str(n)[1:], d, s)) return int(result) # Changes else: result = result + str(n)[0] + str(digit_swap(str(n)[1:], d, s)) return int(result) # Changes ```
2019/02/27
[ "https://Stackoverflow.com/questions/54900964", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9369481/" ]
Conversion to string is unnecessary; this can be implemented much more simply with arithmetic:

```
def digit_swap(n, d, s):
    if n == 0:
        return 0
    # swap the last digit if it equals d, otherwise keep it
    lower_n = (s if (n % 10) == d else (n % 10))
    # recurse on the remaining digits and shift them back into place
    higher_n = digit_swap(n // 10, d, s) * 10
    return higher_n + lower_n


assert digit_swap(521, 1, 3) == 523
assert digit_swap(65132, 1, 0) == 65032
```
Do not return an `int` from the method; instead, convert to `int` where you call the method. The problem lies where your code tries to convert the string to an int: `return int(result)`. So if `result` is `'03'`, the function will return `int('03')`, i.e. `3`.

**Call your method like this: `print int(digit_swap(65132, 1, 0))`, so you will get an integer at the end.**
13,083,026
Imagine I have a script, let's say `my_tools.py` that I import as a module. But `my_tools.py` is saved twice: at `C:\Python27\Lib` and at the same directory from where the script is run that does the import. Can I change the order where python looks for `my_tools.py` first? That is, to check first if it exists at `C:\Python27\Lib` and if so, do the import?
2012/10/26
[ "https://Stackoverflow.com/questions/13083026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1105929/" ]
You can manipulate `sys.path` as much as you want... If you wanted the current directory to be scanned last, then just do `sys.path = sys.path[1:] + sys.path[:1]`.

Otherwise, if you want to get into the nitty gritty then the [imp module](http://docs.python.org/library/imp.html) can be used to customise to your heart's content - there's an example on that page, and one at <http://blog.dowski.com/2008/07/31/customizing-the-python-import-system/>
You can modify [`sys.path`](http://docs.python.org/library/sys.html#sys.path), which will determine the order and locations that Python searches for imports. (Note that you must do this *before* the import statement.)
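A minimal sketch of what that could look like for the layout in the question (the path is the one the asker mentioned):

```
import sys

# Make C:\Python27\Lib the first place Python looks,
# ahead of the script's own directory.
sys.path.insert(0, r"C:\Python27\Lib")

import my_tools  # now picked up from C:\Python27\Lib if it exists there
```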
13,083,026
Imagine I have a script, let's say `my_tools.py` that I import as a module. But `my_tools.py` is saved twice: at `C:\Python27\Lib` and at the same directory from where the script is run that does the import. Can I change the order where python looks for `my_tools.py` first? That is, to check first if it exists at `C:\Python27\Lib` and if so, do the import?
2012/10/26
[ "https://Stackoverflow.com/questions/13083026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1105929/" ]
You can manipulate `sys.path` as much as you want... If you wanted the current directory to be scanned last, then just do `sys.path = sys.path[1:] + sys.path[:1]`.

Otherwise, if you want to get into the nitty gritty then the [imp module](http://docs.python.org/library/imp.html) can be used to customise to your heart's content - there's an example on that page, and one at <http://blog.dowski.com/2008/07/31/customizing-the-python-import-system/>
If you don't want Python to search the current folder before the built-in and library locations, you can change `sys.path`. Upon program startup, the first item of this list, `path[0]`, is the directory containing the script that was used to invoke the Python interpreter; when no script is given (e.g. interactively), `sys.path[0]` is the empty string, which directs Python to search for modules in the current directory first. You can move this first entry to the end of the list, so that Python searches all other possible locations before coming to the current directory.
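A minimal sketch of that idea:

```
import sys

# Demote the first search-path entry (the script's directory, or ''
# in an interactive session) to the end of the list.
sys.path.append(sys.path.pop(0))

import my_tools  # resolved from the library locations before the local copy
```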
46,092,292
I would like to split strings like the following: ``` x <- "abc-1230-xyz-[def-ghu-jkl---]-[adsasa7asda12]-s-[klas-bst-asdas foo]" ``` by dash (`-`) on the condition that those dashes must not be contained inside a pair of `[]`. The expected result would be ``` c("abc", "1230", "xyz", "[def-ghu-jkl---]", "[adsasa7asda12]", "s", "[klas-bst-asdas foo]") ``` Notes: * There is no nesting of square brackets inside each other. * The square brackets can contain any characters / numbers / symbols except square brackets. * The other parts of the string are also variable so that we can only assume that we split by `-` whenever it's not inside `[]`. There's a similar question for python ([How to split a string by commas positioned outside of parenthesis?](https://stackoverflow.com/questions/1648537)) but I haven't yet been able to accurately adjust that to my scenario.
2017/09/07
[ "https://Stackoverflow.com/questions/46092292", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3521006/" ]
You could use look ahead to verify that there is no `]` following sooner than a `[`: [`-(?![^[]*\])`](https://regex101.com/r/x0WbVt/2) So in R: ``` strsplit(x, "-(?![^[]*\\])", perl=TRUE) ``` ### Explanation: * `-`: match the hyphen * `(?! )`: negative look ahead: if that part is found after the previously matched hyphen, it invalidates the match of the hyphen. + `[^[]`: match any character that is not a `[` + `*`: match any number of the previous + `\]`: match a literal `]`. If this matches, it means we found a `]` before finding a `[`. As all this happens in a negative look ahead, a match here means the hyphen is *not* a match. Note that a `]` is a special character in regular expressions, so it must be escaped with a backslash (although it *does* work without escape, as the engine knows there is no matching `[` preceding it -- but I prefer to be clear about it being a literal). And as backslashes have a special meaning in string literals (they also denote an escape), that backslash itself must be escaped again in this string, so it appears as `\\]`.
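Since the question links to a Python thread: the same lookahead works unchanged with Python's `re` module. A minimal sketch:

```
import re

x = "abc-1230-xyz-[def-ghu-jkl---]-[adsasa7asda12]-s-[klas-bst-asdas foo]"
# split on '-' only when no ']' follows before the next '['
parts = re.split(r"-(?![^\[]*\])", x)
# ['abc', '1230', 'xyz', '[def-ghu-jkl---]', '[adsasa7asda12]', 's', '[klas-bst-asdas foo]']
```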
I am not familiar with the R language, but I believe it can do regex-based search and replace. Instead of struggling with one single regex split function, I would go in 3 steps:

* replace `-` in all `[....]` parts by an invisible char, like `\x99`
* split by `-`
* for each element in the above split result (array/list), replace `\x99` back to `-`

For the first step, you can find the bracketed parts with `\[[^]]*\]`
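A minimal Python sketch of those three steps:

```
import re

x = "abc-1230-xyz-[def-ghu-jkl---]-[adsasa7asda12]-s-[klas-bst-asdas foo]"
HIDDEN = "\x99"

# 1. hide the dashes inside every [...] behind a placeholder char
masked = re.sub(r"\[[^]]*\]", lambda m: m.group(0).replace("-", HIDDEN), x)
# 2. split by the remaining dashes
parts = masked.split("-")
# 3. restore the hidden dashes in each element
result = [p.replace(HIDDEN, "-") for p in parts]
```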
46,092,292
I would like to split strings like the following: ``` x <- "abc-1230-xyz-[def-ghu-jkl---]-[adsasa7asda12]-s-[klas-bst-asdas foo]" ``` by dash (`-`) on the condition that those dashes must not be contained inside a pair of `[]`. The expected result would be ``` c("abc", "1230", "xyz", "[def-ghu-jkl---]", "[adsasa7asda12]", "s", "[klas-bst-asdas foo]") ``` Notes: * There is no nesting of square brackets inside each other. * The square brackets can contain any characters / numbers / symbols except square brackets. * The other parts of the string are also variable so that we can only assume that we split by `-` whenever it's not inside `[]`. There's a similar question for python ([How to split a string by commas positioned outside of parenthesis?](https://stackoverflow.com/questions/1648537)) but I haven't yet been able to accurately adjust that to my scenario.
2017/09/07
[ "https://Stackoverflow.com/questions/46092292", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3521006/" ]
Instead of splitting, extract the parts: ``` library(stringr) str_extract_all(x, "(\\[[^\\[]*\\]|[^-])+") ```
I am not familiar with the R language, but I believe it can do regex-based search and replace. Instead of struggling with one single regex split function, I would go in 3 steps:

* replace `-` in all `[....]` parts by an invisible char, like `\x99`
* split by `-`
* for each element in the above split result (array/list), replace `\x99` back to `-`

For the first step, you can find the bracketed parts with `\[[^]]*\]`
46,092,292
I would like to split strings like the following: ``` x <- "abc-1230-xyz-[def-ghu-jkl---]-[adsasa7asda12]-s-[klas-bst-asdas foo]" ``` by dash (`-`) on the condition that those dashes must not be contained inside a pair of `[]`. The expected result would be ``` c("abc", "1230", "xyz", "[def-ghu-jkl---]", "[adsasa7asda12]", "s", "[klas-bst-asdas foo]") ``` Notes: * There is no nesting of square brackets inside each other. * The square brackets can contain any characters / numbers / symbols except square brackets. * The other parts of the string are also variable so that we can only assume that we split by `-` whenever it's not inside `[]`. There's a similar question for python ([How to split a string by commas positioned outside of parenthesis?](https://stackoverflow.com/questions/1648537)) but I haven't yet been able to accurately adjust that to my scenario.
2017/09/07
[ "https://Stackoverflow.com/questions/46092292", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3521006/" ]
You could use look ahead to verify that there is no `]` following sooner than a `[`: [`-(?![^[]*\])`](https://regex101.com/r/x0WbVt/2) So in R: ``` strsplit(x, "-(?![^[]*\\])", perl=TRUE) ``` ### Explanation: * `-`: match the hyphen * `(?! )`: negative look ahead: if that part is found after the previously matched hyphen, it invalidates the match of the hyphen. + `[^[]`: match any character that is not a `[` + `*`: match any number of the previous + `\]`: match a literal `]`. If this matches, it means we found a `]` before finding a `[`. As all this happens in a negative look ahead, a match here means the hyphen is *not* a match. Note that a `]` is a special character in regular expressions, so it must be escaped with a backslash (although it *does* work without escape, as the engine knows there is no matching `[` preceding it -- but I prefer to be clear about it being a literal). And as backslashes have a special meaning in string literals (they also denote an escape), that backslash itself must be escaped again in this string, so it appears as `\\]`.
Instead of splitting, extract the parts: ``` library(stringr) str_extract_all(x, "(\\[[^\\[]*\\]|[^-])+") ```
57,398,668
I don't have a picture, but I am asking this question because I am a beginner using Python.
2019/08/07
[ "https://Stackoverflow.com/questions/57398668", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11897155/" ]
`input()` takes user input as a string. It's very safe. ``` >>> usr = input('Enter some input: ') Enter some input: hello, world >>> usr "hello, world" ``` `eval()` will execute a string as if it were python code. It's very dangerous. ``` >>>eval(input('Make it happen!')) Make it happen! print('hello') hello >>>eval(input('Make it happen!')) Make it happen! os.system('echo malicious things') ``` And now you've really messed up your computer.
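If the goal is to accept data (not code) from the user, a common safer sketch is `ast.literal_eval`, which evaluates only Python literals and raises an error for anything else:

```
import ast

text = input('Enter a Python literal: ')   # e.g. [1, 2, 3]
value = ast.literal_eval(text)             # parses literals; refuses code like os.system(...)
print(value)
```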
`eval()` is used to evaluate an expression and `input()` is used to take user input. Here are the examples: ``` #evaluates expression >> eval('5+2') >> 7 # Takes user input >> input() 10 (user enters) >> 10 #evaluates user input >> eval('input()') 15 (user enters) >> 15 ```
37,776,724
I've just completed [Tatiana Tylosky's tutorial for Python](https://www.thinkful.com/learn/intro-to-python-tutorial/#Creating-Your-Pypet) and created my own Python pypet. In her tutorial, she shows how to do a "for" loop consisting of: ``` cat = { 'name': 'Fluffy', 'hungry': True, 'weight': 9.5, 'age': 5, 'photo': '(=^o.o^=)__', } mouse = { 'name': 'Mouse', 'age': 6, 'weight': 1.5, 'hungry': False, 'photo': '<:3 )~~~~', } pets = [cat, mouse] def feed(pet): if pet['hungry'] == True: pet['hungry'] = False pet['weight'] = pet['weight'] + 1 else: print 'The Pypet is not hungry!' for pet in pets: feed(pet) print pet ``` **I'd like to know how to repeat this "for" loop so that I feed both the cat and the mouse three times.** Most of the Python guides I've read say that you have to do something like: ``` for i in range(0, 6): ``` In this case, however, the "for" loop uses the list "pets." So the above code can't be used? What should I do? I've tried some wacky-looking things like: ``` for pet in pets(1,4): feed(pet) print pet ``` Or: ``` for pet in range(1,4): feed(pet) print pet ``` Naturally it doesn't work. What should I do to get the "for" loop to repeat?
2016/06/12
[ "https://Stackoverflow.com/questions/37776724", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6456667/" ]
I would enclose your feed `for` loop in a `for` loop that iterates three times. I would use something like: ``` for _ in range(3): for pet in pets: feed(pet) print pet ``` `for _ in range(3)` iterates three times. Note that I used `_` because you are not using the iteration variable, see e.g. [What is the purpose of the single underscore "\_" variable in Python?](https://stackoverflow.com/q/5893163/3001761)
Programming languages let you embed one structure in another. Put your current loop under a for loop that runs three times, as @intboolstring's answer already showed.

Here are two more things you should do now:

1. Don't compare against `True`. `if pet['hungry'] == True:` is better written as

```
if pet['hungry']:
    ...
```

2. Switch to Python 3. Why are you learning an outdated version of the language?
37,776,724
I've just completed [Tatiana Tylosky's tutorial for Python](https://www.thinkful.com/learn/intro-to-python-tutorial/#Creating-Your-Pypet) and created my own Python pypet. In her tutorial, she shows how to do a "for" loop consisting of: ``` cat = { 'name': 'Fluffy', 'hungry': True, 'weight': 9.5, 'age': 5, 'photo': '(=^o.o^=)__', } mouse = { 'name': 'Mouse', 'age': 6, 'weight': 1.5, 'hungry': False, 'photo': '<:3 )~~~~', } pets = [cat, mouse] def feed(pet): if pet['hungry'] == True: pet['hungry'] = False pet['weight'] = pet['weight'] + 1 else: print 'The Pypet is not hungry!' for pet in pets: feed(pet) print pet ``` **I'd like to know how to repeat this "for" loop so that I feed both the cat and the mouse three times.** Most of the Python guides I've read say that you have to do something like: ``` for i in range(0, 6): ``` In this case, however, the "for" loop uses the list "pets." So the above code can't be used? What should I do? I've tried some wacky-looking things like: ``` for pet in pets(1,4): feed(pet) print pet ``` Or: ``` for pet in range(1,4): feed(pet) print pet ``` Naturally it doesn't work. What should I do to get the "for" loop to repeat?
2016/06/12
[ "https://Stackoverflow.com/questions/37776724", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6456667/" ]
I would enclose your feed `for` loop in a `for` loop that iterates three times. I would use something like: ``` for _ in range(3): for pet in pets: feed(pet) print pet ``` `for _ in range(3)` iterates three times. Note that I used `_` because you are not using the iteration variable, see e.g. [What is the purpose of the single underscore "\_" variable in Python?](https://stackoverflow.com/q/5893163/3001761)
If you don't want to use nested for loops, you could also extend the pet list temporarily like this:

```
for pet in pets * 3:
    feed(pet)
```

That works because `pets * 3` creates the following list: `[cat, mouse, cat, mouse, cat, mouse]`

If you need more control over the feeding order (e.g. first feed all cats, then all mice), the nested for loop approach might be better.
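If you want the repetition to be explicit without nesting, `itertools` can build the same sequence lazily (a quick sketch, equivalent in order to `pets * 3`):

```
import itertools

# yields the pets list three times, flattened into one stream
for pet in itertools.chain.from_iterable(itertools.repeat(pets, 3)):
    feed(pet)
    print(pet)
```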
59,391,988
I am trying to set up dockerized production environment for Flask application with gunicorn. I follow this [Digital Ocean's](https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04) instructions together with [testdriven's one](https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/) for dockerizing. Project structure is following: ``` tree -L 2 . ├── Docker │   ├── Dockerfile │   ├── Dockerfile-nginx │   └── nginx.conf ├── dev-requirements.txt ├── docker-compose.prod.yml ├── docker-compose.yml ├── gunicorn_conf.py ├── requirements.txt ├── setup.cfg ├── src │   ├── __pycache__ │   ├── config.py │   ├── main.py │   ├── models.py │   ├── tests.py │   ├── views.py │   └── wsgi.py └── venv ├── bin ├── include ├── lib └── pip-selfcheck.json 7 directories, 16 files ``` The config resides in `docker-compose.prod.yml`: ``` version: "3.7" services: web: build: context: . dockerfile: Docker/Dockerfile env_file: - .web.env ports: - "5000:5000" depends_on: - db command: gunicorn wsgi:app -c ../gunicorn_conf.py working_dir: /app/src db: image: "postgres:11" volumes: - simple_app_data:/var/lib/postgresql/data env_file: - .db.env volumes: simple_app_data: ``` Contents of `gunicorn_conf.py`: ``` bind = "0.0.0.0:5000" workers = 2 ``` And `wsgi.py`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') if __name__ == '__main__': app.run() ``` When I try to run this configuration with `docker-compose -f docker-compose.prod.yml build --force-rm --no-cache web && docker-compose -f docker-compose.prod.yml run web` I get following logs: ``` Starting simple_app_db_1 ... done [2019-12-18 12:15:45 +0000] [1] [INFO] Starting gunicorn 20.0.4 [2019-12-18 12:15:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1) [2019-12-18 12:15:45 +0000] [1] [INFO] Using worker: sync [2019-12-18 12:15:45 +0000] [9] [INFO] Booting worker with pid: 9 [2019-12-18 12:15:45 +0000] [10] [INFO] Booting worker with pid: 10 /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** ``` So the `wsgi.py` file is not the `__main__`. However, when I try to get rid of this `if`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') app.run() ``` I get: ``` OSError: [Errno 98] Address already in use ``` How can I correct this config to use gunicorn?
2019/12/18
[ "https://Stackoverflow.com/questions/59391988", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4765864/" ]
1. Close `Visual Studio`. 2. Delete the `*.testlog` files in: *solutionfolder*\.vs\*solution name*\v16\TestStore\*number*.
I faced the same issue just now, and a cleanup helped. As I have had cleanup issues with VS recently (a DB lock prevented a real cleanup from happening), my working cleanup was this:

1. Close VS.
2. In Git Bash in the solution folder, run `git clean -xfd`

Hopefully it helps.
59,391,988
I am trying to set up dockerized production environment for Flask application with gunicorn. I follow this [Digital Ocean's](https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04) instructions together with [testdriven's one](https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/) for dockerizing. Project structure is following: ``` tree -L 2 . ├── Docker │   ├── Dockerfile │   ├── Dockerfile-nginx │   └── nginx.conf ├── dev-requirements.txt ├── docker-compose.prod.yml ├── docker-compose.yml ├── gunicorn_conf.py ├── requirements.txt ├── setup.cfg ├── src │   ├── __pycache__ │   ├── config.py │   ├── main.py │   ├── models.py │   ├── tests.py │   ├── views.py │   └── wsgi.py └── venv ├── bin ├── include ├── lib └── pip-selfcheck.json 7 directories, 16 files ``` The config resides in `docker-compose.prod.yml`: ``` version: "3.7" services: web: build: context: . dockerfile: Docker/Dockerfile env_file: - .web.env ports: - "5000:5000" depends_on: - db command: gunicorn wsgi:app -c ../gunicorn_conf.py working_dir: /app/src db: image: "postgres:11" volumes: - simple_app_data:/var/lib/postgresql/data env_file: - .db.env volumes: simple_app_data: ``` Contents of `gunicorn_conf.py`: ``` bind = "0.0.0.0:5000" workers = 2 ``` And `wsgi.py`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') if __name__ == '__main__': app.run() ``` When I try to run this configuration with `docker-compose -f docker-compose.prod.yml build --force-rm --no-cache web && docker-compose -f docker-compose.prod.yml run web` I get following logs: ``` Starting simple_app_db_1 ... done [2019-12-18 12:15:45 +0000] [1] [INFO] Starting gunicorn 20.0.4 [2019-12-18 12:15:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1) [2019-12-18 12:15:45 +0000] [1] [INFO] Using worker: sync [2019-12-18 12:15:45 +0000] [9] [INFO] Booting worker with pid: 9 [2019-12-18 12:15:45 +0000] [10] [INFO] Booting worker with pid: 10 /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** ``` So the `wsgi.py` file is not the `__main__`. However, when I try to get rid of this `if`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') app.run() ``` I get: ``` OSError: [Errno 98] Address already in use ``` How can I correct this config to use gunicorn?
2019/12/18
[ "https://Stackoverflow.com/questions/59391988", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4765864/" ]
I faced the same issue just now, and a cleanup helped. As I have had cleanup issues with VS recently (a DB lock prevented a real cleanup from happening), my working cleanup was this:

1. Close VS.
2. In Git Bash in the solution folder, run `git clean -xfd`

Hopefully it helps.
Neither of these solutions worked for me. I was able to get the test explorer working by **closing Visual Studio** and **deleting** the "**.vs**" folder. Then **reopen the solution** and let Visual Studio rebuild it.
59,391,988
I am trying to set up dockerized production environment for Flask application with gunicorn. I follow this [Digital Ocean's](https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04) instructions together with [testdriven's one](https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/) for dockerizing. Project structure is following: ``` tree -L 2 . ├── Docker │   ├── Dockerfile │   ├── Dockerfile-nginx │   └── nginx.conf ├── dev-requirements.txt ├── docker-compose.prod.yml ├── docker-compose.yml ├── gunicorn_conf.py ├── requirements.txt ├── setup.cfg ├── src │   ├── __pycache__ │   ├── config.py │   ├── main.py │   ├── models.py │   ├── tests.py │   ├── views.py │   └── wsgi.py └── venv ├── bin ├── include ├── lib └── pip-selfcheck.json 7 directories, 16 files ``` The config resides in `docker-compose.prod.yml`: ``` version: "3.7" services: web: build: context: . dockerfile: Docker/Dockerfile env_file: - .web.env ports: - "5000:5000" depends_on: - db command: gunicorn wsgi:app -c ../gunicorn_conf.py working_dir: /app/src db: image: "postgres:11" volumes: - simple_app_data:/var/lib/postgresql/data env_file: - .db.env volumes: simple_app_data: ``` Contents of `gunicorn_conf.py`: ``` bind = "0.0.0.0:5000" workers = 2 ``` And `wsgi.py`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') if __name__ == '__main__': app.run() ``` When I try to run this configuration with `docker-compose -f docker-compose.prod.yml build --force-rm --no-cache web && docker-compose -f docker-compose.prod.yml run web` I get following logs: ``` Starting simple_app_db_1 ... done [2019-12-18 12:15:45 +0000] [1] [INFO] Starting gunicorn 20.0.4 [2019-12-18 12:15:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1) [2019-12-18 12:15:45 +0000] [1] [INFO] Using worker: sync [2019-12-18 12:15:45 +0000] [9] [INFO] Booting worker with pid: 9 [2019-12-18 12:15:45 +0000] [10] [INFO] Booting worker with pid: 10 /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** ``` So the `wsgi.py` file is not the `__main__`. However, when I try to get rid of this `if`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') app.run() ``` I get: ``` OSError: [Errno 98] Address already in use ``` How can I correct this config to use gunicorn?
2019/12/18
[ "https://Stackoverflow.com/questions/59391988", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4765864/" ]
I faced the same issue just now, and a cleanup helped. As I have had cleanup issues with VS recently (a DB lock prevented a real cleanup from happening), my working cleanup was this:

1. Close VS.
2. In Git Bash in the solution folder, run `git clean -xfd`

Hopefully it helps.
According to the Visual Studio developer community (found by going to the Help menu and selecting Feedback), updating Visual Studio to version 16.5.5 resolves the issue. FYI: they released this in February 2020.

I can confirm it works (I was on VS 16.4.6).
59,391,988
I am trying to set up dockerized production environment for Flask application with gunicorn. I follow this [Digital Ocean's](https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04) instructions together with [testdriven's one](https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/) for dockerizing. Project structure is following: ``` tree -L 2 . ├── Docker │   ├── Dockerfile │   ├── Dockerfile-nginx │   └── nginx.conf ├── dev-requirements.txt ├── docker-compose.prod.yml ├── docker-compose.yml ├── gunicorn_conf.py ├── requirements.txt ├── setup.cfg ├── src │   ├── __pycache__ │   ├── config.py │   ├── main.py │   ├── models.py │   ├── tests.py │   ├── views.py │   └── wsgi.py └── venv ├── bin ├── include ├── lib └── pip-selfcheck.json 7 directories, 16 files ``` The config resides in `docker-compose.prod.yml`: ``` version: "3.7" services: web: build: context: . dockerfile: Docker/Dockerfile env_file: - .web.env ports: - "5000:5000" depends_on: - db command: gunicorn wsgi:app -c ../gunicorn_conf.py working_dir: /app/src db: image: "postgres:11" volumes: - simple_app_data:/var/lib/postgresql/data env_file: - .db.env volumes: simple_app_data: ``` Contents of `gunicorn_conf.py`: ``` bind = "0.0.0.0:5000" workers = 2 ``` And `wsgi.py`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') if __name__ == '__main__': app.run() ``` When I try to run this configuration with `docker-compose -f docker-compose.prod.yml build --force-rm --no-cache web && docker-compose -f docker-compose.prod.yml run web` I get following logs: ``` Starting simple_app_db_1 ... done [2019-12-18 12:15:45 +0000] [1] [INFO] Starting gunicorn 20.0.4 [2019-12-18 12:15:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1) [2019-12-18 12:15:45 +0000] [1] [INFO] Using worker: sync [2019-12-18 12:15:45 +0000] [9] [INFO] Booting worker with pid: 9 [2019-12-18 12:15:45 +0000] [10] [INFO] Booting worker with pid: 10 /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** ``` So the `wsgi.py` file is not the `__main__`. However, when I try to get rid of this `if`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') app.run() ``` I get: ``` OSError: [Errno 98] Address already in use ``` How can I correct this config to use gunicorn?
2019/12/18
[ "https://Stackoverflow.com/questions/59391988", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4765864/" ]
I faced the same issue just now, and a cleanup helped. As I have had cleanup issues with VS recently (a DB lock prevented a real cleanup from happening), my working cleanup was this:

1. Close VS.
2. In Git Bash in the solution folder, run `git clean -xfd`

Hopefully it helps.
Steps as below:

1. Close Visual Studio.
2. Go to the project folder.
3. Find the ".vs" folder. (Make sure you are also showing hidden items.)
4. Delete the ".vs" folder.
5. Good to go: open Visual Studio, then build and run the project.
59,391,988
I am trying to set up dockerized production environment for Flask application with gunicorn. I follow this [Digital Ocean's](https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04) instructions together with [testdriven's one](https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/) for dockerizing. Project structure is following: ``` tree -L 2 . ├── Docker │   ├── Dockerfile │   ├── Dockerfile-nginx │   └── nginx.conf ├── dev-requirements.txt ├── docker-compose.prod.yml ├── docker-compose.yml ├── gunicorn_conf.py ├── requirements.txt ├── setup.cfg ├── src │   ├── __pycache__ │   ├── config.py │   ├── main.py │   ├── models.py │   ├── tests.py │   ├── views.py │   └── wsgi.py └── venv ├── bin ├── include ├── lib └── pip-selfcheck.json 7 directories, 16 files ``` The config resides in `docker-compose.prod.yml`: ``` version: "3.7" services: web: build: context: . dockerfile: Docker/Dockerfile env_file: - .web.env ports: - "5000:5000" depends_on: - db command: gunicorn wsgi:app -c ../gunicorn_conf.py working_dir: /app/src db: image: "postgres:11" volumes: - simple_app_data:/var/lib/postgresql/data env_file: - .db.env volumes: simple_app_data: ``` Contents of `gunicorn_conf.py`: ``` bind = "0.0.0.0:5000" workers = 2 ``` And `wsgi.py`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') if __name__ == '__main__': app.run() ``` When I try to run this configuration with `docker-compose -f docker-compose.prod.yml build --force-rm --no-cache web && docker-compose -f docker-compose.prod.yml run web` I get following logs: ``` Starting simple_app_db_1 ... done [2019-12-18 12:15:45 +0000] [1] [INFO] Starting gunicorn 20.0.4 [2019-12-18 12:15:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1) [2019-12-18 12:15:45 +0000] [1] [INFO] Using worker: sync [2019-12-18 12:15:45 +0000] [9] [INFO] Booting worker with pid: 9 [2019-12-18 12:15:45 +0000] [10] [INFO] Booting worker with pid: 10 /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** ``` So the `wsgi.py` file is not the `__main__`. However, when I try to get rid of this `if`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') app.run() ``` I get: ``` OSError: [Errno 98] Address already in use ``` How can I correct this config to use gunicorn?
2019/12/18
[ "https://Stackoverflow.com/questions/59391988", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4765864/" ]
1. Close `Visual Studio`. 2. Delete the `*.testlog` files in: *solutionfolder*\.vs\*solution name*\v16\TestStore\*number*.
Neither of these solutions worked for me. I was able to get the test explorer working by **closing Visual Studio** and **deleting** the "**.vs**" folder. Then **reopen the solution** and let Visual Studio rebuild it.
59,391,988
I am trying to set up dockerized production environment for Flask application with gunicorn. I follow this [Digital Ocean's](https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04) instructions together with [testdriven's one](https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/) for dockerizing. Project structure is following: ``` tree -L 2 . ├── Docker │   ├── Dockerfile │   ├── Dockerfile-nginx │   └── nginx.conf ├── dev-requirements.txt ├── docker-compose.prod.yml ├── docker-compose.yml ├── gunicorn_conf.py ├── requirements.txt ├── setup.cfg ├── src │   ├── __pycache__ │   ├── config.py │   ├── main.py │   ├── models.py │   ├── tests.py │   ├── views.py │   └── wsgi.py └── venv ├── bin ├── include ├── lib └── pip-selfcheck.json 7 directories, 16 files ``` The config resides in `docker-compose.prod.yml`: ``` version: "3.7" services: web: build: context: . dockerfile: Docker/Dockerfile env_file: - .web.env ports: - "5000:5000" depends_on: - db command: gunicorn wsgi:app -c ../gunicorn_conf.py working_dir: /app/src db: image: "postgres:11" volumes: - simple_app_data:/var/lib/postgresql/data env_file: - .db.env volumes: simple_app_data: ``` Contents of `gunicorn_conf.py`: ``` bind = "0.0.0.0:5000" workers = 2 ``` And `wsgi.py`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') if __name__ == '__main__': app.run() ``` When I try to run this configuration with `docker-compose -f docker-compose.prod.yml build --force-rm --no-cache web && docker-compose -f docker-compose.prod.yml run web` I get following logs: ``` Starting simple_app_db_1 ... done [2019-12-18 12:15:45 +0000] [1] [INFO] Starting gunicorn 20.0.4 [2019-12-18 12:15:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1) [2019-12-18 12:15:45 +0000] [1] [INFO] Using worker: sync [2019-12-18 12:15:45 +0000] [9] [INFO] Booting worker with pid: 9 [2019-12-18 12:15:45 +0000] [10] [INFO] Booting worker with pid: 10 /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** ``` So the `wsgi.py` file is not the `__main__`. However, when I try to get rid of this `if`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') app.run() ``` I get: ``` OSError: [Errno 98] Address already in use ``` How can I correct this config to use gunicorn?
2019/12/18
[ "https://Stackoverflow.com/questions/59391988", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4765864/" ]
1. Close `Visual Studio`. 2. Delete the `*.testlog` files in: *solutionfolder*\.vs\*solution name*\v16\TestStore\*number*.
According to the Visual Studio developer community (found by going to the Help menu and selecting Feedback), updating Visual Studio to version 16.5.5 resolves the issue. FYI: they released this in February 2020.

I can confirm it works (I was on VS 16.4.6).
59,391,988
I am trying to set up dockerized production environment for Flask application with gunicorn. I follow this [Digital Ocean's](https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04) instructions together with [testdriven's one](https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/) for dockerizing. Project structure is following: ``` tree -L 2 . ├── Docker │   ├── Dockerfile │   ├── Dockerfile-nginx │   └── nginx.conf ├── dev-requirements.txt ├── docker-compose.prod.yml ├── docker-compose.yml ├── gunicorn_conf.py ├── requirements.txt ├── setup.cfg ├── src │   ├── __pycache__ │   ├── config.py │   ├── main.py │   ├── models.py │   ├── tests.py │   ├── views.py │   └── wsgi.py └── venv ├── bin ├── include ├── lib └── pip-selfcheck.json 7 directories, 16 files ``` The config resides in `docker-compose.prod.yml`: ``` version: "3.7" services: web: build: context: . dockerfile: Docker/Dockerfile env_file: - .web.env ports: - "5000:5000" depends_on: - db command: gunicorn wsgi:app -c ../gunicorn_conf.py working_dir: /app/src db: image: "postgres:11" volumes: - simple_app_data:/var/lib/postgresql/data env_file: - .db.env volumes: simple_app_data: ``` Contents of `gunicorn_conf.py`: ``` bind = "0.0.0.0:5000" workers = 2 ``` And `wsgi.py`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') if __name__ == '__main__': app.run() ``` When I try to run this configuration with `docker-compose -f docker-compose.prod.yml build --force-rm --no-cache web && docker-compose -f docker-compose.prod.yml run web` I get following logs: ``` Starting simple_app_db_1 ... done [2019-12-18 12:15:45 +0000] [1] [INFO] Starting gunicorn 20.0.4 [2019-12-18 12:15:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1) [2019-12-18 12:15:45 +0000] [1] [INFO] Using worker: sync [2019-12-18 12:15:45 +0000] [9] [INFO] Booting worker with pid: 9 [2019-12-18 12:15:45 +0000] [10] [INFO] Booting worker with pid: 10 /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** ``` So the `wsgi.py` file is not the `__main__`. However, when I try to get rid of this `if`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') app.run() ``` I get: ``` OSError: [Errno 98] Address already in use ``` How can I correct this config to use gunicorn?
2019/12/18
[ "https://Stackoverflow.com/questions/59391988", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4765864/" ]
1. Close `Visual Studio`. 2. Delete the `*.testlog` files in: *solutionfolder*\.vs\*solution name*\v16\TestStore\*number*.
Steps as below:

1. Close Visual Studio.
2. Go to the project folder.
3. Find the ".vs" folder. (Make sure you are also showing hidden items.)
4. Delete the ".vs" folder.
5. Good to go: open Visual Studio, then build and run the project.
59,391,988
I am trying to set up dockerized production environment for Flask application with gunicorn. I follow this [Digital Ocean's](https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04) instructions together with [testdriven's one](https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/) for dockerizing. Project structure is following: ``` tree -L 2 . ├── Docker │   ├── Dockerfile │   ├── Dockerfile-nginx │   └── nginx.conf ├── dev-requirements.txt ├── docker-compose.prod.yml ├── docker-compose.yml ├── gunicorn_conf.py ├── requirements.txt ├── setup.cfg ├── src │   ├── __pycache__ │   ├── config.py │   ├── main.py │   ├── models.py │   ├── tests.py │   ├── views.py │   └── wsgi.py └── venv ├── bin ├── include ├── lib └── pip-selfcheck.json 7 directories, 16 files ``` The config resides in `docker-compose.prod.yml`: ``` version: "3.7" services: web: build: context: . dockerfile: Docker/Dockerfile env_file: - .web.env ports: - "5000:5000" depends_on: - db command: gunicorn wsgi:app -c ../gunicorn_conf.py working_dir: /app/src db: image: "postgres:11" volumes: - simple_app_data:/var/lib/postgresql/data env_file: - .db.env volumes: simple_app_data: ``` Contents of `gunicorn_conf.py`: ``` bind = "0.0.0.0:5000" workers = 2 ``` And `wsgi.py`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') if __name__ == '__main__': app.run() ``` When I try to run this configuration with `docker-compose -f docker-compose.prod.yml build --force-rm --no-cache web && docker-compose -f docker-compose.prod.yml run web` I get following logs: ``` Starting simple_app_db_1 ... done [2019-12-18 12:15:45 +0000] [1] [INFO] Starting gunicorn 20.0.4 [2019-12-18 12:15:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1) [2019-12-18 12:15:45 +0000] [1] [INFO] Using worker: sync [2019-12-18 12:15:45 +0000] [9] [INFO] Booting worker with pid: 9 [2019-12-18 12:15:45 +0000] [10] [INFO] Booting worker with pid: 10 /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** /usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' ********** wsgi ********** ``` So the `wsgi.py` file is not the `__main__`. However, when I try to get rid of this `if`: ``` from main import app print('*'*10) print(__name__) print('*'*10+'\n') app.run() ``` I get: ``` OSError: [Errno 98] Address already in use ``` How can I correct this config to use gunicorn?
2019/12/18
[ "https://Stackoverflow.com/questions/59391988", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4765864/" ]
Neither of these solutions worked for me. I was able to get the Test Explorer working by **closing Visual Studio** and **deleting** the "**.vs**" folder. Then **reopen the solution** and let Visual Studio rebuild it.
Steps as below:

1. Close Visual Studio.
2. Go to the project folder.
3. Find the ".vs" folder. (Make sure you are also showing hidden items.)
4. Delete the ".vs" folder.
5. Good to go: open Visual Studio, build, and run the project.
59,391,988
I am trying to set up a dockerized production environment for a Flask application with gunicorn. I am following these [Digital Ocean](https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04) instructions together with [testdriven's](https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/) for dockerizing. The project structure is as follows:

```
tree -L 2
.
├── Docker
│   ├── Dockerfile
│   ├── Dockerfile-nginx
│   └── nginx.conf
├── dev-requirements.txt
├── docker-compose.prod.yml
├── docker-compose.yml
├── gunicorn_conf.py
├── requirements.txt
├── setup.cfg
├── src
│   ├── __pycache__
│   ├── config.py
│   ├── main.py
│   ├── models.py
│   ├── tests.py
│   ├── views.py
│   └── wsgi.py
└── venv
    ├── bin
    ├── include
    ├── lib
    └── pip-selfcheck.json

7 directories, 16 files
```

The config resides in `docker-compose.prod.yml`:

```
version: "3.7"

services:
  web:
    build:
      context: .
      dockerfile: Docker/Dockerfile
    env_file:
      - .web.env
    ports:
      - "5000:5000"
    depends_on:
      - db
    command: gunicorn wsgi:app -c ../gunicorn_conf.py
    working_dir: /app/src

  db:
    image: "postgres:11"
    volumes:
      - simple_app_data:/var/lib/postgresql/data
    env_file:
      - .db.env

volumes:
  simple_app_data:
```

Contents of `gunicorn_conf.py`:

```
bind = "0.0.0.0:5000"
workers = 2
```

And `wsgi.py`:

```
from main import app

print('*'*10)
print(__name__)
print('*'*10+'\n')

if __name__ == '__main__':
    app.run()
```

When I try to run this configuration with `docker-compose -f docker-compose.prod.yml build --force-rm --no-cache web && docker-compose -f docker-compose.prod.yml run web` I get the following logs:

```
Starting simple_app_db_1 ... done
[2019-12-18 12:15:45 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2019-12-18 12:15:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
[2019-12-18 12:15:45 +0000] [1] [INFO] Using worker: sync
[2019-12-18 12:15:45 +0000] [9] [INFO] Booting worker with pid: 9
[2019-12-18 12:15:45 +0000] [10] [INFO] Booting worker with pid: 10
/usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future.  Set it to True or False to suppress this warning.
  'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and '
**********
wsgi
**********

/usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future.  Set it to True or False to suppress this warning.
  'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and '
**********
wsgi
**********
```

So the `wsgi.py` file is not the `__main__`. However, when I try to get rid of this `if`:

```
from main import app

print('*'*10)
print(__name__)
print('*'*10+'\n')

app.run()
```

I get:

```
OSError: [Errno 98] Address already in use
```

How can I correct this config to use gunicorn?
2019/12/18
[ "https://Stackoverflow.com/questions/59391988", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4765864/" ]
According to the Visual Studio developer community (found by going to the Help menu and selecting Feedback), updating Visual Studio to version 16.5.5 will resolve the issue.

FYI: they released this in February 2020.

I can confirm it works (I was on VS 16.4.6).
Steps as below:

1. Close Visual Studio.
2. Go to the project folder.
3. Find the ".vs" folder. (Make sure you are also showing hidden items.)
4. Delete the ".vs" folder.
5. Good to go: open Visual Studio, build, and run the project.
52,305,075
Per [Google's Cloud Datastore Emulator installation instructions](https://cloud.google.com/datastore/docs/tools/datastore-emulator), I was able to install and run the emulator in a *bash* terminal window without problems with `gcloud beta emulators datastore start --project gramm-id`. I also set up the environment variables, [per the instructions](https://cloud.google.com/datastore/docs/tools/datastore-emulator#automatically_setting_the_variables), in another terminal with `$(gcloud beta emulators datastore env-init)` and verified they were defined.

However, when I run my Python script to add an entity to the local datastore with this code:

```py
import os

from google.cloud import datastore

print(os.environ['DATASTORE_HOST'])           # output: http://localhost:8081
print(os.environ['DATASTORE_EMULATOR_HOST'])  # output: localhost:8081

client = datastore.Client('gramm-id')
kind = 'Task'
name = 'simpleTask'
task_key = client.key(kind, name)

task = datastore.Entity(key=task_key)
task['description'] = 'Buy milk'

client.put(task)
```

I get the error:

```
Traceback (most recent call last):
  File "tools.py", line 237, in <module>
    client = datastore.Client('gramm-id')
  File "/home/.../lib/python3.6/site-packages/google/cloud/datastore/client.py", line 205, in __init__
    project=project, credentials=credentials, _http=_http)

  ... long stack trace ....

  File "/home/.../lib/python3.6/site-packages/google/auth/_default.py", line 306, in default
    raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)
google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://developers.google.com/accounts/docs/application-default-credentials.
```

I don't think I need to [create a GCP service account and provide access credentials](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually) to use the datastore emulator on my machine.

My system:

* Ubuntu 18.04
* Anaconda python 3.6.6
* Google Cloud SDK 215.0.0
* cloud-datastore-emulator 2.0.2.

What am I missing?
2018/09/13
[ "https://Stackoverflow.com/questions/52305075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1181911/" ]
> gcloud auth application-default login

This will prompt you to log in through a browser window and will set your GOOGLE\_APPLICATION\_CREDENTIALS correctly for you. [[1]](https://cloud.google.com/docs/authentication/production#calling)
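For reference, once that command has been run, the client library from the question should pick up the credentials without any extra code; a minimal sketch, assuming the project id from the question:

```python
from google.cloud import datastore

# Application Default Credentials are discovered automatically after
# running `gcloud auth application-default login`
client = datastore.Client(project='gramm-id')
```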
In theory you should be able to use mock credentials, e.g.:

```
import google.auth.credentials
import requests
from google.cloud import datastore


class EmulatorCreds(google.auth.credentials.Credentials):

    def __init__(self):
        self.token = b'secret'
        self.expiry = None

    @property
    def valid(self):
        return True

    def refresh(self, _):
        raise RuntimeError('Should never be refreshed.')


client = datastore.Client(
    project='gramm-id',
    credentials=EmulatorCreds(),
    _http=requests.Session()  # un-authorized
)
```

However [it seems like this doesn't currently work](https://github.com/GoogleCloudPlatform/google-cloud-python/issues/3920), so for now you'll need to set `GOOGLE_APPLICATION_CREDENTIALS`.
42,068,203
I am learning to use scrapinghub.com which runs in python 2.x I have written a script which uses Scrapy, I have crawled a string like below: ``` %3Ctable%20width%3D%22100%25%22%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23ff0000%3Bfont-size%3A20pt%3Btext-align%3Acenter%3Bfont-weight%3Abold%22%3E%0D%0A%09%E6%84%9B%E8%BF%AA%E9%81%94%20adidas%20Energy%20Boost%20%E8%B7%AF%E8%B7%91%20%E4%BD%8E%E7%AD%92%20%E9%81%8B%E5%8B%95%20%E4%BC%91%E9%96%92%20%E8%B7%91%E9%9E%8B%20%E8%B7%91%E6%AD%A5%20%E6%85%A2%E8%B7%91%20%E9%A6%AC%E6%8B%89%E6%9D%BE%20%E5%81%A5%E8%BA%AB%E6%88%BF%20%E6%B5%81%E8%A1%8C%20%E7%90%83%E9%9E%8B%20%E5%A5%B3%E8%A3%9D%20%E5%A5%B3%E6%AC%BE%20%E5%A5%B3%20%E5%A5%B3%E9%9E%8B%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23000000%3Bfont-size%3A14pt%3Btext-align%3Acenter%22%3E%0D%0A%09%EF%BC%8A%E9%9D%88%E6%B4%BB%E3%80%81%E8%BC%95%E9%87%8F%E3%80%81%E8%88%92%E9%81%A9%E5%85%BC%E5%85%B7%E7%9A%84%E9%81%B8%E6%93%87%3Cbr%20%2F%3E%EF%BC%8A%E7%B0%A1%E7%B4%84%E7%8F%BE%E4%BB%A3%E7%9A%84%E7%94%A2%E5%93%81%E8%A8%AD%E8%A8%88%2C%E5%B9%B4%E8%BC%95%E5%A4%9A%E6%A8%A3%E5%8C%96%E7%9A%84%E9%85%8D%E8%89%B2%E6%96%B9%E6%A1%88%2C%E6%9B%B4%E7%82%BA%E7%AC%A6%E5%90%88%E5%B9%B4%E8%BC%95%E6%B6%88%E8%B2%BB%E8%80%85%E7%9A%84%E5%AF%A9%E7%BE%8E%E5%81%8F%E5%A5%BD%3Cbr%20%2F%3E%EF%BC%8A%E7%B0%A1%E5%96%AE%E7%9A%84%E7%B7%9A%E6%A2%9D%E5%92%8C%E4%B9%BE%E6%B7%A8%E7%9A%84%E8%A8%AD%E8%A8%88%2C%E6%8F%90%E4%BE%9B%E4%BA%86%E7%8D%A8%E7%89%B9%E7%9A%84%E7%A9%BF%E6%90%AD%E7%B5%84%E5%90%88%3Cbr%20%2F%3E%EF%BC%8A%E9%80%8F%E6%B0%A3%E8%88%87%E4%BF%9D%E8%AD%B7%E6%80%A7%2C%E7%B5%90%E5%90%88%E4%BA%86ADIDAS%E7%9A%84%E5%89%B5%E6%96%B0%E7%A7%91%E6%8A%80%2C%E5%89%B5%E9%80%A0%E4%BA%86%E5%AE%8C%E7%BE%8E%E7%9A%84%E7%94%A2%E5%93%81%3Cbr%20%2F%3E%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cdiv%20align%3D%22center%22%3E%3Cimg%20src%3D%22https%3A%2F%2Fs.yimg.com%2Fwb%2Fimages%2F2B558E585E39649599A9A266349EABD17A4ABC18%22%20%2F%3E%3C%2Fdiv%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C%2Ftable%3E%3Ctable%20width%3D%22100%25%22%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23000000%3Bfont-size%3A12pt%3Btext-align%3Aleft%3Bfont-weight%3A100%22%3E%0D%0A%09%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cdiv%20align%3D%22center%22%3E%3Cimg%20src%3D%22https%3A%2F%2Fs.yimg.com%2Fwb%2Fimages%2F0F1A6CBFE6F6631189D491A17A2A2E7C388F194E%22%20%2F%3E%3Cdiv%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C%2Ftable%3E%3Ctable%20width%3D%22100%25%22%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23000000%3Bfont-size%3A12pt%3Btext-align%3Aleft%3Bfont-weight%3A100%22%3E%0D%0A%09%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%
20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cdiv%20align%3D%22center%22%3E%3Cimg%20src%3D%22https%3A%2F%2Fs.yimg.com%2Fwb%2Fimages%2FA0C9B09CAC784E2CA81A572E8F9F2E5721812607%22%20%2F%3E%3Cdiv%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C%2Ftable%3E ``` Which always gives me the following: ```html <table width="100%"> <tr><td><p style="color:#fa6b81;font-size:18pt;text-align:center;font-weight:bold">(女) æè¿ªé ADIDAD ENERGY CLOUD W éæ°£ç¶²å¸ ç¾æ­ é» èè·ç¶ ä¼éé æ¢è·é</p></td></tr> <tr><td><p style="color:#000000;font-size:12pt;text-align:center"><font color="BLUE">â»æ¬è³£å ´åççºYAHOOè³¼ç©ä¸­å¿å°ç¨ï¼å¶å®å¹³å°è¥ä½¿ç¨æ¬ç«ç¸éåç~ç屬侵æ¬!!</font><BR><BR></p></td></tr> <tr><td><div align="center"><img src="https://s.yimg.com/wb/images/739F6D54CD0AA4440D67A8BF0E569B0229AB1B37" /></div></td></tr> </table><table width="100%"> <tr><td><p style="color:#000000;font-size:12pt;text-align:left;font-weight:100"></p></td></tr> <tr><td><div align="center"><img src="https://s.yimg.com/wb/images/91D28279378AF5E3C26740855775ECAD3A7F4A6B" /><div></td></tr> <tr><td></td></tr> </table><table width="100%"> <tr><td><p style="color:#000000;font-size:12pt;text-align:left;font-weight:100"></p></td></tr> <tr><td><div align="center"><img src="https://s.yimg.com/wb/images/B2237D69C0886CCF330AFA459E3C03BB4454D01B" /><div></td></tr> <tr><td></td></tr> </table><table width="100%"> <tr><td><p style="color:#000000;font-size:12pt;text-align:left;font-weight:100"></p></td></tr> <tr><td><div align="center"><img src="https://s.yimg.com/wb/images/B60D486A89EDBAFBFE824F00309D069517654050" /><div></td></tr> <tr><td></td></tr> </table><table width="100%"> <tr><td><p style="color:#000000;font-size:12pt;text-align:left;font-weight:100"></p></td></tr> <tr><td><div align="center"><img src="https://s.yimg.com/wb/images/57EAC1C8B09A019AC734F50FB51DB87D0B319002" /><div></td></tr> <tr><td></td></tr> </table><table width="100%"> <tr><td><p style="color:#000000;font-size:12pt;text-align:left;font-weight:100"></p></td></tr> <tr><td><div align="center"><img src="https://s.yimg.com/wb/images/CEC5C31984853968755AE7465BCB251C82676B0B" /><div></td></tr> <tr><td></td></tr> </table><table width="100%"> <tr><td><p style="color:#000000;font-size:12pt;text-align:left;font-weight:100"></p></td></tr> <tr><td><div align="center"><img src="https://s.yimg.com/wb/images/B065DFBACAEC5ABED898492265DEB710EA052358" /><div></td></tr> <tr><td></td></tr> </table> ``` I always get the garbage text `(女) æè¿ªé ADIDAD ENERGY CLOUD W éæ°£ç¶²å`¸ The conversion code from url encoded text to unicode is like below ``` special_text = re.sub("<.*?>", "", special_text) special_text = re.sub("<!--", "", special_text) special_text = re.sub("-->", "", special_text) special_text = re.sub("\n", "", special_text) special_text = special_text.strip() special_text = unquote(special_text) special_text = re.sub("\n", "", special_text) special_text = re.sub("\r", "", special_text) special_text = re.sub("\t", "", special_text) special_text = u' '.join((special_text, '')).encode('utf-8').strip() ``` I have tried a lot of different codes like ``` special_text = special_text.encode('utf-8') special_text = special_text.decode('utf-8') ``` Which either gives me error or still the garbage text Not sure what is the proper way to convert to unicode?
2017/02/06
[ "https://Stackoverflow.com/questions/42068203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/339229/" ]
Your data is perfectly valid UTF-8, encoded into a URL (so URL-encoded). Your output indicates you are looking at a [Mojibake](https://en.wikipedia.org/wiki/Mojibake), where your own software (console, terminal, text editor) is using a *different* codec to interpret the UTF-8 data.

I suspect your setup is using CP-1254:

```
>>> print text.encode('utf8').decode('sloppy-cp1254')  # codec from the ftfy project
æ„›è¿ªé” adidas Energy Boost 路跑 ä½ç­’ é‹å‹• 休閒 è·‘é‹ è·‘æ­¥ 慢跑 é¦¬æ‹‰æ¾ å¥èº«æˆ¿ æµè¡Œ çƒé‹ å¥³è£ å¥³æ¬¾ 女 å¥³é‹
ï¼Šéˆæ´»ã€è¼•é‡ã€èˆ’é©å…¼å…·çš„鏿“‡
*簡約ç¾ä»£çš„產å“設計,年輕多樣化的é…色方案,更為符åˆå¹´è¼•消費者的審ç¾å好
*簡單的線æ¢å’Œä¹¾æ·¨çš„設計,æä¾›äº†ç¨ç‰¹çš„ç©¿æ­çµ„åˆ
ï¼Šé€æ°£èˆ‡ä¿è­·æ€§,çµåˆäº†ADIDAS的創新科技,創造了完ç¾çš„ç”¢å“
```

If you don't know how to fix your terminal, I suggest you write the data to a file instead and use an editor that you can tell which codec to use to read the data:

```
import io

with io.open('somefilename.txt', 'w', encoding='utf8') as f:
    f.write(unicode_value)
```

I also strongly recommend you use an actual HTML parser to handle the data, and not rely on regular expressions. The following code for Python 2 and 3 produces a Unicode value with the textual information from your URL:

```
from bs4 import BeautifulSoup
try:
    from urllib import unquote
except ImportError:
    from urllib.parse import unquote

soup = BeautifulSoup(unquote(special_text), 'html.parser')  # consider installing lxml instead
text = soup.get_text('\n', strip=True)  # put newlines between sections
print(text)
```

For your input, on my Mac OSX terminal configured for handling Unicode text as UTF-8, I see:

```none
愛迪達 adidas Energy Boost 路跑 低筒 運動 休閒 跑鞋 跑步 慢跑 馬拉松 健身房 流行 球鞋 女裝 女款 女 女鞋
*靈活、輕量、舒適兼具的選擇
*簡約現代的產品設計,年輕多樣化的配色方案,更為符合年輕消費者的審美偏好
*簡單的線條和乾淨的設計,提供了獨特的穿搭組合
*透氣與保護性,結合了ADIDAS的創新科技,創造了完美的產品
```
I don't know why, but for some reason I get it to work on scrapinghub.com like below. Let say I have an HTML text like: ``` <html> <div class="a"> Some chinese text </div> <div class="b"> QUOTED text got chinese in it %3Ctable%20width%3D%22100%25%22%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23ff0000%3Bfont-size%3A20pt%3Btext-align%3Acenter%3Bfont-weight%3Abold%22%3E%0D%0A%09%E6%84%9B%E8%BF%AA%E9%81%94%20adidas%20Energy%20Boost%20%E8%B7%AF%E8%B7%91%20%E4%BD%8E%E7%AD%92%20%E9%81%8B%E5%8B%95%20%E4%BC%91%E9%96%92%20%E8%B7%91%E9%9E%8B%20%E8%B7%91%E6%AD%A5%20%E6%85%A2%E8%B7%91%20%E9%A6%AC%E6%8B%89%E6%9D%BE%20%E5%81%A5%E8%BA%AB%E6%88%BF%20%E6%B5%81%E8%A1%8C%20%E7%90%83%E9%9E%8B%20%E5%A5%B3%E8%A3%9D%20%E5%A5%B3%E6%AC%BE%20%E5%A5%B3%20%E5%A5%B3%E9%9E%8B%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23000000%3Bfont-size%3A14pt%3Btext-align%3Acenter%22%3E%0D%0A%09%EF%BC%8A%E9%9D%88%E6%B4%BB%E3%80%81%E8%BC%95%E9%87%8F%E3%80%81%E8%88%92%E9%81%A9%E5%85%BC%E5%85%B7%E7%9A%84%E9%81%B8%E6%93%87%3Cbr%20%2F%3E%EF%BC%8A%E7%B0%A1%E7%B4%84%E7%8F%BE%E4%BB%A3%E7%9A%84%E7%94%A2%E5%93%81%E8%A8%AD%E8%A8%88%2C%E5%B9%B4%E8%BC%95%E5%A4%9A%E6%A8%A3%E5%8C%96%E7%9A%84%E9%85%8D%E8%89%B2%E6%96%B9%E6%A1%88%2C%E6%9B%B4%E7%82%BA%E7%AC%A6%E5%90%88%E5%B9%B4%E8%BC%95%E6%B6%88%E8%B2%BB%E8%80%85%E7%9A%84%E5%AF%A9%E7%BE%8E%E5%81%8F%E5%A5%BD%3Cbr%20%2F%3E%EF%BC%8A%E7%B0%A1%E5%96%AE%E7%9A%84%E7%B7%9A%E6%A2%9D%E5%92%8C%E4%B9%BE%E6%B7%A8%E7%9A%84%E8%A8%AD%E8%A8%88%2C%E6%8F%90%E4%BE%9B%E4%BA%86%E7%8D%A8%E7%89%B9%E7%9A%84%E7%A9%BF%E6%90%AD%E7%B5%84%E5%90%88%3Cbr%20%2F%3E%EF%BC%8A%E9%80%8F%E6%B0%A3%E8%88%87%E4%BF%9D%E8%AD%B7%E6%80%A7%2C%E7%B5%90%E5%90%88%E4%BA%86ADIDAS%E7%9A%84%E5%89%B5%E6%96%B0%E7%A7%91%E6%8A%80%2C%E5%89%B5%E9%80%A0%E4%BA%86%E5%AE%8C%E7%BE%8E%E7%9A%84%E7%94%A2%E5%93%81%3Cbr%20%2F%3E%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cdiv%20align%3D%22center%22%3E%3Cimg%20src%3D%22https%3A%2F%2Fs.yimg.com%2Fwb%2Fimages%2F2B558E585E39649599A9A266349EABD17A4ABC18%22%20%2F%3E%3C%2Fdiv%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C%2Ftable%3E%3Ctable%20width%3D%22100%25%22%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23000000%3Bfont-size%3A12pt%3Btext-align%3Aleft%3Bfont-weight%3A100%22%3E%0D%0A%09%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cdiv%20align%3D%22center%22%3E%3Cimg%20src%3D%22https%3A%2F%2Fs.yimg.com%2Fwb%2Fimages%2F0F1A6CBFE6F6631189D491A17A2A2E7C388F194E%22%20%2F%3E%3Cdiv%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C%2Ftable%3E%3Ctable%20width%3D%22100%25%22%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23000000%3Bfont-size%3A12pt%3Btext-align%3Aleft%3Bfont-weight%3A100%22%3E%0D%0
A%09%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cdiv%20align%3D%22center%22%3E%3Cimg%20src%3D%22https%3A%2F%2Fs.yimg.com%2Fwb%2Fimages%2FA0C9B09CAC784E2CA81A572E8F9F2E5721812607%22%20%2F%3E%3Cdiv%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C%2Ftable%3E </div> </html> ``` So I parse it to assign class="a" to variable AAA, class="b" to variable BBB if I want to unquote BBB and have the chinese characters display correctly I do the following: ``` BBB = u' '.join((BBB, '')) BBB = BBB.encode('ascii') BBB = unquote(BBB) ``` So when I output both AAA & BBB on scrapinghub, it will both display chinese text correctly. I just want to point out that Martijn Pieters is also correct in his answers when I am doing this locally on my MAC. But just not sure whats going on in scrapinghub that I need to do the above.
9,851,156
I am managing a fairly large Python code base (>2000 lines) that I nevertheless want to be available as a single runnable Python script. So I am searching for a method or a tool to merge a development folder, made up of different Python files, into a single running script.

The thing/method I am searching for should take code split into different files, maybe with a starting `__init__.py` file that contains the imports, and merge it into a single, big script. Much like a preprocessor. Ideally it would be a near-native approach, and better still if I could also keep running the code from the dev folder. I have already checked out pypp and pypreprocessor but they don't seem to address this.

Something like a strange use of `__import__()`, or maybe a bunch of `from foo import *` statements replaced by the preprocessor with the actual code? Obviously I only want to merge my own directory, not common libraries.

**Update**

What I want is exactly this: maintaining the code as a package, and then being able to "compile" it into a single script that is easy to copy-paste, distribute and reuse.
2012/03/24
[ "https://Stackoverflow.com/questions/9851156", "https://Stackoverflow.com", "https://Stackoverflow.com/users/749014/" ]
It sounds like you're asking how to merge your codebase into a single 2000-plus-line source file; are you really, really sure you want to do this? It will make your code harder to maintain. Python files correspond to modules, so unless your main script does `from modname import *` for all its parts, you'll lose the module structure by converting it into one file.

What I would recommend is leaving the sources structured as they are, and solving the problem of how to *distribute* the program:

1. You could use [PyInstaller](https://stackoverflow.com/a/112713/699305), py2exe or something similar to generate a single executable that doesn't even need a Python installation. (If you can count on Python being present, see @Sebastian's comment below.)
2. If you want to distribute your code base for use by other Python programs, you should definitely start by structuring it as a package, so it can be loaded with a single `import`.
3. To distribute a lot of Python source files easily, you can package everything into a zip archive or an "egg" (which is actually a zip archive with special housekeeping info). Python can import modules directly from a zip or egg archive; a short sketch of this follows below.
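As a minimal illustration of point 3 (the archive and package names here are hypothetical):

```python
import sys

# Python's zipimport machinery lets you import straight from a zip/egg
# that sits on sys.path
sys.path.insert(0, 'mycode.zip')   # hypothetical archive with mypackage/ at its root
import mypackage                   # hypothetical package name
```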
[waffles](https://bitbucket.org/ArneBab/waffles) seems to do exactly what you're after, although I've not tried it.

You could probably do this manually, something like:

```
# file1.py

from .file2 import func1, func2

def something():
    func1() + func2()
```

```
# file2.py

def func1():
    pass

def func2():
    pass
```

```
# __init__.py

from .file1 import something

if __name__ == "__main__":
    something()
```

Then you can concatenate all the files together, removing any line starting with `from .`, and it might work.

That said, an executable egg or regular PyPI distribution would be much simpler and more reliable!
61,452,787
I cannot install Django 3 on my Debian 9 system. I follow <https://www.rosehosting.com/blog/how-to-install-python-3-6-4-on-debian-9/> this guide to install a Python 3 because there is no Python 3 in Debian repositories: ```sh :~# python3 Python 3.5.3 (default, Sep 27 2018, 17:25:39) ``` ```sh ~# pip3 -V pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.5) ``` ```sh ~# pip3 install Django==3.0.5 Collecting Django==3.0.5 Could not find a version that satisfies the requirement Django==3.0.5 (from versions: 1.1.3, 1.1.4, 1.2, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.3, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.4, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, 1.4.8, 1.4.9, 1.4.10, 1.4.11, 1.4.12, 1.4.13, 1.4.14, 1.4.15, 1.4.16, 1.4.17, 1.4.18, 1.4.19, 1.4.20, 1.4.21, 1.4.22, 1.5, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.5.6, 1.5.7, 1.5.8, 1.5.9, 1.5.10, 1.5.11, 1.5.12, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.6.8, 1.6.9, 1.6.10, 1.6.11, 1.7, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.11, 1.8a1, 1.8b1, 1.8b2, 1.8rc1, 1.8, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6, 1.8.7, 1.8.8, 1.8.9, 1.8.10, 1.8.11, 1.8.12, 1.8.13, 1.8.14, 1.8.15, 1.8.16, 1.8.17, 1.8.18, 1.8.19, 1.9a1, 1.9b1, 1.9rc1, 1.9rc2, 1.9, 1.9.1, 1.9.2, 1.9.3, 1.9.4, 1.9.5, 1.9.6, 1.9.7, 1.9.8, 1.9.9, 1.9.10, 1.9.11, 1.9.12, 1.9.13, 1.10a1, 1.10b1, 1.10rc1, 1.10, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5, 1.10.6, 1.10.7, 1.10.8, 1.11a1, 1.11b1, 1.11rc1, 1.11, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 1.11.5, 1.11.6, 1.11.7, 1.11.8, 1.11.9, 1.11.10, 1.11.11, 1.11.12, 1.11.13, 1.11.14, 1.11.15, 1.11.16, 1.11.17, 1.11.18, 1.11.20, 1.11.21, 1.11.22, 1.11.23, 1.11.24, 1.11.25, 1.11.26, 1.11.27, 1.11.28, 1.11.29, 2.0a1, 2.0b1, 2.0rc1, 2.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.0.6, 2.0.7, 2.0.8, 2.0.9, 2.0.10, 2.0.12, 2.0.13, 2.1a1, 2.1b1, 2.1rc1, 2.1, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.7, 2.1.8, 2.1.9, 2.1.10, 2.1.11, 2.1.12, 2.1.13, 2.1.14, 2.1.15, 2.2a1, 2.2b1, 2.2rc1, 2.2, 2.2.1, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 2.2.6, 2.2.7, 2.2.8, 2.2.9, 2.2.10, 2.2.11, 2.2.12) No matching distribution found for Django==3.0.5 ```
2020/04/27
[ "https://Stackoverflow.com/questions/61452787", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4003516/" ]
For the latest versions of Django you must be using Python 3.6, 3.7, or 3.8; you're currently using 3.5.

<https://docs.djangoproject.com/en/3.0/faq/install/#faq-python-version-support>
Install python3-venv with:

```
sudo apt install python3-venv
```

and then:

```
mkdir my_django_app
cd my_django_app; python3 -m venv venv
```

ref: <https://linuxize.com/post/how-to-install-django-on-debian-9>
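As a follow-up sketch (an assumption on my part, not part of the original answer): the venv must be created with a Python 3.6+ interpreter, since Django 3.0 dropped 3.5 support; inside the activated venv you can verify and install like this:

```python
import subprocess
import sys

# Django 3.0 supports Python 3.6+ only, so fail fast on older interpreters
assert sys.version_info >= (3, 6), 'Django 3.0 does not support Python 3.5'
subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'Django==3.0.5'])
```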
39,849,641
I am using `flask migrate` for database creation & migration in Flask with flask-sqlalchemy. Everything was working fine until I changed my database user password to one containing '@'; then it stopped working, so I updated my code based on [Writing a connection string when password contains special characters](https://stackoverflow.com/questions/1423804/writing-a-connection-string-when-password-contains-special-characters)

It works for the application but not for Flask-Migrate; it shows an error while migrating, i.e. on `python manage.py db migrate`:

```
ValueError: invalid interpolation syntax in u'mysql://user:p%40ssword@localhost/testdb' at position 15
```

Here the password is `p@ssword` and it is escaped by `urlquote` (see the question linked above).

Full error stack:

```
Traceback (most recent call last):
  File "manage.py", line 20, in <module>
    manager.run()
  File "/usr/local/lib/python2.7/dist-packages/flask_script/__init__.py", line 412, in run
    result = self.handle(sys.argv[0], sys.argv[1:])
  File "/usr/local/lib/python2.7/dist-packages/flask_script/__init__.py", line 383, in handle
    res = handle(*args, **config)
  File "/usr/local/lib/python2.7/dist-packages/flask_script/commands.py", line 216, in __call__
    return self.run(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/flask_migrate/__init__.py", line 177, in migrate
    version_path=version_path, rev_id=rev_id)
  File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 117, in revision
    script_directory.run_env()
  File "/usr/local/lib/python2.7/dist-packages/alembic/script/base.py", line 407, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/usr/local/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 93, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/local/lib/python2.7/dist-packages/alembic/util/compat.py", line 79, in load_module_py
    mod = imp.load_source(module_id, path, fp)
  File "migrations/env.py", line 22, in <module>
    current_app.config.get('SQLALCHEMY_DATABASE_URI'))
  File "/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 218, in set_main_option
    self.set_section_option(self.config_ini_section, name, value)
  File "/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 245, in set_section_option
    self.file_config.set(section, name, value)
  File "/usr/lib/python2.7/ConfigParser.py", line 752, in set
    "position %d" % (value, tmp_value.find('%')))
ValueError: invalid interpolation syntax in u'mysql://user:p%40ssword@localhost/testdb' at position 15
```

Please help
2016/10/04
[ "https://Stackoverflow.com/questions/39849641", "https://Stackoverflow.com", "https://Stackoverflow.com/users/873416/" ]
I have a solution for this issue after experiencing it as well. There's an issue with '%' (percent signs) in the db connection URI after you urlencode the string. I tried substituting the percent sign with double percent signs ('%%'), which gets me past the interpolation error. However, that resulted in not being able to connect to the database because of an incorrect password.

The solution I'm going with for now is to avoid using '%' in my db password. Not a satisfactory solution, but it will do for now. I'll make a note of the issue on Alembic's GitHub. It seems using RawConfigParser in their package could help avoid this issue.
You may want to look at <http://docs.sqlalchemy.org/en/latest/dialects/mysql.html#mysql-unicode>

I was having the same issue with my password and the mysql connector. Using the `mysql+pymysql` connector allowed me to connect both in the application and in the migration scripts.
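For illustration, a minimal sketch of building such a URI with the password URL-escaped (the helper and credentials here are assumptions, not from the original answer); note that, as discussed above, any remaining `%` still has to be doubled before the URI is handed to Alembic's config parser:

```python
from urllib import quote_plus  # Python 2; use urllib.parse.quote_plus on Python 3

password = quote_plus('p@ssword')                   # 'p%40ssword'
uri = 'mysql+pymysql://user:%s@localhost/testdb' % password
alembic_safe_uri = uri.replace('%', '%%')           # safe for config.set_main_option
```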
39,849,641
I am using `flask migrate` for database creation & migration in Flask with flask-sqlalchemy. Everything was working fine until I changed my database user password to one containing '@'; then it stopped working, so I updated my code based on [Writing a connection string when password contains special characters](https://stackoverflow.com/questions/1423804/writing-a-connection-string-when-password-contains-special-characters)

It works for the application but not for Flask-Migrate; it shows an error while migrating, i.e. on `python manage.py db migrate`:

```
ValueError: invalid interpolation syntax in u'mysql://user:p%40ssword@localhost/testdb' at position 15
```

Here the password is `p@ssword` and it is escaped by `urlquote` (see the question linked above).

Full error stack:

```
Traceback (most recent call last):
  File "manage.py", line 20, in <module>
    manager.run()
  File "/usr/local/lib/python2.7/dist-packages/flask_script/__init__.py", line 412, in run
    result = self.handle(sys.argv[0], sys.argv[1:])
  File "/usr/local/lib/python2.7/dist-packages/flask_script/__init__.py", line 383, in handle
    res = handle(*args, **config)
  File "/usr/local/lib/python2.7/dist-packages/flask_script/commands.py", line 216, in __call__
    return self.run(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/flask_migrate/__init__.py", line 177, in migrate
    version_path=version_path, rev_id=rev_id)
  File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 117, in revision
    script_directory.run_env()
  File "/usr/local/lib/python2.7/dist-packages/alembic/script/base.py", line 407, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/usr/local/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 93, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/local/lib/python2.7/dist-packages/alembic/util/compat.py", line 79, in load_module_py
    mod = imp.load_source(module_id, path, fp)
  File "migrations/env.py", line 22, in <module>
    current_app.config.get('SQLALCHEMY_DATABASE_URI'))
  File "/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 218, in set_main_option
    self.set_section_option(self.config_ini_section, name, value)
  File "/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 245, in set_section_option
    self.file_config.set(section, name, value)
  File "/usr/lib/python2.7/ConfigParser.py", line 752, in set
    "position %d" % (value, tmp_value.find('%')))
ValueError: invalid interpolation syntax in u'mysql://user:p%40ssword@localhost/testdb' at position 15
```

Please help
2016/10/04
[ "https://Stackoverflow.com/questions/39849641", "https://Stackoverflow.com", "https://Stackoverflow.com/users/873416/" ]
In the `migrations/env.py` file, you will find the code that is responsible for this issue. ``` config.set_main_option('sqlalchemy.url', current_app.config.get('SQLALCHEMY_DATABASE_URI')) ``` If there are `%` signs in the `SQLALCHEMY_DATABASE_URI`, this will cause an error. You can solve this by editing the `migrations/env.py` file, and changing the offending line as follows ``` db_url_escaped = current_app.config.get('SQLALCHEMY_DATABASE_URI').replace('%', '%%') config.set_main_option('sqlalchemy.url', db_url_escaped) ``` Also see [the documentation of set\_main\_option](http://alembic.zzzcomputing.com/en/latest/api/config.html#alembic.config.Config.set_main_option): > > Note that this value is passed to ConfigParser.set, which supports variable interpolation using pyformat (e.g. %(some\_value)s). A raw percent sign not part of an interpolation symbol must therefore be escaped, e.g. %%. The given value may refer to another value already in the file using the interpolation format. > > >
I have a solution for this issue after experiencing it as well. There's an issue with '%' (percent signs) in the db connection URI after you urlencode the string. I tried substituting the percent sign with double percent signs ('%%'), which gets me past the interpolation error. However, that resulted in not being able to connect to the database because of an incorrect password.

The solution I'm going with for now is to avoid using '%' in my db password. Not a satisfactory solution, but it will do for now. I'll make a note of the issue on Alembic's GitHub. It seems using RawConfigParser in their package could help avoid this issue.
39,849,641
I am using `flask migrate` for database creation & migration in Flask with flask-sqlalchemy. Everything was working fine until I changed my database user password to one containing '@'; then it stopped working, so I updated my code based on [Writing a connection string when password contains special characters](https://stackoverflow.com/questions/1423804/writing-a-connection-string-when-password-contains-special-characters)

It works for the application but not for Flask-Migrate; it shows an error while migrating, i.e. on `python manage.py db migrate`:

```
ValueError: invalid interpolation syntax in u'mysql://user:p%40ssword@localhost/testdb' at position 15
```

Here the password is `p@ssword` and it is escaped by `urlquote` (see the question linked above).

Full error stack:

```
Traceback (most recent call last):
  File "manage.py", line 20, in <module>
    manager.run()
  File "/usr/local/lib/python2.7/dist-packages/flask_script/__init__.py", line 412, in run
    result = self.handle(sys.argv[0], sys.argv[1:])
  File "/usr/local/lib/python2.7/dist-packages/flask_script/__init__.py", line 383, in handle
    res = handle(*args, **config)
  File "/usr/local/lib/python2.7/dist-packages/flask_script/commands.py", line 216, in __call__
    return self.run(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/flask_migrate/__init__.py", line 177, in migrate
    version_path=version_path, rev_id=rev_id)
  File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 117, in revision
    script_directory.run_env()
  File "/usr/local/lib/python2.7/dist-packages/alembic/script/base.py", line 407, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/usr/local/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 93, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/local/lib/python2.7/dist-packages/alembic/util/compat.py", line 79, in load_module_py
    mod = imp.load_source(module_id, path, fp)
  File "migrations/env.py", line 22, in <module>
    current_app.config.get('SQLALCHEMY_DATABASE_URI'))
  File "/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 218, in set_main_option
    self.set_section_option(self.config_ini_section, name, value)
  File "/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 245, in set_section_option
    self.file_config.set(section, name, value)
  File "/usr/lib/python2.7/ConfigParser.py", line 752, in set
    "position %d" % (value, tmp_value.find('%')))
ValueError: invalid interpolation syntax in u'mysql://user:p%40ssword@localhost/testdb' at position 15
```

Please help
2016/10/04
[ "https://Stackoverflow.com/questions/39849641", "https://Stackoverflow.com", "https://Stackoverflow.com/users/873416/" ]
In the `migrations/env.py` file, you will find the code that is responsible for this issue. ``` config.set_main_option('sqlalchemy.url', current_app.config.get('SQLALCHEMY_DATABASE_URI')) ``` If there are `%` signs in the `SQLALCHEMY_DATABASE_URI`, this will cause an error. You can solve this by editing the `migrations/env.py` file, and changing the offending line as follows ``` db_url_escaped = current_app.config.get('SQLALCHEMY_DATABASE_URI').replace('%', '%%') config.set_main_option('sqlalchemy.url', db_url_escaped) ``` Also see [the documentation of set\_main\_option](http://alembic.zzzcomputing.com/en/latest/api/config.html#alembic.config.Config.set_main_option): > > Note that this value is passed to ConfigParser.set, which supports variable interpolation using pyformat (e.g. %(some\_value)s). A raw percent sign not part of an interpolation symbol must therefore be escaped, e.g. %%. The given value may refer to another value already in the file using the interpolation format. > > >
You may want to look at <http://docs.sqlalchemy.org/en/latest/dialects/mysql.html#mysql-unicode>

I was having the same issue with my password and the mysql connector. Using the `mysql+pymysql` connector allowed me to connect both in the application and in the migration scripts.
36,329,606
This was the example picked from bokeh documentation. It is showing attribute error. I am using ipython in anaconda environment. ``` import pandas as pd from bokeh.charts import TimeSeries, output_file, show AAPL = pd.read_csv( "http://ichart.yahoo.com/table.csv?s=AAPL&a=0&b=1&c=2000&d=0&e=1&f=2010", parse_dates=['Date']) output_file("timeseries.html") data = dict(AAPL=AAPL['Adj Close'], Date=AAPL['Date']) p = TimeSeries(data, index='Date', title="APPL", ylabel='Stock Prices') show(p) AttributeError Traceback (most recent call last) <ipython-input-3-fe34a9860ab7> in <module>() 10 data = dict(AAPL=AAPL['Adj Close'], Date=AAPL['Date']) 11 ---> 12 p = TimeSeries(data, index='Date', title="APPL", ylabel='Stock Prices') 13 14 show(p) C:\Users\Bhaskara\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\charts\builders\timeseries_builder.py in TimeSeries(data, x, y, builder_type, **kws) 100 kws['x'] = x 101 kws['y'] = y --> 102 return create_and_build(builder_type, data, **kws) C:\Users\Bhaskara\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\charts\builder.py in create_and_build(builder_class, *data, **kws) 64 # create a chart to return, since there isn't one already 65 chart_kws = { k:v for k,v in kws.items() if k not in builder_props} ---> 66 chart = Chart(**chart_kws) 67 chart.add_builder(builder) 68 chart.start_plot() C:\Users\Bhaskara\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\charts\chart.py in __init__(self, *args, **kwargs) 123 # supported types 124 tools = kwargs.pop('tools', None) --> 125 super(Chart, self).__init__(*args, **kwargs) 126 defaults.apply(self) 127 if tools is not None: C:\Users\Bhaskara\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\models\plots.py in __init__(self, **kwargs) 76 raise ValueError("Conflicting properties set on plot: background_fill, background_fill_color.") 77 ---> 78 super(Plot, self).__init__(**kwargs) 79 80 def select(self, *args, **kwargs): C:\Users\Bhaskara\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\model.py in __init__(self, **kwargs) 75 self._id = kwargs.pop("id", make_id()) 76 self._document = None ---> 77 super(Model, self).__init__(**kwargs) 78 default_theme.apply_to_model(self) 79 C:\Users\Bhaskara\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\core\properties.py in __init__(self, **properties) 699 700 for name, value in properties.items(): --> 701 setattr(self, name, value) 702 703 def __setattr__(self, name, value): C:\Users\Bhaskara\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\core\properties.py in __setattr__(self, name, value) 720 721 raise AttributeError("unexpected attribute '%s' to %s, %s attributes are %s" % --> 722 (name, self.__class__.__name__, text, nice_join(matches))) 723 724 def set_from_json(self, name, json, models=None): AttributeError: unexpected attribute 'index' to Chart, possible attributes are above, background_fill_alpha, background_fill_color, below, border_fill_alpha, border_fill_color, disabled, extra_x_ranges, extra_y_ranges, h_symmetry, height, hidpi, left, legend, lod_factor, lod_interval, lod_threshold, lod_timeout, logo, min_border, min_border_bottom, min_border_left, min_border_right, min_border_top, name, outline_line_alpha, outline_line_cap, outline_line_color, outline_line_dash, outline_line_dash_offset, outline_line_join, outline_line_width, plot_height, plot_width, renderers, responsive, right, tags, title, title_standoff, title_text_align, title_text_alpha, title_text_baseline, title_text_color, title_text_font, 
title_text_font_size, title_text_font_style, tool_events, toolbar_location, tools, v_symmetry, webgl, width, x_mapper_type, x_range, xgrid, xlabel, xscale, y_mapper_type, y_range, ygrid, ylabel or yscale ```
2016/03/31
[ "https://Stackoverflow.com/questions/36329606", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6139075/" ]
Check which version you are using. If you are using 0.11.1 then you can use <http://docs.bokeh.org/en/0.11.1/docs/user_guide/plotting.html> to do the same thing.
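For illustration, a rough `bokeh.plotting` equivalent of the chart from the question (a sketch only, reusing the `AAPL` dataframe from the question and assuming the 0.11.x API):

```python
from bokeh.plotting import figure, output_file, show

p = figure(x_axis_type='datetime', title='AAPL',
           plot_width=800, plot_height=350)
p.line(AAPL['Date'], AAPL['Adj Close'])  # AAPL dataframe as loaded in the question

output_file('timeseries.html')
show(p)
```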
Instead of using the `index` attribute, set `x='Date'`:

```
p = TimeSeries(data, x='Date', title="APPL", ylabel='Stock Prices')
```
55,648,776
In an Apache Beam pipeline, I am taking input from Cloud Storage and trying to write it to a BigQuery table. But during execution of the pipeline I get this error:

"AttributeError: 'module' object has no attribute 'storage'"

```
def run(argv=None):
    with open('gl_ledgers.json') as json_file:
        schema = json.load(json_file)
    schema = json.dumps(schema)
    parser = argparse.ArgumentParser()
    parser.add_argument('--input',
                        dest='input',
                        default='gs://bucket_name/poc/table_name/2019-04-12/2019-04-12 13:47:03.219000_file_name.csv',
                        help='Input file to process.')
    parser.add_argument('--output',
                        dest='output',
                        required=False,
                        default="path to bigquery table",
                        help='Output file to write results to.')
    known_args, pipeline_args = parser.parse_known_args(argv)
    pipeline_options = PipelineOptions(pipeline_args)
    pipeline_options.view_as(SetupOptions).save_main_session = True
    p = beam.Pipeline(options=pipeline_options)

    (p
     | 'read' >> ReadFromText(known_args.input)
     # | 'Format to json' >> (beam.ParDo(self.format_output_json))
     | 'Write to BigQuery' >> beam.io.WriteToBigQuery(known_args.output, schema=schema)
     )

    result = p.run()
    result.wait_until_finish()

if __name__ == '__main__':
    run()
```

```
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 773, in run
    self._load_main_session(self.local_staging_directory)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 489, in _load_main_session
    pickler.load_session(session_file)
  File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 269, in load_session
    return dill.load_session(file_path)
  File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 410, in load_session
    module = unpickler.load()
  File "/usr/lib/python2.7/pickle.py", line 864, in load
    dispatch[key](self)
  File "/usr/lib/python2.7/pickle.py", line 1139, in load_reduce
    value = func(*args)
  File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 828, in _import_module
    return getattr(__import__(module, None, None, [obj]), obj)
AttributeError: 'module' object has no attribute 'storage'
```
2019/04/12
[ "https://Stackoverflow.com/questions/55648776", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11303943/" ]
This is probably related to `pipeline_options.view_as(SetupOptions).save_main_session = True`. Do you need that line? Try removing it and see if that fixes the problem. It is likely that one of your imports cannot be pickled; without seeing your imports I can't help you debug further. You could also try moving your imports into the `run` function.
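A minimal sketch of that last suggestion (the pipeline body here is a placeholder, not the pipeline from the question):

```python
def run(argv=None):
    # keeping these imports local to run() keeps them out of the pickled
    # __main__ session, so save_main_session is no longer needed
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(argv)
    with beam.Pipeline(options=options) as p:
        (p
         | beam.Create(['hello', 'world'])
         | beam.Map(lambda s: s.upper()))
```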
Possibly a [duplicate](https://stackoverflow.com/questions/53860066/gitlab-ci-runner-cant-import-google-cloud-in-python), in which case the problem would be that `google-cloud-storage` needs to be installed, not `google-cloud`.
14,909,365
I have planned to build an application with a server and multiple clients. When a client connects to the server for the first time it must be given an id. Each time the client sends a request, the server sends the client a set of strings. The client then processes these strings and, once it is done, again sends a request to the server for another set of strings. The strings are present in a database on the server.

I have implemented the part of the client program which processes the strings, but I don't know how to achieve communication between the server and the clients. I am developing this application using Python. I do not know network programming and hence I don't know how to get this working.

I came upon socket programming and message-oriented middleware, message queues, and message brokers, and am not sure if that is what I need. Could anyone please tell me what I need to use and which topics I need to learn to get this working? I hope that I don't sound vague.
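For orientation, a minimal sketch of the request/response loop described above, on the client side (the host, port, and message format are illustrative assumptions):

```python
import socket

HOST, PORT = 'localhost', 5000        # hypothetical server address

conn = socket.create_connection((HOST, PORT))
try:
    conn.sendall(b'GET_STRINGS')      # hypothetical request message
    data = conn.recv(4096)            # the set of strings from the server
    strings = data.decode('utf-8').split('\n')
    # ... process the strings, then loop and request the next batch
finally:
    conn.close()
```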
2013/02/16
[ "https://Stackoverflow.com/questions/14909365", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2078134/" ]
The reason your app is crashing is that you are trying to deal with GUI elements, i.e. the `UIAlertView`, on a background thread. You need to run it on the main thread, for example by dispatching to the main queue:

```
dispatch_async(dispatch_get_main_queue(), ^{
    //show your GUI stuff here...
});
```

OR you can show the GUI elements on the main thread like this

`[alertView performSelectorOnMainThread:@selector(show) withObject:nil waitUntilDone:YES];`

You can find more detail about using GUI elements on threads at this [link](http://developer.apple.com/library/ios/#documentation/Cocoa/Conceptual/Multithreading/AboutThreads/AboutThreads.html#//apple_ref/doc/uid/10000057i-CH6-SW21)
Try this:

```
- (IBAction)sendForm:(id)sender
{
    [self performSelectorInBackground:@selector(loadData) withObject:activityIndicator];
    [activityIndicator startAnimating];

    UIAlertView* ahtung = [[UIAlertView alloc] initWithTitle:@"Спасибо" message:@"Ваша заявка принята!\nВ течение часа, Вам поступит звонок для подтверждения заказа" delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil];
    [ahtung show];
}
```
14,909,365
I have planned to build an application with a server and multiple clients. When a client connects to the server for the first time it must be given an id. Each time the client sends a request, the server sends the client a set of strings. The client then processes these strings and, once it is done, again sends a request to the server for another set of strings. The strings are present in a database on the server.

I have implemented the part of the client program which processes the strings, but I don't know how to achieve communication between the server and the clients. I am developing this application using Python. I do not know network programming and hence I don't know how to get this working.

I came upon socket programming and message-oriented middleware, message queues, and message brokers, and am not sure if that is what I need. Could anyone please tell me what I need to use and which topics I need to learn to get this working? I hope that I don't sound vague.
2013/02/16
[ "https://Stackoverflow.com/questions/14909365", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2078134/" ]
The crash occurs because you are trying to display the `UIAlertView` from a background thread. Never do that, all UI changes should be handled from main thread. Replace: ``` UIAlertView* ahtung = [[UIAlertView alloc] initWithTitle:@"Спасибо" message:@"Ваша заявка принята!\nВ течение часа, Вам поступит звонок для подтверждения заказа" delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil]; [ahtung show]; ``` With: ``` dispatch_async(dispatch_get_main_queue(),^{ UIAlertView* ahtung = [[UIAlertView alloc] initWithTitle:@"Спасибо" message:@"Ваша заявка принята!\nВ течение часа, Вам поступит звонок для подтверждения заказа" delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil]; [ahtung show]; }); ```
Try this:

```
- (IBAction)sendForm:(id)sender
{
    [self performSelectorInBackground:@selector(loadData) withObject:activityIndicator];
    [activityIndicator startAnimating];

    UIAlertView* ahtung = [[UIAlertView alloc] initWithTitle:@"Спасибо" message:@"Ваша заявка принята!\nВ течение часа, Вам поступит звонок для подтверждения заказа" delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil];
    [ahtung show];
}
```
36,076,012
Say I have some class that manages a database connection. The user is supposed to call `close()` on instances of this class so that the db connection is terminated cleanly. Is there any way in Python to get this object to call `close()` if the interpreter exits or the object is otherwise picked up by the garbage collector?

Edit: This question assumes the user of the object failed to instantiate it within a `with` block, either because he forgot or isn't concerned about closing connections.
2016/03/18
[ "https://Stackoverflow.com/questions/36076012", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391717/" ]
The only way to ensure such a method is called if you don't trust users is using `__del__` ([docs](https://docs.python.org/2/reference/datamodel.html#object.__del__)). From the docs:

> Called when the instance is about to be destroyed.

Note that there are lots of issues that make using `__del__` tricky. For example, at the moment it is called, the interpreter may already be shutting down, meaning other objects and *modules* may have been destroyed already. See the notes and warnings for details.

---

If you really cannot rely on users to be consenting adults, I would prevent them from implicitly avoiding `close`: don't give them a public `open` in the first place. Only supply the methods to support `with`. If anybody explicitly digs into your code to do otherwise, they probably have a good reason for it.
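A minimal sketch of that approach, with the usual defensive wrapping (the class and attribute names are illustrative):

```python
class ManagedConnection(object):
    def __init__(self, conn):
        self._conn = conn

    def close(self):
        if self._conn is not None:
            self._conn.close()
            self._conn = None

    def __del__(self):
        # best effort only: at interpreter shutdown other modules and
        # objects may already have been torn down
        try:
            self.close()
        except Exception:
            pass
```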
Define [`__enter__`](https://docs.python.org/2/reference/datamodel.html#object.__enter__) and [`__exit__`](https://docs.python.org/2/reference/datamodel.html#object.__exit__) methods on your class and then use it with the [`with` statement](https://docs.python.org/2/reference/compound_stmts.html#with): ``` with MyClass() as c: # Do stuff ``` When the `with` block ends your `__exit__()` method will be called automatically.
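For illustration, a minimal sketch of such a class (the `close` body is a placeholder):

```python
class MyClass(object):
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()  # runs even if the block raised an exception

    def close(self):
        pass  # terminate the db connection here
```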
64,160,370
I am writing a Python script in order to communicate with my Tello drone via wifi. Once connected to the drone I can send UDP packets with commands (this works perfectly fine). I want to receive the video stream from the drone via UDP packets arriving at my UDP server on port 11111. This is described in the SDK documentation, "https://dl-cdn.ryzerobotics.com/downloads/tello/20180910/Tello%20SDK%20Documentation%20EN\_1.3.pdf".

```
print ('\r\n\r\nTello drone communication tool\r\n')

print("...importing modules...")
import threading
import socket
import sys
import time
import platform
import cv2
print("Modules imported")

print("...Initialiasing UDP server to get video stream....")
drone_videostream = cv2.VideoCapture('udp://@0.0.0.0:11111')
print("Server initialised")

# my local adress to receive UDP packets from tello DRONE
host = ''
port = 9000
locaddr = (host,port)

print("...creation of UDP socket...")
# Create a UDP socket (UDP Portocol to receive and send UDP packets from/to drone)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Got drone port and ip adress from network (explained in official SDK documentation)
tello_address = ('192.168.10.1', 8889)
print("UDP socket created")

sock.bind(locaddr)

width = 320
height = 240

def receiveStream() :
    print("...receiving stream...")
    while True :
        ret, frame = drone_videostream.read()
        img = cv2.resize(frame, (width, height))
        cv2.imshow("LiveStream", frame)
        cv2.waitKey(1)
    drone_videostream.release()
    cv2.destroyAllWindows()

def receiving():
    while True:
        try:
            data, server = sock.recvfrom(1518)
            print(data.decode(encoding="utf-8"))
        except Exception:
            print ('\nExit . . .\n')
            break

print ("...initialiazing connection with tello drone...")
message = "command"
message = message.encode(encoding="utf-8")
sent = sock.sendto(message, tello_address)
print("Connection established")

#create a thread that will excute the receiving() function
receiveThread = threading.Thread(target=receiving)
receiveThread.start()

receiveStreamThread = threading.Thread(target=receiveStream)

while True :
    message = input(str("Enter a command :\r\n"))
    if message == "streamon" :
        message = message.encode(encoding="utf-8")
        sent = sock.sendto(message, tello_address)
        receiveStreamThread.start()
    else :
        message = message.encode(encoding="utf-8")
        sent = sock.sendto(message, tello_address)
```

When I send the "streamon" command to the drone, I am unable to read the UDP packets that are sent. I get the following error:

```
error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'
```

This means that the frames are empty and thus no image was received. Do you know why I don't receive them?

Thank you very much for your help in advance,

Best :)
2020/10/01
[ "https://Stackoverflow.com/questions/64160370", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13410369/" ]
Hi, for those who are following the Murtaza workshop and are unable to get the video stream: use OpenCV library version 4.4.0.46 and Python interpreter 3.9.0. Make sure you use the versions specified above.
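A quick sanity check that the installed versions match (a small sketch; `cv2.__version__` reports '4.4.0' for the 4.4.0.46 wheel of opencv-python):

```python
import sys
import cv2

print(sys.version)       # expect 3.9.0
print(cv2.__version__)   # expect 4.4.0
```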
I have played with the Tello a lot recently. What I see from your code is that you entered "command"; by rights the light should turn green. Then once you send "streamon" there should be a return message. Check this message to see if there is any error. The only apparent error is the video source ID. You did what the manual said.

[![enter image description here](https://i.stack.imgur.com/A88cY.png)](https://i.stack.imgur.com/A88cY.png)

But from my experience, the only address from which I'm able to get the UDP stream is udp://192.168.10.1:11111

You can check whether you can see it with ffplay udp://192.168.10.1:11111

[![enter image description here](https://i.stack.imgur.com/KGXjI.jpg)](https://i.stack.imgur.com/KGXjI.jpg)
64,160,370
I am writing a Python script in order to communicate with my Tello drone via wifi. Once connected to the drone I can send UDP packets with commands (this works perfectly fine). I want to receive the video stream from the drone via UDP packets arriving at my UDP server on port 11111. This is described in the SDK documentation, "https://dl-cdn.ryzerobotics.com/downloads/tello/20180910/Tello%20SDK%20Documentation%20EN\_1.3.pdf".

```
print ('\r\n\r\nTello drone communication tool\r\n')

print("...importing modules...")
import threading
import socket
import sys
import time
import platform
import cv2
print("Modules imported")

print("...Initialiasing UDP server to get video stream....")
drone_videostream = cv2.VideoCapture('udp://@0.0.0.0:11111')
print("Server initialised")

# my local adress to receive UDP packets from tello DRONE
host = ''
port = 9000
locaddr = (host,port)

print("...creation of UDP socket...")
# Create a UDP socket (UDP Portocol to receive and send UDP packets from/to drone)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Got drone port and ip adress from network (explained in official SDK documentation)
tello_address = ('192.168.10.1', 8889)
print("UDP socket created")

sock.bind(locaddr)

width = 320
height = 240

def receiveStream() :
    print("...receiving stream...")
    while True :
        ret, frame = drone_videostream.read()
        img = cv2.resize(frame, (width, height))
        cv2.imshow("LiveStream", frame)
        cv2.waitKey(1)
    drone_videostream.release()
    cv2.destroyAllWindows()

def receiving():
    while True:
        try:
            data, server = sock.recvfrom(1518)
            print(data.decode(encoding="utf-8"))
        except Exception:
            print ('\nExit . . .\n')
            break

print ("...initialiazing connection with tello drone...")
message = "command"
message = message.encode(encoding="utf-8")
sent = sock.sendto(message, tello_address)
print("Connection established")

#create a thread that will excute the receiving() function
receiveThread = threading.Thread(target=receiving)
receiveThread.start()

receiveStreamThread = threading.Thread(target=receiveStream)

while True :
    message = input(str("Enter a command :\r\n"))
    if message == "streamon" :
        message = message.encode(encoding="utf-8")
        sent = sock.sendto(message, tello_address)
        receiveStreamThread.start()
    else :
        message = message.encode(encoding="utf-8")
        sent = sock.sendto(message, tello_address)
```

When I send the "streamon" command to the drone, I am unable to read the UDP packets that are sent. I get the following error:

```
error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'
```

This means that the frames are empty and thus no image was received. Do you know why I don't receive them?

Thank you very much for your help in advance,

Best :)
2020/10/01
[ "https://Stackoverflow.com/questions/64160370", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13410369/" ]
The problem on my side was solved as follows: it appears that my antivirus was blocking the incoming video packets from the Tello drone. If you have Windows Defender, turn off the public and private network firewalls while you use the Tello drone.
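One way to test whether the firewall is the culprit, sketched under the assumption that the drone is connected and `streamon` has been sent: listen on port 11111 with a plain socket. If `recvfrom` times out, something is dropping the packets before they reach Python:

```
import socket

# Bind to the video port and wait briefly for any packet at all
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 11111))
sock.settimeout(5.0)

try:
    data, addr = sock.recvfrom(2048)
    print('Received %d bytes from %s' % (len(data), addr))
except socket.timeout:
    print('No packets in 5 s - a firewall may be dropping them')
finally:
    sock.close()
```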
I have played with the Tello a lot recently. From your code, I can see you have entered "command"; by right, the light should turn green. Then, once you send "streamon", there should be a return message. Check this message to see if there is any error. The only apparent error is the video source ID. You did what the manual said. [![enter image description here](https://i.stack.imgur.com/A88cY.png)](https://i.stack.imgur.com/A88cY.png) But from my experience, the only address from which I'm able to get the UDP stream is udp://192.168.10.1:11111. You can check whether you can see it with ffplay udp://192.168.10.1:11111 [![enter image description here](https://i.stack.imgur.com/KGXjI.jpg)](https://i.stack.imgur.com/KGXjI.jpg)
64,160,370
I am writing a Python script in order to communicate with my Tello drone via Wi-Fi. Once connected to the drone, I can send UDP packets to issue commands (this works perfectly fine). I want to receive the video stream from the drone via UDP packets arriving at my UDP server on port 11111. This is described in the SDK documentation, "https://dl-cdn.ryzerobotics.com/downloads/tello/20180910/Tello%20SDK%20Documentation%20EN\_1.3.pdf".

```
print('\r\n\r\nTello drone communication tool\r\n')

print("...importing modules...")
import threading
import socket
import sys
import time
import platform
import cv2
print("Modules imported")

print("...Initialising UDP server to get the video stream...")
drone_videostream = cv2.VideoCapture('udp://@0.0.0.0:11111')
print("Server initialised")

# my local address to receive UDP packets from the Tello drone
host = ''
port = 9000
locaddr = (host, port)

print("...creating UDP socket...")
# Create a UDP socket (UDP protocol to receive and send UDP packets from/to the drone)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Drone port and IP address taken from the network (explained in the official SDK documentation)
tello_address = ('192.168.10.1', 8889)
print("UDP socket created")

sock.bind(locaddr)

width = 320
height = 240

def receiveStream():
    print("...receiving stream...")
    while True:
        ret, frame = drone_videostream.read()
        img = cv2.resize(frame, (width, height))
        cv2.imshow("LiveStream", frame)
        cv2.waitKey(1)
    drone_videostream.release()
    cv2.destroyAllWindows()

def receiving():
    while True:
        try:
            data, server = sock.recvfrom(1518)
            print(data.decode(encoding="utf-8"))
        except Exception:
            print('\nExit . . .\n')
            break

print("...initialising connection with the Tello drone...")
message = "command"
message = message.encode(encoding="utf-8")
sent = sock.sendto(message, tello_address)
print("Connection established")

# create a thread that will execute the receiving() function
receiveThread = threading.Thread(target=receiving)
receiveThread.start()

receiveStreamThread = threading.Thread(target=receiveStream)

while True:
    message = input(str("Enter a command :\r\n"))
    if message == "streamon":
        message = message.encode(encoding="utf-8")
        sent = sock.sendto(message, tello_address)
        receiveStreamThread.start()
    else:
        message = message.encode(encoding="utf-8")
        sent = sock.sendto(message, tello_address)
```

When I send the "streamon" command to the drone, I am unable to read the UDP packets it sends. I get the following error:

```
error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'
```

This means that the frames are empty and thus no image is received. Do you know why I don't receive them? Thank you very much in advance for your help. Best :)
2020/10/01
[ "https://Stackoverflow.com/questions/64160370", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13410369/" ]
Hi, those who are following the Murtaza workshop and are unable to get the video stream: use OpenCV library version 4.4.0.46 and Python interpreter 3.9.0. Make sure you use the versions specified above.
The problem on my side was solved as follows: it appears that my antivirus was blocking the incoming video packets from the Tello drone. If you have Windows Defender, turn off the public and private network firewalls while you use the Tello drone.
15,167,615
So basically my question relates to 'zip' (or izip), and this question which was asked before: [Is there a better way to iterate over two lists, getting one element from each list for each iteration?](https://stackoverflow.com/questions/1919044/is-there-a-better-way-to-iterate-over-two-lists-getting-one-element-from-each-l) If I have two variables - where each is either a 1d array of values of length n, or a single value - how do I loop through them so that I get n values returned? 'zip' kind of does what I want - except that it complains when I pass in a single value together with an array. I have an example of what I'm aiming for below - basically I have a C function that does a more efficient calculation than Python. I want it to act like some of the numpy functions - which deal fine with mixtures of arrays and scalars - so I wrote a Python wrapper for it. However - like I say - 'zip' fails. I guess in principle I could test the inputs and write a different statement for each combination of scalars and arrays - but it seems like Python should have something more clever.... ;) Any advice?

```
"""
Example of zip problems.
"""
import numpy as np
import time

def cfun(a, b):
    """ Pretending to be a C function which doesn't deal with arrays """
    if not np.isscalar(a) or not np.isscalar(b):
        raise Exception('c is freaking out')
    else:
        return a + b

def pyfun(a, b):
    """ Python wrapper - to deal with array inputs """
    if not np.isscalar(a) or not np.isscalar(b):
        return np.array([cfun(a_i, b_i) for a_i, b_i in zip(a, b)])
    else:
        return cfun(a, b)

a = np.array([1, 2])
b = np.array([1, 2])
print pyfun(a, b)

a = [1, 2]
b = 1
print pyfun(a, b)
```

**edit:** Many thanks everyone for the suggestions. I think I have to go for np.broadcast for the solution - since it seems the simplest from my perspective.....
2013/03/01
[ "https://Stackoverflow.com/questions/15167615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1448052/" ]
If you want to force broadcasting, you can use `numpy.lib.stride_tricks.broadcast_arrays`. Reusing your `cfun`:

```
def pyfun(a, b):
    if not (np.isscalar(a) and np.isscalar(b)):
        a_bcast, b_bcast = np.lib.stride_tricks.broadcast_arrays(a, b)
        return np.array([cfun(j, k) for j, k in zip(a_bcast, b_bcast)])
    return cfun(a, b)
```

And now:

```
>>> pyfun(5, 6)
11
>>> pyfun(5, [6, 7, 8])
array([11, 12, 13])
>>> pyfun([3, 4, 5], [6, 7, 8])
array([ 9, 11, 13])
```

For your particular application there is probably no advantage over Rob's pure python thing, since your function is still running in a python loop.
A decorator that optionally converts each of the arguments to a sequence might help. Here is the ordinary Python (not numpy) version:

```
# TESTED
def listify(f):
    def dolistify(*args):
        from collections import Iterable
        return f(*(a if isinstance(a, Iterable) else (a,) for a in args))
    return dolistify

@listify
def foo(a, b):
    print a, b

foo((1, 2), (3, 4))
foo(1, [3, 4])
foo(1, 2)
```

So, in your example we need to use `not np.isscalar` as the predicate and `np.array` as the modifier. Because of the decorator, `pyfun` always receives an array.

```
# UNTESTED
def listify(f):
    def dolistify(*args):
        return f(*(np.array([a]) if np.isscalar(a) else a for a in args))
    return dolistify

@listify
def pyfun(a, b):
    """ Python wrapper - to deal with array inputs """
    return np.array([cfun(a_i, b_i) for a_i, b_i in zip(a, b)])
```

Or maybe you could apply the same idea to `zip` (note the `*` so that the generated sequences become separate arguments to `zip`):

```
# UNTESTED
def MyZip(*args):
    return zip(*(np.array([a]) if np.isscalar(a) else a for a in args))

def pyfun(a, b):
    """ Python wrapper - to deal with array inputs """
    return np.array([cfun(a_i, b_i) for a_i, b_i in MyZip(a, b)])
```
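One caveat with wrapping a scalar as a length-1 sequence is that `zip` stops at the shortest argument, so a scalar paired with a length-n array yields only one pair. A hedged sketch of a variant that broadcasts instead (the decorator name is illustrative, not from the original answer):

```
import numpy as np

def broadcastify(f):
    """Broadcast all arguments against each other, then call f
    element-wise; scalars are repeated to match array arguments."""
    def wrapper(*args):
        return np.array([f(*vals) for vals in np.broadcast(*args)])
    return wrapper

@broadcastify
def pyfun(a, b):
    return a + b   # stand-in for the scalar-only C function

print(pyfun([1, 2, 3], 10))   # -> [11 12 13]
```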
15,167,615
So basically my question relates to 'zip' (or izip), and this question which was asked before: [Is there a better way to iterate over two lists, getting one element from each list for each iteration?](https://stackoverflow.com/questions/1919044/is-there-a-better-way-to-iterate-over-two-lists-getting-one-element-from-each-l) If I have two variables - where each is either a 1d array of values of length n, or a single value - how do I loop through them so that I get n values returned? 'zip' kind of does what I want - except that it complains when I pass in a single value together with an array. I have an example of what I'm aiming for below - basically I have a C function that does a more efficient calculation than Python. I want it to act like some of the numpy functions - which deal fine with mixtures of arrays and scalars - so I wrote a Python wrapper for it. However - like I say - 'zip' fails. I guess in principle I could test the inputs and write a different statement for each combination of scalars and arrays - but it seems like Python should have something more clever.... ;) Any advice?

```
"""
Example of zip problems.
"""
import numpy as np
import time

def cfun(a, b):
    """ Pretending to be a C function which doesn't deal with arrays """
    if not np.isscalar(a) or not np.isscalar(b):
        raise Exception('c is freaking out')
    else:
        return a + b

def pyfun(a, b):
    """ Python wrapper - to deal with array inputs """
    if not np.isscalar(a) or not np.isscalar(b):
        return np.array([cfun(a_i, b_i) for a_i, b_i in zip(a, b)])
    else:
        return cfun(a, b)

a = np.array([1, 2])
b = np.array([1, 2])
print pyfun(a, b)

a = [1, 2]
b = 1
print pyfun(a, b)
```

**edit:** Many thanks everyone for the suggestions. I think I have to go for np.broadcast for the solution - since it seems the simplest from my perspective.....
2013/03/01
[ "https://Stackoverflow.com/questions/15167615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1448052/" ]
Since you use numpy, you don't need `zip()` to iterate over several arrays and scalars. You can use `numpy.broadcast()`:

```
In [5]: list(np.broadcast([1,2,3], 10))
Out[5]: [(1, 10), (2, 10), (3, 10)]

In [6]: list(np.broadcast([1,2,3], [10, 20, 30]))
Out[6]: [(1, 10), (2, 20), (3, 30)]

In [8]: list(np.broadcast([1,2,3], 100, [10, 20, 30]))
Out[8]: [(1, 100, 10), (2, 100, 20), (3, 100, 30)]
```
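A hedged sketch of how the question's wrapper might use `np.broadcast` directly (the `cfun` stub below stands in for the question's scalar-only function):

```
import numpy as np

def cfun(a, b):
    # stand-in for the question's scalar-only C function
    return a + b

def pyfun(a, b):
    if np.isscalar(a) and np.isscalar(b):
        return cfun(a, b)
    # np.broadcast pairs scalars with arrays, so zip is not needed
    return np.array([cfun(a_i, b_i) for a_i, b_i in np.broadcast(a, b)])

print(pyfun([1, 2], 1))   # -> [2 3]
print(pyfun(1, 2))        # -> 3
```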
A decorator that optionally converts each of the arguments to a sequence might help. Here is the ordinary Python (not numpy) version:

```
# TESTED
def listify(f):
    def dolistify(*args):
        from collections import Iterable
        return f(*(a if isinstance(a, Iterable) else (a,) for a in args))
    return dolistify

@listify
def foo(a, b):
    print a, b

foo((1, 2), (3, 4))
foo(1, [3, 4])
foo(1, 2)
```

So, in your example we need to use `not np.isscalar` as the predicate and `np.array` as the modifier. Because of the decorator, `pyfun` always receives an array.

```
# UNTESTED
def listify(f):
    def dolistify(*args):
        return f(*(np.array([a]) if np.isscalar(a) else a for a in args))
    return dolistify

@listify
def pyfun(a, b):
    """ Python wrapper - to deal with array inputs """
    return np.array([cfun(a_i, b_i) for a_i, b_i in zip(a, b)])
```

Or maybe you could apply the same idea to `zip` (note the `*` so that the generated sequences become separate arguments to `zip`):

```
# UNTESTED
def MyZip(*args):
    return zip(*(np.array([a]) if np.isscalar(a) else a for a in args))

def pyfun(a, b):
    """ Python wrapper - to deal with array inputs """
    return np.array([cfun(a_i, b_i) for a_i, b_i in MyZip(a, b)])
```
15,167,615
So basically my question relates to 'zip' (or izip), and this question which was asked before: [Is there a better way to iterate over two lists, getting one element from each list for each iteration?](https://stackoverflow.com/questions/1919044/is-there-a-better-way-to-iterate-over-two-lists-getting-one-element-from-each-l) If I have two variables - where each is either a 1d array of values of length n, or a single value - how do I loop through them so that I get n values returned? 'zip' kind of does what I want - except that it complains when I pass in a single value together with an array. I have an example of what I'm aiming for below - basically I have a C function that does a more efficient calculation than Python. I want it to act like some of the numpy functions - which deal fine with mixtures of arrays and scalars - so I wrote a Python wrapper for it. However - like I say - 'zip' fails. I guess in principle I could test the inputs and write a different statement for each combination of scalars and arrays - but it seems like Python should have something more clever.... ;) Any advice?

```
"""
Example of zip problems.
"""
import numpy as np
import time

def cfun(a, b):
    """ Pretending to be a C function which doesn't deal with arrays """
    if not np.isscalar(a) or not np.isscalar(b):
        raise Exception('c is freaking out')
    else:
        return a + b

def pyfun(a, b):
    """ Python wrapper - to deal with array inputs """
    if not np.isscalar(a) or not np.isscalar(b):
        return np.array([cfun(a_i, b_i) for a_i, b_i in zip(a, b)])
    else:
        return cfun(a, b)

a = np.array([1, 2])
b = np.array([1, 2])
print pyfun(a, b)

a = [1, 2]
b = 1
print pyfun(a, b)
```

**edit:** Many thanks everyone for the suggestions. I think I have to go for np.broadcast for the solution - since it seems the simplest from my perspective.....
2013/03/01
[ "https://Stackoverflow.com/questions/15167615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1448052/" ]
Since you use numpy, you don't need `zip()` to iterate over several arrays and scalars. You can use `numpy.broadcast()`:

```
In [5]: list(np.broadcast([1,2,3], 10))
Out[5]: [(1, 10), (2, 10), (3, 10)]

In [6]: list(np.broadcast([1,2,3], [10, 20, 30]))
Out[6]: [(1, 10), (2, 20), (3, 30)]

In [8]: list(np.broadcast([1,2,3], 100, [10, 20, 30]))
Out[8]: [(1, 100, 10), (2, 100, 20), (3, 100, 30)]
```
If you want to force broadcasting, you can use `numpy.lib.stride_tricks.broadcast_arrays`. Reusing your `cfun`:

```
def pyfun(a, b):
    if not (np.isscalar(a) and np.isscalar(b)):
        a_bcast, b_bcast = np.lib.stride_tricks.broadcast_arrays(a, b)
        return np.array([cfun(j, k) for j, k in zip(a_bcast, b_bcast)])
    return cfun(a, b)
```

And now:

```
>>> pyfun(5, 6)
11
>>> pyfun(5, [6, 7, 8])
array([11, 12, 13])
>>> pyfun([3, 4, 5], [6, 7, 8])
array([ 9, 11, 13])
```

For your particular application there is probably no advantage over Rob's pure python thing, since your function is still running in a python loop.
68,077,240
I have a Python file that runs a machine learning algorithm that identifies circles in an image. From this Python file, I am able to get all the coordinates (x and y) of every bounding box placed around the circles. I am appending all the coordinates into the local variables `xlist`/`ylist` (lists of all the integer values of the coordinates). What is the best way to save `xlist` and `ylist` to an external file (either a `.txt` or a `.py`)?
2021/06/22
[ "https://Stackoverflow.com/questions/68077240", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You can use the pickle library. It saves the data with its original data types preserved, so the lists come back as lists of integers when loaded.
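A minimal sketch of that approach, assuming `xlist` and `ylist` are plain lists of ints and using an illustrative filename:

```
import pickle

xlist = [10, 42, 300]   # placeholder coordinates
ylist = [5, 17, 240]

# Write both lists into one binary file
with open('coords.pkl', 'wb') as f:
    pickle.dump((xlist, ylist), f)

# Read them back later with the original int types intact
with open('coords.pkl', 'rb') as f:
    xlist, ylist = pickle.load(f)
```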
You can store them in a `.txt` file. **Try this** (a newline after each item keeps the values separated):

```
file = open('xlistFile.txt', 'w')
for item in xlist:
    file.write(str(item) + '\n')
file.close()
```

You can do the same for `ylist`.
68,077,240
I have a Python file that runs a machine learning algorithm that identifies circles in an image. From this Python file, I am able to get all the coordinates (x and y) of every bounding box placed around the circles. I am appending all the coordinates into the local variables `xlist`/`ylist` (lists of all the integer values of the coordinates). What is the best way to save `xlist` and `ylist` to an external file (either a `.txt` or a `.py`)?
2021/06/22
[ "https://Stackoverflow.com/questions/68077240", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You can use the pickle library. It saves the data with its original data types preserved, so the lists come back as lists of integers when loaded.
You can save it to a pickle file like this:

```
import pickle

with open("xlist.pkl", "wb") as fw:
    pickle.dump(xlist, fw)
```
74,060,609
Although I am used to programming, I am new to Python, so I decided to learn it by myself. So I installed VS Code and Python. The moment I tried to use packages like *tensorflow*, an error showed up saying that **my imports are missing**. I've already tried installing everything again and searching for a solution online, and nothing worked. If someone knows anything about how to fix this, I'd be grateful.
2022/10/13
[ "https://Stackoverflow.com/questions/74060609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20234417/" ]
Check whether there are **multiple versions of Python** in your environment, which would make pip install packages into one version of Python instead of the Python you are using. Use the shortcut **"Ctrl+Shift+P"** and type **"Python: Select Interpreter"** to choose the correct Python. Then use `pip install packagename` to reinstall the package that you need. Generally, we recommend that people new to Python use a [conda virtual environment](https://code.visualstudio.com/docs/python/environments#_conda-environments). [![enter image description here](https://i.stack.imgur.com/2NzZr.png)](https://i.stack.imgur.com/2NzZr.png)
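A quick check to confirm which interpreter your script actually runs under (run it from the same VS Code terminal):

```
import sys

# The path printed here must match the interpreter you installed
# tensorflow into; if it does not, re-select the interpreter.
print(sys.executable)
print(sys.version)
```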
Confirm you have installed Python correctly:

* Open a terminal
* Run `python --version` (if that doesn't work, try `python3 --version`)
73,269,344
I am new to Python and I have a file that I am trying to read. This file contains many lines, and to determine when to stop reading the file I wrote this:

```
while True:
    s = file.readline().strip()   # strip() removes the newline character present at the end
    # if we reach the end of the file we'll break the loop
    if s == '':
        break
```

This is because the file ends with an empty line, so to stop reading the file I used the above code. But the problem is that the file also starts with an empty line, so this code stops before reading the remaining lines. How can I solve that? I know it may sound silly, but as I said I am new to Python and trying to learn.
2022/08/07
[ "https://Stackoverflow.com/questions/73269344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19711709/" ]
The thing you are looking for is called the "end of file" (EOF) condition: [How to find out whether a file is at its EOF](https://stackoverflow.com/questions/10140281/how-to-find-out-whether-a-file-is-at-its-eof)
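A short sketch of that idea applied to the question's loop (illustrative filename): `readline()` returns an empty string `''` only at EOF, while a blank line in the file comes back as `'\n'`, so comparing before stripping distinguishes the two:

```
with open('myfile.txt') as f:
    while True:
        line = f.readline()
        if line == '':        # '' means EOF; a blank line is '\n'
            break
        s = line.strip()      # safe to strip after the EOF check
        print(s)
```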
You can iterate over the opened file:

```
lines = []
with open("some-file.txt") as some_file:
    for line in some_file:
        lines.append(line)
```
73,269,344
I am new to Python and I have a file that I am trying to read. This file contains many lines, and to determine when to stop reading the file I wrote this:

```
while True:
    s = file.readline().strip()   # strip() removes the newline character present at the end
    # if we reach the end of the file we'll break the loop
    if s == '':
        break
```

This is because the file ends with an empty line, so to stop reading the file I used the above code. But the problem is that the file also starts with an empty line, so this code stops before reading the remaining lines. How can I solve that? I know it may sound silly, but as I said I am new to Python and trying to learn.
2022/08/07
[ "https://Stackoverflow.com/questions/73269344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19711709/" ]
You'll be much better off using a `with open()` construct and an iterator on the file:

```py
with open('myfile.txt') as f:
    for line in f:
        pass  # do whatever with line or line.rstrip()
```

For example, you can read all the lines in one go into a `list`:

```py
with open('myfile.txt') as f:
    lines = list(f)
```

Or, without the trailing `'\n'`:

```py
with open('myfile.txt') as f:
    lines = [s.rstrip() for s in f]
```
The thing you are looking for is called the "end of file" (EOF) condition: [How to find out whether a file is at its EOF](https://stackoverflow.com/questions/10140281/how-to-find-out-whether-a-file-is-at-its-eof)
73,269,344
I am new to Python and I have a file that I am trying to read. This file contains many lines, and to determine when to stop reading the file I wrote this:

```
while True:
    s = file.readline().strip()   # strip() removes the newline character present at the end
    # if we reach the end of the file we'll break the loop
    if s == '':
        break
```

This is because the file ends with an empty line, so to stop reading the file I used the above code. But the problem is that the file also starts with an empty line, so this code stops before reading the remaining lines. How can I solve that? I know it may sound silly, but as I said I am new to Python and trying to learn.
2022/08/07
[ "https://Stackoverflow.com/questions/73269344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19711709/" ]
The thing you are looking for is called the "end of file" (EOF) condition: [How to find out whether a file is at its EOF](https://stackoverflow.com/questions/10140281/how-to-find-out-whether-a-file-is-at-its-eof)
Only print **non-empty** lines:

```
# Open the file
file = open("text.txt", "r")

# Read all lines of the file
lines = file.readlines()

for line in lines:
    # Print the line only if it is not empty
    if line.strip() != "":
        print(line.strip())

file.close()
```

*cheers, athrv*
73,269,344
I am new to Python and I have a file that I am trying to read. This file contains many lines, and to determine when to stop reading the file I wrote this:

```
while True:
    s = file.readline().strip()   # strip() removes the newline character present at the end
    # if we reach the end of the file we'll break the loop
    if s == '':
        break
```

This is because the file ends with an empty line, so to stop reading the file I used the above code. But the problem is that the file also starts with an empty line, so this code stops before reading the remaining lines. How can I solve that? I know it may sound silly, but as I said I am new to Python and trying to learn.
2022/08/07
[ "https://Stackoverflow.com/questions/73269344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19711709/" ]
You'll be much better off using a `with open()` construct and an iterator on the file:

```py
with open('myfile.txt') as f:
    for line in f:
        pass  # do whatever with line or line.rstrip()
```

For example, you can read all the lines in one go into a `list`:

```py
with open('myfile.txt') as f:
    lines = list(f)
```

Or, without the trailing `'\n'`:

```py
with open('myfile.txt') as f:
    lines = [s.rstrip() for s in f]
```
You can iterate over the opened file:

```
lines = []
with open("some-file.txt") as some_file:
    for line in some_file:
        lines.append(line)
```
73,269,344
I am new to Python and I have a file that I am trying to read. This file contains many lines, and to determine when to stop reading the file I wrote this:

```
while True:
    s = file.readline().strip()   # strip() removes the newline character present at the end
    # if we reach the end of the file we'll break the loop
    if s == '':
        break
```

This is because the file ends with an empty line, so to stop reading the file I used the above code. But the problem is that the file also starts with an empty line, so this code stops before reading the remaining lines. How can I solve that? I know it may sound silly, but as I said I am new to Python and trying to learn.
2022/08/07
[ "https://Stackoverflow.com/questions/73269344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19711709/" ]
You'll be much better off using a `with open()` construct and an iterator on the file:

```py
with open('myfile.txt') as f:
    for line in f:
        pass  # do whatever with line or line.rstrip()
```

For example, you can read all the lines in one go into a `list`:

```py
with open('myfile.txt') as f:
    lines = list(f)
```

Or, without the trailing `'\n'`:

```py
with open('myfile.txt') as f:
    lines = [s.rstrip() for s in f]
```
Only print **non-empty** lines:

```
# Open the file
file = open("text.txt", "r")

# Read all lines of the file
lines = file.readlines()

for line in lines:
    # Print the line only if it is not empty
    if line.strip() != "":
        print(line.strip())

file.close()
```

*cheers, athrv*
35,360,863
I'm trying to write a Python script that finds an unknown number with the least number of tries possible. All I know is that the number is < 10000. Every time I make a wrong input I get an "error" response. When I find the right number I get a "success" response. Let's assume in this case the number is 124. How would you solve that in Python? Thanks for helping. I'm really stuck on this one :(
2016/02/12
[ "https://Stackoverflow.com/questions/35360863", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5647184/" ]
If the number being `< 10000` is *all* you know, you have to try all numbers between `1` and `9999` (inclusive). The binary search algorithm as suggested in the comments does not help since a miss does not tell you if you are too high or too low.

```
for i in range(1, 10000):
    if i == number_you_are_looking_for:
        print("found it")
        break
```
I believe the fastest way is to use binary search which gives the answer in O(log n).

```
def binary_search(n, min_value, max_value):
    tries = 0
    found = False
    if max_value < min_value:
        print("Maximum value must be bigger than the minimum value")
    elif n < min_value or n > max_value:
        print("The number must be between min_value and max_value")
    else:
        while min_value < max_value and not found:
            tries += 1
            mid_value = (min_value + max_value)//2
            if mid_value == n:
                found = True
            else:
                if n < mid_value:
                    max_value = mid_value - 1
                else:
                    min_value = mid_value + 1
            print([(min_value, max_value), (mid_value, n), tries])
    print("The number is:", str(n))
    print("Tries:", str(tries))
```

Examples:

```
binary_search(7, 0, 10)
>> The number is: 7
>> Tries: 2

binary_search(667, 0, 1000)
>> The number is: 667
>> Tries: 8

binary_search(2**19, 2**18, 2**20)
>> The number is: 524288
>> Tries: 19
```
34,032,681
Hi, I'm seriously stuck trying to filter my XML document. Here is an example of the contents:

```
<sentence id="1" document_id="Perseus:text:1999.02.0029" >
  <primary>millermo</primary>
  <word id="1" />
  <word id="2" />
  <word id="3" />
  <word id="4" />
</sentence>
<sentence id="2" document_id="Perseus:text:1999.02.0029" >
  <primary>millermo</primary>
  <word id="1" />
  <word id="2" />
  <word id="3" />
  <word id="4" />
  <word id="5" />
  <word id="6" />
  <word id="7" />
  <word id="8" />
</sentence>
```

There are many sentences (over 3000), but all I want to do is write some code (preferably in Java or Python) that will go through my XML file and remove all the sentences which have more than 5 word ids; in other words, I will be left with just sentence tags containing 5 or fewer word ids. Thanks. (Just a note: my XML knowledge isn't great; I get mixed up with nodes/tags/elements/ids. I'm trying this at the moment but I'm not sure:

```
import xml.etree.ElementTree as ET
tree = ET.parse('treebank.xml')
root = tree.getroot()

parent_map = dict((c, p) for p in tree.getiterator() for c in p)

iterator = list(root.getiterator('word id'))
for item in iterator:
    old = item.find('word id')
    text = old.text
    if 'id=16' in text:
        parent_map[item].remove(item)
        continue

tree.write('out.xml')
```
2015/12/02
[ "https://Stackoverflow.com/questions/34032681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5628041/" ]
A loop and `String.format` should give you what you need:

```
for (int i = 1; i <= 10; i++) {
    String bob = String.format("C:\\bob\\Myfile%02d.txt", Integer.valueOf(i));
    // ...
}
```

The format pattern `%02d` pads an integer with a zero given that it is less than two digits in length, as defined in the [syntax for string formatting](https://docs.oracle.com/javase/8/docs/api/java/util/Formatter.html#syntax).
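For comparison, the same zero-padded numbering in Python (hypothetical path, mirroring the snippet above):

```
# Generate Myfile01.txt ... Myfile10.txt
for i in range(1, 11):
    print('C:\\bob\\Myfile{:02d}.txt'.format(i))
```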
If you want to walk through subdirectories you may also try:

```
try {
    Files.walk(Paths.get(directory))
         .filter(f -> Pattern.matches("myFile\\d{2}\\.txt", f.toFile().getName()))
         .forEach(f -> {
             System.out.println("WHAT YOU WANT TO DO WITH f");
         });
} catch (IOException e) {
    e.printStackTrace();
}
```
49,627,914
I'm trying to execute a shell command from Python. The command is like the following one:

```
su -c "lftp -c 'open -u user,password ftp://127.0.0.1; get ivan\'s\ filename.pdf' " someuser
```

So, when I try to do it in Python:

```
command = "su -c \"lftp -c 'open -u user,password ftp://127.0.0.1; get ivan\'s\ filename.pdf' \" someuser"
os.system(command)
```

Or:

```
command = subprocess.Popen(["su", "-c", "lftp -c 'open -u user,password ftp://127.0.0.1; get ivan\'s\ filename.pdf'", "someuser"])
```

I get the following error:

```
bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file
```

The error refers to the single quote in "ivan's". I know there are a lot of single/double quotes in there, but how can I escape this? Thanks in advance!

**EDIT: THIS WORKED FOR ME:**

```
subprocess.call(["su","-c",r"""lftp -c "open -u user,password ftp://127.0.0.1; get ivan\'s\ filename.pdf" """, "someuser"])
```

Thank you all very much!
2018/04/03
[ "https://Stackoverflow.com/questions/49627914", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8369505/" ]
If you printed your test string you would notice that it results in the following:

```
su -c "lftp -c 'open -u user,password ftp://127.0.0.1; get ivan's\ filename.pdf' " someuser
```

The problem is that you need to escape the slash that you use to escape the single quote in order to keep Python from eating it.

```
command = "su -c \"lftp -c 'open -u user,password ftp://127.0.0.1; get ivan\\'s\\ filename.pdf' \" someuser"
```

will get the backslash across; you will then get an error from lftp instead... This works:

```
command = "su -c \"lftp -c \\\"open -u user,password ftp://127.0.0.1; get ivan\\'s\\ filename.pdf\\\" \" someuser"
```

(It uses (escaped) double quotes instead, to ensure that the shell started by su still interprets the escape sequences.)

(`os.system(a)` effectively does `subprocess.call(["sh","-c",a])`, which means that `sh` sees `su -c "lftp -c 'open -u user,password ftp://127.0.0.1; get ivan's\ filename.pdf' " someuser` for the original one. It does escape sequence processing on this and sees an unclosed single quote (it is initially closed by `ivan'`), resulting in your error.) Once that is fixed, `sh` calls `su`, which in turn starts up another instance of `sh` doing more escape processing, resulting in the error from lftp (since `sh` doesn't handle escape sequences in single quotes).

`subprocess.call()` or `curl` are better ways to implement this - `curl` will need much less escaping: you can use `curl "ftp://user:password@127.0.0.1/ivan's filename.pdf"` on the command line; some more escaping is needed for going via `su -c` and for Python. `sudo` instead of `su` also results in less escaping being needed....

If you want to use `subprocess.call()` (which removes one layer of shell), you can use

```
subprocess.call(["su","-c","lftp -c \\\"open -u user,password ftp://127.0.0.1; get ivan\\'s\\ filename.pdf\\\"", "someuser"])
```

(The problem is that Python deals with one level of escaping, and the `sh -c` invoked from `su` with the next layer... This results in quite an ugly command...) (Different quotes might slightly reduce that...)

Using `r""` can get rid of the Python-level escape processing, needing only the shell-level escapes (using triple quotes to allow quotes in the string):

```
subprocess.call(["su","-c",r"""lftp -c \"open -u user,password ftp://127.0.0.1; get ivan\'s\ filename.pdf\"""", "someuser"])
```

Adding a space allows for stripping the shell escapes, since `lftp` doesn't seem to need the filename escaped for the spaces and single quote.

```
subprocess.call(["su","-c",r"""lftp -c "open -u user,password ftp://127.0.0.1; get ivan's filename.pdf" """, "someuser"])
```

This results in the eventual `lftp` ARGV being

```
["lftp","-c","open -u user,password ftp://127.0.0.1; get ivan's filename.pdf"]
```

For curl instead (it still ends up bad due to the `su` being involved):

```
subprocess.call(["su","-c",r"""curl "ftp://user:password@127.0.0.1/ivan's filename.pdf" """, "someuser"])
```
Using `subprocess.call()` is the best and most secure way to perform this task. Here's an example from the [documentation page](https://docs.python.org/2/library/subprocess.html#subprocess.call):

```
subprocess.call(["ls", "-l"])  # As you can see, we have here the command and a parameter
```

About the error: I think it is something related to the spaces and the ' character. Try using [string literals](https://docs.python.org/2.0/ref/strings.html) (pay attention to the r before the string; also make sure that the command matches 100% the one you use in Bash):

```
r"My ' complex & string"
```

So, in your case:

```
command = subprocess.Popen(["su", "-c", r"lftp -c 'open -u user,password ftp://127.0.0.1; get ivan's filename.pdf'", "someuser"])
```